CN111931484B - Data transmission method based on big data - Google Patents

Data transmission method based on big data

Info

Publication number
CN111931484B
Authority
CN
China
Prior art keywords
data
text
face
data table
target
Prior art date
Legal status
Active
Application number
CN202010756623.7A
Other languages
Chinese (zh)
Other versions
CN111931484A (en)
Inventor
于梦丽 (Yu Mengli)
黄艳伟 (Huang Yanwei)
Current Assignee
Guizhou Caicaibao Internet Service Co., Ltd.
Original Assignee
Guizhou Caicaibao Internet Service Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guizhou Caicaibao Internet Service Co., Ltd.
Priority to CN202010756623.7A
Publication of CN111931484A
Application granted
Publication of CN111931484B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/216 Parsing using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Hardware Design (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Software Systems (AREA)
  • Acoustics & Sound (AREA)
  • Bioethics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention relates to a data transmission method based on big data, which comprises the steps of: obtaining at least two voice signals within a preset time period, performing voice recognition and keyword extraction on each voice signal to obtain each target text keyword and its corresponding occurrence times, and further obtaining a target data table, thereby processing the voice signals effectively and reliably to extract the key information they contain; then encrypting the target data table according to a key file returned by a background server to obtain an encrypted data table, which improves the transmission security of the target data table and prevents it from being lost or stolen, and because the table is encrypted, the data cannot be leaked even if the encrypted data table is stolen, further improving data transmission security; and finally, transmitting the encrypted data table only after the actual face image of the person in charge of data transmission passes face recognition verification, which further improves the security of data transmission and prevents data loss or data theft caused by operation by unauthorized persons.

Description

Data transmission method based on big data
Technical Field
The invention relates to a data transmission method based on big data.
Background
With the progress of electronic technology and the development of internet technology, data transmission technology is more and more widely applied. In general, in the process of data transmission, it is necessary to process initial data information and then transmit the processed data information. With the development of speech recognition technology and speech control technology, there are many cases where a collected original speech signal needs to be processed, for example by voice recognition processing or voice filtering processing, and the processed signal is then transmitted to other related equipment or to related personnel, so that the other related equipment can perform corresponding processing according to the acquired signal, or the related personnel can take subsequent actions according to the acquired signal. However, when transmitting voice signals, current data transmission methods simply process the voice signal and directly output the result; they neither process the voice signal effectively and reliably nor apply any security measures, so data loss or data theft can easily occur.
Disclosure of Invention
In order to solve the technical problem, the invention provides a data transmission method based on big data.
The invention adopts the following technical scheme:
a big data-based data transmission method comprises the following steps:
acquiring at least two voice signals in a preset time period;
carrying out voice recognition on each voice signal to obtain each text data;
processing each text data according to a preset keyword database to obtain a text keyword in each text data;
acquiring the occurrence frequency of each text keyword according to the obtained text keywords, determining different target text keywords according to the occurrence frequency of each text keyword, and associating each target text keyword with the corresponding occurrence frequency;
filling each target text keyword and the corresponding occurrence frequency into a preset initial blank data table to obtain a target data table; the initial blank data table comprises data filling sub-tables with the same number as target text keywords, each data filling sub-table corresponds to each target text keyword one by one, and each data filling sub-table comprises a keyword filling area and an occurrence frequency filling area;
sending a key file acquisition request to a background server according to the target data table so as to indicate the background server to generate and return a key file for encryption;
receiving the key file sent by the background server;
encrypting the target data table according to the key file to obtain an encrypted data table;
acquiring an actual face image of a person in charge of data transmission;
inputting the actual face image into a preset face image recognition model, and judging whether the face recognition passes the verification;
and if the face identification passes the verification, transmitting the encrypted data table.
Preferably, the process of acquiring the face image recognition model includes:
acquiring a face image sample, wherein the face image sample comprises at least two front face sample images and at least two non-front face sample images;
for any non-frontal face sample image, extracting the face key features of the non-frontal face sample image to obtain the face key features of the non-frontal face sample image; further obtaining the face key characteristics of each non-frontal face sample image;
according to the face key features, performing face correction on each non-frontal face sample image to obtain a corrected face image of each non-frontal face sample image;
and taking the at least two frontal face sample images and the corrected face images of the non-frontal face sample images as a training set, and training to obtain the face image recognition model.
Preferably, the process of acquiring the at least two non-frontal face sample images includes:
acquiring an original front face image;
and processing the original front face image to obtain at least two side face images with different angles of the original front face image, wherein the side face images with different angles are the at least two non-front face sample images.
Preferably, the encrypting the target data table according to the key file to obtain an encrypted data table includes:
extracting an encryption key in the key file;
encrypting the target data table according to the encryption key to obtain the encrypted data table;
and deleting the received key file returned by the background server.
Preferably, the performing speech recognition on each speech signal to obtain each text data includes:
comparing the voice signal with the pronunciation of each historical text data in a preset historical text database for any voice signal;
if the similarity of the pronunciation of the voice signal and certain historical text data is greater than or equal to a preset similarity, the certain historical text data is the text data corresponding to the voice signal;
if the similarity of the pronunciation of the voice signal and all the historical text data is smaller than the preset similarity, performing voice recognition on the voice signal according to a preset voice recognition algorithm to obtain text data corresponding to the voice signal, storing the obtained text data corresponding to the voice signal into the historical text database, and updating the historical text database.
The invention has the beneficial effects that:
After at least two voice signals within a preset time period are acquired, voice recognition is performed on each voice signal to obtain each text data, each text data is processed according to a preset keyword database to obtain the text keywords in each text data, a target data table is obtained by combining the occurrence frequency of each text keyword, and the target data table is used for transmission, so that the data transmission method based on big data can process the voice signals effectively and reliably to obtain the key information in the voice signals. Before the target data table is transmitted, a key file acquisition request is sent to a background server, the background server generates and returns a key file used for encryption, and the target data table is then encrypted according to the key file; this encryption process improves the transmission security of the target data table and prevents it from being lost or stolen, and because the target data table is reliably encrypted, the data cannot be leaked even if the encrypted data table is stolen, which improves data transmission security. After encryption, the face image of the person in charge of data transmission must be verified, and the encrypted data table is transmitted only after face recognition verification passes; adding this identity verification before data transmission further improves the security of data transmission and prevents data loss or theft caused by operation by unauthorized persons.
Drawings
Fig. 1 is a schematic flow chart of a data transmission method based on big data according to the present invention.
Detailed Description
The embodiment provides a data transmission method based on big data, and a hardware execution main body of the data transmission method based on big data can be a notebook computer, a desktop computer, an intelligent mobile terminal and the like. Because the interaction with the background server is involved in the data transmission method based on the big data, a hardware execution main body of the data transmission method based on the big data is in communication connection with the background server, and can be in wired communication or wireless communication.
As shown in fig. 1, the method for transmitting data based on big data includes:
acquiring at least two voice signals in a preset time period:
wherein the preset time period is specifically set by actual conditions, such as one week, one half month or one month. The specific number of the voice signals is set according to actual needs. Each voice signal can be obtained from the big data voice signal stored in the relevant memory, or can be a collected real-time voice signal of the relevant person in the conversation process, such as a plurality of sections of voice signals of the customer collected when the customer service person and the customer communicate by telephone. It should be understood that the specific number of the acquired voice signals and the acquisition manner are determined by the specific application scenario, and the embodiment is not limited to the above example.
And carrying out voice recognition on each voice signal to obtain each text data:
a historical text database and a voice recognition algorithm are preset in a hardware execution main body of the big data-based data transmission method. Wherein the historical text database comprises historical text data obtained by recognition before, and the speech recognition algorithm is a speech recognition algorithm disclosed in the prior art.
Since the speech recognition process of each speech signal is the same, the following description will be given taking any one speech signal as an example:
comparing the voice signal with the pronunciation of each historical text data in the historical text database, namely comparing the pronunciation corresponding to the voice signal with the pronunciation of each historical text data in the historical text database: if the similarity of the pronunciation of the voice signal and the pronunciation of a certain historical text data in the historical text database is greater than or equal to the preset similarity, the voice signal is represented to be highly similar to the pronunciation of the historical text data in the historical text database, the voice signal can be judged to be recognized before, at the moment, the voice signal is not recognized according to a conventional voice recognition algorithm, and the certain historical text data is directly used as text data corresponding to the voice signal. It should be understood that the preset similarity is set by the actual recognition accuracy, such as 95%; if the similarity of the pronunciation of the voice signal and the pronunciation of all the historical text data in the historical text database is smaller than the preset similarity, the voice signal is represented to be dissimilar to the pronunciation of all the historical text data in the historical text database, and it can be judged that the voice signal is not recognized before, the voice signal is subjected to voice recognition according to a preset voice recognition algorithm, and the text data is obtained through recognition, so that the text data obtained through recognition is new text data which is not recognized before. Then, the obtained new text data is stored in the historical text database, and the historical text database is updated, and it should be understood that the new text data becomes the historical text data in the historical text database after being stored in the historical text database.
Therefore, initially, the historical text database may be a blank database, and the text data obtained by the first speech recognition is stored in the blank database to obtain the historical text database. In the subsequent process, through a pronunciation comparison mode, if the similarity of the pronunciation of a certain voice signal and the pronunciation of a certain historical text data in the historical text database is greater than or equal to the preset similarity, voice recognition is not carried out according to a preset voice recognition algorithm, the certain historical text data is directly used as the text data of the voice signal, and accordingly, the historical text database is not updated; and if the similarity of the voice signal and the pronunciation of all the historical text data in the historical text database is smaller than the preset similarity, performing voice recognition according to a preset voice recognition algorithm to obtain new text data, and updating the historical text database according to the new text data. In this way, the historical text database is updated.
The voice recognition process can improve the accuracy and reliability of voice recognition. It will be appreciated that other speech recognition processes known in the art may be employed in addition to the speech recognition process described above.
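As an illustration only, the following Python sketch mirrors the pronunciation-comparison step described above. The pronunciation encoder, the underlying speech recognition algorithm, and the exact similarity measure are left unspecified by the method, so `pronunciation_of`, `recognize`, and the string-similarity comparison below are placeholders and assumptions.

```python
import difflib

PRESET_SIMILARITY = 0.95  # preset similarity, e.g. 95%

def pronunciation_of(speech_signal):
    """Placeholder: map a speech signal to a pronunciation string (e.g. a phoneme sequence)."""
    raise NotImplementedError

def recognize(speech_signal):
    """Placeholder: prior-art speech recognition algorithm returning text data."""
    raise NotImplementedError

def speech_to_text(speech_signal, history_db):
    """history_db maps a pronunciation string to previously recognized historical text data."""
    pron = pronunciation_of(speech_signal)
    # Compare the pronunciation of the voice signal with that of every historical text entry.
    for hist_pron, hist_text in history_db.items():
        if difflib.SequenceMatcher(None, pron, hist_pron).ratio() >= PRESET_SIMILARITY:
            return hist_text  # recognized before: reuse the historical text data directly
    # Not similar to any historical entry: run full recognition and update the database.
    text = recognize(speech_signal)
    history_db[pron] = text
    return text
```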
Processing each text data according to a preset keyword database to obtain a text keyword in each text data:
the hardware execution main body of the data transmission method based on big data is preset with a keyword database, at least one keyword is stored in the keyword database, and it should be understood that the number of keywords in the keyword database and the specific content of the keywords are determined by actual needs, such as actual application scenarios.
For any one text data, the text data is input into a keyword database to obtain a text keyword in the text data. And processing other text data according to the process to obtain corresponding text keywords. And finally, obtaining the text keywords in each text data.
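A minimal sketch of this step, assuming the keyword database is simply a set of keyword strings and that "inputting the text data into the keyword database" amounts to checking which keywords occur in the text; the sample keywords are hypothetical.

```python
# Hypothetical keyword database; the real contents depend on the application scenario.
KEYWORD_DATABASE = {"refund", "invoice", "delivery", "complaint"}

def extract_text_keywords(text_data, keyword_db=KEYWORD_DATABASE):
    """Return the text keywords from the keyword database that appear in one text data item."""
    return [kw for kw in keyword_db if kw in text_data]

# Process every text data item to obtain the text keywords in each text data.
texts = ["please refund the invoice", "the delivery is late"]
all_text_keywords = [extract_text_keywords(t) for t in texts]
```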
Acquiring the occurrence frequency of each text keyword according to the obtained text keywords, determining different target text keywords according to the occurrence frequency of each text keyword, and associating each target text keyword and the corresponding occurrence frequency:
Because the text keywords are obtained independently of one another, the same text keyword may appear more than once. Therefore, after each text keyword is obtained, the occurrence frequency of each text keyword is obtained from the number of times it repeatedly appears. For example: suppose there are three text data, the text keywords in the first text data are A, B and C, the text keywords in the second text data are A, B and D, and the text keywords in the third text data are B, C and E. The number of occurrences of the text keyword A is two, the number of occurrences of B is three, the number of occurrences of C is two, the number of occurrences of D is one, and the number of occurrences of E is one.
Different target text keywords are determined according to the occurrence frequency of each text keyword, wherein the target text keywords are the distinct text keywords remaining after the occurrence frequency of each text keyword has been counted; in other words, each text keyword is taken only once as a target text keyword, no matter how many times it occurs. For example, in the above, the text keywords are: A, B, C, A, B, D, B, C and E, and the target text keywords are: A, B, C, D and E.
Associating each target text keyword and the corresponding occurrence number refers to: establishing a correspondence between each target text keyword and its occurrence frequency, so that the corresponding occurrence frequency can be determined from each target text keyword.
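The counting and association step can be illustrated with Python's `collections.Counter`, reusing the A/B/C example above; the Counter itself serves as the association between each target text keyword and its occurrence count.

```python
from collections import Counter

# Text keywords recovered from the three text data items in the example above.
keywords_per_text = [["A", "B", "C"], ["A", "B", "D"], ["B", "C", "E"]]

# Count how many times each text keyword occurs across all text data.
occurrences = Counter(kw for kws in keywords_per_text for kw in kws)

# The distinct keys are the target text keywords; the Counter associates
# each target text keyword with its occurrence count.
target_text_keywords = list(occurrences)   # ['A', 'B', 'C', 'D', 'E']
print(occurrences)                         # Counter({'B': 3, 'A': 2, 'C': 2, 'D': 1, 'E': 1})
```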
Filling each target text keyword and the corresponding occurrence frequency into a preset initial blank data table to obtain a target data table; the initial blank data table comprises data filling sub-tables with the same number as target text keywords, each data filling sub-table corresponds to each target text keyword one by one, and each data filling sub-table comprises a keyword filling area and an occurrence frequency filling area:
An initial blank data table is preset in the hardware execution main body of the data transmission method based on big data, and the initial blank data table comprises the same number of data filling sub-tables as there are target text keywords. For example, since the target text keywords are A, B, C, D and E, the number of target text keywords is 5 and the initial blank data table includes 5 data filling sub-tables. It should be understood that the initial blank data table may instead include a sufficiently large number of data filling sub-tables (that is, no fewer data filling sub-tables than there are target text keywords), from which the same number of data filling sub-tables as target text keywords is selected in a predetermined order according to the number of target text keywords, wherein the predetermined order is set by actual needs.
Each data-filled sub-table corresponds to each target text keyword one to one, and it should be understood that each data-filled sub-table has a one-to-one correspondence relationship with each target text keyword. For any data filling sub-table, the data filling sub-table includes a keyword filling area and an occurrence number filling area, the keyword filling area is used for filling the corresponding target text keywords, and the occurrence number filling area is used for filling the occurrence numbers of the corresponding target text keywords.
Table 1 shows a specific embodiment of the initial blank data table, where Table 1 includes 5 columns, each column representing one data filling sub-table and each column including two cells, the upper cell representing the keyword filling area and the lower cell representing the occurrence frequency filling area.
TABLE 1
And each target text keyword and its corresponding occurrence frequency are filled into the initial blank data table, specifically into the corresponding areas of the corresponding data filling sub-tables, to obtain the target data table. For example, the target text keyword A with its occurrence frequency of 2, B with 3, C with 2, D with 1, and E with 1 are filled into the corresponding areas of the corresponding data filling sub-tables to obtain the target data table, as shown in Table 2.
TABLE 2
A B C D E
2 3 2 1 1
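A small sketch of the filling step, representing each data filling sub-table as one column with a keyword cell on top and an occurrence-count cell below, which reproduces Table 2.

```python
def fill_target_data_table(occurrences):
    """Build the target data table: row 0 holds the keyword filling areas,
    row 1 holds the corresponding occurrence-count filling areas."""
    keywords = list(occurrences)                   # one data filling sub-table per target keyword
    counts = [occurrences[kw] for kw in keywords]
    return [keywords, counts]

target_data_table = fill_target_data_table({"A": 2, "B": 3, "C": 2, "D": 1, "E": 1})
# [['A', 'B', 'C', 'D', 'E'], [2, 3, 2, 1, 1]]  -- matches Table 2
```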
According to the target data table, sending a key file acquisition request to a background server to indicate the background server to generate and return a key file for encryption:
and sending a key file acquisition request to the background server according to the obtained target data table, namely after the target data table is obtained, so as to instruct the background server to generate and return the key file for encryption. It should be appreciated that the key file acquisition request may be a string having a specific number of bits, which is sent to the backend server after the target data table is obtained. And after receiving the key file acquisition request, the background server generates a key file for the encryption of the target data table. It should be understood that the key file contains an encryption key, which is used to encrypt the target data table, and further, for the purpose of decryption later, the key file may further include a decryption key corresponding to the encryption key, which is used to decrypt the encrypted target data table. Of course, the decryption process may not be part of the big data based data transmission method, and the decryption process is implemented by an external related device.
Receiving the key file sent by the background server:
and after the background server generates the key file, returning the key file to the hardware execution main body of the data transmission method based on the big data. The hardware execution main body of the data transmission method based on the big data receives the key file sent by the background server.
Encrypting the target data table according to the key file to obtain an encrypted data table:
and encrypting the target data table according to the key file, namely according to the encryption key in the key file to obtain an encrypted data table. It should be understood that the encryption algorithm for encrypting the target data table belongs to the encryption algorithms disclosed in the prior art, and the embodiment is not particularly limited. An encryption process is given below: firstly, extracting an encryption key in a key file; then, according to the encryption key, encrypting the target data table to obtain an encrypted data table; and finally, deleting the received key file returned by the background server. It should be understood that after the target data table is encrypted according to the encryption key in the key file, deleting the received key file can ensure the security of the key file, prevent the key file from being stolen, and further prevent the encrypted data table from being illegally decrypted, thereby resulting in data theft.
Acquiring an actual face image of a person in charge of data transmission:
after the encrypted data table is obtained, data transmission is not directly performed. And the data transmission is carried out by manual operation of personnel in charge of data transmission, specifically, the actual face image of the personnel in charge of data transmission is subjected to face recognition verification, and the encrypted data table is transmitted after the verification is passed. The personnel responsible for data transmission are set by the actual application scenario, such as the staff responsible for network security. Therefore, an actual face image of a person in charge of data transmission is acquired, and it should be understood that the hardware implementation subject of the big data based data transmission method is provided with a facial image acquisition device (such as a camera) through which the actual face image of the person in charge of data transmission is acquired.
Inputting the actual face image into a preset face image recognition model, and judging whether the face recognition passes the verification:
the hardware execution main body of the data transmission method based on big data is preset with a face image recognition model, and it should be understood that the face image recognition model is trained in advance. The face image recognition model can be trained by adopting a training process in the prior art, and as a specific implementation mode, the following training process is given:
the method comprises the steps of obtaining face image samples, wherein the face image samples comprise at least two front face sample images and at least two non-front face sample images. Each of the face sample images and the non-face sample images may be acquired in advance, and the specific number of the face sample images and the non-face sample images is set according to actual needs. The non-frontal face sample image can be directly obtained through different angles, and can also be obtained through the following obtaining process: acquiring an original front face image; and processing the original front face image to obtain at least two side face images with different angles of the original front face image, wherein the side face images with different angles are at least two non-front face sample images. As a specific embodiment, the StarGAN neural network model may be used to process the original frontal face image. The mode that the side face images at different angles are generated through different angles of the front face image is adopted, so that the reliability of the non-front face sample image can be guaranteed, and the accuracy of face recognition is improved. It should be understood that the sample images are collected while maintaining the same expression, i.e., a normal natural expression, in each sample image.
And for any non-frontal face sample image, extracting the face key features of the non-frontal face sample image to obtain the face key features of the non-frontal face sample image. The algorithm adopted by the face key feature extraction is an algorithm disclosed in the prior art, and as a specific implementation mode, the face key feature extraction can be performed on the non-frontal face sample image by adopting an ASM algorithm or an AAM algorithm. The face key features may be the outer contour of the face and edge feature points of various organs, such as the edge feature points of the eyes, mouth or nose. It should be understood that the order of calibration of key features of the face in each non-frontal sample image remains consistent.
And the other non-frontal face sample images also adopt the processing process to finally obtain the human face key characteristics of the non-frontal face sample images.
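The method names ASM or AAM from the prior art for this step; as a hedged stand-in, the sketch below uses dlib's pretrained 68-point landmark predictor (the model file path is an assumption and the file must be downloaded separately) to return the outer contour and organ edge points in a fixed order.

```python
import cv2
import dlib  # pip install dlib

detector = dlib.get_frontal_face_detector()
# The pretrained 68-landmark model path is an assumption; the file is obtained separately.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_key_features(image_bgr):
    """Return face key feature points (outer contour and organ edges) in a fixed order."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        raise ValueError("no face found in the sample image")
    shape = predictor(gray, faces[0])
    # Landmarks 0-16: jaw contour; 27-35: nose; 36-47: eyes; 48-67: mouth.
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```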
Then, according to the face key features, face correction is performed on each non-frontal face sample image to obtain a corrected face image of each non-frontal face sample image. In this embodiment, a non-frontal face sample image may be corrected by adopting an image enhancement processing method such as translation, scaling, or rotation of the image. The translation mode is implemented as follows: the face image is moved a certain distance in any direction, the maximum moving distance being set according to actual needs, for example 30 pixels. The scaling mode is implemented as follows: the face image is reduced or enlarged by a certain proportion (for example, 5%-15%) and then randomly cropped to a fixed size. The rotation mode is implemented as follows: the face image is rotated by a certain angle in any direction, the rotation angle being small and usually no more than 60 degrees. As a specific embodiment, one of the following two image correction methods may be selected according to actual needs: in the first, each non-frontal face sample image is subjected to rotation adjustment and then translation adjustment according to the face key features; in the second, each non-frontal face sample image is subjected to scaling adjustment and then translation adjustment according to the face key features. It should be understood that performing the rotation or scaling adjustment first and the translation adjustment afterwards gives a better correction result than performing the translation adjustment first. Moreover, scaling adjustment and rotation adjustment are usually not applied together, otherwise the acquisition process of the face image recognition model becomes too complex and the recognition accuracy cannot be effectively ensured.
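A sketch of the first correction variant (rotation adjustment followed by translation adjustment) using OpenCV affine warps; in practice the rotation angle and the (dx, dy) offsets would be derived from the face key features, e.g. the tilt of the eye line, which is assumed here rather than computed.

```python
import cv2
import numpy as np

def correct_face(image, angle_deg, dx, dy):
    """Rotation adjustment followed by translation adjustment of a non-frontal sample.
    angle_deg and (dx, dy) are assumed to be derived from the face key features."""
    h, w = image.shape[:2]
    # Rotation adjustment about the image centre (kept below ~60 degrees).
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    rotated = cv2.warpAffine(image, rot, (w, h))
    # Translation adjustment (maximum shift set by actual needs, e.g. 30 pixels).
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(rotated, shift, (w, h))
```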
And finally, taking the obtained at least two front face sample images and the corrected front face image of each non-front face sample image as a training set, and training to obtain the face image recognition model.
The actual face image is input into the face image recognition model to judge whether face recognition verification passes; judging whether verification passes can be understood as judging whether the actual face image is one of the face images known to the face image recognition model.
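The method does not fix a model architecture or decision rule, so the sketch below only illustrates one common way to make the pass/fail judgement: treat the trained model as an embedding network and accept the actual face image when its best cosine similarity to the enrolled faces reaches a threshold. The `model.embed` interface and the 0.8 threshold are assumptions.

```python
import numpy as np

VERIFY_THRESHOLD = 0.8  # assumed cosine-similarity threshold

def face_verification_passes(model, actual_face_image, enrolled_embeddings):
    """Judge whether face recognition verification passes, i.e. whether the actual
    face image matches one of the faces the recognition model was trained on."""
    emb = np.asarray(model.embed(actual_face_image), dtype=float)  # assumed model interface
    emb /= np.linalg.norm(emb)
    sims = [float(np.dot(emb, np.asarray(e, dtype=float) / np.linalg.norm(e)))
            for e in enrolled_embeddings]
    return max(sims, default=0.0) >= VERIFY_THRESHOLD
```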
If the face identification passes the verification, the encrypted data table is transmitted:
if the face recognition verification passes, for example: and if the actual face image is the face image in the face image recognition model, transmitting the encrypted data table. It will be appreciated that the encrypted data table may be transmitted to an external associated device, which decrypts the encrypted data table to obtain the desired data. For decryption, the external related device may be preset with the above key file, and decrypt the encrypted data table by using the decryption key in the key file.
The above-mentioned embodiments are merely illustrative of the technical solutions of the present invention in a specific embodiment, and any equivalent substitutions and modifications or partial substitutions of the present invention without departing from the spirit and scope of the present invention should be covered by the claims of the present invention.

Claims (4)

1. A data transmission method based on big data is characterized by comprising the following steps:
acquiring at least two voice signals in a preset time period;
carrying out voice recognition on each voice signal to obtain each text data;
processing each text data according to a preset keyword database to obtain a text keyword in each text data;
acquiring the occurrence frequency of each text keyword according to the obtained text keywords, determining different target text keywords according to the occurrence frequency of each text keyword, and associating each target text keyword with the corresponding occurrence frequency;
filling each target text keyword and the corresponding occurrence frequency into a preset initial blank data table to obtain a target data table; the initial blank data table comprises data filling sub-tables with the same number as target text keywords, each data filling sub-table corresponds to each target text keyword one by one, and each data filling sub-table comprises a keyword filling area and an occurrence frequency filling area;
sending a key file acquisition request to a background server according to the target data table so as to indicate the background server to generate and return a key file for encryption;
receiving the key file sent by the background server;
encrypting the target data table according to the key file to obtain an encrypted data table;
acquiring an actual face image of a person in charge of data transmission;
inputting the actual face image into a preset face image recognition model, and judging whether the face recognition passes the verification;
if the face identification passes the verification, transmitting the encrypted data table;
the encrypting the target data table according to the key file to obtain an encrypted data table includes:
extracting an encryption key in the key file;
encrypting the target data table according to the encryption key to obtain the encrypted data table;
and deleting the received key file returned by the background server.
2. The big data-based data transmission method according to claim 1, wherein the obtaining process of the face image recognition model comprises:
acquiring a face image sample, wherein the face image sample comprises at least two front face sample images and at least two non-front face sample images;
for any non-frontal face sample image, extracting the face key features of the non-frontal face sample image to obtain the face key features of the non-frontal face sample image; further obtaining the face key characteristics of each non-frontal face sample image;
according to the face key features, performing face correction on each non-frontal face sample image to obtain a corrected face image of each non-frontal face sample image;
and taking the at least two frontal face sample images and the corrected face images of the non-frontal face sample images as a training set, and training to obtain the face image recognition model.
3. The big-data based data transmission method according to claim 2, wherein the at least two non-frontal sample images are acquired by a process comprising:
acquiring an original front face image;
and processing the original front face image to obtain at least two side face images with different angles of the original front face image, wherein the side face images with different angles are the at least two non-front face sample images.
4. The big data-based data transmission method according to claim 1, wherein the performing speech recognition on each speech signal to obtain each text data comprises:
comparing the voice signal with the pronunciation of each historical text data in a preset historical text database for any voice signal;
if the similarity of the pronunciation of the voice signal and certain historical text data is greater than or equal to a preset similarity, the certain historical text data is the text data corresponding to the voice signal;
if the similarity of the pronunciation of the voice signal and all the historical text data is smaller than the preset similarity, performing voice recognition on the voice signal according to a preset voice recognition algorithm to obtain text data corresponding to the voice signal, storing the obtained text data corresponding to the voice signal into the historical text database, and updating the historical text database.
CN202010756623.7A 2020-07-31 2020-07-31 Data transmission method based on big data Active CN111931484B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010756623.7A CN111931484B (en) 2020-07-31 2020-07-31 Data transmission method based on big data


Publications (2)

Publication Number Publication Date
CN111931484A (en) 2020-11-13
CN111931484B (en) 2022-02-25

Family

ID=73315819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010756623.7A Active CN111931484B (en) 2020-07-31 2020-07-31 Data transmission method based on big data

Country Status (1)

Country Link
CN (1) CN111931484B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104144051A (en) * 2014-07-24 2014-11-12 上海斐讯数据通信技术有限公司 Remote voice encryption and decryption method
CN107330301A (en) * 2017-08-25 2017-11-07 遵义博文软件开发有限公司 Managing medical information platform based on recognition of face
CN108989322A (en) * 2018-07-28 2018-12-11 努比亚技术有限公司 data transmission method, mobile terminal and computer readable storage medium
CN109726265A (en) * 2018-12-13 2019-05-07 深圳壹账通智能科技有限公司 Assist information processing method, equipment and the computer readable storage medium of chat
CN109741750A (en) * 2018-05-09 2019-05-10 北京字节跳动网络技术有限公司 A kind of method of speech recognition, document handling method and terminal device
CN109767335A (en) * 2018-12-15 2019-05-17 深圳壹账通智能科技有限公司 Double record quality detecting methods, device, computer equipment and storage medium
CN110543846A (en) * 2019-08-29 2019-12-06 华南理工大学 Multi-pose face image obverse method based on generation countermeasure network
CN110928980A (en) * 2019-11-15 2020-03-27 中山大学 Ciphertext data storage and retrieval method for mobile cloud computing
CN111402892A (en) * 2020-03-23 2020-07-10 郑州智利信信息技术有限公司 Conference recording template generation method based on voice recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107511832A (en) * 2016-06-15 2017-12-26 深圳光启合众科技有限公司 High in the clouds interaction systems and its more sensing type intelligent robots and perception interdynamic method
CN111429638B (en) * 2020-04-13 2021-10-26 重庆匠技智能科技有限公司 Access control method based on voice recognition and face recognition


Also Published As

Publication number Publication date
CN111931484A (en) 2020-11-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220210

Address after: 550000 room 1, floor 11, unit 1, building 11-12, Tianyi International Plaza, No. 33, Changling South Road, Guiyang National High tech Industrial Development Zone, Guiyang City, Guizhou Province

Applicant after: Guizhou Caicaibao Internet Service Co., Ltd.

Address before: 450000 new campus of Zhengzhou University, No.100, science Avenue, Zhengzhou City, Henan Province

Applicant before: Yu Mengli

GR01 Patent grant