CN111832329A - QR two-dimensional code-based one-way cross-network file transmission method - Google Patents
QR two-dimensional code-based one-way cross-network file transmission method

- Publication number: CN111832329A
- Application number: CN202010601820.1A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06K7/1417—2D bar codes (methods for optical code recognition, the method being specifically adapted to the type of code)
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24133—Distances to prototypes (classification techniques based on distances to training or reference patterns)
- G06K7/1439—Methods for optical code recognition including a method step for retrieval of the optical code
Abstract
The invention discloses a QR two-dimensional code-based one-way cross-network transmission method. File information at the sending end is converted into two-dimensional code images, and the receiving end scans the images generated by the sending end with a camera. The two-dimensional code receiving software preprocesses each captured image, computes its integral image and extracts Haar-like features, feeds them into a trained cascade Adaboost classifier, and parses the regions identified by the cascade classifier with the Zbar two-dimensional code detection algorithm to recover the transmitted file. Compared with the existing single-code and fixed-matrix two-dimensional code display modes, the method achieves higher data transmission efficiency and transmission accuracy.
Description
Technical Field
The invention relates to the technical field of communication systems, and in particular to a method for transmitting files across a physically isolated network.
Background
With the rapid development of information digitization, document processing on computers and networked office work have become routine for government agencies, enterprises, and public institutions. The security of data transmission is therefore critical: in most administrative units and enterprises, the office intranet is strictly forbidden from connecting to the external network (the Internet), and the two networks are completely physically isolated. Data exchange between the internal and external networks is typically handled by manually burning discs after approval, which is cumbersome and inefficient, and restricts the efficient use of information resources.
To solve the problem of cross-network file transmission caused by physical network isolation, a QR two-dimensional code-based one-way cross-network transmission method is provided, with two-dimensional code images as the medium: information at the sending end is converted into two-dimensional code images, and the receiving end identifies the information contained in the images, realizing cross-network transmission of information data.
Disclosure of Invention
The invention aims to solve the problem of file transmission between physically isolated networks and to improve file transmission efficiency.
The invention exploits the ability of two-dimensional codes to carry information and selects them as the medium for cross-network transmission. Among the varieties of two-dimensional codes (PDF417, QR code, Han Xin code, MaxiCode, ShotCode, and others), the QR code is chosen as the transmission medium of the invention because it tolerates a wide range of shooting angles, is identified and parsed quickly, has a large capacity, and provides strong error correction. Plain text is treated as a special kind of file: it differs from ordinary files in its encoding mode but is handled identically during transmission, so in the following description of the technical scheme, plain text is subsumed under the general term "file".
The invention is implemented as follows:
The QR two-dimensional code-based one-way cross-network file transmission method proceeds as follows. Firstly, on the sending host, the file to be transmitted is encoded into QR two-dimensional code images: the upper limit of the QR code storage capacity is calculated from the QR code version and error correction level in use, a per-code storage capacity below this limit is set, and the original file information is cut into several unit files of that capacity. Each unit file is encoded to generate a two-dimensional code image; the number of code images per screen frame is determined from the image size and the screen resolution of the sending host; and the QR code images are displayed on the screen in a loop.
Secondly, the resolution and frame rate of the receiving host's camera are configured so that the camera frame rate is higher than the rate at which two-dimensional code images are refreshed on the screen. The camera of the receiving host continuously captures the screen of the sending host until all images have been captured, and the two-dimensional code information contained in each captured image is parsed with the QR decoding method, processing the captures in time order. If one screen image of the sending host contains several two-dimensional codes, each code is parsed and the results are spliced according to the sequence identifiers in the code headers: content that has already been spliced is discarded as a repeat, and content that has not yet been spliced is appended. After each capture on the receiving host, the two-dimensional code receiving software preprocesses the image, computes its integral image and extracts Haar-like features, and inputs them into the trained cascade Adaboost classifier, which identifies and outputs candidate code regions; these regions are then parsed with the Zbar two-dimensional code detection algorithm. After parsing, the sequence identifier of each parsed code is compared with those already received: if a code with the same identifier already exists, the new one is discarded; otherwise its content is spliced in according to the identifier. Finally, the contents parsed from the two-dimensional code images are joined in sequence-identifier order and the identifiers are discarded, yielding the final received file. Fig. 1 is a functional diagram of the QR two-dimensional code one-way cross-network file transmission method based on a cascade classifier. Fig. 2 is a flow chart of the method of the present invention.
The original file information is cut into several unit files according to the QR code storage capacity, as follows: let size be the per-code storage capacity and data the size of the original file, both in bytes, with size smaller than the byte capacity corresponding to the chosen code version and error correction level. The number of unit files obtained after cutting is n = int(data/size) + 1, where int denotes rounding down. Identifiers representing the sequence are added at the first byte of each cut unit file's header in order; each unit is then encoded in the selected mode, and the corresponding QR code image is generated according to the QR code generation rules.
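A minimal Python sketch of this cutting step (illustrative only, not the patent's implementation: the function name and the one-byte sequence identifier are assumptions, and a single identifier byte limits the scheme to 256 unit files):

```python
def split_into_units(data: bytes, size: int):
    """Cut a file into unit files of at most `size` payload bytes,
    each prefixed with a one-byte sequence identifier."""
    n = len(data) // size + 1            # the patent's n = int(data/size) + 1
    units = []
    for i in range(n):
        payload = data[i * size:(i + 1) * size]
        units.append(bytes([i]) + payload)   # sequence identifier + payload
    return units
```

Note that, as stated, n = int(data/size) + 1 yields one empty trailing unit when the file size is an exact multiple of size.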
When the file to be sent is plain text, an encoding mode appropriate to the text content is used, such as numeric, alphanumeric (character), byte, or Chinese-character mode; non-text files are read as a byte stream and encoded in byte mode.
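The mode choice can be sketched as follows (a hedged sketch: the helper name is an assumption, the character set is the standard QR alphanumeric set, and Chinese-character detection is omitted for brevity):

```python
# Character set of the QR alphanumeric encoding mode.
ALNUM = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:")

def choose_qr_mode(payload):
    """Pick a QR encoding mode for the payload, as described above."""
    if isinstance(payload, bytes):       # non-text file read as a byte stream
        return "byte"
    if payload.isdigit():
        return "numeric"
    if all(c in ALNUM for c in payload):
        return "alphanumeric"
    return "byte"                        # arbitrary text falls back to byte mode
```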
The two-dimensional code image recognition comprises four steps: Haar-like feature extraction, Adaboost strong classifier training, cascade classifier training, and two-dimensional code image recognition. Fig. 3 shows the flow of the cascade-classifier-based two-dimensional code identification algorithm.
For Haar-like feature extraction, the Haar-like feature value of a pixel of the two-dimensional code image is obtained by an integral-image operation on the pixel values: the feature value at each pixel position is the sum of all pixel values in the region above and to the left of that point. FIG. 4 is a schematic diagram of a Haar-like feature rectangle. Taking the zero point of the pixel coordinate system at the top-left corner, the Haar-like feature value is computed as
$$ii(x, y) = \sum_{k \le x,\, j \le y} i(k, j) \qquad (1)$$
where ii(x, y) is the sum of all pixel values in the upper-left region of the pixel coordinate (x, y), and i(k, j) is the pixel value at coordinate (k, j).
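Equation (1) can be computed in a single pass with the usual recurrence; a pure-Python sketch (illustrative only; production code would typically operate on NumPy arrays or use OpenCV's cv2.integral):

```python
def integral_image(img):
    """ii[y][x] = sum of img[j][k] for all k <= x and j <= y (Eq. 1)."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ii[y][x] = (img[y][x]
                        + (ii[y - 1][x] if y > 0 else 0)            # sum above
                        + (ii[y][x - 1] if x > 0 else 0)            # sum to the left
                        - (ii[y - 1][x - 1] if y > 0 and x > 0 else 0))  # overlap
    return ii
```

With the integral image in hand, the pixel sum of any rectangle needed by a Haar-like feature can be read off in four lookups, which is what makes the feature extraction fast.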
For Adaboost strong classifier training, a weak classifier is expressed as h(x) ∈ {−1, +1}, where x is the input sample; an output of +1 denotes the positive class and −1 the negative class. The Adaboost strong classifier training steps are as follows:
expression of a sample set as X { (X)1,y1),(x2,y2),(x3,y3),...(xn,yn) Where the index i denotes the ith sample, xiA feature vector, y, representing the sampleiClass label, y, representing the sampleiThe value is +1 or-1, and +1 represents xiFor positive samples, -1 denotes xiIs a negative sample;
initializing weights, assuming that the number of training samples in the training set is N, each sample is initially given the same weight, i.e. 1/N,
the subscript 11, 12, …, 1N in the formula denotes the 1 st, 2 … th, N samples in the initial training set, w1iRepresents the initial weight of the ith sample in the initial training set, D1Indicating that the training set weight distribution is initialized.
Thirdly, with the number of iterations set to T, each iteration computes the error rate of the weak classifier corresponding to each feature under the current sample weights. Let t denote the current training round and $D_t$ the current weight distribution of the training samples; the error rate $\varepsilon_t$ of the weak classifier at the t-th iteration is computed as
$$\varepsilon_t = \sum_{i=1}^{N} w_{ti}\, I\big(h_t(x_i) \ne y_i\big) \qquad (3)$$
where $w_{ti}$ is the weight of the i-th sample at iteration t, $x_i$ is the feature vector of the sample, $h_t$ is the weak classifier, and I is the indicator function, which outputs 1 when its argument is true and 0 when it is false.
Fourthly, compute the weight coefficient $\alpha_t$ of the current weak classifier $h_t(x)$:
$$\alpha_t = \frac{1}{2} \ln \frac{1 - \varepsilon_t}{\varepsilon_t} \qquad (4)$$
where $\varepsilon_t$ is the weak classifier error rate solved in the third step. The sample weight distribution is then updated as
$$w_{t+1,i} = \frac{w_{ti}}{Z_t} \exp\big(-\alpha_t\, y_i\, h_t(x_i)\big), \qquad Z_t = \sum_{i=1}^{N} w_{ti} \exp\big(-\alpha_t\, y_i\, h_t(x_i)\big) \qquad (5)$$
where $Z_t$ is a normalizing constant, $\alpha_t$ is the weight coefficient of the weak classifier $h_t(x)$, $x_i$ is the feature vector of a sample, and $y_i$ is its class label;
fifthly, as the iteration times are increased, the error rate is smaller and smaller, and finally the Adaboost strong classifier H is obtainedfinal(x) Is composed of
In the formula, alphatIs a weak classifier ht(X) weight coefficient.
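The five steps above can be condensed into a pure-Python sketch, using one-dimensional decision stumps as the weak classifiers (illustrative: the names are assumptions, and the patent's weak classifiers act on Haar-like features rather than raw scalars):

```python
import math

def stump(threshold, polarity):
    # Weak classifier h(x) in {-1, +1}: sign depends on which side of the
    # threshold x falls, flipped by polarity.
    return lambda x: polarity * (1 if x > threshold else -1)

def train_adaboost(samples, labels, weak_classifiers, T):
    n = len(samples)
    w = [1.0 / n] * n                          # step 2: uniform initial weights
    ensemble = []                              # pairs (alpha_t, h_t)
    for _ in range(T):                         # step 3: T boosting rounds
        # choose the weak classifier with the lowest weighted error rate
        best_err, best_h = None, None
        for h in weak_classifiers:
            err = sum(wi for wi, x, y in zip(w, samples, labels) if h(x) != y)
            if best_err is None or err < best_err:
                best_err, best_h = err, h
        eps = max(best_err, 1e-12)             # guard against log(0)
        alpha = 0.5 * math.log((1 - eps) / eps)    # step 4: weight coefficient
        # step 4 (cont.): re-weight samples so that mistakes gain weight
        w = [wi * math.exp(-alpha * y * best_h(x))
             for wi, x, y in zip(w, samples, labels)]
        z = sum(w)                             # Z_t, the normalizing constant
        w = [wi / z for wi in w]
        ensemble.append((alpha, best_h))
    return ensemble

def strong_classify(ensemble, x):
    # step 5: H_final(x) = sign(sum_t alpha_t * h_t(x))
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```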
For cascade classifier training, the minimum detection rate (the recognition rate on positive samples) and the maximum false-alarm rate (the misrecognition rate on negative samples) are set, and the strong classifiers are trained in sequence; training of the next layer of strong classifier proceeds only when the detection rate and false-alarm rate of all strong classifiers so far meet the set values. The resulting cascade classifier is formed by connecting the trained Adaboost strong classifiers in series. FIG. 5 is a schematic diagram of the cascade classifier.
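The series connection can be sketched as an early-rejecting chain: a window is reported as a candidate two-dimensional code only if every stage accepts it (a sketch with hypothetical stage tests standing in for trained Adaboost strong classifiers):

```python
def cascade_classify(stages, window):
    """Pass `window` through the stages in series; reject at the first
    stage that says no, accept only if all stages agree."""
    for stage in stages:
        if not stage(window):
            return False        # most non-code windows exit early here
    return True

# Hypothetical stages keyed on simple statistics of a grayscale window
# (real stages would be trained Adaboost strong classifiers).
stages = [
    lambda win: min(win) < 64,        # contains dark modules
    lambda win: max(win) > 192,       # contains light modules
    lambda win: len(set(win)) > 2,    # enough intensity variation
]
```

Early rejection is what makes the cascade fast: the cheap first stages discard most background windows before the later, more expensive stages run.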
For two-dimensional code image recognition, the Haar-like features extracted from the current two-dimensional code image are input into the previously trained cascade Adaboost classifier, and the recognition result is obtained and output by the cascade classifier.
The QR code decoding effect is positively correlated with the resolving power of the camera: the higher the resolution, the clearer the captured image and the stronger the QR code recognition. The camera frame rate is the number of images the camera captures per second.
The invention has the beneficial effects that:
the two-dimension code generation method disclosed by the invention is operated on a sending end host, the two-dimension code sending software divides a text/file to be transmitted into specific data blocks, generates a two-dimension code corresponding to each data block, and then sequentially displays the two-dimension codes corresponding to the data blocks on a software main interface in a matrix form. And then, two-dimensional code receiving software positioned on the receiving end host refreshes and captures a two-dimensional code image of the display area through the camera. The method can set batch generation display and capture acquisition of a plurality of two-dimensional codes, has higher data transmission efficiency compared with the existing single two-dimensional code display and fixed matrix two-dimensional code display modes, and can flexibly configure single-screen data capacity according to the hardware configuration of the camera.
The two-dimensional code identification method of the invention runs on the receiving host: the two-dimensional code receiving software captures code images with the camera, and the captured images are detected and identified with the trained cascade AdaBoost classifier, achieving fast multi-code detection and further improving detection accuracy.
Drawings
FIG. 1 is a functional diagram of a QR two-dimensional code one-way cross-network file transmission method based on a cascade classifier;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is a flow chart of a two-dimensional code recognition algorithm based on a cascade classifier;
FIG. 4 is a schematic diagram of a Haar-like feature rectangle;
FIG. 5 is a schematic diagram of a cascade classifier;
FIG. 6 is a diagram illustrating text transmission by a sender;
fig. 7 is a schematic diagram of a QR two-dimensional code image generated by a sending end;
fig. 8 is a schematic diagram of a receiving end receiving and analyzing a QR two-dimensional code image to generate a text.
Detailed Description
The technical solution of the present invention will be described in detail with reference to a specific embodiment.
As shown in fig. 1, at the sending end the sending host calculates the upper-limit capacity max of QR code storage from the QR code version and error correction level in use, sets the storage capacity of a single QR code to size (with size smaller than max), and cuts the original file accordingly. If the size of the original file is data, the number of unit files obtained after cutting is n = int(data/size) + 1, where int denotes rounding down and size, max, and data are in bytes. Identifiers representing the sequence are added at the first byte of each cut unit file's header in order; each unit is then encoded in the selected mode, the corresponding QR code image is generated according to the QR code generation rules, and the QR code images are displayed on the screen in a loop.
When the file is plain text, an encoding mode matching the text content is used, such as numeric, alphanumeric, byte, or Chinese-character mode; for file information, the file is read as a byte stream and encoded in byte mode.
After the two-dimensional code images are generated, the number of images per screen frame is determined from the image size and the screen resolution of the sending host. If the screen resolution of the sending host is h × w and the side length of a code image is s pixels, the number of code images per frame is m = ⌊h/s⌋ × ⌊w/s⌋. The codes are drawn on the screen of the sending host in the order of their header identifiers, rasterwise from top-left to bottom-right. After a frame has been displayed for 2 seconds, the next frame is refreshed onto the screen, until all code images have been displayed.
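The layout computation can be sketched as follows (illustrative; the function and variable names are assumptions):

```python
def frame_layout(h: int, w: int, s: int):
    """Codes per frame m = floor(h/s) * floor(w/s), plus the top-left
    pixel coordinate of each code in header-identifier (raster) order."""
    rows, cols = h // s, w // s
    m = rows * cols
    positions = [((k // cols) * s, (k % cols) * s) for k in range(m)]
    return m, positions
```

For a 1080 × 1920 screen and 400-pixel codes this gives 2 rows of 4 codes, i.e. 8 codes per frame.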
At the receiving end, the resolution and frame rate of the receiving host's camera are configured so that the camera frame rate is higher than the rate at which two-dimensional code images are refreshed on the screen. The camera of the receiving host continuously captures the screen of the sending host until all images have been captured, and the two-dimensional code information contained in each captured image is parsed with the QR decoding method, processing the captures in time order. If one screen image of the sending host contains several two-dimensional codes, each code is parsed and the results are spliced according to the sequence identifiers in the code headers: content that has already been spliced is discarded as a repeat, and content that has not yet been spliced is appended. After each capture on the receiving host, the two-dimensional code receiving software preprocesses the image, computes its integral image and extracts Haar-like features, and inputs them into the trained cascade Adaboost classifier, which identifies and outputs candidate code regions; these regions are then parsed with the Zbar two-dimensional code detection algorithm. After parsing, the sequence identifier of each parsed code is compared with those already received: if a code with the same identifier already exists, the new one is discarded; otherwise its content is spliced in according to the identifier. Finally, the contents parsed from the two-dimensional code images are joined in sequence-identifier order and the identifiers are discarded, yielding the final received file.
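The splice-and-discard logic can be sketched as follows (illustrative, assuming each decoded payload begins with a one-byte sequence identifier):

```python
def reassemble(decoded_units, expected_count):
    """Collect decoded unit files, discard repeated sequence identifiers,
    and join the payloads in identifier order once all have arrived."""
    received = {}
    for unit in decoded_units:
        seq, payload = unit[0], unit[1:]   # identifier byte + content
        if seq in received:
            continue                       # already spliced: discard repeat
        received[seq] = payload
    if len(received) < expected_count:
        return None                        # some codes still missing
    return b"".join(received[i] for i in range(expected_count))
```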
The two-dimensional code image recognition comprises four steps: Haar-like feature extraction, Adaboost strong classifier training, cascade classifier training, and two-dimensional code image recognition. For Haar-like feature extraction, the Haar-like feature value of a pixel of the two-dimensional code image is obtained by an integral-image operation on the pixel values: the feature value at each pixel position is the sum of all pixel values in the region above and to the left of that point. Taking the zero point of the pixel coordinate system at the top-left corner, the Haar-like feature value is computed as
$$ii(x, y) = \sum_{k \le x,\, j \le y} i(k, j) \qquad (7)$$
where ii(x, y) is the sum of all pixel values in the upper-left region of the pixel coordinate (x, y), and i(k, j) is the pixel value at coordinate (k, j).
For Adaboost strong classifier training, a weak classifier is expressed as h(x) ∈ {−1, +1}, where x is the input sample; an output of +1 denotes the positive class and −1 the negative class. The Adaboost strong classifier training steps are as follows:
expression of a sample set as X { (X)1,y1),(x2,y2),(x3,y3),...(xn,yn) Where the index i denotes the ith sample, xiA feature vector, y, representing the sampleiClass label, y, representing the sampleiThe value is +1 or-1, and +1 represents xiFor positive samples, -1 denotes xiIs a negative sample;
initializing weights, assuming that the number of training samples in the training set is N, each sample is initially given the same weight, i.e. 1/N,
the subscript 11, 12, …, 1N in the formula denotes the 1 st, 2 … th, N samples in the initial training set, w1iRepresents the initial weight of the ith sample in the initial training set, D1Indicating that the training set weight distribution is initialized.
Thirdly, with the number of iterations set to T, each iteration computes the error rate of the weak classifier corresponding to each feature under the current sample weights. Let t denote the current training round and $D_t$ the current weight distribution of the training samples; the error rate $\varepsilon_t$ of the weak classifier at the t-th iteration is computed as
$$\varepsilon_t = \sum_{i=1}^{N} w_{ti}\, I\big(h_t(x_i) \ne y_i\big) \qquad (9)$$
where $w_{ti}$ is the weight of the i-th sample at iteration t, $x_i$ is the feature vector of the sample, $h_t$ is the weak classifier, and I is the indicator function, which outputs 1 when its argument is true and 0 when it is false.
Fourthly, compute the weight coefficient $\alpha_t$ of the current weak classifier $h_t(x)$:
$$\alpha_t = \frac{1}{2} \ln \frac{1 - \varepsilon_t}{\varepsilon_t} \qquad (10)$$
where $\varepsilon_t$ is the weak classifier error rate solved in the third step. The sample weight distribution is then updated as
$$w_{t+1,i} = \frac{w_{ti}}{Z_t} \exp\big(-\alpha_t\, y_i\, h_t(x_i)\big), \qquad Z_t = \sum_{i=1}^{N} w_{ti} \exp\big(-\alpha_t\, y_i\, h_t(x_i)\big) \qquad (11)$$
where $Z_t$ is a normalizing constant, $\alpha_t$ is the weight coefficient of the weak classifier $h_t(x)$, $x_i$ is the feature vector of a sample, and $y_i$ is its class label;
fifthly, as the iteration times are increased, the error rate is smaller and smaller, and finally the obtained Adaboost score is strongClass device Hfinal(x) Is composed of
In the formula, alphatIs a weak classifier ht(X) weight coefficient.
For cascade classifier training, the minimum detection rate (the recognition rate on positive samples) and the maximum false-alarm rate (the misrecognition rate on negative samples) are set, and the strong classifiers are trained in sequence; training of the next layer of strong classifier proceeds only when the detection rate and false-alarm rate of all strong classifiers so far meet the set values. The resulting cascade classifier is formed by connecting the trained Adaboost strong classifiers in series.
For two-dimensional code image recognition, the Haar-like features extracted from the current two-dimensional code image are input into the previously trained cascade Adaboost classifier, and the recognition result is obtained and output by the cascade classifier.
After the image of each frame has been parsed, all the decoded contents are merged together and stored. If the transmitted information is text, the stored content is the text itself. If the transmitted information is a file, the file was read as a byte stream during encoding, so the decoded content is likewise a byte stream; stored locally, this byte stream is the file to be transmitted.
Fig. 6 is a schematic diagram of the sending end transmitting text, fig. 7 of the QR two-dimensional code images generated by the sending end, and fig. 8 of the receiving end receiving and parsing the QR code images to recover the text. Verification shows that the text content received by the receiving end is completely consistent with the text content sent by the sending end.
During text transmission, the task manager shows that CPU usage is 0% to 3% when no text is being sent and 2% to 7% while sending, so the invention increases CPU usage by about 3%. Memory usage depends on the size of the text to be sent: the larger the text, the more memory is occupied.
During text reception, the task manager shows that CPU usage is 0% to 2% and memory usage 2.01 GB when no text is being received; while receiving, CPU usage is 18% to 24% and memory usage 2.19 GB, so CPU usage increases by about 22% and memory usage by 0.19 GB.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of the present specification.
The embodiments above express only several embodiments of the present application; their description is specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within its scope of protection. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (4)
1. A one-way cross-network file transmission method based on QR two-dimensional codes is characterized in that:
firstly, on the sending host, the file to be transmitted is encoded into QR two-dimensional code images: the upper limit of the QR code storage capacity is calculated from the QR code version and error correction level in use, a storage capacity below this limit is set, and the original file information is cut into several unit files of that capacity; each unit file is encoded to generate a two-dimensional code image, the number of code images per screen frame is determined from the image size and the screen resolution of the sending host, and the QR code images are displayed on the screen in a loop;
second, the resolution and frame rate of the receiving host's camera are configured so that the camera's frame rate is higher than the rate at which two-dimensional code images are refreshed on the screen; the camera of the receiving host continuously captures images of the sending host's screen until all images have been captured, and the two-dimensional code in each captured image is decoded in chronological order according to the two-dimensional code decoding method; if one screen image of the sending host contains several two-dimensional codes at once, they are parsed and spliced according to the head sequence identifiers of the two-dimensional codes; content that has already been spliced is discarded as a repeat, while content not yet spliced is appended; after the camera on the receiving host captures an image, the two-dimensional code receiving software preprocesses it, computes its integral image, extracts Haar-like features, and feeds them into a trained cascade Adaboost classifier, which outputs a recognition result; the image recognized by the cascade classifier is then parsed with the Zbar two-dimensional code detection algorithm, and the parsed content is spliced with repeat detection by sequence identifier: if a two-dimensional code with the same identifier already exists, the new one is discarded; otherwise it is spliced in by its sequence identifier; finally, the contents parsed from the two-dimensional code images are joined in order of their sequence identifiers and the identifiers are discarded, yielding the final received file.
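The receive-side splicing and de-duplication logic of claim 1 can be sketched as follows. This is a minimal illustration, assuming each decoded payload carries a single-byte sequence identifier as its first byte (the claim only requires a head sequence identifier, not this exact format):

```python
def reassemble(decoded_chunks):
    """Splice decoded QR payloads by their leading sequence identifier.

    Each chunk is assumed to be bytes whose first byte is the sequence
    number (a simplification of the claim's head sequence identifier).
    Repeats captured across overlapping frames are discarded.
    """
    received = {}  # sequence id -> payload (without the identifier byte)
    for chunk in decoded_chunks:
        seq = chunk[0]
        if seq in received:      # already spliced: discard the repeat
            continue
        received[seq] = chunk[1:]
    # join in sequence order, discarding the identifiers
    return b"".join(received[seq] for seq in sorted(received))
```

Because frames are captured faster than the screen refreshes, the same code is decoded many times; the dictionary keyed by sequence identifier makes discarding those repeats trivial.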
2. The QR two-dimensional code-based one-way cross-network file transmission method according to claim 1, wherein cutting the original file into a plurality of unit files according to the QR two-dimensional code storage capacity comprises: let the QR two-dimensional code storage capacity be size and the original file size be data, both in bytes; the number of unit files after cutting is n = int(data/size) + 1, where int denotes rounding down, and size is smaller than the byte capacity of a two-dimensional code at the chosen version and error-correction level; an identifier representing the sequence is added at the first byte of each cut unit file's header in turn, each unit file is then encoded in the selected encoding mode, and the corresponding QR two-dimensional code image is generated according to the QR two-dimensional code generation rules.
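The cutting formula of claim 2 can be sketched as below. The one-byte sequence identifier is an assumed format; note that the formula as stated yields one extra (empty) unit when data is an exact multiple of size, and the sketch follows the formula faithfully:

```python
def split_file(data: bytes, size: int):
    """Cut the original file into unit files of at most `size` bytes,
    each prefixed with a one-byte sequence identifier (an assumed
    format; the claim only requires an identifier at the first byte).

    Per claim 2, n = int(data/size) + 1, with int() rounding down.
    """
    n = len(data) // size + 1          # number of unit files per the claim
    units = []
    for i in range(n):
        payload = data[i * size:(i + 1) * size]
        units.append(bytes([i]) + payload)
    return units
```

Each resulting unit file is then encoded into one QR two-dimensional code image; the receiver inverts the process by sorting on the first byte.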
3. The QR two-dimensional code-based one-way cross-network file transmission method according to claim 1, wherein: when the transmitted file is plain text, the corresponding numeric, alphanumeric, byte, or Chinese-character encoding mode is adopted according to the text content; a non-text file is read as a byte stream and encoded in byte mode.
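The mode selection of claim 3 can be sketched as a simple charset check. The mode names follow standard QR terminology; mapping the claim's "Chinese character coding" to the QR kanji mode's CJK range is an assumption of this sketch:

```python
# Characters representable in QR alphanumeric mode (per the QR spec)
ALPHANUMERIC = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:")

def choose_qr_mode(payload) -> str:
    """Pick the densest QR encoding mode that can represent the payload.

    Non-text files are read as byte streams and always use byte mode,
    as claim 3 requires.
    """
    if isinstance(payload, bytes):
        return "byte"                # byte-stream (non-text) files
    if payload.isdigit():
        return "numeric"             # densest: digits only
    if set(payload) <= ALPHANUMERIC:
        return "alphanumeric"        # uppercase letters, digits, few symbols
    if all("\u4e00" <= ch <= "\u9fff" for ch in payload):
        return "kanji"               # CJK text (claim's Chinese-character mode)
    return "byte"                    # fallback for everything else
```

Denser modes pack more characters per QR module, so choosing the tightest applicable mode raises the effective per-frame throughput.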
4. The QR two-dimensional code-based one-way cross-network file transmission method according to claim 1, wherein two-dimensional code image recognition comprises four steps: Haar-like feature extraction, Adaboost strong-classifier training, cascade-classifier training, and image recognition;
for Haar-like feature extraction, the Haar-like feature values of the two-dimensional code image are obtained by an integral-image operation on its pixel values: with the origin of the pixel coordinate system at the upper-left corner, the feature value at each pixel position is the sum of all pixel values in the region above and to the left of that point, computed as

ii(x, y) = Σ_{k ≤ x, j ≤ y} i(k, j)

where ii(x, y) is the sum of all pixel values in the upper-left region of pixel coordinate (x, y), and i(k, j) is the pixel value at coordinate (k, j);
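The integral image above is built in a single pass with the standard recurrence; a minimal pure-Python sketch (list-of-rows image representation assumed):

```python
def integral_image(img):
    """Compute ii(x, y) = sum of i(k, j) over all k <= x, j <= y.

    `img` is a list of rows of pixel values.  The recurrence
    ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1)
    fills the table in one pass, after which the sum over any
    Haar-like rectangle can be read off in O(1).
    """
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ii[y][x] = (img[y][x]
                        + (ii[y - 1][x] if y else 0)
                        + (ii[y][x - 1] if x else 0)
                        - (ii[y - 1][x - 1] if x and y else 0))
    return ii
```

This constant-time rectangle sum is what makes exhaustive Haar-like feature evaluation over many windows feasible.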
for Adaboost strong-classifier training, a weak classifier is expressed as h(x) ∈ {−1, +1}, where x is an input sample and the output is +1 for the positive class or −1 for the negative class;
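A minimal Adaboost round with ±1-valued threshold stumps on one-dimensional features can be sketched as follows. This is an illustrative trainer under those assumptions, not the patent's implementation:

```python
import math

def adaboost_train(X, y, rounds=10):
    """Minimal AdaBoost on 1-D features with threshold stumps.

    Each weak classifier is h(x) = pol if x >= thr else -pol, outputting
    +1 (positive class) or -1 (negative class) as in claim 4.  Returns a
    list of (alpha, thr, pol) terms whose weighted vote forms the strong
    classifier.
    """
    n = len(X)
    w = [1.0 / n] * n                        # uniform initial sample weights
    strong = []
    for _ in range(rounds):
        best = None
        # exhaustively pick the stump with the lowest weighted error
        for thr in X:
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi >= thr else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-10)                # guard the log below
        if err >= 0.5:                       # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        strong.append((alpha, thr, pol))
        # re-weight: boost the misclassified samples
        for i in range(n):
            h = pol if X[i] >= thr else -pol
            w[i] *= math.exp(-alpha * y[i] * h)
        s = sum(w)
        w = [wi / s for wi in w]
    return strong

def strong_classify(strong, x):
    """Sign of the alpha-weighted vote of the weak classifiers."""
    score = sum(a * (p if x >= t else -p) for a, t, p in strong)
    return 1 if score >= 0 else -1
```

The alpha weights give accurate stumps a larger say in the final vote, which is the core idea the claim's strong classifier relies on.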
for cascade-classifier training, after setting the minimum detection rate (the recognition rate on positive samples) and the maximum false-alarm rate (the misrecognition rate on negative samples), the strong classifiers are trained in turn; training proceeds to the next-layer strong classifier only when all strong classifiers so far meet the set detection and false-alarm rates, and the final cascade classifier is formed by connecting the trained Adaboost strong classifiers in series;
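The cascade training scheme can be sketched as the skeleton below. The `train_stage` and `evaluate` callbacks are assumptions of this sketch (standing in for the per-stage Adaboost trainer and the validation pass), not names from the patent:

```python
def train_cascade(train_stage, evaluate, target_fa=1e-3,
                  stage_min_det=0.99, stage_max_fa=0.5):
    """Cascade training skeleton following the scheme in claim 4.

    `train_stage(cascade)` trains one Adaboost strong classifier given
    the stages so far; `evaluate(cascade)` returns (detection_rate,
    false_alarm_rate) for the cascade on a validation set.  A stage is
    accepted only when it meets the set minimum detection rate and
    maximum false-alarm rate; accepted stages are chained in series
    until the overall false-alarm rate reaches the target.
    """
    cascade = []
    overall_fa = 1.0
    while overall_fa > target_fa:
        stage = train_stage(cascade)
        cascade.append(stage)
        det, fa = evaluate(cascade)
        if det < stage_min_det or fa > stage_max_fa:
            cascade.pop()        # stage fails the set values: retrain it
            continue
        overall_fa *= fa         # stage rates multiply across the chain
    return cascade
```

Because each accepted stage multiplies the overall false-alarm rate by at most `stage_max_fa`, a chain of individually weak stages reaches a very low combined false-alarm rate while keeping the detection rate high.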
for image recognition, the Haar-like features extracted from the current two-dimensional code image are input into the previously trained cascade Adaboost classifier, and the recognition result of the cascade classifier is output.
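At recognition time the cascade is evaluated serially with early rejection; a minimal sketch, assuming each stage is a boolean predicate over the extracted features:

```python
def cascade_classify(cascade, features):
    """Run extracted Haar-like features through the serial cascade.

    A window is reported as a two-dimensional code candidate only if
    every strong classifier in the chain accepts it; any rejection
    short-circuits, which is what makes cascade evaluation fast.
    `cascade` is a list of predicate functions (an assumption here).
    """
    for stage in cascade:
        if not stage(features):
            return False         # rejected by this stage: stop early
    return True                  # accepted by all stages in series
```

Only windows that survive every stage are handed to the Zbar decoder, so the cheap early stages discard most of each captured frame.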
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010601820.1A CN111832329A (en) | 2020-06-28 | 2020-06-28 | QR two-dimensional code-based one-way cross-network file transmission method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111832329A | 2020-10-27 |
Family
ID=72899375
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010601820.1A Pending CN111832329A (en) | 2020-06-28 | 2020-06-28 | QR two-dimensional code-based one-way cross-network file transmission method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111832329A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112686332A (en) * | 2021-01-12 | 2021-04-20 | 代唯 | AI image recognition-based text-based intelligence-creating reading method and system |
CN115116577A (en) * | 2022-07-26 | 2022-09-27 | 青岛美迪康数字工程有限公司 | Quality control data transmission method and device based on two-dimensional code |
CN115116577B (en) * | 2022-07-26 | 2022-11-29 | 青岛美迪康数字工程有限公司 | Quality control data transmission method and device based on two-dimensional code |
CN117614947A (en) * | 2023-11-20 | 2024-02-27 | 广州新赫信息科技有限公司 | Identification and authentication method and system for secure cross-network service |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103150552A (en) * | 2013-02-06 | 2013-06-12 | 湖北微驾技术有限公司 | Driving training management method based on people counting |
CN103268461A (en) * | 2013-04-25 | 2013-08-28 | 浙江成功软件开发有限公司 | Intranet-extranet physical isolation data exchange method based on QR (quick response) code |
CN103347097A (en) * | 2013-07-26 | 2013-10-09 | 国家电网公司 | Method and system for file transmission |
CN103346960A (en) * | 2013-06-18 | 2013-10-09 | 万达信息股份有限公司 | Recognizable figure-based data transmission method across networks |
CN105049425A (en) * | 2015-06-28 | 2015-11-11 | 南威软件股份有限公司 | Physical isolation transmission method based on two-dimension code |
CN108985093A (en) * | 2018-06-26 | 2018-12-11 | 江苏擎天信息科技有限公司 | A kind of security type meeting management system based under physical isolation |
CN109409298A (en) * | 2018-10-30 | 2019-03-01 | 哈尔滨理工大学 | A kind of Eye-controlling focus method based on video processing |
CN110417720A (en) * | 2019-02-27 | 2019-11-05 | 国家电网公司东北分部 | A method of carrying out information transmission in the case where physical isolation |
CN111062302A (en) * | 2019-12-12 | 2020-04-24 | 中电鸿信信息科技有限公司 | Internal and external network physical isolation data exchange method based on combination of icon recognition and OCR (optical character recognition) |
2020-06-28: Application CN202010601820.1A filed (published as CN111832329A, status Pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111832329A (en) | QR two-dimensional code-based one-way cross-network file transmission method | |
JP3191057B2 (en) | Method and apparatus for processing encoded image data | |
CN111476067A (en) | Character recognition method and device for image, electronic equipment and readable storage medium | |
CN110781460A (en) | Copyright authentication method, device, equipment, system and computer readable storage medium | |
CN111444349B (en) | Information extraction method, information extraction device, computer equipment and storage medium | |
JP2014232533A (en) | System and method for ocr output verification | |
CN110738262B (en) | Text recognition method and related product | |
CN113254654B (en) | Model training method, text recognition method, device, equipment and medium | |
CN109241325B (en) | Large-scale face retrieval method and device based on depth features | |
CN113762050B (en) | Image data processing method, device, equipment and medium | |
CN113569833A (en) | Text document-based character recognition method, device, equipment and storage medium | |
CN113094478B (en) | Expression reply method, device, equipment and storage medium | |
CN114612743A (en) | Deep learning model training method, target object identification method and device | |
CN113627207A (en) | Bar code identification method and device, computer equipment and storage medium | |
CN112580108A (en) | Signature and seal integrity verification method and computer equipment | |
CN113111880A (en) | Certificate image correction method and device, electronic equipment and storage medium | |
CN115240203A (en) | Service data processing method, device, equipment and storage medium | |
CN114821613A (en) | Extraction method and system of table information in PDF | |
CN113011254B (en) | Video data processing method, computer equipment and readable storage medium | |
KR102043693B1 (en) | Machine learning based document management system | |
CN112132026A (en) | Animal identification method and device | |
CN108460811B (en) | Face image processing method and device and computer equipment | |
CN110727743A (en) | Data identification method and device, computer equipment and storage medium | |
CN110674678A (en) | Method and device for identifying sensitive mark in video | |
CN114677700A (en) | Identification method and device of identity, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20201027 |