CN111311241A - Two-dimensional code reading method and device based on scene perception - Google Patents
- Publication number: CN111311241A
- Application number: CN201811513262.2A
- Authority
- CN
- China
- Prior art keywords
- payment
- dimensional code
- image
- mobile payment
- payment system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/30—Payment architectures, schemes or protocols characterised by the use of specific devices or networks
- G06Q20/32—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
- G06Q20/327—Short range or proximity payments by means of M-devices
- G06Q20/3274—Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, being displayed on the M-device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/146—Methods for optical code recognition the method including quality enhancement steps
Abstract
The invention provides a two-dimensional code reading method based on scene perception. Mobile payment devices arranged in a plurality of scenes, each provided with a plurality of sensors and supporting cooperative use of an input device and a printing device, are networked and connected with an electronic terminal and a server cluster. Parameter data are acquired to generate a two-dimensional code image. When a payment event is triggered, both code-scanning payment and contactless card payment are supported, and when the first ranging sampling point is detected, the reading operation is completed in a deduct-and-scan manner. The method applies the Otsu algorithm to perform coarse segmentation and secondary segmentation of the two-dimensional code image, and completes reading and payment when several conditions, such as the triggering of a payment event, are detected. Based on the perception of the scene by the formed end-to-end generative model, the method flexibly and responsively supports code-scanning payment and contactless card payment in a networked environment across a variety of scenes. The disclosure also provides a two-dimensional code reading device based on scene perception.
Description
Technical Field
The disclosure relates to the technical fields of mobile payment and image recognition, and in particular to a two-dimensional code reading method and device based on scene perception.
Background
In the prior art, the checkout modes applied in most scenes are POS-machine card swiping, cash, and the like. In a few scenes, the payer opens an electronic terminal, scans a static two-dimensional code provided at the scene (during the production and image acquisition of static payment images, impurities and interference are inevitably mixed into the image, so the image suffers from noise, blur and uneven gray levels), reads the information of the two-dimensional code, and completes the payment operation. These checkout modes are inflexible, and because the real environments of the various scenes differ, the mobile payment device cannot accurately perceive and adapt to different environments. Higher requirements are therefore placed on the accuracy, speed and flexibility of two-dimensional code reading and even of the payment itself.
Disclosure of Invention
In order to solve the technical problems in the prior art, the disclosed embodiments provide a two-dimensional code reading method and device based on scene perception. A plurality of sensors are deployed in a plurality of scenes, and a convolutional-recurrent neural network architecture, combining the convolutional neural networks of computer vision with the recurrent neural networks of neural machine translation, is adopted to form a mobile payment device with an end-to-end generative model. The mobile payment devices arranged in the plurality of scenes, which support cooperative use of an input device and a printing device, are connected with an electronic terminal controlling them and with a server cluster. Data of a plurality of parameters sent by the server cluster and suitable for reading by the mobile payment device are acquired in real time, and a two-dimensional code image suitable for the device is generated from these data. Continuous ranging data collected by at least one ranging sensor for the generated two-dimensional code image are grouped, by the adjacency of their sequence, into a plurality of data groups each containing the same number of ranging data points, and the points in each group are classified into first ranging sampling points and second ranging sampling points. When a payment event is detected, it is judged whether the initial system of the mobile payment device supports a multi-form payment system, which supports both code-scanning payment and contactless card payment and comprises a closed payment system and an open payment system: the closed payment system presets a single scene and a single payment position combined with a payment scheme formed by a virtual or physical stored-value card, while the open payment system presets at least two scenes and at least two payment positions combined with such a scheme. If the multi-form payment system is supported and a first ranging sampling point is detected, the reading operation completed by the user in a deduct-and-scan manner is started and received.
In a first aspect, an embodiment of the present disclosure provides a two-dimensional code reading method based on scene perception, including the following steps: deploying a plurality of sensors in a plurality of scenes and adopting a convolutional-recurrent neural network architecture, combining the convolutional neural networks of computer vision with the recurrent neural networks of neural machine translation, to form a mobile payment device with an end-to-end generative model; connecting the mobile payment devices arranged in the plurality of scenes, which support cooperative use of an input device and a printing device, with an electronic terminal controlling them and with a server cluster, where each such mobile payment device comprises the plurality of sensors, used to collect and monitor the environmental parameters of the devices arranged in the plurality of scenes; acquiring in real time data of a plurality of parameters sent by the server cluster and suitable for reading by the mobile payment device, and generating from these data a two-dimensional code image suitable for the device; grouping continuous ranging data collected by at least one ranging sensor for the generated two-dimensional code image, by the adjacency of their sequence, into a plurality of data groups each containing the same number of ranging data points, and classifying the points in each group into first ranging sampling points and second ranging sampling points; when a payment event is detected, judging whether the initial system of the mobile payment device supports a multi-form payment system, where the multi-form payment system supports both code-scanning payment and contactless card payment and comprises a closed payment system and an open payment system, the closed payment system presetting a single scene and a single payment position combined with a payment scheme formed by a virtual or physical stored-value card, and the open payment system presetting at least two scenes and at least two payment positions combined with such a scheme; and, if the multi-form payment system is supported and a first ranging sampling point is detected, receiving the reading operation completed by the user in a deduct-and-scan manner.
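The grouping and classification of ranging data described in the first aspect can be sketched as follows. The group size, the distance threshold separating "first" (near) from "second" (far) sampling points, and the function name are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def classify_ranging_data(samples, group_size=4, threshold=150.0):
    """Group a continuous ranging sequence into equal-sized groups of
    adjacent samples (by their order in the sequence), then split each
    group's points into first (near, below threshold) and second (far)
    ranging sampling points. Group size and threshold are assumptions."""
    samples = np.asarray(samples, dtype=float)
    n = (len(samples) // group_size) * group_size  # drop the ragged tail
    groups = samples[:n].reshape(-1, group_size)   # adjacency-preserving groups
    first = [g[g < threshold] for g in groups]     # near points (trigger scan)
    second = [g[g >= threshold] for g in groups]   # far points
    return groups, first, second
```

With, say, a millimetre-scale sequence `[100, 200, 120, 90, 300, 50, 60, 70]`, two groups of four points each are produced, each split into near and far subsets.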
In a second aspect, the disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method described above.
In a third aspect, the disclosed embodiments provide a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method described above when executing the program.
In a fourth aspect, an embodiment of the present disclosure provides a two-dimensional code reading device based on scene perception, the device including: a networking and connection module, which networks mobile payment devices with an end-to-end generative model, built on a convolutional-recurrent neural network architecture combining convolutional and recurrent neural networks, arranged in a plurality of scenes, and connects these devices, which support cooperative use of an input device and a printing device, with an electronic terminal controlling them and with a server cluster, each device comprising a plurality of sensors used to collect and monitor the environmental parameters of the devices arranged in the plurality of scenes; an acquisition and image-generation module, which acquires in real time data of a plurality of parameters sent by the server cluster and suitable for reading by the mobile payment device, and generates from these data a two-dimensional code image suitable for the device; a classification module, which groups continuous ranging data collected by at least one ranging sensor for the generated two-dimensional code image, by the adjacency of their sequence, into a plurality of data groups each containing the same number of ranging data points, and classifies the points in each group into first ranging sampling points and second ranging sampling points; a judgment module, which, when a payment event is triggered, judges whether the initial system of the mobile payment device supports a multi-form payment system, where the multi-form payment system supports both code-scanning payment and contactless card payment and comprises a closed payment system and an open payment system, the closed payment system presetting a single scene and a single payment position combined with a payment scheme formed by a virtual or physical stored-value card, and the open payment system presetting at least two scenes and at least two payment positions combined with such a scheme; and a deduct-and-scan reading module, which receives the reading operation completed by the user in a deduct-and-scan manner if the multi-form payment system is supported and a first ranging sampling point is detected.
The disclosed two-dimensional code reading method and device based on scene perception deploy a plurality of sensors in a plurality of scenes and adopt a convolutional-recurrent neural network architecture, combining the convolutional neural networks of computer vision with the recurrent neural networks of neural machine translation, to form mobile payment devices with an end-to-end generative model; the mobile payment devices arranged in the plurality of scenes, which support cooperative use of an input device and a printing device, are connected with an electronic terminal controlling them and with a server cluster. Data of a plurality of parameters sent by the server cluster and suitable for reading by the mobile payment device are acquired in real time, and a two-dimensional code image suitable for the device is generated from these data. Continuous ranging data collected by at least one ranging sensor for the generated two-dimensional code image are grouped, by the adjacency of their sequence, into a plurality of data groups each containing the same number of ranging data points, and the points in each group are classified into first and second ranging sampling points. When a payment event is detected, it is judged whether the initial system of the mobile payment device supports a multi-form payment system, which supports both code-scanning payment and contactless card payment and comprises a closed payment system (presetting a single scene and a single payment position combined with a payment scheme formed by a virtual or physical stored-value card) and an open payment system (presetting at least two scenes and at least two payment positions combined with such a scheme); if the multi-form payment system is supported and a first ranging sampling point is detected, the reading operation completed by the user in a deduct-and-scan manner is started and received. The method first performs networking; it then performs coarse segmentation and secondary segmentation of the two-dimensional code image using the Otsu algorithm, and completes the segmentation suited to the image by shape testing of the secondary-segmentation result; rapid feature extraction from the two-dimensional code image can be performed through deep learning; after payment information sent by the server is received, the two-dimensional code image is captured as the payment image; when a payment event is detected, whether the initial system of the mobile payment device supports the multi-form payment system is judged, and if it is supported and a first ranging sampling point is detected, the deduct-and-scan reading operation of the user is started and received. In addition, the payment display is completed through a window shared by the liquid crystal window and the light guide plate window, and the environmental parameters of the mobile payment devices arranged in the plurality of scenes can be collected and monitored by the sensors, forming an end-to-end generative model and achieving scene perception, which provides beneficial technical support for accurately and rapidly completing two-dimensional code image reading and subsequent payment.
In addition, in a networked environment supporting the multi-form payment system, the image-reading operation can be performed efficiently, accurately and rapidly on the two-dimensional code image under a plurality of scene conditions, so that the subsequent payment operation can be completed quickly, efficiently and flexibly, with both security and applicability.
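The coarse-plus-secondary Otsu segmentation referred to above can be sketched in Python. Treating the secondary pass as a re-application of Otsu within the darker (code-module) class is an illustrative reading of the two-pass scheme; the subsequent shape test is omitted, and all names are assumptions.

```python
import numpy as np

def otsu_threshold(pixels):
    """Return the gray level maximizing between-class variance (Otsu)."""
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # degenerate split, skip
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def two_pass_segment(img):
    """Coarse Otsu split, then a second Otsu pass inside the darker class."""
    t1 = otsu_threshold(img)
    dark = img[img < t1]
    t2 = otsu_threshold(dark) if dark.size else t1
    return t1, t2
```

On an image with three gray populations (e.g. 10, 50 and 200), the coarse pass separates the bright background and the secondary pass further splits the dark side.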
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the description of the embodiments are briefly introduced as follows:
fig. 1 is a schematic flow chart illustrating steps of a two-dimensional code reading method based on scene perception in an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating steps of a two-dimensional code reading method based on scene awareness according to another embodiment of the present invention; and
fig. 3 is a schematic structural diagram of a two-dimensional code reading device based on scene perception in an embodiment of the present invention.
Detailed Description
The present application will now be described in further detail with reference to the accompanying drawings and examples.
In the following description, the terms "first" and "second" are used for descriptive purposes only and are not intended to indicate or imply relative importance. The following description provides embodiments of the disclosure, which may be combined with or substituted for one another, and this application is therefore intended to cover all possible combinations of the same and/or different embodiments described. Thus, if one embodiment includes features A, B, and C, and another embodiment includes features B and D, then this application should also be considered to include an embodiment containing any other possible combination of A, B, C, and D, even though that embodiment may not be explicitly recited in the text below.
In order to make the objects, technical solutions and advantages of the present invention more clearly understood, specific embodiments of a two-dimensional code reading method and device based on scene perception according to the present invention are described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1 is a schematic flow chart of a two-dimensional code reading method based on scene awareness in an embodiment, which specifically includes the following steps:
102, deploying a plurality of sensors in a plurality of scenes, adopting a convolutional-recurrent neural network architecture, combining the convolutional neural networks of computer vision with the recurrent neural networks of neural machine translation, to form a mobile payment device with an end-to-end generative model, and connecting the mobile payment devices arranged in the plurality of scenes, which support cooperative use of an input device and a printing device, with an electronic terminal controlling them and with a server cluster. Each such mobile payment device comprises a plurality of sensors used to collect and monitor the environmental parameters of the devices arranged in the plurality of scenes. When a collected and monitored environmental parameter of the mobile payment device exceeds a preset environmental parameter, that is, a preset environmental threshold, the device completes an alert operation through its internal main controller; the alert includes, but is not limited to, an audio-visual alert or a buzzer alert. In addition, it should be noted that the mobile payment device is configured with an end-to-end generative model, trained on scene-perception images and their corresponding description sentences; the model parameters are obtained by maximizing the likelihood of the target and the corresponding target description sentence.
Specifically, a scene-perception image dataset is constructed, comprising sample images and the label texts obtained by manually annotating them; each label in the label texts corresponds one-to-one to a sample image. The original sample images are mirror-symmetry processed and the corresponding label texts adjusted, so as to augment the scene-perception image dataset; the mirror-processed label texts and their corresponding sample images are then written into a dictionary, yielding the augmented scene-perception image dataset. A deep convolutional-recurrent neural network, comprising a deep convolutional network and a deep recurrent network, is constructed and trained with the augmented dataset: the deep convolutional network extracts features from the input images and embeds them into a vector of fixed dimension; the information of its last hidden layer is used as the input of the deep recurrent network, which generates the corresponding description sentence by maximizing the probability of the correct word. An image of preset size to be processed is acquired in real time and input into the deep convolutional network to obtain a feature map of preset size; a long short-term memory (LSTM) network reads the feature map, maximizes the probability of the correct word through a log function, and outputs a description sentence for the image.
Preferably, in trials, images of preset size 720 × 576 × 2 are obtained for processing, yielding feature maps of preset size 24 × 18 × 512. Further, it should be noted that in the deep recurrent modeling, the variable number of words is represented by a fixed-length hidden state or memory h, which is updated to h_{t+1} when a new input x_t is accepted; the update function f is an LSTM network, with the expression h_{t+1} = f(h_t, x_t). In addition, the deep convolutional neural network is a series of convolution, activation, and pooling operations; VGG16 is adopted as the convolutional network, and max pooling is used.
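The hidden-state update h_{t+1} = f(h_t, x_t), with f realized by an LSTM, can be sketched as a single numpy step. The gate ordering (input, forget, output, candidate) and the stacked weight layout are standard LSTM conventions assumed for illustration, not specified by the patent.

```python
import numpy as np

def lstm_step(h, c, x, W, U, b):
    """One step of h_{t+1} = f(h_t, x_t) with a standard LSTM cell.
    W (4H x D), U (4H x H) and b (4H,) stack the input, forget,
    output and candidate gates, in that order."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    z = W @ x + U @ h + b
    H = h.shape[0]
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])          # candidate cell update
    c_next = f * c + i * g        # gated memory update
    h_next = o * np.tanh(c_next)  # new hidden state h_{t+1}
    return h_next, c_next
```

Because the output gate and tanh both lie in (-1, 1), every component of the new hidden state is bounded in magnitude by 1, whatever the input.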
In addition, it should be noted that the mobile payment device may be a spine-type mobile payment device configured with a window shared by the liquid crystal window and the light guide plate window, and includes a code-scanning lamp bowl. Specifically, the shared window comprises a shared-window main body on which a fixed window is arranged; the fixed window comprises a first characteristic window and a second characteristic window arranged crosswise, each carrying at least one fixing device. The fixing device comprises a hook and a slot, arranged on opposite sides of the first or second characteristic window; the liquid crystal window or light guide plate window engages the slot and is then held by the hook. Alternatively, the fixing device comprises hooks arranged in pairs, each pair on opposite sides of the first or second characteristic window. Each hook comprises a fixed connecting portion, fixedly connected with the shared-window main body, and a clamping portion fixed to one side of the connecting portion. The side of the clamping portion facing away from the main body is provided with a slide-in bevel, which eases the insertion of the light guide plate window or liquid crystal window. The side of the clamping portion facing the shared window, that is, the side toward the light guide plate window or liquid crystal window main body, is perpendicular to the fixed connecting portion, and the fixed connecting portion is made of elastic material. On different hooks, the distance between the clamping portion and the shared-window main body differs. The first and second characteristic windows are arranged coaxially. A third characteristic window is also included, intersecting both the first and the second characteristic windows and carrying at least one fixing device.
In addition, the input device is an input keyboard cooperating with a desktop computer, an input keyboard of an all-in-one PC, or a digital function keyboard with calculation and auxiliary payment functions. The printing device is a printer, which specifically includes a body provided with a paper feed inlet and a printing outlet; a thermal printing module is arranged in the body, at least one low-temperature cooling cavity is arranged between the paper feed inlet and the thermal printing module, and a laminating module, a cold-pressing module and a cutting module are connected in sequence between the thermal printing module and the printing outlet through a transmission mechanism. Specifically, the low-temperature cooling cavity uses cold air to reduce the surface temperature of the paper; the thermal printing module prints the paper and passes it to the laminating module; the laminating module laminates the paper and passes it to the cold-pressing module; the cold-pressing module cold-presses the laminated surface and passes the paper to the cutting module; and the cutting module cuts the paper to specification and conveys it to the printing outlet. In addition, a further low-temperature cooling cavity may be arranged between the thermal printing module and the laminating module. An interconnected display module and controller are also provided on the body; the thermal printing, laminating, cold-pressing and cutting modules are all connected to the controller and report their working states to it, and the controller transmits the working states to the display module.
In one embodiment, connecting the mobile payment devices supporting cooperative use of the input device and the printing device, the electronic terminal controlling them, and the server cluster, arranged in a plurality of scenes, comprises: connecting at least one mobile payment device arranged in the plurality of scenes with a cloud server cluster through Wi-Fi; and connecting at least one such mobile payment device with the electronic terminal controlling it through Bluetooth. Alternatively, at least one mobile payment device arranged in the plurality of scenes may be connected with the controlling electronic terminal through a wired connection. This improves the diversity and flexibility of the networking layout.
And 104, acquiring data of a plurality of parameters which are sent by the server cluster and are suitable for payment of the mobile payment equipment in real time, and generating a two-dimensional code image suitable for the mobile payment equipment according to the data of the plurality of parameters.
In addition, it should be noted that the two-dimensional code image may be generated by combining the data of the plurality of parameters with the product code. Further: the setting information required in the two-dimensional code is acquired and converted into a binary file; the converted binary file undergoes the information segmentation required by the structured-append mode, generating a plurality of distinct binary segments carrying structured-append characteristic characters, where the number of segments can be set, according to the size and application of the setting information, to a value in the range 2 to 32; the binary file is thus split into 2 to 32 corresponding parts, and a corresponding start character and end character are added before and after each part; each split part is given its own original binary coding information, coded one by one; the distinct binary segments with structured-append characteristic characters are then encoded one by one into two-dimensional codes, encrypted or not, and ordered correspondingly; the information in the commodity code is obtained by combination, and information conversion, encryption and ordering are carried out on the same principle, finally forming a plurality of two-dimensional code images arranged in a definite order.
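The structured-append segmentation described above, splitting one binary payload into 2 to 32 framed parts, can be sketched as follows. The two-byte header (part index, part count) and the end byte are illustrative framing choices only; encryption and the actual QR symbol encoding are omitted.

```python
def split_structured(data: bytes, parts: int) -> list[bytes]:
    """Split a binary payload into 2-32 parts for structured-append
    encoding. Each part is framed with a start header carrying the
    part index and total count, and a trailing end byte.
    The framing format here is an illustrative assumption."""
    if not 2 <= parts <= 32:
        raise ValueError("structured append allows 2-32 segments")
    size = -(-len(data) // parts)  # ceiling division: bytes per part
    chunks = [data[i * size:(i + 1) * size] for i in range(parts)]
    END = b"\xff"
    # header = (index, total); reader can reassemble parts in order
    return [bytes([idx, parts]) + chunk + END
            for idx, chunk in enumerate(chunks)]
```

Stripping the header and end byte from each segment and concatenating in index order recovers the original payload exactly.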
In one embodiment, the two-dimensional code reading method based on scene perception provided by the present disclosure further includes: selecting a plurality of two-dimensional code images as a training sample set and checking the number of samples; if the number of training samples is insufficient, augmenting the sample set to a preset size range; creating a CNN and initializing the parameter values of the CNN and of the SVM; creating Gabor filters and applying them to each sample image Ii at the orientations θ = 0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8 and the frequency indices f = 0, 1, 2, 3, 4 to generate 40 feature maps; using a 9 × 9 grid to reduce each 70 × 70 feature map to 8 × 8, and concatenating the feature maps position by position to form a feature vector Xi1 = [x11, x12, …, x1,m]; feeding the same sample images Ii, ordered according to the batch size, into the created CNN and computing the output of each convolutional layer and each pooling layer in the hidden layers, where the output of the pooling layer serves as the CNN-extracted feature part Xi2 = [x21, x22, …, x2,n].
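A minimal sketch of the Gabor feature extraction described above: 8 orientations × 5 frequencies give 40 kernels, each filtered feature map is pooled onto an 8 × 8 grid (roughly 9 × 9 blocks for a 70 × 70 map), and the pooled maps are concatenated into the feature vector. The kernel size, σ, the concrete frequency values, and the FFT-based filtering are illustrative assumptions; the embodiment does not specify them.

```python
import numpy as np

def gabor_kernel(theta: float, f: float, size: int = 9, sigma: float = 2.0):
    """Real part of a Gabor kernel at orientation theta and frequency f."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * f * xr)

# 8 orientations x 5 frequencies -> 40 kernels, as in the embodiment.
thetas = [k * np.pi / 8 for k in range(8)]
freqs = [0.05, 0.10, 0.20, 0.30, 0.40]               # illustrative values

bank = [gabor_kernel(t, f) for t in thetas for f in freqs]

def filter_image(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Cyclic convolution via FFT; same-size output is enough for a sketch."""
    pad = np.zeros_like(img)
    pad[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def grid_pool(fmap: np.ndarray, out: int = 8) -> np.ndarray:
    """Mean-pool a feature map onto an out x out grid (70x70 -> 8x8)."""
    rows = np.array_split(fmap, out, axis=0)
    return np.array([[blk.mean() for blk in np.array_split(row, out, axis=1)]
                     for row in rows])
```

Concatenating the 40 pooled 8 × 8 maps yields a 2560-dimensional vector per sample image, which plays the role of Xi1.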
Suppose the strong (hand-crafted) features of all samples are X1 = [x11, x12, …, x1,M] and the features automatically extracted by the CNN are X2 = [x21, x22, …, x2,N]. The feature vectors X1 and X2 are standardized and serially fused to obtain the fused feature W = (w1, w2, …, wM+N) = (αX1, βX2). The PCA method is used to reduce the dimensionality of W and obtain the final fused feature vector W*, and W* is input into the SVM for training until the error falls within a preset range or a preset maximum number of training iterations is reached. This provides solid algorithmic support for automatically extracting features of the two-dimensional code image so as to quickly recognize and read the two-dimensional code subsequently.
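The serial fusion and PCA reduction can be sketched in NumPy as below; feeding W* into an SVM would be the next step. The weights α and β, the component count k, and the standardization details are assumptions made for illustration.

```python
import numpy as np

def fuse_and_reduce(X1: np.ndarray, X2: np.ndarray,
                    alpha: float = 0.5, beta: float = 0.5,
                    k: int = 10) -> np.ndarray:
    """Standardize each feature block, weight by alpha/beta, concatenate
    serially into W, then PCA-reduce to k components to obtain W*."""
    def standardize(X):
        return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    W = np.hstack([alpha * standardize(X1), beta * standardize(X2)])
    Wc = W - W.mean(axis=0)
    # PCA via SVD: project onto the top-k right singular vectors
    _, _, Vt = np.linalg.svd(Wc, full_matrices=False)
    return Wc @ Vt[:k].T
```

The returned W* (n_samples × k) would then be passed to the SVM trainer until the preset error range or the maximum iteration count is reached.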
Further, the two-dimensional code reading method based on scene perception further includes: capturing the generated two-dimensional code image suitable for the mobile payment device and, after capture, dividing it as the payment image; and performing a rough segmentation of the region of interest in the divided payment image according to the Otsu algorithm, which splits the original image into two images, a foreground and a background, using a threshold. Specifically, for the foreground, the number of points, the mass moment, and the average gray level under the current threshold are denoted n1, csum, and m1; for the background, they are denoted n2, sum − csum, and m2. At the optimal threshold, the difference between the background and the foreground is largest; the key is the criterion used to measure that difference, which in the Otsu algorithm is the between-class variance, denoted sb, whose maximum is denoted fmax. Regarding the sensitivity of the Otsu algorithm to noise and target size, it only produces a good segmentation on images whose between-class variance has a single peak. When the size ratio of the target to the background differs greatly, the between-class variance criterion function may show double or multiple peaks, which degrades the result, but the Otsu algorithm nevertheless remains the least time-consuming.
Further, the Otsu formula is derived as follows: let t be the segmentation threshold between foreground and background, let the foreground points account for a fraction w0 of the image with average gray level u0, and let the background points account for a fraction w1 with average gray level u1. The total average gray level of the image is u = w0·u0 + w1·u1. The variance between the foreground and background images can be expressed by the following formula:
g = w0·(u0 − u)² + w1·(u1 − u)² = w0·w1·(u0 − u1)². It should be noted that this is the variance formula; the expression for g can be found in probability theory and corresponds to the expression for sb below. When the variance g is maximal, the difference between the foreground and the background can be considered largest, and the gray level t at that point is the optimal threshold: sb = w0·w1·(u0 − u1)².
Further, a secondary segmentation is performed on the roughly segmented payment image using an active contour model based on gradient vector flow, and the segmentation operation suitable for the payment image is completed by applying a shape test to the result of the secondary segmentation.
Further, it should be noted that dividing the payment image includes: selecting a segmentation channel based on statistical rules of the payment image data of the training samples; selecting a segmentation threshold in the segmentation channel and performing foreground/background segmentation on the payment image; and performing connected-region analysis on the segmented foreground and background pixels to obtain a qualified two-dimensional code region, where the payment image sub-blocks are divided within the qualified two-dimensional code region into a preset number of rows and a preset number of columns, the two numbers being equal. This provides the necessary data basis for the subsequent rapid recognition of the payment image.
Further, selecting the segmentation channel based on the statistical rules of the payment image data of the training samples comprises: obtaining, from those statistical rules, the distribution of image values over the different color channels, and taking the color channel with the largest image-value variance as the segmentation channel. In addition, selecting a segmentation threshold in the segmentation channel and performing foreground/background segmentation on the payment image comprises: obtaining the segmentation threshold through the minimization step of the Otsu algorithm (rendered elsewhere in this text as the "Dajin" algorithm); acquiring the image pixel values of the payment image; and performing a binary segmentation according to the image pixel values and the segmentation threshold to obtain the foreground and the background. Specifically, a region whose image pixel values are above the segmentation threshold is taken as the foreground, and a region whose image pixel values are at or below the segmentation threshold is taken as the background.
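The channel selection and binary split can be sketched as below. Using per-channel variance over the training images and a strict "above the threshold" rule for the foreground follow the text; the array shapes and function names are illustrative assumptions.

```python
import numpy as np

def select_channel(images: np.ndarray) -> int:
    """Pick the color channel with the largest pixel-value variance.
    images: array of shape (n, h, w, 3)."""
    return int(np.argmax([images[..., c].var() for c in range(3)]))

def split_fg_bg(channel_img: np.ndarray, threshold: float):
    """Binary split: pixels above the threshold are foreground,
    pixels at or below it are background."""
    fg = channel_img > threshold
    return fg, ~fg
```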
Furthermore, performing connected-region analysis on the segmented foreground and background pixels and acquiring the qualified two-dimensional code region includes: clustering the segmented foreground and background pixels to form connected regions; and selecting, among the connected regions, the largest region that satisfies the prior position information to form the qualified two-dimensional code region, which is then output. Furthermore, completing the image segmentation operation by applying the shape test to the result of the secondary segmentation comprises: completing the graph segmentation operation suitable for the payment image through an area test on that result, where the area test checks whether the number of pixel points in the region of interest falls within a preset pixel-count threshold interval for a normal two-dimensional code region.
Furthermore, completing the segmentation operation applicable to the payment image by applying the shape test to the result of the secondary segmentation includes a deformity test: the deformity of the region of interest is computed with the simple formula γ = l / Np, where l is the perimeter of the region of interest and Np is the number of pixel points in it; a deformity threshold γT is preset. When γ ≤ γT, the result of the rough segmentation passes the deformity test; when γ > γT, a secondary rough segmentation of the region of interest is performed with the segmentation method of the active contour model based on gradient vector flow, and the segmentation operation suitable for the payment image is completed by applying the shape test to the result of that secondary rough segmentation.
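A sketch of the two-part shape test on a binary region mask: the area test checks Np against [Nmin, Nmax], and the deformity test computes γ = l/Np with the perimeter l estimated as the boundary-pixel count. The concrete thresholds and the 4-neighbour perimeter estimate are assumptions for the example.

```python
import numpy as np

def shape_test(mask: np.ndarray, n_range=(50, 10000),
               gamma_t: float = 0.5) -> bool:
    """Area test: Np within [Nmin, Nmax]; deformity test: gamma = l / Np
    with l estimated as the number of boundary pixels of the region."""
    np_count = int(mask.sum())
    if not n_range[0] <= np_count <= n_range[1]:
        return False                           # fails the area test
    padded = np.pad(mask, 1)
    # interior pixels: all four 4-neighbours are set
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())  # boundary pixel count ~ l
    gamma = perimeter / np_count
    return gamma <= gamma_t
```

A compact, roughly square region (as a two-dimensional code should be) has a low γ and passes, while a thin noise streak has γ close to 1 and is rejected.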
In step 106, the continuous ranging data acquired by at least one ranging sensor for the generated two-dimensional code image are grouped into a plurality of data groups according to the adjacency of their sequence numbers, and the ranging data points in each data group are classified into first ranging sampling points and second ranging sampling points, where each data group contains the same number of ranging data points.
Specifically, classifying the ranging data points in each data group into first and second ranging sampling points includes: selecting one or more ranging data points from each data group according to a selection rule as the first ranging sampling points, with the remaining ranging data points serving as the second ranging sampling points, while each data group contains the same number of key ranging sampling points. The selection rule includes at least one of the following: selecting from each data group a ranging data point of an effective measurement as a first ranging sampling point, where an effective measurement satisfies one of the following conditions: the reading exceeds a preset distance, the target distance is measured, or the measured signal lies within a preset range; and taking as first ranging sampling points those ranging data points in each data group whose distance from the other ranging data points exceeds a preset distance threshold.
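The grouping and classification of ranging data can be sketched as below, using an "effective measurement" rule (a finite, in-range reading) as the validity criterion. The group size, the maximum valid range, and the particular rule chosen are illustrative assumptions.

```python
def group_and_classify(readings: list[int], group_size: int,
                       max_range: int = 4000):
    """Split a consecutive ranging sequence into equal-size groups (a
    trailing partial group is dropped so every group has the same number
    of points), then mark in-range readings as first sampling points and
    the rest as second sampling points."""
    usable = len(readings) - len(readings) % group_size
    groups = [readings[i:i + group_size] for i in range(0, usable, group_size)]
    classified = []
    for g in groups:
        first = [r for r in g if 0 < r < max_range]       # effective
        second = [r for r in g if not (0 < r < max_range)]  # the rest
        classified.append((first, second))
    return classified
```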
In step 108, when it is monitored that a payment event is triggered, it is judged whether the initial system of the mobile payment device supports a multi-form payment system. The multi-form payment system supports both code-scanning payment and contactless card payment, and comprises a closed payment system and an open payment system. The closed payment system is a payment system built from a preset single scene and single payment position combined with a virtual or physical stored-value card; the open payment system is a payment system built from at least two preset scenes and at least two payment positions combined with a virtual or physical stored-value card.
Further, a closed payment system, such as a savings system in a retail store, allows a consumer to deposit money for later use; it is based on a stored-value card (virtual or physical) that can only be redeemed at one store, and a mobile application may be deployed to let the consumer top up the stored value. In use, a QR code or bar code is displayed at the point of sale; the user can top up the card without limit and store only the money intended to be spent at a specific merchant, which avoids exposing the user's financial information and bank account and is also an effective way for the user to budget for specific types of consumption, such as groceries or restaurants. Merchants typically combine customer loyalty with a closed payment system, such as a closed payment card, to encourage customers to return. An open payment system, by contrast, such as a savings system spanning retail stores, restaurants, or grocery stores, is based on a stored-value card (virtual or physical) that can be redeemed at multiple stores, and a mobile application may likewise be deployed for top-ups. In use, a QR code or bar code is displayed at the point of sale; the user can top up the card without limit and store only the money intended to be spent, which facilitates the compatibility and data sharing of the user's financial information and bank account across multiple payment scenes and payment positions and is an effective way for the user to budget for specific types of consumption. Merchants typically combine customer incentives with an open payment system, such as an open payment card, to encourage repeat consumption.
In step 110, if the multi-form payment system is supported and a first ranging sampling point is monitored, the reading operation completed by the user in the hold-and-scan mode is received. In addition, the two-dimensional code reading method based on scene perception provided by the present disclosure further includes: completing the payment display through a window shared by the liquid crystal window and the light guide plate window. This improves the convenience and usability of payment after the two-dimensional code has been recognized accurately and quickly. The shared window of the liquid crystal window and the light guide plate window is used to display the specific payment amount and the payment status (payment in progress, payment succeeded, or payment failed). When the shared window is not being used for the payment display, it plays advertisement information pushed by the cloud server and promotional content for the scene in which it is located. The shared window is thus multifunctional and flexible, and displays payments efficiently.
Specifically, if the multi-form payment system is supported and a first ranging sampling point is monitored, receiving the reading operation completed by the user in the hold-and-scan mode includes: establishing a mapping between the features of the goods at checkout and their prices; obtaining, according to that mapping, the price of each item and the total price of the goods corresponding to the current payment image; and completing the checkout according to that total. The price of each item is obtained from the mapping and the prices are accumulated to obtain the amount for the current payment image. The accumulated prices are pre-stored, and the price of each item can be analyzed and obtained quickly through deep learning from the user's historical shopping data. To improve the user experience, the data and completion status of the checkout are displayed. The hold-and-scan mode means that the user holds the electronic terminal with its display screen facing the two-dimensional code scanning window of the mobile payment device. Those skilled in the art will understand that this mode can be sensed effectively by the built-in binocular camera or at least one optical sensor of the mobile payment device, providing technical support for efficient payment.
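The mapping from recognized goods to prices, and the accumulation into the amount for the current payment image, can be illustrated with a plain lookup table; the product names and prices below are invented for the example.

```python
# Hypothetical feature-to-price mapping for the checkout step.
PRICE_MAP = {"apple": 3.50, "bread": 12.00, "milk": 8.80}

def checkout_total(recognized_items: list[str]) -> float:
    """Look up each recognized item in the mapping and accumulate prices."""
    return round(sum(PRICE_MAP[item] for item in recognized_items), 2)
```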
In an embodiment, the two-dimensional code reading method based on scene perception according to the present disclosure further includes: after a payment event is monitored, when the electronic terminal is charging, deleting the payment image from the picture library and setting a default picture in the built-in system of the electronic terminal as the prompt image; and when the current battery level of the electronic terminal is below a preset threshold, setting a default picture in the built-in system as the prompt image, the prompt image being the power-off/low-battery prompt image of the mobile payment device. The method further includes: obtaining the illumination intensity of the screen of the electronic terminal and the illumination intensity reflected by the screen over a preset time period, and constructing a screen illumination intensity database and a screen-reflected illumination intensity database for the electronic terminal. The payment operation can thereby be completed quickly and accurately by adapting the illumination intensity to the different mobile payment device models in different scenes.
In order to understand and apply the two-dimensional code reading method based on scene perception more clearly and accurately, the following example is given in conjunction with FIG. 2; it should be noted that the scope of protection of the present disclosure is not limited to this example.
Specifically, steps 201 to 208 are, in order: receiving a plurality of images; dividing each image into N × N sub-blocks and performing the rough segmentation through the Otsu algorithm; judging whether the region of interest conforms to the basic form of the two-dimensional code and, if so, sending the image of the region of interest to a preset feature model to complete the feature extraction of the payment image; if not, performing the secondary segmentation with the active contour model based on gradient vector flow and then judging again whether the region of interest conforms to the basic form of the two-dimensional code, sending it to the preset feature model if it does; and, if the region of interest still does not conform to the basic form of the two-dimensional code, discarding it as an impurity in the payment image.
It is understood that the received payment image is divided; the rough segmentation and the secondary segmentation are performed on the region of interest in the divided payment image according to the Otsu algorithm; and the segmentation operation suitable for the payment image is completed by applying the shape test to the result of the secondary segmentation. Specifically, the payment image is roughly segmented with the Otsu algorithm and secondarily segmented with the active contour model based on gradient vector flow, yielding a noise-free payment image that is convenient to read; the result of this segmentation is then subjected to the shape test.
The test conditions are as follows. Area test: whether the number of pixels Np in the ROI (Region Of Interest) lies within the range [Nmin, Nmax] of a normal two-dimensional code area. Deformity test: the deformity of the ROI is computed with the simple formula γ = l / Np, where l is the perimeter of the ROI; a deformity threshold γT is set, and the test passes when γ ≤ γT. Further, if the test conditions pass, the ROI is a payment image and enters the feature extraction module. If the ROI fails the test conditions, it may be a payment image with noise or foreign matter; the segmentation method of the active contour model based on gradient vector flow then performs a secondary segmentation on the ROI, after which the shape test (with the same conditions as above) is applied to the result. As those skilled in the art will understand, an ROI that fails this second test is an impurity and is discarded directly; an ROI that passes is a payment image and enters the preset feature extraction module to have its features extracted.
As those skilled in the art will understand, the classical active contour model often has drawbacks in selecting the initial contour curve: if the curve is far from the target it may fail to converge onto the target curve, and convergence on concave edges is also poor. To address these problems, the traditional active contour model is improved and an active contour model based on gradient vector flow is proposed. This model replaces the Gaussian potential energy field of the traditional model, and its mathematical foundation is the Helmholtz theorem from electromagnetic field theory. Compared with a Gaussian potential energy field, the gradient-vector-flow field is obtained from the gradient vector map of the whole image, so the range of action of the external force field is larger. This means that even if the selected initial contour is far from the target contour, it will eventually converge to it through successive approximation. At the same time, once the range of the external force is enlarged, the force acting on concave parts of the target contour also increases, so the boundary can converge into concavities.
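A minimal gradient-vector-flow computation in the spirit of the Xu-Prince formulation illustrates why the force field extends far from the edges: the edge-map gradient is diffused iteratively across the whole image, so even pixels distant from the target contour feel a non-zero force. The value of μ, the iteration count, and the discretization are illustrative assumptions, not the disclosure's implementation.

```python
import numpy as np

def gvf(edge_map: np.ndarray, mu: float = 0.2, iters: int = 80):
    """Diffuse the edge-map gradient (fx, fy) into a gradient-vector-flow
    field (u, v): away from edges the update is pure diffusion, near edges
    the data term keeps the field anchored to the gradient."""
    fy, fx = np.gradient(edge_map.astype(float))
    u, v = fx.copy(), fy.copy()
    mag2 = fx**2 + fy**2                      # gradient magnitude squared

    def lap(a):                               # 5-point Laplacian, edge pad
        p = np.pad(a, 1, mode="edge")
        return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * a

    for _ in range(iters):
        u = u + mu * lap(u) - mag2 * (u - fx)
        v = v + mu * lap(v) - mag2 * (v - fy)
    return u, v
```

After diffusion, the field is non-zero well outside the object boundary, which is exactly the enlarged range of action the text describes.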
The invention provides a two-dimensional code reading method based on scene perception, which comprises: configuring a plurality of sensors in a plurality of scenes; adopting a convolutional-recurrent neural network architecture, combining the convolutional neural network of computer vision with the recurrent neural network of neural machine translation, to form a mobile payment device with an end-to-end generation model; and connecting the mobile payment devices supporting cooperative application with the input device and the printing apparatus, the electronic terminal controlling them, and the server cluster arranged across the plurality of scenes. Data of a plurality of parameters sent by the server cluster and suitable for reading by the mobile payment device are acquired in real time, and a two-dimensional code image suitable for the mobile payment device is generated from them. The continuous ranging data acquired by at least one ranging sensor for the generated two-dimensional code image are grouped into a plurality of data groups according to the adjacency of their sequence numbers, and the ranging data points in each data group are classified into first and second ranging sampling points, each data group containing the same number of ranging data points. When a payment event is monitored, it is judged whether the initial system of the mobile payment device supports a multi-form payment system, which supports code-scanning payment and contactless card payment and comprises a closed payment system (a preset single scene and single payment position combined with a virtual or physical stored-value card) and an open payment system (at least two preset scenes and at least two payment positions combined with a virtual or physical stored-value card). If the multi-form payment system is supported and a first ranging sampling point is monitored, the reading operation completed by the user in the hold-and-scan mode is started and received.
The method first performs networking; for the two-dimensional code image, the rough segmentation and secondary segmentation are performed with the Otsu algorithm, and the segmentation operation suitable for the two-dimensional code image is completed by applying the shape test to the result of the secondary segmentation; rapid feature extraction can be performed on the two-dimensional code image through deep learning; after the payment information sent by the server is received, the two-dimensional code image is captured as the payment image; when a payment event is monitored, it is judged whether the initial system of the mobile payment device supports the multi-form payment system; and if it does, and a first ranging sampling point is monitored, the reading operation completed by the user in the hold-and-scan mode is started and received. In addition, the payment display is completed through the window shared by the liquid crystal window and the light guide plate window, and the environmental parameters of the mobile payment devices arranged in the plurality of scenes can be collected and detected by the plurality of sensors, forming an end-to-end generation model and achieving scene perception, which provides technical support for completing the two-dimensional code image recognition, and the subsequent payment, accurately and quickly.
In addition, the image reading operation can be carried out efficiently, accurately, and quickly for two-dimensional code images in a networked environment supporting the multi-form payment system under a plurality of scene conditions, so that the subsequent payment operation can be completed quickly, efficiently, and flexibly, with both security and applicability.
Based on the same inventive concept, the invention also provides a two-dimensional code reading device based on scene perception. Since the principle by which the device solves the problem is similar to that of the two-dimensional code reading method based on scene perception, the device can be implemented according to the specific steps of the method, and repeated parts are not described again.
FIG. 3 is a schematic structural diagram of a two-dimensional code reading device based on scene perception in an embodiment. This two-dimensional code reading device 10 based on scene perception includes: a networking and connecting module 100, an acquisition and image generation module 200, a classification module 300, a judging module 400, and a hold-and-scan reading module 500.
The networking and connecting module 100 is configured to configure a plurality of sensors in a plurality of scenes, adopt a convolutional-recurrent neural network architecture, combine the convolutional neural network of computer vision with the recurrent neural network of neural machine translation to form a mobile payment device with an end-to-end generation model, and connect the mobile payment devices supporting cooperative application with the input device and the printing apparatus, the electronic terminal controlling them, and the server cluster arranged across the plurality of scenes, where the mobile payment devices include a plurality of sensors configured to collect and detect the environmental parameters of those devices. The acquisition and image generation module 200 is configured to acquire, in real time, data of a plurality of parameters sent by the server cluster and suitable for reading by the mobile payment device, and to generate a two-dimensional code image suitable for the mobile payment device from them. The classification module 300 is configured to group the continuous ranging data acquired by at least one ranging sensor for the generated two-dimensional code image into a plurality of data groups according to the adjacency of their sequence numbers, and to classify the ranging data points in each data group into first and second ranging sampling points, each data group containing the same number of ranging data points. The judging module 400 is configured to judge, when a payment event is monitored, whether the initial system of the mobile payment device supports a multi-form payment system, which supports code-scanning payment and contactless card payment and comprises a closed payment system (a preset single scene and single payment position combined with a virtual or physical stored-value card) and an open payment system (at least two preset scenes and at least two payment positions combined with a virtual or physical stored-value card). The hold-and-scan reading module 500 is configured to receive the reading operation completed by the user in the hold-and-scan mode if the multi-form payment system is supported and a first ranging sampling point is monitored.
The two-dimensional code reading device based on scene perception provided by the invention works as follows. First, the networking and connecting module networks the plurality of sensors arranged in the plurality of scenes, adopts a convolutional-recurrent neural network architecture combining the convolutional neural network of computer vision with the recurrent neural network of neural machine translation to form a mobile payment device with an end-to-end generation model, and connects the mobile payment devices supporting cooperative application with the input device and the printing apparatus, the electronic terminal controlling them, and the server cluster arranged across the plurality of scenes. The acquisition and image generation module acquires, in real time, data of a plurality of parameters sent by the server cluster and suitable for reading by the mobile payment device, and generates a two-dimensional code image suitable for the mobile payment device from them. The classification module groups the continuous ranging data acquired by at least one ranging sensor for the generated two-dimensional code image into a plurality of data groups according to the adjacency of their sequence numbers, and classifies the ranging data points in each data group into first and second ranging sampling points, each data group containing the same number of ranging data points. When a payment event is monitored, the judging module judges whether the initial system of the mobile payment device supports a multi-form payment system, which supports code-scanning payment and contactless card payment and comprises a closed payment system (a preset single scene and single payment position combined with a virtual or physical stored-value card) and an open payment system (at least two preset scenes and at least two payment positions combined with a virtual or physical stored-value card). Finally, if the multi-form payment system is supported and a first ranging sampling point is monitored, the hold-and-scan reading module starts and receives the reading operation completed by the user in the hold-and-scan mode. The device first performs networking; for the two-dimensional code image, the rough segmentation and secondary segmentation are performed with the Otsu algorithm, and the segmentation operation suitable for the two-dimensional code image is completed by applying the shape test to the result of the secondary segmentation; rapid feature extraction can be performed on the two-dimensional code image through deep learning; after the payment information sent by the server is received, the two-dimensional code image is captured as the payment image; when a payment event is monitored, it is judged whether the initial system of the mobile payment device supports the multi-form payment system; and if it does, and a first ranging sampling point is monitored, the reading operation completed by the user in the hold-and-scan mode is started and received.
In addition, the payment display is completed through a window shared by a liquid-crystal window and a light-guide-plate window, and the plurality of sensors can collect and detect the environmental parameters of mobile payment devices arranged in multiple scenes, forming an end-to-end generation model and achieving scene perception, which provides technical support for completing two-dimensional code image reading, and even the subsequent payment, accurately and quickly. Moreover, under a networked environment supporting the multi-form payment system across multiple scene conditions, the image-reading operation can be performed efficiently, accurately, and quickly on the two-dimensional code image, so the subsequent payment operation can be completed rapidly, efficiently, and flexibly, with both security and applicability.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the method of fig. 1 or fig. 2. An embodiment of the invention also provides a computer program product containing instructions which, when the product is run on a computer, cause the computer to perform the method of fig. 1 or fig. 2 described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above-mentioned embodiments express only several embodiments of the present invention, and their description, while specific and detailed, should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
The foregoing description has been presented for purposes of illustration and description. This description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (10)
1. A two-dimensional code reading method based on scene perception, characterized by comprising the following steps:
configuring a plurality of sensors arranged in a plurality of scenes, adopting a convolutional-recurrent neural network architecture that combines the convolutional neural network of computer vision with the recurrent neural network of neural machine translation to form a mobile payment device with an end-to-end generation model, and connecting the mobile payment device supporting cooperative application with an input device and a printing apparatus, the electronic terminal controlling the mobile payment device, and a server cluster, all arranged in the plurality of scenes, wherein the mobile payment device supporting cooperative application with the input device and the printing apparatus comprises the plurality of sensors, which are used for collecting and detecting environmental parameters of the mobile payment devices, arranged in the plurality of scenes, that support cooperative application with the input device and the printing apparatus;
acquiring, in real time, data of a plurality of parameters sent by the server cluster and suitable for reading by the mobile payment device, and generating a two-dimensional code image suited to the mobile payment device according to the data of the plurality of parameters;
grouping continuous ranging data collected by at least one ranging sensor for the generated two-dimensional code image into a plurality of data groups according to the adjacency of the corresponding sequences, and classifying the ranging data points in each data group into first ranging sampling points and second ranging sampling points, wherein each data group comprises the same number of ranging data points;
when a payment event is detected, judging whether an initial system of the mobile payment device supports a multi-form payment system, wherein the multi-form payment system supports code-scanning payment and contactless card payment and comprises a closed payment system and an open payment system, the closed payment system combining a preset single scene and single payment position with a payment system formed from a virtual stored-value card or a physical stored-value card, and the open payment system combining at least two preset scenes and at least two payment positions with a payment system formed from the virtual stored-value card or the physical stored-value card;
and, if the multi-form payment system is supported and the first ranging sampling point is detected, starting and receiving the reading operation completed by the user in deduct-and-scan mode.
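As an illustrative sketch (not part of the claims), the grouping-and-classification step of claim 1 could look as follows; the group size, the near/far distance threshold, and the rule that points at or below the threshold count as "first" ranging sampling points are all assumptions, since the claim only requires equal-sized adjacent groups and a two-way classification.

```python
def classify_ranging_data(samples, group_size, threshold):
    """Group consecutive ranging samples into equal-sized groups and
    classify each point as a 'first' (near) or 'second' (far) ranging
    sampling point."""
    # Drop the tail so every group holds the same number of points,
    # as the claim requires.
    usable = len(samples) - len(samples) % group_size
    groups = [samples[i:i + group_size] for i in range(0, usable, group_size)]
    # Label each point by the assumed distance threshold.
    return [[("first" if d <= threshold else "second", d) for d in group]
            for group in groups]

groups = classify_ranging_data([12, 15, 80, 11, 90, 85, 40], 3, 50)
```

A detected "first" point would then gate the deduct-and-scan step in the final limitation of the claim.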
2. The two-dimensional code reading method based on scene perception of claim 1, wherein connecting the mobile payment device supporting cooperative application with the input device and the printing apparatus, the electronic terminal controlling the mobile payment device, and the server cluster arranged in the plurality of scenes comprises: connecting at least one mobile payment device arranged in the plurality of scenes with a cloud server cluster via Wi-Fi;
and connecting the at least one mobile payment device arranged in the plurality of scenes with the electronic terminal controlling the mobile payment device via Bluetooth.
3. The two-dimensional code reading method based on scene perception of claim 1, further comprising: acquiring the capability values corresponding to a plurality of protocol stacks in the mobile payment device and the channel identifier currently bound to the protocol stack with the largest capability value;
selecting the corresponding channel according to the acquired channel identifier;
and completing the payment operation applicable to the mobile payment device through the selected channel.
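A minimal sketch of the channel selection in claim 3, assuming the capability values and bound channel identifiers arrive as a mapping from protocol-stack name to a (capability value, channel identifier) pair; the stack names and the first-maximum tie-break are illustrative assumptions.

```python
def select_payment_channel(stacks):
    """Return the channel identifier bound to the protocol stack with
    the largest capability value (first maximum wins on ties)."""
    best = max(stacks, key=lambda name: stacks[name][0])
    _capability, channel_id = stacks[best]
    return channel_id

# Hypothetical capability values reported by three protocol stacks.
channel = select_payment_channel({
    "nfc":  (3, "ch-nfc-1"),
    "ble":  (7, "ch-ble-2"),
    "wifi": (5, "ch-wifi-0"),
})
```

The payment operation would then be completed over `channel`, per the last step of the claim.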
4. The two-dimensional code reading method based on scene perception of claim 1, further comprising: selecting a plurality of two-dimensional code images as a training sample set, and checking the size of the training sample set;
if the training sample set is too small, augmenting it to a preset size range;
creating a CNN network, and initializing the parameter values of the CNN and of the SVM;
creating a Gabor filter and, for each sample image Ii, extracting responses at the orientations θ = 0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8 and the frequencies f = 0, 1, 2, 3, 4 to generate 40 feature maps;
using a 9×9 grid to downsample each 70×70 feature map to 8×8, and concatenating the feature maps head to tail to form a feature vector Xi1 = [x11, x12, …, x1,m];
feeding the same sample image Ii into the created CNN network in order according to the batch size, and computing the output of each convolutional layer and pooling layer in the hidden layers, wherein the output of the pooling layer serves as the feature part extracted by the CNN network, Xi2 = [x21, x22, …, x2,n];
assuming the strong features of all samples are X1 = [x11, x12, …, x1,M] and the features automatically extracted by the CNN network are X2 = [x21, x22, …, x2,N], normalizing the feature vectors X1 and X2 and fusing them serially to obtain the fused feature W = (w1, w2, …, wM+N) = (αX1, βX2);
and reducing the dimension of W with PCA to obtain the final fused feature vector W*, and inputting W* into the SVM for training until the error falls within the preset range or the preset maximum number of training iterations is reached.
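The serial fusion and PCA reduction of claim 4 can be sketched as follows; z-score normalization, equal weights α = β = 0.5, and random stand-in matrices in place of the actual Gabor and CNN features are all assumptions made for illustration.

```python
import numpy as np

def serial_fuse(X1, X2, alpha=0.5, beta=0.5):
    """Normalize two per-sample feature matrices and concatenate them
    serially into W = (alpha * X1, beta * X2), as in claim 4."""
    def zscore(X):
        return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    return np.hstack([alpha * zscore(X1), beta * zscore(X2)])

def pca_reduce(W, k):
    """Project the fused features onto their top-k principal components."""
    Wc = W - W.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(Wc, full_matrices=False)
    return Wc @ Vt[:k].T

rng = np.random.default_rng(0)
X1 = rng.normal(size=(20, 40))   # stand-in for the 40 Gabor feature maps
X2 = rng.normal(size=(20, 64))   # stand-in for the CNN pooling-layer features
W = serial_fuse(X1, X2)          # fused feature, M + N = 104 dims
W_star = pca_reduce(W, 10)       # final fused feature vectors W*
```

`W_star` would then be handed to the SVM for training, as the last step of the claim describes.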
5. The two-dimensional code reading method based on scene perception of claim 1, further comprising: acquiring the illumination intensity of the screen of the electronic terminal and the illumination intensity reflected by the screen of the electronic terminal over a preset time period, and constructing a screen illumination intensity database and a screen-reflected illumination intensity database for the electronic terminal.
6. The two-dimensional code reading method based on scene perception of claim 1, further comprising: intercepting the generated two-dimensional code image suited to the mobile payment device as the payment image, and dividing the payment image;
performing a rough segmentation of the region of interest in the divided payment image according to Otsu's algorithm;
performing a secondary segmentation of the roughly segmented payment image with a gradient-vector-flow active contour model;
and completing the segmentation suited to the payment image by shape-testing the result obtained from the secondary segmentation.
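The rough-segmentation pass of claim 6 relies on Otsu's method; a minimal from-scratch version for an 8-bit grayscale image is sketched below (the gradient-vector-flow refinement and the shape test are separate steps and are not shown).

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance over an
    8-bit grayscale image, i.e. Otsu's method as used for the rough
    segmentation pass."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_p = np.cumsum(prob)                      # class-0 weight up to t
    cum_mean = np.cumsum(prob * np.arange(256))  # class-0 mass up to t
    mean_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (mean_total - cum_mean[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Rough segmentation: dark code modules vs. light background.
img = np.array([[20, 30, 200], [25, 210, 220], [30, 215, 225]], dtype=np.uint8)
mask = img > otsu_threshold(img)
```

On real payment images the binary `mask` would feed the active-contour refinement of the next step.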
7. The two-dimensional code reading method based on scene perception of claim 6, wherein dividing the payment image comprises: selecting a segmentation channel based on statistical rules over the payment image data of the training samples;
selecting a segmentation threshold in the segmentation channel, and performing foreground/background segmentation of the payment image;
and performing connected-component analysis on the segmented foreground and background pixels to obtain a qualified two-dimensional code region, wherein the payment image sub-blocks are divided within the qualified two-dimensional code region by a preset number of rows and a preset number of columns, the preset numbers of rows and columns being equal.
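The final sub-block division of claim 7 can be sketched as follows, assuming a square region and equal preset row and column counts r; cropping the region to a multiple of r is an illustrative policy, not something the claim specifies.

```python
import numpy as np

def divide_subblocks(region, r):
    """Split a two-dimensional-code region into an r x r grid of equal
    sub-blocks (the claim requires equal row and column counts)."""
    h, w = region.shape
    h, w = h - h % r, w - w % r
    region = region[:h, :w]          # crop so blocks divide evenly
    bh, bw = h // r, w // r
    # Reshape to (r, bh, r, bw), then reorder into an (r, r) grid of blocks.
    return region.reshape(r, bh, r, bw).swapaxes(1, 2)

blocks = divide_subblocks(np.arange(36).reshape(6, 6), 3)
# blocks[i, j] is the (i, j) sub-block, here of shape (2, 2)
```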
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1-7 are implemented when the program is executed by the processor.
10. A two-dimensional code reading device based on scene perception, characterized in that the device comprises:
a networking and connection module, used for networking the mobile payment devices which are arranged in a plurality of scenes and provided with a plurality of sensors, adopting a convolutional-recurrent neural network architecture that combines the convolutional neural network of computer vision with the recurrent neural network of neural machine translation to form an end-to-end generation model, and connecting the mobile payment device supporting cooperative application with the input device and the printing apparatus, the electronic terminal controlling the mobile payment device, and the server cluster arranged in the plurality of scenes, wherein the mobile payment device supporting cooperative application with the input device and the printing apparatus comprises the plurality of sensors, which are used for collecting and detecting environmental parameters of the mobile payment devices, arranged in the plurality of scenes, that support cooperative application with the input device and the printing apparatus;
the acquisition and image generation module is used for acquiring data of a plurality of parameters which are sent by the server cluster and are suitable for being recognized and read by the mobile payment equipment in real time and generating a two-dimensional code image suitable for the mobile payment equipment according to the data of the plurality of parameters;
a classification module, used for grouping continuous ranging data collected by at least one ranging sensor for the generated two-dimensional code image into a plurality of data groups according to the adjacency of the corresponding sequences, and classifying the ranging data points in each data group into first ranging sampling points and second ranging sampling points, wherein each data group comprises the same number of ranging data points;
a judging module, used for judging, when a payment event is detected, whether the initial system of the mobile payment device supports a multi-form payment system, wherein the multi-form payment system supports code-scanning payment and contactless card payment and comprises a closed payment system and an open payment system, the closed payment system combining a preset single scene and single payment position with a payment system formed from a virtual stored-value card or a physical stored-value card, and the open payment system combining at least two preset scenes and at least two payment positions with a payment system formed from the virtual stored-value card or the physical stored-value card;
and a deduct-and-scan reading module, used for starting and receiving, if the multi-form payment system is supported and the first ranging sampling point is detected, the reading operation completed by the user in deduct-and-scan mode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811513262.2A CN111311241A (en) | 2018-12-11 | 2018-12-11 | Two-dimensional code reading method and device based on scene perception |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111311241A true CN111311241A (en) | 2020-06-19 |
Family
ID=71150537
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811513262.2A Withdrawn CN111311241A (en) | 2018-12-11 | 2018-12-11 | Two-dimensional code reading method and device based on scene perception |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111311241A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117057384A (en) * | 2023-08-15 | 2023-11-14 | 厦门中盾安信科技有限公司 | User code string generation method, medium and device supporting multi-type business handling |
CN117057384B (en) * | 2023-08-15 | 2024-05-17 | 厦门中盾安信科技有限公司 | User code string generation method, medium and device supporting multi-type business handling |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111311226A (en) | Machine vision-based two-dimensional code reading method and device under complex background | |
CN111311244A (en) | Passive code scanning method and device based on QR (quick response) code | |
CN111311233A (en) | Passive code scanning method and device based on multi-trigger mode | |
CN111311241A (en) | Two-dimensional code reading method and device based on scene perception | |
CN111311230A (en) | Two-dimensional code reading method and device with displacement sensor | |
CN111311227A (en) | Method and device suitable for in-screen type biological feature and two-dimensional code recognition | |
CN111310492A (en) | In-screen two-dimensional code reading method and device suitable for adjustable light source | |
CN111311248A (en) | Method and device for recognizing and reading two-dimensional code under low-power-consumption screen | |
CN109816393B (en) | Method and device for identifying and verifying biological characteristics under screen | |
CN111311225A (en) | Optical module encryption-based in-screen payment method and device | |
CN111311229A (en) | Chinese-sensible code based passive code scanning method and device | |
CN111311237A (en) | Face and bar code double-recognition method and device | |
CN111310490A (en) | Two-dimensional code reading method and device suitable for ARM processor architecture | |
CN111310763A (en) | Method and device for dual recognition of identity card and bar code | |
CN111311236A (en) | Two-dimensional code reading method and device with temperature and humidity sensor | |
CN111310497A (en) | Two-dimensional code reading method and device based on Android system | |
CN111311231A (en) | Two-dimensional code reading method and device used under real-time operating system | |
CN111311232A (en) | Two-dimensional code reading method and device with positioning sensor | |
CN111310500A (en) | Two-dimensional code reading method and device based on Windows system | |
CN111310498A (en) | Two-dimensional code reading method and device based on Linux system | |
CN111311234A (en) | Two-dimensional code reading method and device suitable for MIPS processor architecture | |
CN111311222A (en) | Waving code scanning method and device suitable for multiple communication modes | |
CN111311243A (en) | Code buckling, scanning and scanning method and device suitable for multiple video communication modes | |
CN111310495A (en) | Two-dimensional code reading method and device with optical sensor | |
CN111311235A (en) | Buckling scanning code scanning method and device for identifying multi-trigger mode of bill |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | | Application publication date: 20200619 |