CN111311223A - Multi-user barrier-free code scanning payment method and device applied to android system - Google Patents

Multi-user barrier-free code scanning payment method and device applied to android system

Info

Publication number
CN111311223A
CN111311223A (application CN201811511570.1A)
Authority
CN
China
Prior art keywords
payment
image
mobile payment
machine learning
full
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201811511570.1A
Other languages
Chinese (zh)
Inventor
王越
晏成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Inspiry Technology Co Ltd
Original Assignee
Beijing Inspiry Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Inspiry Technology Co Ltd filed Critical Beijing Inspiry Technology Co Ltd
Priority to CN201811511570.1A priority Critical patent/CN111311223A/en
Publication of CN111311223A publication Critical patent/CN111311223A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/30Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/32Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G06Q20/327Short range or proximity payments by means of M-devices
    • G06Q20/3274Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, being displayed on the M-device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/146Methods for optical code recognition the method including quality enhancement steps

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Cash Registers Or Receiving Machines (AREA)

Abstract

The invention provides a multi-user barrier-free code scanning payment method applied to an Android system. A mobile payment device arranged in multiple scenes and configured with a self-learning-based machine learning model, two full-face screens, dual cameras and an Android control system based on a central processing unit is networked, applied cooperatively with a support, an input device and a printing device, and connected with an electronic terminal and a server cluster. Parameter data are acquired to generate a two-dimensional code image. When a payment event is triggered, code scanning and contactless card-swiping payment are supported and the first ranging sampling point is monitored, the self-learning image recognition and payment of multiple users are completed through a buckle-and-scan operation. The method decodes with a preset algorithm; when a barcode is sensed in front of the device, the display screen turns into transparent glass for reading. Through algorithm processing, the barcode content and the display information are read simultaneously on the basis of the two full-face screens and the dual cameras. The disclosure also provides a device for multi-user barrier-free code scanning payment applied to the Android system.

Description

Multi-user barrier-free code scanning payment method and device applied to android system
Technical Field
The disclosure relates to the technical fields of mobile payment and image recognition, and in particular to a method and a device for multi-user barrier-free code scanning payment applied to an Android system.
Background
In the prior art, the cash register modes applied in various scenes are POS-machine card swiping, cash payment and the like. In a few scenes the payer opens an electronic terminal, scans a static two-dimensional code provided by the merchant (during the manufacture and image acquisition of static payment images, impurities and interference are inevitably mixed into the image, so the image suffers from noise, blurring and uneven gray scale), reads the information of the two-dimensional code and completes the payment operation. These cash register modes are therefore single in form, and self-learning reading of two-dimensional codes and payment operation cannot be achieved on hardware with two full-face screens and dual cameras. The prior art therefore lacks accuracy, flexibility, diversity and applicability.
Disclosure of Invention
In order to solve the above technical problems in the prior art, the disclosed embodiments provide a method and a device for multi-user barrier-free code scanning payment applied to an Android system. A mobile payment device configured with a self-learning-based machine learning model, two full-face screens, an outer shell connecting the two full-face screens, dual cameras suitable for barcode reading and an Android control system based on a central processing unit is arranged in a plurality of scenes and networked; the mobile payment device, which supports cooperative application with an input device and a printing device, is connected with an electronic terminal that controls it and with a server cluster. Data of a plurality of parameters sent by the server cluster and suitable for being read by the mobile payment device are acquired in real time, and a two-dimensional code image suitable for the mobile payment device is generated from these parameter data. Continuous ranging data extracted by at least one ranging sensor for the generated two-dimensional code image are grouped into a plurality of data groups according to the adjacency of the corresponding sequences, and the ranging data points in each data group are classified into first ranging sampling points and second ranging sampling points, each data group containing the same number of ranging data points. When a payment event is monitored to be triggered, it is judged whether the initial system of the mobile payment device supports a multi-form payment system, where the multi-form payment system supports code-scanning payment and contactless card-swiping payment and comprises a closed payment system and an open payment system: the closed payment system is a payment system formed by a virtual or physical stored-value card for a preset single scene and single payment position, and the open payment system is a payment system formed by a virtual or physical stored-value card for at least two preset scenes and at least two payment positions. If the multi-form payment system is supported and the first ranging sampling point is monitored, the reading and payment operation completed by the user in buckle-and-scan mode is received.
In a first aspect, an embodiment of the present disclosure provides a method for multi-user barrier-free code scanning payment applied to an Android system, comprising the following steps. A mobile payment device configured with a self-learning-based machine learning model, two full-face screens, an outer shell connecting the two full-face screens, dual cameras suitable for barcode reading and an Android control system based on a central processing unit is arranged in a plurality of scenes and networked; the mobile payment device, which supports cooperative application with an input device and a printing device, is connected with an electronic terminal that controls it and with a server cluster. The dual cameras suitable for barcode reading sense 360-degree, full-angle light changes bidirectionally and in real time, so that the central processing unit triggers the liquid crystal screen to change the displayed content. Each full-face screen is a fully transparent display screen; the cameras and other light or image sensing modules of the mobile payment device that are suitable for barcode reading are arranged below the fully transparent display screen and sense light or images directly through the full-face screen. The self-learning-based machine learning model is built by collecting the positive and negative samples required for training, performing model training and model testing, and creating and deploying an original machine learning model online; two-dimensional code images that the deployed model cannot identify are stored as negative samples, and when the number of negative samples reaches a set threshold a machine learning training task is triggered, a new machine learning model is created, and the model is updated according to a set model update strategy, so that self-learning reading of two-dimensional code images is completed. Data of a plurality of parameters sent by the server cluster and suitable for being read by the mobile payment device are acquired in real time, and a two-dimensional code image suitable for the mobile payment device is generated from these parameter data. Continuous ranging data extracted by at least one ranging sensor for the generated two-dimensional code image are grouped into a plurality of data groups according to the adjacency of the corresponding sequences, and the ranging data points in each data group are classified into first ranging sampling points and second ranging sampling points, each data group containing the same number of ranging data points. When a payment event is monitored to be triggered, it is judged whether the initial system of the mobile payment device supports a multi-form payment system, where the multi-form payment system supports code-scanning payment and contactless card-swiping payment and comprises a closed payment system and an open payment system: the closed payment system is a payment system formed by a virtual or physical stored-value card for a preset single scene and single payment position, and the open payment system is a payment system formed by a virtual or physical stored-value card for at least two preset scenes and at least two payment positions. If the multi-form payment system is supported and the first ranging sampling point is monitored, the reading and payment operation completed by the user in buckle-and-scan mode is received.
In a second aspect, the disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method described above.
In a third aspect, the disclosed embodiments provide a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method described above when executing the program.
In a fourth aspect, an embodiment of the present disclosure provides an apparatus for multi-user barrier-free code scanning payment applied to an Android system, the apparatus comprising: a networking and connecting module, configured to network a mobile payment device arranged in a plurality of scenes and configured with a self-learning-based machine learning model, two full-face screens, an outer shell, dual cameras suitable for barcode reading and an Android control system based on a central processing unit, and to connect the mobile payment device, which supports cooperative application with an input device and a printing device, with an electronic terminal that controls it and with a server cluster, wherein the dual cameras suitable for barcode reading sense 360-degree, full-angle light changes bidirectionally and in real time so that the central processing unit triggers the liquid crystal screen to change the displayed content, each full-face screen is a fully transparent display screen, the cameras and other light or image sensing modules of the mobile payment device that are suitable for barcode reading are arranged below the fully transparent display screen and sense light or images directly through the full-face screen, and the self-learning-based machine learning model collects the positive and negative samples required for training, performs model training and model testing, creates and deploys an original machine learning model online, stores two-dimensional code images that the deployed model cannot identify as negative samples, triggers a machine learning training task and creates a new machine learning model when the number of negative samples reaches a set threshold, and updates the model according to a set model update strategy so as to complete self-learning reading of two-dimensional code images; an acquisition and image generation module, configured to acquire in real time data of a plurality of parameters sent by the server cluster and suitable for being read by the mobile payment device, and to generate a two-dimensional code image suitable for the mobile payment device from these parameter data; a classification module, configured to group continuous ranging data extracted by at least one ranging sensor for the generated two-dimensional code image into a plurality of data groups according to the adjacency of the corresponding sequences, and to classify the ranging data points in each data group into first ranging sampling points and second ranging sampling points, each data group containing the same number of ranging data points; a judging module, configured to judge, when a payment event is monitored to be triggered, whether the initial system of the mobile payment device supports a multi-form payment system, where the multi-form payment system supports code-scanning payment and contactless card-swiping payment and comprises a closed payment system and an open payment system, the closed payment system being a payment system formed by a virtual or physical stored-value card for a preset single scene and single payment position, and the open payment system being a payment system formed by a virtual or physical stored-value card for at least two preset scenes and at least two payment positions; and a buckle-and-scan payment module, configured to receive the reading and payment operation completed by the user in buckle-and-scan mode if the multi-form payment system is supported and the first ranging sampling point is monitored.
The method and device for multi-user barrier-free code scanning payment applied to an Android system provided by the invention have the following advantages. Payment display can be completed through the common window formed by the liquid crystal window and the light-guide-plate window, and the dual cameras suitable for barcode reading can detect light changes bidirectionally over a full 360-degree angle, so that the central processing unit triggers the liquid crystal screen to change the displayed content, which improves detection efficiency. Specifically, when a barcode is sensed in front of the device, the display content of the transparent screen is removed and the display screen becomes transparent glass so that the barcode can be read conveniently. In addition, through algorithm optimization, the barcode content and the face information can be read simultaneously on the basis of the content displayed on the first full-face screen and on the second full-face screen. Furthermore, self-learning image recognition of two-dimensional code images can be realized efficiently, accurately and quickly in a networked environment that supports the multi-form payment system under multiple scene conditions, so that the subsequent payment operation can be completed quickly, efficiently, flexibly, safely and with wide applicability.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the description of the embodiments are briefly introduced as follows:
fig. 1 is a schematic flowchart illustrating steps of a method for multi-user barrier-free code scanning payment applied to an android system in an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a method for multi-user barrier-free code scanning payment applied to an android system according to another embodiment of the present invention; and
fig. 3 is a schematic structural diagram of an apparatus for multi-user barrier-free code scanning payment applied to an android system in an embodiment of the present invention.
Detailed Description
The present application will now be described in further detail with reference to the accompanying drawings and examples.
As shown in fig. 1, which is a schematic flow chart of a method for multi-user barrier-free code scanning payment applied to an android system in an embodiment, specifically includes the following steps:
Step 102: a mobile payment device configured with a self-learning-based machine learning model, two full-face screens, an outer shell connecting the two full-face screens, dual cameras suitable for barcode reading and an Android control system based on a central processing unit is arranged in a plurality of scenes and networked, and the mobile payment device, which supports cooperative application with an input device and a printing device, is connected with an electronic terminal that controls it and with a server cluster. The mobile payment device supporting cooperative application with the input device and the printing device comprises the dual cameras suitable for barcode reading, which sense 360-degree, full-angle light changes bidirectionally and in real time so that the central processing unit triggers the liquid crystal screen to change the displayed content. It can be understood that, for example, when a camera suitable for barcode reading senses that a barcode is present in front of the device, the display content is removed and the display screen becomes transparent glass so that the barcode can be read conveniently; this enhances the intelligence and usability of the mobile payment device. In addition, through processing based on a preset algorithm in the central processing unit, the barcode content can be read while content is still displayed on the screen, which improves the diversity and flexibility of the mobile payment device. It should also be noted that each full-face screen is a fully transparent display screen; the cameras and the other light or image sensing modules of the mobile payment device that are suitable for barcode reading are arranged under the fully transparent display screen and can sense light or images directly through the full-face screen. Further, the other light or image sensing modules comprise at least one of a supplementary light and a face recognition module. This improves the accuracy of image sensing through the first full-face screen and the diversity and flexibility of face-information sensing through the second full-face screen. Preferably, the first full-face screen is arranged on top of the mobile payment device body, and the second full-face screen is arranged on a side of the body perpendicular to the first full-face screen. Furthermore, a black backlight film is attached under the fully transparent display screen; the backlight film has openings at the positions of the camera and the other light or image sensing modules, and the openings are not smaller than those modules. The camera and the other light or image sensing modules are circular, and the openings in the backlight film are round holes.
In addition, it should be noted that the self-learning-based machine learning model works as follows: the positive and negative samples required for training are collected, model training and model testing are performed, and an original machine learning model is created and deployed online; two-dimensional code images that the deployed model cannot identify are stored as negative samples; when the number of negative samples reaches a set threshold, a machine learning training task is triggered, a new machine learning model is created, and the model is updated according to a set model update strategy, so that self-learning reading of two-dimensional code images is completed.
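The self-learning loop described above can be summarized in code form. The sketch below is a minimal illustration only: the threshold value, the decode hook and the retraining hook are hypothetical placeholders standing in for the deployed model and the triggered training task, not elements defined by this disclosure.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional

NEGATIVE_SAMPLE_THRESHOLD = 1000  # assumed value for the "set threshold" in the text

@dataclass
class SelfLearningDecoder:
    # decode_fn stands in for the deployed machine learning model; retrain_fn
    # stands in for the triggered training task (data cleaning, feature
    # extraction, model training, model testing). Both are hypothetical hooks.
    decode_fn: Callable[[Any], Optional[str]]
    retrain_fn: Callable[[List[Any]], Callable[[Any], Optional[str]]]
    negative_samples: List[Any] = field(default_factory=list)

    def on_image(self, qr_image: Any) -> Optional[str]:
        result = self.decode_fn(qr_image)
        if result is None:
            # Unidentifiable two-dimensional code images are stored as negative samples.
            self.negative_samples.append(qr_image)
            if len(self.negative_samples) >= NEGATIVE_SAMPLE_THRESHOLD:
                # Threshold reached: trigger a training task and create a new model.
                # The switch-over itself is governed by the model update strategy below.
                self.decode_fn = self.retrain_fn(self.negative_samples)
                self.negative_samples.clear()
        return result
```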
It should further be explained that triggering the machine learning training task to create the new machine learning model comprises the following steps: data cleaning, feature extraction, model training and model testing. Updating the model according to the set model update strategy comprises the following. Let the accuracy and the distribution area of the new machine learning model be curP and curA respectively, let the accuracy and the distribution area of the original machine learning model be prevP and prevA respectively, let the residence time of the original machine learning model be T, and let K1 and K2 be time parameters. If curP > prevP, the model is updated; otherwise the distribution area of the new model is taken as curA = curP, and the distribution area prevA of the original model is calculated as follows:
A. if T ≤ K1, prevA = prevP;
B. if K1 < T < K2, prevA = prevP × (1 + (K2 − T)/(K2 − K1));
C. if T ≥ K2, prevA = 0.
A random number R = Random(0, 1) × (curA + prevA) is then generated; if R < curA the model is updated, otherwise it is not.
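Read literally, the strategy above amounts to a time-weighted roulette selection between the old and new models. The sketch below mirrors that description; treating Random(0, 1) as a uniform draw on [0, 1) and K1/K2 as deployment-time thresholds is an interpretation, not a quotation.

```python
import random

def should_update_model(cur_p: float, prev_p: float, t: float,
                        k1: float, k2: float) -> bool:
    """Model update strategy as described above; t is the residence time
    of the currently deployed (original) model and k1 < k2 are time parameters."""
    if cur_p > prev_p:
        # The new model is strictly more accurate: update immediately.
        return True
    # Otherwise compare the "distribution areas" of the two models.
    cur_a = cur_p
    if t <= k1:
        prev_a = prev_p
    elif t < k2:
        prev_a = prev_p * (1 + (k2 - t) / (k2 - k1))
    else:  # t >= k2
        prev_a = 0.0
    # Weighted random choice between the two areas.
    r = random.random() * (cur_a + prev_a)
    return r < cur_a
```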
As can be understood by those skilled in the art, the control system based on the central processing unit mainly comprises a code-scanning sensing module, a backlight module, a light supplementing module, an image acquisition module, a decoding module and a main control module. Specifically, the code-scanning sensing module is located on the mobile payment device, namely on top of the code-scanning box; it senses the user's code-scanning action and sends this sensing information to the main control module, where the main control chip processes it. After the main control chip receives the user's code-scanning signal, it controls the transparent display window to close all display content; the window then becomes transparent glass and allows the two-dimensional code image information to pass through it for imaging. The sensing module includes, but is not limited to, one or more of the following forms: an infrared distance sensing module; an ultrasonic distance sensing module; a light sensing module; and an electromagnetic field sensing module. In addition, it should be noted that the code-scanning sensing function can also be realized by multiplexing the image acquisition module, which monitors movement changes or light changes to sense the code-scanning action.
Furthermore, the transparent display window can display corresponding prompt guide information according to the instruction sent by the main control module, and can also close all displays according to the instruction sent by the main control module to become a transparent glass window, so that the light of the two-dimensional code image is allowed to normally pass through, and imaging is carried out in the image acquisition module. The transparent display window can be selected from transparent liquid crystal display screens, such as transparent TN-LCD, transparent TFT-LCD, transparent OLED display screens, transparent PDP display screens and plasma display screens.
Furthermore, it should be noted that the backlight module uses white LED lamps as the light source and forms uniform white light through a light guide plate, a light bowl and the like; this uniform white light serves both as the backlight of the transparent display screen and as supplementary illumination for code scanning so that clear two-dimensional code image information can be obtained. The light supplementing module is located above the transparent display screen, that is, on the two sides of the transparent display window together with the image acquisition module, and provides auxiliary illumination when a paper barcode is read so that clear barcode image information can be obtained. The image acquisition module acquires image information in front of the device and sends the acquired two-dimensional code image information to the decoding module. The decoding module performs image processing on the acquired two-dimensional code image information, decodes the two-dimensional code according to a decoding algorithm, and transmits the decoding result to the main control module. The main control module operates and controls each functional module of the device.
Further, before scanning the code, the device displays scanning guide information, money prompting information or other advertisement contents on the transparent display window. When the code scanning sensing module senses the code scanning payment action, the window display information is closed to form a completely transparent code scanning reading optical window, the image acquisition module acquires the two-dimensional code information in front through the transparent optical window, the decoding module acquires the two-dimensional code image information for decoding, and after the decoding is completed, the transparent display window displays the result information, the advertisement information and other contents. The device then enters the next code scanning loop logic. Therefore, the diversity and flexibility of the application of the mobile payment device are increased.
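The display/decode cycle in the two preceding paragraphs can be expressed as a simple loop. The sketch below is illustrative only; sense_scan_action, capture_frame, decode_qr and the display calls are hypothetical stand-ins for the sensing, acquisition, decoding and main control modules.

```python
import time
from typing import Any, Callable, Optional

def scan_loop(sense_scan_action: Callable[[], bool],
              capture_frame: Callable[[], Any],
              decode_qr: Callable[[Any], Optional[str]],
              show: Callable[[str], None],
              clear_display: Callable[[], None]) -> None:
    """One pass per code-scanning cycle, mirroring the flow described above."""
    while True:
        # Before scanning: show guidance, amount prompts or advertisement content.
        show("scan guidance / amount prompt / advertisement")
        # Wait until the sensing module detects a code-scanning payment action.
        while not sense_scan_action():
            time.sleep(0.05)
        # Close the window display so it becomes a fully transparent optical window.
        clear_display()
        # Acquire the two-dimensional code through the transparent window and decode it.
        result = decode_qr(capture_frame())
        # After decoding, display result information and advertisements,
        # then enter the next code-scanning cycle.
        show(result if result is not None else "decode failed")
```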
In addition, it should be further noted that the body of the mobile payment device involved in the method for multi-user barrier-free code scanning payment applied to the Android system proposed by the present disclosure may also be configured with at least one sensor. The at least one sensor collects and monitors a plurality of environmental parameters of the mobile payment device, which is arranged in a plurality of scenes and supports cooperative application with the input device and the printing device. When the collected and monitored environmental parameters exceed the corresponding preset environmental parameters, the mobile payment device can issue a reminder through its internal master controller. The reminder includes, but is not limited to, an audible-and-visual alert or a buzzer alert; this improves the interactivity and user experience of the mobile payment device. Further, it should be noted that the sensors applied in the method include, but are not limited to, a light sensor, a linear displacement sensor, an angular displacement sensor, and a temperature and humidity sensor. This improves the usability of the electronic terminal, running a real-time operating system, when reading and paying with a multi-sensor mobile payment device.
In addition, it should be noted that the mobile payment device may be a spine-type mobile payment device configured with a common window formed by the liquid crystal window and the light-guide-plate window, and it includes a code-scanning lamp bowl. Specifically, the common window of the liquid crystal window and the light-guide-plate window comprises a common window main body; a fixed window is arranged on the main body, and the fixed window comprises a first characteristic window and a second characteristic window arranged crosswise; at least one fixing device is arranged on each of the first characteristic window and the second characteristic window. The fixing device comprises a hook and a clamping groove; the hook and the clamping groove are respectively arranged on two opposite sides of the first characteristic window or the second characteristic window, and the liquid crystal window or the light-guide-plate window is first clamped by the clamping groove and then fixed by the hook. Alternatively, the fixing device comprises hooks arranged in pairs, each pair of hooks being arranged on two opposite sides of the first characteristic window or the second characteristic window. Each hook comprises a fixed connecting portion and a clamping portion; the fixed connecting portion is fixedly connected with the common window main body, and the clamping portion is fixedly arranged on one side of the fixed connecting portion. The side of the clamping portion facing away from the main body is provided with a slide-in inclined surface, which facilitates insertion of the light-guide-plate window or the liquid crystal window. The side face of the clamping portion close to the common window main body, that is, the side facing the light-guide-plate window or the liquid crystal window, is perpendicular to the fixed connecting portion. The fixed connecting portion is made of an elastic material. On different hooks, the distances between the clamping portions and the common window main body are different. The first characteristic window and the second characteristic window are arranged coaxially. A third characteristic window is also included; the third characteristic window intersects and communicates with the first characteristic window and the second characteristic window respectively, and at least one fixing device is arranged on the third characteristic window.
In addition, the input device is an input keyboard used cooperatively with a desktop computer, or the input keyboard of an all-in-one PC, or a numeric function keyboard with calculation and auxiliary-payment functions. The printing device is a printer, which specifically comprises: a machine body provided with a paper feeding inlet and a printing outlet; a thermal printing module arranged in the machine body; at least one low-temperature cooling cavity arranged between the paper feeding inlet and the thermal printing module; and a laminating module, a cold-pressing module and a cutting module connected in sequence between the thermal printing module and the printing outlet through a transmission mechanism. Specifically, the at least one low-temperature cooling cavity reduces the surface temperature of the paper with cold air; the thermal printing module prints the paper and conveys the printed paper to the laminating module; the laminating module receives the paper conveyed by the thermal printing module, laminates it and conveys the laminated paper to the cold-pressing module; the cold-pressing module receives the paper conveyed by the laminating module, cold-presses the laminated surface and conveys the cold-pressed paper to the cutting module; and the cutting module receives the paper conveyed by the cold-pressing module, cuts it according to the specification and conveys the cut paper to the printing outlet. In addition, a low-temperature cooling cavity may be arranged between the thermal printing module and the laminating module. Furthermore, a display module and a controller connected with each other are arranged on the machine body; the thermal printing module, the laminating module, the cold-pressing module and the cutting module are all connected with the controller and report their operating states to the controller, and the controller transmits the operating states to the display module.
In one embodiment, the connecting the mobile payment device supporting the cooperative application with the input device and the printing device, the electronic terminal controlling the mobile payment device, and the server cluster, which are arranged in a plurality of scenes, comprises: connecting at least one mobile payment device arranged in a plurality of scenes with a cloud server cluster through WIFI; and connecting at least one mobile payment device arranged in a plurality of scenes with an electronic terminal for controlling the mobile payment device through Bluetooth connection. In addition, at least one mobile payment device arranged in a plurality of scenes can be connected with the electronic terminal for controlling the mobile payment device through wired connection. Therefore, the diversity and the multi-selectivity of the networking layout are improved.
Step 104: data of a plurality of parameters sent by the server cluster and suitable for payment by the mobile payment device are acquired in real time, and a two-dimensional code image suitable for the mobile payment device is generated from the data of the plurality of parameters.
In addition, it should be noted that the two-dimensional code image may be generated by combining the data of the plurality of parameters with the product code. Specifically, the setting information required in the two-dimensional code is acquired; the acquired setting information is converted into a binary file; the converted binary file is subjected to the information segmentation required by the structural-link mode to generate a plurality of different pieces of binary information carrying structural-link characteristic characters, where the number of segments can be set, as a two-dimensional code value in the range 2-32, according to the size and application of the setting information; the binary file is split into the corresponding number of parts within the range 2-32, and a corresponding start character and end character are added before and after the binary file of each split part; original binary coding information, coded part by part, is provided for the coding part corresponding to each split part; the plurality of different pieces of binary information with structural-link characteristic characters are encoded one by one, with or without encryption, into correspondingly ordered two-dimensional codes; the information in the commodity code is then obtained by combination, and information conversion, encryption and ordering are carried out according to the same principle, finally forming a plurality of two-dimensional code images ordered in a certain sequence.
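As a rough illustration of the structural-link segmentation just described, the sketch below splits a binary payload into 2-32 parts and wraps each part with start/end markers; the marker bytes, the one-byte sequence index and the final QR encoding step (not shown) are assumptions made for illustration, not the format defined by the disclosure.

```python
from typing import List

START_MARK = b"\x01"  # assumed start character
END_MARK = b"\x04"    # assumed end character

def split_for_structured_link(payload: bytes, n_parts: int) -> List[bytes]:
    """Split a binary file into 2-32 parts, adding start/end characters and a
    sequence index to each part, as in the structural-link mode described above."""
    if not 2 <= n_parts <= 32:
        raise ValueError("the number of segments must be in the range 2-32")
    chunk = -(-len(payload) // n_parts)  # ceiling division: roughly equal chunks
    parts = []
    for i in range(n_parts):
        body = payload[i * chunk:(i + 1) * chunk]
        # The sequence byte keeps the parts correspondingly ordered for recombination.
        parts.append(START_MARK + bytes([i]) + body + END_MARK)
    return parts

# Each element returned above would then be encoded (optionally after encryption)
# into one of the ordered two-dimensional code images.
```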
In one embodiment, the method for multi-user barrier-free code scanning payment applied to the Android system proposed by the present disclosure further includes: selecting a plurality of two-dimensional code images as a training sample set and judging the number of samples; if the number of training samples is insufficient, augmenting the sample set to a preset number range; creating a CNN network and initializing the parameter values of the CNN and of the SVM; creating Gabor filters and applying them to each sample image Ii with orientations θ = 0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8 and scales f = 0, 1, 2, 3, 4, generating 40 feature maps; using a 9 × 9 grid to reduce each 70 × 70 feature map to 8 × 8, and concatenating the reduced feature maps to form a feature vector Xi1 = [x11, x12, …, x1,m]; and inputting the same sample images Ii into the created CNN network in order according to the batch size and calculating the output of each convolution layer and each pooling layer in the hidden layers, where the output of the pooling layer is used as the CNN-extracted feature Xi2 = [x21, x22, …, x2,n].
Suppose the strong features of all samples are X1 = [x11, x12, …, x1,M] and the features automatically extracted by the CNN network are X2 = [x21, x22, …, x2,N]. The feature vectors X1 and X2 are standardized and fused in series to obtain the fused feature W = (w1, w2, …, wM+N) = (αX1, βX2). The dimensionality of W is reduced by PCA to obtain the final fused feature vector, which is input into the SVM for training until the error falls within the preset range or the preset maximum number of training iterations is reached. This provides good algorithmic support for automatically extracting features of two-dimensional code images and quickly reading two-dimensional codes subsequently.
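A compact sketch of this Gabor-plus-CNN feature fusion pipeline follows, using OpenCV and scikit-learn. The Gabor wavelength and sigma values, the average pooling used to approximate the 9 × 9 grid reduction, and treating the CNN branch as a pre-computed feature matrix are assumptions made for illustration; the weights α and β follow the serial fusion W = (αX1, βX2) above.

```python
import numpy as np
import cv2
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def gabor_features(img70: np.ndarray) -> np.ndarray:
    """8 orientations x 5 scales = 40 Gabor feature maps for a 70x70 sample,
    each pooled from 70x70 down to 8x8 and concatenated (strong feature X1)."""
    thetas = [k * np.pi / 8 for k in range(8)]       # 0, pi/8, ..., 7pi/8
    lambdas = [4.0 * (2 ** f) for f in range(5)]     # assumed wavelengths for f = 0..4
    feats = []
    for theta in thetas:
        for lam in lambdas:
            kern = cv2.getGaborKernel((9, 9), sigma=2.0, theta=theta,
                                      lambd=lam, gamma=0.5)
            fmap = cv2.filter2D(img70.astype(np.float32), cv2.CV_32F, kern)
            pooled = cv2.resize(fmap, (8, 8), interpolation=cv2.INTER_AREA)
            feats.append(pooled.ravel())
    return np.concatenate(feats)

def fuse_and_train(images70, cnn_feats: np.ndarray, labels,
                   alpha: float = 1.0, beta: float = 1.0):
    """Serial fusion W = (alpha*X1, beta*X2), PCA reduction, SVM training."""
    x1 = np.stack([gabor_features(im) for im in images70])
    x1 = (x1 - x1.mean(0)) / (x1.std(0) + 1e-8)      # standardization
    x2 = (cnn_feats - cnn_feats.mean(0)) / (cnn_feats.std(0) + 1e-8)
    w = np.hstack([alpha * x1, beta * x2])
    w = PCA(n_components=min(64, w.shape[0], w.shape[1])).fit_transform(w)
    return SVC(kernel="rbf").fit(w, labels)
```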
Further, the method for multi-user barrier-free code scanning payment applied to the Android system involved in the present disclosure also includes: intercepting the generated two-dimensional code image suitable for the mobile payment device, taking the intercepted image as the payment image and dividing it; and performing a rough segmentation of the region of interest in the divided payment image according to the Otsu algorithm, which splits the original image into a foreground image and a background image using a threshold. Specifically, for the foreground, the number of points, the mass moment and the average gray level under the current threshold are denoted n1, csum and m1; for the background, they are denoted n2, sum − csum and m2. At the optimal threshold the difference between background and foreground is largest, and the key is the criterion used to measure this difference; in the Otsu algorithm it is the between-class variance, denoted sb, whose maximum is denoted fmax. Regarding the sensitivity of the Otsu algorithm to noise and target size, it only produces a good segmentation result on images whose between-class variance has a single peak; when the sizes of the target and the background differ greatly, the between-class variance criterion function may show two or more peaks and the result is poor, but among such algorithms it is the least time-consuming. The formula of the Otsu algorithm is derived as follows: let t be the segmentation threshold between foreground and background, let the foreground points account for a proportion w0 of the image with average gray level u0, and let the background points account for a proportion w1 with average gray level u1. The total average gray level of the image is u = w0·u0 + w1·u1. The variance between the foreground and background images can be expressed as
g = w0·(u0 − u)² + w1·(u1 − u)² = w0·w1·(u0 − u1)².
When the variance g is maximal, the difference between foreground and background is considered maximal, and the gray level t at that point is the optimal threshold; that is, the threshold is chosen to maximize sb = w0·w1·(u0 − u1)².
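The between-class-variance criterion sb = w0·w1·(u0 − u1)² can be computed directly from the image histogram. Below is a minimal NumPy sketch that follows the variable names used in the text (n1/csum/m1 for the foreground, n2/m2 for the background); it is illustrative, not the patented decoder.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the gray level t that maximizes sb = w0 * w1 * (u0 - u1)^2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)    # total mass moment
    n1 = 0.0      # foreground point count under the current threshold
    csum = 0.0    # foreground mass moment under the current threshold
    fmax, best_t = -1.0, 0
    for t in range(256):
        n1 += hist[t]
        if n1 == 0:
            continue
        n2 = total - n1                        # background point count
        if n2 == 0:
            break
        csum += t * hist[t]
        m1 = csum / n1                         # foreground average gray level
        m2 = (sum_all - csum) / n2             # background average gray level
        w0, w1 = n1 / total, n2 / total
        sb = w0 * w1 * (m1 - m2) ** 2          # between-class variance
        if sb > fmax:
            fmax, best_t = sb, t
    return best_t
```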
Further, performing secondary segmentation on the roughly segmented payment image by using an active contour model of the gradient vector flow; and completing the segmentation operation suitable for the payment image by shape testing on the result obtained after the secondary segmentation operation.
Further, it should be noted that dividing the payment image comprises: selecting a segmentation channel based on statistical rules of the payment image data of the training samples; selecting a segmentation threshold in the segmentation channel and segmenting the payment image into foreground and background; and performing connected-region analysis on the segmented foreground and background pixels to obtain a coding region that meets the conditions, where the qualifying coding region is divided into payment image sub-blocks with a preset number of rows and columns, the preset numbers of rows and columns being equal. This provides the necessary data basis for subsequent rapid recognition of the payment image.
Further, selecting the segmentation channel based on statistical rules of the payment image data of the training samples comprises: obtaining, from those statistical rules, the distribution of image values in the different color channels, and taking the color channel with the largest image-value variance as the segmentation channel. It should also be noted that selecting a segmentation threshold in the segmentation channel and segmenting the payment image into foreground and background comprises: obtaining the segmentation threshold through the Otsu (Dajin) algorithm; acquiring the image pixel values of the payment image; and performing a binary segmentation according to the image pixel values and the segmentation threshold to obtain the foreground and the background. Performing the binary segmentation according to the image pixel values and the segmentation threshold comprises: taking the region whose image pixel values are higher than the segmentation threshold as the foreground, and taking the region whose image pixel values are lower than or equal to the segmentation threshold as the background.
Further, performing connected-region analysis on the segmented foreground and background pixels to obtain a qualifying coding region comprises: clustering the segmented foreground and background pixels into connected regions; and selecting, among the connected regions, the region of largest size that also satisfies the prior position information to form the qualifying coding region, which is then output. It should be noted that completing the segmentation of the payment image by shape testing of the segmentation result comprises an area test: judging whether the number of pixel points in the region of interest falls within the preset pixel-count interval of a normal coding area. It also comprises a deformity test: the deformity of the region of interest is calculated by the simple formula γ = l / Np, where l is the perimeter of the region of interest and Np is the number of pixel points in the region of interest; a deformity threshold γT is preset; when γ ≤ γT, the result of the rough segmentation is judged to pass the deformity test; when γ > γT, a secondary rough segmentation of the region of interest is carried out with the segmentation method of the gradient-vector-flow active contour model, and the segmentation of the payment image is completed by shape testing of the result of that secondary rough segmentation.
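A sketch of the channel selection, binary segmentation, connected-region analysis and the area and deformity tests described above, using OpenCV. The per-channel variance criterion is as stated in the text; the area-test bounds and the deformity threshold γT used here are illustrative assumptions.

```python
import numpy as np
import cv2

def segment_payment_image(bgr: np.ndarray, gamma_t: float = 0.2,
                          min_pixels: int = 400, max_pixels: int = 200000):
    # 1. The color channel with the largest variance becomes the segmentation channel.
    channel = max(cv2.split(bgr), key=lambda c: float(np.var(c)))
    # 2. Otsu threshold; pixels above the threshold form the foreground.
    _t, fg = cv2.threshold(channel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 3. Connected-region analysis: keep the largest region as the coding region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    if n < 2:
        return None
    idx = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    region = (labels == idx).astype(np.uint8)
    n_p = int(stats[idx, cv2.CC_STAT_AREA])
    # 4. Area test: pixel count must fall in the normal coding-area interval (assumed bounds).
    if not (min_pixels <= n_p <= max_pixels):
        return None
    # 5. Deformity test: gamma = perimeter / pixel count, compared with gamma_T.
    contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    perimeter = cv2.arcLength(contours[0], True)
    if perimeter / n_p > gamma_t:
        return None  # fails the test; fall back to GVF active-contour segmentation
    return region
```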
Step 106: continuous ranging data extracted by at least one ranging sensor for the generated two-dimensional code image are grouped into a plurality of data groups according to the adjacency of the corresponding sequences, and the ranging data points in each data group are classified into first ranging sampling points and second ranging sampling points, each data group containing the same number of ranging data points.
In addition, it should be noted that the method for multi-user barrier-free code scanning payment applied to the android system provided by the disclosure can also be used for recognizing and reading a combined image of a two-dimensional code image and an identity card image. Specifically, the specific steps of combining the two-dimensional code image with the acquired identification card image are as follows: acquiring a two-dimensional code image and an identity card image to be fused; determining an alpha image of the two-dimensional code image; adjusting the transparency of the two-dimensional code image through the alpha image to obtain a foreground image; adjusting the transparency of the identity card image through the alpha image to obtain a background image; and fusing the foreground image and the background image. Further, before acquiring the two-dimensional code image and the identity card image to be fused, the method further comprises the following steps: and adjusting the size of the two-dimensional code image or the identity card image so that the two-dimensional code image is superposed at the preset position of the identity card image.
Further, determining an alpha image of the two-dimensional code image includes: determining an alpha image of the two-dimensional code image by an image coding processing method; determining an alpha image of the two-dimensional code image by the image coding processing method comprises the following steps: reading the two-dimensional code image through a first OpenCV function to convert the two-dimensional code image into a first matrix; separating the first matrix through a second OpenCV function to obtain an alpha image channel matrix; and determining an alpha image according to the alpha image channel matrix. Further, adjusting the transparency of the two-dimensional code image through the alpha image to obtain a foreground image, comprising: separating the first matrix through a second OpenCV function to obtain a first RGB image channel matrix; point-multiplying the first RGB image channel matrix and the alpha image channel matrix through a third OpenCV function to obtain a foreground matrix; and determining a foreground image according to the foreground matrix. In addition, it should be further noted that, adjusting the transparency of the identity card image through the alpha image to obtain the background image includes: calculating an alpha pixel inverse matrix; reading the identity card image through a first OpenCV function to convert the identity card image into a second matrix; separating the second matrix through a second OpenCV function to obtain a second RGB image channel matrix; point-multiplying the second RGB image channel matrix and the alpha pixel inverse matrix through a third OpenCV function to obtain a background matrix; and determining a background image according to the background matrix. Further, fusing the foreground image and the background image, comprising: adding the foreground matrix and the background matrix to obtain a fusion image matrix; and merging the fused image matrix through a fourth OpenCV function to obtain a fused image. And combining the generated two-dimensional code image with the fused image of the acquired identification card image by at least one ranging sensor to extract continuous ranging data, grouping the continuous ranging data into a plurality of data groups according to the adjacent relation of the corresponding sequences, and classifying the ranging data points in each data group to be divided into a first ranging sampling point and a second ranging sampling point, wherein each data group comprises the same number of ranging data points. Therefore, the applicability of the mobile payment equipment for reading the bar code is improved.
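The OpenCV-based fusion just described corresponds to standard alpha blending: foreground = QR image × α, background = ID-card image × (1 − α), fused result = foreground + background. A minimal sketch follows, assuming the two-dimensional code image is a 4-channel BGRA PNG; reading the "first/second/third/fourth OpenCV functions" in the text as imread, split/merge, element-wise multiplication and addition is an interpretation, not a quotation.

```python
import cv2
import numpy as np

def fuse_qr_onto_id(qr_bgra_path: str, id_bgr_path: str) -> np.ndarray:
    """Alpha-blend a (resized) two-dimensional code image onto an ID-card image."""
    qr = cv2.imread(qr_bgra_path, cv2.IMREAD_UNCHANGED)   # BGRA, read as a matrix
    idc = cv2.imread(id_bgr_path, cv2.IMREAD_COLOR).astype(np.float32)
    # Resize so the code overlays the preset position (here: the whole card, assumed).
    qr = cv2.resize(qr, (idc.shape[1], idc.shape[0]))
    b, g, r, a = cv2.split(qr)                            # separate the channel matrices
    alpha = cv2.merge([a, a, a]).astype(np.float32) / 255.0
    rgb = cv2.merge([b, g, r]).astype(np.float32)
    foreground = rgb * alpha                              # QR transparency adjusted
    background = idc * (1.0 - alpha)                      # ID-card transparency adjusted
    fused = foreground + background                       # fused image matrix
    return fused.astype(np.uint8)
```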
In addition, it should be further noted that classifying the ranging data points in each data group into first and second ranging sampling points comprises: selecting, according to a selection rule, one or more ranging data points from each data group as first ranging sampling points, with the remaining ranging data points serving as second ranging sampling points, each data group containing the same number of key ranging sampling points. The selection rule includes at least one of the following: selecting from each data group, as first ranging sampling points, the ranging data points of a valid range measurement, where a valid range measurement satisfies any of the following conditions: the reading exceeds a preset distance, a target distance is measured, or the measured signal lies within a preset range; and determining, in each data group, the ranging data points whose distance to the other ranging data points exceeds a preset distance threshold as first ranging sampling points.
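A sketch of this grouping and classification step follows. The group size, the valid-range bounds and the distance threshold are assumed parameters, and interpreting "distance to the other ranging data points" as distance to the group mean is an assumption; the function simply mirrors the two selection rules quoted above.

```python
from typing import List, Tuple

def classify_ranging_points(samples: List[float], group_size: int = 8,
                            valid_range: Tuple[float, float] = (0.02, 0.60),
                            distance_threshold: float = 0.15):
    """Group consecutive ranging data by adjacency and split each group into
    first (key) and second ranging sampling points."""
    groups = [samples[i:i + group_size]
              for i in range(0, len(samples) - group_size + 1, group_size)]
    first, second = [], []
    for group in groups:
        mean = sum(group) / len(group)
        for d in group:
            is_valid = valid_range[0] <= d <= valid_range[1]      # rule 1 (assumed bounds)
            is_outlier = abs(d - mean) > distance_threshold       # rule 2 (assumed reading)
            (first if (is_valid or is_outlier) else second).append(d)
    return first, second
```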
Step 108: when a payment event is monitored to be triggered, it is judged whether the initial system of the mobile payment device supports a multi-form payment system. The multi-form payment system supports code-scanning payment and contactless card-swiping payment and comprises a closed payment system and an open payment system. The closed payment system is a payment system formed by a virtual or physical stored-value card for a preset single scene and a single payment position; the open payment system is a payment system formed by a virtual or physical stored-value card for at least two preset scenes and at least two payment positions.
Further, it can be understood that a closed payment system is, for example, a stored-value system in a retail store: the consumer deposits money for later use on a stored-value card (virtual or physical) that can only be redeemed at that one store, and a mobile application may be deployed to allow the consumer to top up the stored value. In use, a QR code or barcode is displayed at the point of sale; the user can top up the card without limit and deposits only the money intended to be spent at that specific merchant, which avoids exposing the user's financial information and bank account and is also an effective way for the user to budget for a specific type of consumption, such as groceries or restaurants. Merchants typically combine customer loyalty programs with a closed payment system, such as a closed payment card, to keep customers coming back. An open payment system is likewise, for example, a stored-value system in a retail store, restaurant or grocery store in which the consumer deposits money for later use, but it is based on a stored-value card (virtual or physical) that can be redeemed at multiple stores, and a mobile application may be deployed to allow the consumer to top up the stored value. In use, a QR code or barcode is displayed at the point of sale; the user can top up the card without limit and deposits only the money intended to be spent at specific merchants, which facilitates compatibility and data sharing of the user's financial information and bank account across multiple payment scenes and payment positions and is an effective way for the user to budget for a specific type of consumption, such as groceries or restaurants. Merchants typically combine customer consumption incentives with an open payment system, such as an open payment card, to encourage repeat consumption.
Step 110: if the multi-form payment system is supported and the first ranging sampling point is monitored, the reading and payment operation completed by the user in buckle-and-scan mode is received. It should further be noted that the method for multi-user barrier-free code scanning payment applied to the Android system proposed by the present disclosure also includes completing the payment display through the common window of the liquid crystal window and the light-guide-plate window; this improves the convenience and ease of payment after the two-dimensional code has been recognized accurately and quickly. The common window of the liquid crystal window and the light-guide-plate window is used to display the specific payment amount and the payment status (payment in progress, payment success or payment failure). When the common window is not being used for payment display, it plays the advertisement information pushed by the cloud server and the promotional content of the scene where the device is located. The common window of the liquid crystal window and the light-guide-plate window is therefore multifunctional, and its payment display is flexible and efficient.
Specifically, if the multi-form payment system is supported and the first ranging sampling point is monitored, the recognition operation completed by the user in buckle-scan mode includes: establishing a mapping relationship between the features of each checkout commodity and its price; obtaining, according to the mapping relationship, the price of each commodity and the total price of the commodities corresponding to the current payment image; and completing the checkout operation according to the total price corresponding to the current payment image. The price of each commodity is obtained from the mapping relationship, and the prices are accumulated to give the total price for the current payment image. The accumulated commodity prices are pre-stored, and they can also be analyzed and obtained quickly through deep learning from the user's historical shopping data. To improve the user experience, the checkout data and completion status are displayed. The buckle-scan mode means that the user holds the electronic terminal with its display screen facing the two-dimensional-code scanning window of the mobile payment device. Those skilled in the art will understand that the buckle-scan mode can be sensed effectively by the built-in binocular camera or at least one sensor of the mobile payment device, which provides technical support for efficient payment.
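A minimal sketch of the price mapping and accumulation described above follows; build_price_map and total_for_payment_image are hypothetical helper names, and the barcode values in the usage lines are made up for illustration.

```python
def build_price_map(catalog_rows):
    """Map a commodity feature key (e.g. a barcode value) to its unit price."""
    return {row["feature"]: row["price"] for row in catalog_rows}

def total_for_payment_image(recognized_features, price_map):
    """Accumulate the unit prices of every commodity recognized in the current
    payment image to obtain the amount to charge at checkout."""
    return sum(price_map[f] for f in recognized_features if f in price_map)

# usage
price_map = build_price_map([{"feature": "6901234567892", "price": 3.5},
                             {"feature": "6900000000001", "price": 12.0}])
print(total_for_payment_image(["6901234567892", "6900000000001"], price_map))  # 15.5
```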
In one embodiment, the method for multi-user barrier-free code-scanning payment applied to the android system further includes: after it is monitored that the payment event is triggered, deleting the payment image from the picture library while the electronic terminal is charging and setting a default picture in the built-in system of the electronic terminal as the prompt image; and, when the current battery level of the electronic terminal is lower than a preset battery threshold, setting a default picture in the built-in system of the electronic terminal as the prompt image. The prompt image is a power-off or low-battery prompt image for the mobile payment device. The method further includes: obtaining, within a preset time period, the illumination intensity of the screen of the electronic terminal and the illumination intensity reflected by the screen, and building a screen illumination intensity database and a screen-reflected illumination intensity database for the electronic terminal. In this way, the corresponding illumination intensity can be adapted to different mobile payment device models in different scenes, so that the payment operation is completed quickly and accurately.
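The two illumination databases can be thought of as per-device-model histories of measured screen and reflected illuminance. The sketch below, with the assumed class name IlluminationDB, only illustrates this bookkeeping and is not taken from the disclosure.

```python
from collections import defaultdict
from statistics import mean

class IlluminationDB:
    """Per-device-model history of screen illuminance and screen-reflected
    illuminance, usable for picking a scanning exposure for a given scene."""
    def __init__(self):
        self.screen = defaultdict(list)     # model -> screen illuminance samples (lux)
        self.reflected = defaultdict(list)  # model -> reflected illuminance samples (lux)

    def record(self, model, screen_lux, reflected_lux):
        self.screen[model].append(screen_lux)
        self.reflected[model].append(reflected_lux)

    def typical(self, model):
        """Mean screen and reflected illuminance for a model, or None if unseen."""
        if model not in self.screen:
            return None
        return mean(self.screen[model]), mean(self.reflected[model])
```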
In order to understand and apply the method for multi-user barrier-free code-scanning payment applied to the android system more clearly and accurately, the following example is given in conjunction with fig. 2; it should be noted that the scope of protection of the present disclosure is not limited to this example.
Specifically, steps 201 to 208 are, in order: receiving a plurality of images; dividing each image into N×N sub-blocks and performing a rough segmentation with the Otsu algorithm; judging whether the region of interest conforms to the basic coding form and, if it does, sending the image of the region of interest to a preset feature model to complete feature extraction of the payment image; if the region of interest does not conform to the basic coding form, performing a secondary segmentation with the active contour model based on gradient vector flow and judging again whether the region of interest conforms to the basic coding form, and if it now conforms, sending the image of the region of interest to the preset feature model to complete feature extraction of the payment image; and if the region of interest still does not conform to the basic coding form, discarding it as an impurity in the payment image.
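The control flow of steps 201 to 208 could be organized roughly as below. This is a sketch under stated assumptions only: OpenCV's Otsu threshold stands in for the rough segmentation of an 8-bit grayscale block, while conforms_to_coding_form, gvf_refine and extract_features are placeholders for the coding-form test, the gradient-vector-flow refinement and the preset feature model, none of which are specified here.

```python
import cv2

def otsu_rough_segment(gray_block):
    """Rough foreground/background split of one sub-block with Otsu's threshold."""
    _, mask = cv2.threshold(gray_block, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def process_payment_image(gray, n, conforms_to_coding_form, gvf_refine, extract_features):
    """Split the image into n x n sub-blocks, rough-segment each with Otsu,
    keep regions that look like code, refine the rest with the GVF active
    contour, and drop what still fails as impurities."""
    h, w = gray.shape
    bh, bw = h // n, w // n
    accepted = []
    for r in range(n):
        for c in range(n):
            block = gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            roi = otsu_rough_segment(block)
            if not conforms_to_coding_form(roi):
                roi = gvf_refine(block, roi)          # secondary segmentation
                if not conforms_to_coding_form(roi):  # still failing: treat as impurity
                    continue
            accepted.append(extract_features(roi))
    return accepted
```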
It can be understood that the received payment image is divided; the region of interest in the divided payment image is roughly segmented with the Otsu algorithm and then subjected to a secondary segmentation; and the segmentation suited to the payment image is completed by applying a shape test to the result of the secondary segmentation. Specifically, the payment image is roughly segmented with the Otsu algorithm and secondarily segmented with the active contour model based on gradient vector flow to obtain a noise-free payment image that is convenient to read; the segmentation results are then subjected to the shape test.
The test conditions are as follows. Area test: whether the number of pixels N_p in the ROI (Region Of Interest) falls within the range of a normal coding area, [N_min, N_max]. Deformity test: the deformity of the ROI is calculated with the simple deformity formula γ = l / N_p, where l is the perimeter of the ROI; a deformity threshold γ_T is set, and the test passes when γ ≤ γ_T. Further, if the test conditions pass, the ROI is a payment image and enters the feature extraction module. An ROI that fails the test conditions may be a payment image containing noise or foreign matter, so the segmentation method based on the active contour model of gradient vector flow performs a secondary segmentation on the ROI, and the shape test described above is then applied to the secondary segmentation result. As those skilled in the art will understand, an ROI that still fails the test is an impurity and is discarded directly; an ROI that passes the test is a payment image, and the preset feature extraction module performs feature extraction on it.
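A minimal sketch of this shape test, following the area and deformity criteria above, might look as follows; it assumes OpenCV 4 contour extraction is used to obtain the ROI perimeter l, and the thresholds n_min, n_max and gamma_t are parameters supplied by the caller.

```python
import cv2
import numpy as np

def shape_test(roi_mask, n_min, n_max, gamma_t):
    """Return True when the ROI passes both tests: its pixel count N_p lies in
    [n_min, n_max], and gamma = l / N_p (l = ROI perimeter) does not exceed gamma_t."""
    n_p = int(np.count_nonzero(roi_mask))
    if not (n_min <= n_p <= n_max):
        return False                      # area test failed
    contours, _ = cv2.findContours(roi_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return False
    l = max(cv2.arcLength(c, True) for c in contours)  # perimeter of the ROI
    return (l / n_p) <= gamma_t           # deformity test
```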
As those skilled in the art will understand, the classical active contour model has certain drawbacks: if the initial contour curve is chosen far from the target curve it may fail to converge to the target curve, and its convergence to concave edges is poor. To address these problems, the traditional active contour model is improved and an active contour model based on gradient vector flow is used. The gradient-vector-flow model replaces the Gaussian potential force field of the traditional model, and its mathematical foundation is the Helmholtz theorem from electromagnetic field theory. Compared with the Gaussian potential force field, the gradient-vector-flow field is a gradient vector map of the whole image, so the external force field has a larger range of action. This means that even if the selected initial contour is far from the target contour, it will eventually converge to the target contour through successive approximation. At the same time, the enlarged range of the external force increases the force acting on concave parts of the target contour, so the boundary can converge into concavities.
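To make the idea concrete, a bare-bones gradient vector flow computation is sketched below; it is not the disclosure's implementation, just the standard iterative diffusion of the edge-map gradient, with the smoothing weight mu, the iteration count and the wrap-around boundary handling chosen arbitrarily for illustration.

```python
import numpy as np

def gvf_field(edge_map, mu=0.2, iterations=200):
    """Diffuse the gradient of the edge map so the external force field also
    reaches far from edges and into boundary concavities (replacing the
    Gaussian potential force of the classical snake)."""
    fy, fx = np.gradient(edge_map.astype(float))   # gradients along rows, columns
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2                        # data-attachment weight

    def laplacian(a):
        # 4-neighbour Laplacian with wrap-around boundaries (crude but simple)
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(iterations):
        u += mu * laplacian(u) - mag2 * (u - fx)
        v += mu * laplacian(v) - mag2 * (v - fy)
    return u, v
```

The resulting field (u, v) then serves as the external force that pulls the active contour toward the code boundary, including into concave sections.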
The invention provides a method for multi-user barrier-free code-scanning payment applied to the android system. The method performs networking, rough segmentation and secondary segmentation of the two-dimensional code image with the Otsu algorithm, completes the segmentation suited to the two-dimensional code image by applying a shape test to the result of the secondary segmentation, rapidly extracts the features of the two-dimensional code image through deep learning, intercepts the two-dimensional code image as the payment image after receiving the payment information sent by the server, judges whether the initial system of the mobile payment device supports the multi-form payment system when it is monitored that a payment event is triggered, and, if the multi-form payment system is supported and the first ranging sampling point is monitored, starts and receives the reading operation completed by the user in buckle-scan mode. In addition, payment display is completed through the shared window of the liquid crystal window and the light guide plate window, and light changes can be detected bidirectionally over a full 360 degrees by the two cameras suitable for bar code recognition, which improves detection efficiency and triggers, through the central processing unit, the change of the content displayed on the liquid crystal screen. Specifically, when a bar code is sensed in front of the device, the display content of the transparent screen is cleared and the display screen becomes transparent glass so that the bar code can be read conveniently. Algorithm optimization also allows the bar code content and face information to be read simultaneously based on the content displayed on the first and second full-face screens. Furthermore, self-learning image recognition of the two-dimensional code image can be realized efficiently, accurately and quickly in a networked environment supporting the multi-form payment system across multiple scenes, so the subsequent payment operation can be completed quickly, efficiently and flexibly with safety and broad applicability.
Based on the same inventive concept, the invention also provides a device for multi-user barrier-free code-scanning payment applied to the android system. Because the principle by which the device solves the problem is similar to that of the method for multi-user barrier-free code-scanning payment applied to the android system, the device can be implemented by following the specific steps of the method, and repeated details are not described again.
Fig. 3 is a schematic structural diagram of a device for multi-user barrier-free code-scanning payment applied to the android system in an embodiment. The device 10 for multi-user barrier-free code-scanning payment applied to the android system includes: a networking and connection module 100, an acquisition and image generation module 200, a classification module 300, a judgment module 400, and a buckle-scan payment module 500.
The networking and connection module 100 is used to network the self-learning-based machine learning models arranged in a plurality of scenes, the two full-face screens, the outer housing connected to the two full-face screens, the dual cameras suitable for bar code reading, and the mobile payment device running an android control system based on a central processing unit, and to connect the mobile payment devices arranged in a plurality of scenes that support cooperative use with an input device and a printing device, the electronic terminal that controls the mobile payment device, and the server cluster. The mobile payment device supporting cooperative use with the input device and the printing device includes the two cameras suitable for bar code reading, which sense light changes bidirectionally over a full 360 degrees in real time so that the central processing unit triggers the liquid crystal screen to change the displayed content. The full-face screen is a fully transparent display screen; the camera suitable for bar code reading and the other light or image sensing modules of the mobile payment device are arranged under the fully transparent display screen and sense light or images directly through the full-face screen. The self-learning-based machine learning model collects the positive and negative samples required for machine learning training, then performs model training and model testing, creates an original machine learning model and deploys it online; two-dimensional code images that the deployed original model cannot identify are stored as negative samples, a machine learning training task is triggered when the number of negative samples reaches a set threshold, a new machine learning model is created, and the model is updated according to a set model update strategy so that self-learning reading of two-dimensional code images is completed. The acquisition and image generation module 200 is used to acquire, in real time, data of multiple parameters sent by the server cluster and suitable for reading by the mobile payment device, and to generate a two-dimensional code image suitable for the mobile payment device from that data. The classification module 300 is used to group the continuous ranging data extracted by at least one ranging sensor for the generated two-dimensional code image into a plurality of data groups according to the adjacency of the corresponding sequences, and to classify the ranging data points in each data group into first ranging sampling points and second ranging sampling points, each data group containing the same number of ranging data points. The judgment module 400 is used to judge, when it is monitored that a payment event is triggered, whether the initial system of the mobile payment device supports a multi-form payment system; the multi-form payment system supports code-scanning payment and contactless card payment and includes a closed payment system and an open payment system, where the closed payment system is a payment system built around a virtual or physical stored-value card with a single preset scene and a single preset payment position, and the open payment system is a payment system built around a virtual or physical stored-value card with at least two preset scenes and at least two preset payment positions. The buckle-scan payment module 500 is used to receive the recognition and payment operations completed by the user in buckle-scan mode if the multi-form payment system is supported and the first ranging sampling point is monitored.
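The self-learning loop described for the machine learning model can be summarized in a short Python sketch. This is a hedged illustration only: the decode interface, the train_fn callback and the default threshold of 500 negative samples are assumptions, not values taken from the disclosure.

```python
class SelfLearningScanner:
    """Images the deployed model cannot decode are stored as negative samples;
    once their count reaches a threshold, a retraining task is triggered and
    the model is swapped in according to the configured update strategy."""
    def __init__(self, model, train_fn, negative_threshold=500):
        self.model = model                    # currently deployed model
        self.train_fn = train_fn              # retrains on (positives, negatives)
        self.negative_threshold = negative_threshold
        self.negatives = []
        self.positives = []

    def read(self, qr_image):
        result = self.model.decode(qr_image)  # assumed decode interface
        if result is None:                    # unidentifiable image -> negative sample
            self.negatives.append(qr_image)
            if len(self.negatives) >= self.negative_threshold:
                self._retrain()
        else:
            self.positives.append(qr_image)
        return result

    def _retrain(self):
        candidate = self.train_fn(self.positives, self.negatives)
        self.model = candidate                # apply the model update strategy here
        self.negatives.clear()
```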
The invention provides a device for multi-user barrier-free code-scanning payment applied to the android system. The device performs networking, rough segmentation and secondary segmentation of the two-dimensional code image with the Otsu algorithm, completes the segmentation suited to the two-dimensional code image by applying a shape test to the result of the secondary segmentation, rapidly extracts the features of the two-dimensional code image through deep learning, intercepts the two-dimensional code image as the payment image after receiving the payment information sent by the server, judges whether the initial system of the mobile payment device supports the multi-form payment system when it is monitored that a payment event is triggered, and, if the multi-form payment system is supported and the first ranging sampling point is monitored, starts and receives the reading operation completed by the user in buckle-scan mode. In addition, payment display is completed through the shared window of the liquid crystal window and the light guide plate window, and light changes can be detected bidirectionally over a full 360 degrees by the two cameras suitable for bar code recognition, which triggers, through the central processing unit, the change of the content displayed on the liquid crystal screen and improves detection efficiency. Specifically, when a bar code is sensed in front of the device, the display content of the transparent screen is cleared and the display screen becomes transparent glass so that the bar code can be read conveniently. Algorithm optimization also allows the bar code content and face information to be read simultaneously based on the content displayed on the first and second full-face screens. Furthermore, self-learning image recognition of the two-dimensional code image can be realized efficiently, accurately and quickly in a networked environment supporting the multi-form payment system across multiple scenes, so the subsequent payment operation can be completed quickly, efficiently and flexibly with safety and broad applicability.
Those skilled in the art will understand that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like. The above-mentioned embodiments express only several embodiments of the present invention; although their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims. The foregoing description has been presented for purposes of illustration and description and is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A method for multi-user barrier-free code scanning payment applied to an android system is characterized by comprising the following steps:
networking a self-learning-based machine learning model arranged in a plurality of scenes, two full-face screens, an outer housing connected to the two full-face screens, dual cameras suitable for bar code reading, and a mobile payment device running an android control system based on a central processing unit, and connecting mobile payment devices that are arranged in a plurality of scenes and support cooperative use with an input device and a printing device, an electronic terminal that controls the mobile payment device, and a server cluster, wherein the mobile payment device supporting cooperative use with the input device and the printing device comprises the two cameras suitable for bar code reading, the two cameras suitable for bar code reading sense light changes bidirectionally over a full 360 degrees in real time so that the central processing unit triggers a liquid crystal screen to change the displayed content, the full-face screen is a fully transparent display screen, the camera suitable for bar code reading and the other light or image sensing modules of the mobile payment device are arranged under the fully transparent display screen and sense light or images directly through the full-face screen, and the self-learning-based machine learning model collects the positive samples and negative samples required for machine learning training, then performs model training and model testing, creates an original machine learning model and deploys it online, stores two-dimensional code images that the deployed original machine learning model cannot identify as negative samples, triggers a machine learning training task when the number of negative samples reaches a set threshold, creates a new machine learning model, and updates the model according to a set model update strategy so as to complete self-learning reading of two-dimensional code images;
acquiring, in real time, data of a plurality of parameters sent by the server cluster and suitable for reading by the mobile payment device, and generating a two-dimensional code image suitable for the mobile payment device according to the data of the plurality of parameters;
grouping continuous ranging data extracted by at least one ranging sensor for the generated two-dimensional code image into a plurality of data groups according to the adjacency of the corresponding sequences, and classifying the ranging data points in each data group into first ranging sampling points and second ranging sampling points, wherein each data group comprises the same number of ranging data points;
when it is monitored that a payment event is triggered, judging whether an initial system of the mobile payment device supports a multi-form payment system, wherein the multi-form payment system supports code-scanning payment and contactless card payment and comprises a closed payment system and an open payment system, the closed payment system being a payment system built around a virtual or physical stored-value card with a single preset scene and a single preset payment position, and the open payment system being a payment system built around a virtual or physical stored-value card with at least two preset scenes and at least two preset payment positions;
and receiving, within a preset time period, the recognition and payment operations completed by a plurality of users in buckle-scan mode if the multi-form payment system is supported and the first ranging sampling point is monitored.
2. The method for multi-user barrier-free code-scanning payment applied to the android system as claimed in claim 1, wherein connecting the mobile payment devices that are arranged in a plurality of scenes and support cooperative use with the input device and the printing device, the electronic terminal controlling the mobile payment device, and the server cluster comprises: connecting at least one mobile payment device arranged in a plurality of scenes with a cloud server cluster through WIFI;
and connecting the at least one mobile payment device arranged in a plurality of scenes with the electronic terminal for controlling the mobile payment device through Bluetooth connection.
3. The method for multi-user barrier-free code-scanning payment applied to the android system as claimed in claim 1, further comprising: acquiring capability values corresponding to a plurality of protocol stacks in the mobile payment equipment and a channel identifier currently bound with the protocol stack with the maximum value of the capability values;
selecting a corresponding channel according to the acquired channel identifier;
and completing the payment operation applicable to the mobile payment device through the selected channel.
4. The method for multi-user barrier-free code-scanning payment applied to the android system as claimed in claim 1, further comprising: selecting a plurality of two-dimensional code images as a training sample set, and judging the number of the training sample set;
if the number of training samples is insufficient, augmenting the sample set to within a preset size range;
creating a CNN network, and initializing each parameter value of the CNN and each parameter value of the SVM;
creating a Gabor filter bank and applying it to the sample image I_i to extract responses at the orientations θ = 0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8 and the frequencies f = 0, 1, 2, 3, 4, generating 40 feature maps;
using a 9×9 grid to reduce each 70×70 feature map to 8×8, and concatenating the reduced feature maps end to end to form a feature vector X_i1 = [x_11, x_12, …, x_1,m];
inputting the same sample images I_i into the created CNN network in order according to the batch size, and computing the output of each convolution layer and each pooling layer in the hidden layers, wherein the output of the pooling layer is taken as the CNN-extracted feature part X_i2 = [x_21, x_22, …, x_2,n];
assuming that the strong features of all samples are X_1 = [x_11, x_12, …, x_1,M] and the features automatically extracted by the CNN network are X_2 = [x_21, x_22, …, x_2,N], normalizing the feature vectors X_1 and X_2 and fusing them serially to obtain the fused feature W = (w_1, w_2, …, w_{M+N}) = (αX_1, βX_2);
reducing the dimension of W with PCA to obtain the final fused feature vector W*, and inputting the fused feature vector W* into the SVM for training until the preset error range is reached or the preset maximum number of training iterations is reached.
5. The method for multi-user barrier-free code-scanning payment applied to the android system as claimed in claim 1, further comprising: the method comprises the steps of obtaining the illumination intensity of a screen of the electronic terminal and the illumination intensity reflected by the screen of the electronic terminal in a preset time period, and constructing a screen illumination intensity database aiming at the electronic terminal and a screen reflection illumination intensity database aiming at the electronic terminal.
6. The method for multi-user barrier-free code-scanning payment applied to the android system as claimed in claim 1, further comprising: intercepting the generated two-dimensional code image suitable for the mobile payment device as the payment image, and dividing the intercepted payment image;
performing a rough segmentation of the region of interest in the divided payment image according to the Otsu algorithm;
performing secondary segmentation on the roughly segmented payment image by using an active contour model of a gradient vector flow;
and completing the segmentation operation suitable for the payment image by shape testing on the result obtained after the secondary segmentation operation.
7. The method of multi-user barrier-free code-scanning payment applied to android system of claim 6, wherein the dividing the payment image comprises: selecting a segmentation channel based on statistical rules of the payment image data of training samples;
selecting a segmentation threshold value in the segmentation channel, and performing foreground and background segmentation on the payment image;
and performing connected-region analysis on the segmented foreground pixels and background pixels to obtain a coding region that meets the conditions, wherein the qualifying coding region is divided into payment image sub-blocks by a preset number of rows and a preset number of columns, the preset numbers of rows and columns being equal.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1-7 are implemented when the program is executed by the processor.
10. An apparatus for multi-user barrier-free code-scanning payment applied to an android system, the apparatus comprising:
the networking and connection module is used for networking a self-learning-based machine learning model arranged in a plurality of scenes, two full-face screens, an outer housing connected to the two full-face screens, dual cameras suitable for bar code reading, and a mobile payment device running an android control system based on a central processing unit, and for connecting mobile payment devices that are arranged in a plurality of scenes and support cooperative use with an input device and a printing device, an electronic terminal that controls the mobile payment device, and a server cluster, wherein the mobile payment device supporting cooperative use with the input device and the printing device comprises the two cameras suitable for bar code reading, the two cameras suitable for bar code reading sense light changes bidirectionally over a full 360 degrees in real time so that the central processing unit triggers a liquid crystal screen to change the displayed content, the full-face screen is a fully transparent display screen, the camera suitable for bar code reading and the other light or image sensing modules of the mobile payment device are arranged under the fully transparent display screen and sense light or images directly through the full-face screen, and the self-learning-based machine learning model collects the positive samples and negative samples required for machine learning training, then performs model training and model testing, creates an original machine learning model and deploys it online, stores two-dimensional code images that the deployed original machine learning model cannot identify as negative samples, triggers a machine learning training task when the number of negative samples reaches a set threshold, creates a new machine learning model, and updates the model according to a set model update strategy so as to complete self-learning reading of two-dimensional code images;
the acquisition and image generation module is used for acquiring data of a plurality of parameters which are sent by the server cluster and are suitable for being recognized and read by the mobile payment equipment in real time and generating a two-dimensional code image suitable for the mobile payment equipment according to the data of the plurality of parameters;
the classification module is used for grouping continuous ranging data extracted by at least one ranging sensor aiming at the generated two-dimensional code image into a plurality of data groups according to the adjacent relation of corresponding sequences, and classifying ranging data points in each data group to be divided into a first ranging sampling point and a second ranging sampling point, wherein each data group comprises the same number of ranging data points;
the judgment module is used for judging, when it is monitored that a payment event is triggered, whether an initial system of the mobile payment device supports a multi-form payment system, wherein the multi-form payment system supports code-scanning payment and contactless card payment and comprises a closed payment system and an open payment system, the closed payment system being a payment system built around a virtual or physical stored-value card with a single preset scene and a single preset payment position, and the open payment system being a payment system built around a virtual or physical stored-value card with at least two preset scenes and at least two preset payment positions;
and the buckle-scan payment module is used for receiving the recognition and payment operations completed by the user in buckle-scan mode if the multi-form payment system is supported and the first ranging sampling point is monitored.
CN201811511570.1A 2018-12-11 2018-12-11 Multi-user barrier-free code scanning payment method and device applied to android system Withdrawn CN111311223A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811511570.1A CN111311223A (en) 2018-12-11 2018-12-11 Multi-user barrier-free code scanning payment method and device applied to android system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811511570.1A CN111311223A (en) 2018-12-11 2018-12-11 Multi-user barrier-free code scanning payment method and device applied to android system

Publications (1)

Publication Number Publication Date
CN111311223A true CN111311223A (en) 2020-06-19

Family

ID=71146515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811511570.1A Withdrawn CN111311223A (en) 2018-12-11 2018-12-11 Multi-user barrier-free code scanning payment method and device applied to android system

Country Status (1)

Country Link
CN (1) CN111311223A (en)

Similar Documents

Publication Publication Date Title
CN111311244A (en) Passive code scanning method and device based on QR (quick response) code
TWI769387B (en) Payment processing method, apparatus, and self-checkout device
CN111311233A (en) Passive code scanning method and device based on multi-trigger mode
CN111311227A (en) Method and device suitable for in-screen type biological feature and two-dimensional code recognition
CN111311248A (en) Method and device for recognizing and reading two-dimensional code under low-power-consumption screen
CN111310492A (en) In-screen two-dimensional code reading method and device suitable for adjustable light source
CN111311225A (en) Optical module encryption-based in-screen payment method and device
CN109816393B (en) Method and device for identifying and verifying biological characteristics under screen
CN111311241A (en) Two-dimensional code reading method and device based on scene perception
CN111311226A (en) Machine vision-based two-dimensional code reading method and device under complex background
CN111311223A (en) Multi-user barrier-free code scanning payment method and device applied to android system
CN111311240A (en) Multi-user barrier-free code scanning payment method and device applied to IOS (input/output system)
CN111310501A (en) Two-dimensional code reading method and device suitable for comprehensive screen
CN111311239A (en) Perspective two-dimensional code reading method and device suitable for double-screen double-camera
CN111310494A (en) Method and device for recognizing and reading two-dimensional code under screen based on double-screen display
CN111310493A (en) Method and device for identifying and reading two-dimensional code under screen based on multiple sensors
CN111311229A (en) Chinese-sensible code based passive code scanning method and device
CN111310499A (en) Method and device for identifying and reading two-dimensional code under screen based on photoelectric sensor
CN111311242A (en) Method and device suitable for in-screen quick reading of two-dimensional code
CN111310496A (en) Under-screen two-dimensional code reading method and device with adaptive light supplement function
CN111311237A (en) Face and bar code double-recognition method and device
CN111311235A (en) Buckling scanning code scanning method and device for identifying multi-trigger mode of bill
CN111310490A (en) Two-dimensional code reading method and device suitable for ARM processor architecture
CN111311224A (en) Waving code scanning method and device for identifying multi-trigger mode of bill
CN111311222A (en) Waving code scanning method and device suitable for multiple communication modes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200619