CN113487374A - Block E-commerce platform transaction system based on 5G network - Google Patents


Info

Publication number
CN113487374A
CN113487374A (application CN202110019818.8A)
Authority
CN
China
Prior art keywords
face
image
layer
model
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110019818.8A
Other languages
Chinese (zh)
Inventor
曾丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shiyan Shifengda Industry And Trade Co ltd
Original Assignee
Shiyan Shifengda Industry And Trade Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shiyan Shifengda Industry And Trade Co ltd filed Critical Shiyan Shifengda Industry And Trade Co ltd
Priority to CN202110019818.8A priority Critical patent/CN113487374A/en
Publication of CN113487374A publication Critical patent/CN113487374A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4014 Identity check for transactions
    • G06Q20/40145 Biometric identity checks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Finance (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • General Business, Economics & Management (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Strategic Management (AREA)
  • Computer Security & Cryptography (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a blockchain e-commerce platform transaction system based on a 5G network, which addresses the insufficient accuracy and security of conventional face recognition. The system comprises a face image acquisition module, a face image analysis module, an image processing module and a cloud server; the face image acquisition module comprises a binocular camera, which consists of a first camera and a second camera located at the same horizontal position. By using two cameras to collect two sets of face data for analysis and comparison, the invention improves face recognition accuracy and is safer in use.

Description

Block E-commerce platform transaction system based on 5G network
Technical Field
The invention relates to the technical field of e-commerce platform transaction systems, in particular to a block e-commerce platform transaction system based on a 5G network.
Background
Electronic commerce has become an important part of the economy: consumers can now purchase items online and have them delivered without visiting a physical business. With the rapid development of information technology, online payment has entered people's lives ever more widely thanks to its convenience, ease of operation and high efficiency. A blockchain is an intelligent peer-to-peer network that uses a distributed database to identify, propagate and record information; it is also known as the Internet of value.
Existing online payment mechanisms include face recognition payment, but during recognition a face highly similar to the user's can be recognized and accepted, so such mechanisms are insecure and their matching rate and accuracy are reduced. For example, family members can easily obtain one another's personal data, such as photos of faces, and an ordinary face recognition system can therefore be deceived with a valid photo; the application of face recognition in products is thus flawed.
In addition, in the prior art the definition of the acquired face image data is insufficient and the resolution of the originally acquired face image is low, so face contour modeling cannot be performed accurately and definition contrast analysis of local facial features is difficult, resulting in low final face recognition accuracy.
Furthermore, in fields such as computer face recognition and vision, a computer system can imitate human visual processing, analyze an input image and output the position and category of targets in the image, which is one of the important applications. However, detection algorithms suffer from intensive computation and parameter redundancy; if the generality of the algorithm can be improved and the redundant information in network data transmission optimized away, transmission efficiency can be improved effectively and the loss of detection precision reduced.
Disclosure of Invention
To overcome the defects of the prior art, one objective of the present invention is to provide a 5G-network-based blockchain e-commerce platform transaction system that solves the insufficient accuracy, poor security and similar problems of conventional face recognition.
This objective is achieved by the following technical scheme. A blockchain e-commerce platform transaction system based on a 5G network comprises a face image acquisition module, a face image analysis module, an image processing module and a cloud server; the face image acquisition module comprises a binocular camera consisting of a first camera and a second camera located at the same horizontal position, and face data are acquired as follows:
1. acquire the prestored face information in the user's account information;
2. the first camera collects the user's first face information;
3. collect the label information of the commodity purchased by the user;
4. the second camera collects the user's second face information;
5. transmit the first face information, the second face information and the prestored face information to the face image analysis module for analysis, and then to the image processing module for comparison;
6. when both the first face information and the second face information match the prestored face information, the cloud server deducts the shopping amount according to the commodity label information and the user completes payment. The cloud server may store a blockchain network structure.
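The acquisition-and-payment steps above can be sketched as a small decision function. The patent does not specify a matching metric or threshold, so the Euclidean distance on feature vectors and the 0.6 threshold below are illustrative assumptions, not the invention's actual method:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two face feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authorize_payment(first_face, second_face, prestored, balance, price,
                      max_dist=0.6):
    """Step 6 above: both camera captures must match the prestored face
    before the cloud server deducts the purchase amount."""
    if (euclidean(first_face, prestored) <= max_dist and
            euclidean(second_face, prestored) <= max_dist):
        return balance - price   # payment completed
    return balance               # mismatch: no deduction
```

Requiring both captures to pass is what distinguishes this flow from single-camera face payment: a spoof that fools one viewpoint must also fool the second.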
Furthermore, the face image analysis module first constructs a first-face and second-face image data set, where the data set comprises a number of paired face images, i.e., a high-resolution face image and the corresponding low-resolution face image; all face image pairs in the data set are cropped to obtain local face image patches;
the obtained local face image patches are input in batches to a feedforward neural network with convolution operations and a deep structure; the network also comprises a face global recurrent neural network and a face local enhancement neural network, i.e., the local face image patches are input into the face global recurrent neural network and the face local enhancement neural network respectively for feature extraction;
in the face global recurrent neural network, an initial convolution maps the low-resolution local face image patches from the face image space to the feature space to obtain initial basic face features; the basic face features are then extracted through several dense residual modules, the outputs of different residual modules are aggregated, the correspondence between different stages and spatial regions is modeled by a recursion module, and the global face contour features are learned from the initial basic face features;
in the face local enhancement neural network, the input image is sampled with a window of a specific size to obtain local face image patches at several resolutions; to obtain suitable local patches while preserving some local structure, the input image is sampled without overlap with a window of 1/i of the original input size, yielding i low-resolution local face image patches;
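The non-overlapping 1/i-sized sampling can be sketched on a plain nested-list image. Reading the text's "i low-resolution patches from a 1/i window" as an i-by-i grid of tiles is an assumption on my part; the patent does not fix the exact tiling:

```python
def tile_patches(image, i):
    """Cut a square image (a list of pixel rows) into non-overlapping
    patches whose side length is 1/i of the original side."""
    n = len(image)
    p = n // i                                   # patch side length
    patches = []
    for r in range(0, p * i, p):                 # top-left corner rows
        for c in range(0, p * i, p):             # top-left corner columns
            patches.append([row[c:c + p] for row in image[r:r + p]])
    return patches
```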
to strengthen the modeling of local feature associations, the local face features are extracted through several multi-path residual operations, and the correspondence between low-resolution and high-resolution face image patches is learned by combining face feature information from different paths and stages, giving a feature representation based on the local face patches;
the obtained feature representations of the local face patches are fed into an upsampling layer, rearranged by sub-pixel convolution, and mapped back to the global face space to obtain a feature map at the original input face resolution, i.e., the complete local features of the whole face;
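The rearrangement performed by sub-pixel convolution (often called pixel shuffle) is worth seeing concretely: r*r low-resolution feature maps are interleaved into one map upscaled by r in each direction. This sketch shows only the rearrangement step, not the convolution that produces the channels:

```python
def pixel_shuffle(channels, r):
    """Rearrange r*r feature maps (each h x w, as nested lists) into one
    (h*r) x (w*r) map; the channel index encodes the sub-pixel offset."""
    h, w = len(channels[0]), len(channels[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for c, fmap in enumerate(channels):
        dy, dx = divmod(c, r)                    # sub-pixel position of channel c
        for y in range(h):
            for x in range(w):
                out[y * r + dy][x * r + dx] = fmap[y][x]
    return out
```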
the obtained global face contour features and whole-face local features are combined by several convolutions, the combined features are fed into an upsampling layer, and super-resolution analysis is performed with sub-pixel convolution to realize a joint global-local representation of the face features; a face residual image corresponding to the original face image space is output, the regressed residual image is added to the interpolated low-resolution face image, and the result is output as the final clear face image.
Furthermore, the proposed dual-path deep fusion network is optimized by minimizing a cosine-similarity-based cost between the output clear face and the original high-resolution face, realizing analysis of low-resolution face contours; specifically, a super-resolution cost function drives the clear face image generated by the network as close as possible to the original high-resolution face image, optimizing the face analysis method by deep integration.
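One common way to turn cosine similarity into a cost to minimize, assumed here since the patent gives no formula, is 1 minus the cosine similarity of the flattened images, which is 0 when the generated face matches the original up to a scale factor:

```python
import math

def sr_cost(generated, original):
    """Super-resolution cost: 1 - cosine similarity between the flattened
    generated face and the original high-resolution face."""
    dot = sum(g * o for g, o in zip(generated, original))
    ng = math.sqrt(sum(g * g for g in generated))
    no = math.sqrt(sum(o * o for o in original))
    return 1.0 - dot / (ng * no)
```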
Further, the image processing module adopts a face target detection model, which is optimized as follows:
a batch normalization layer is introduced into the deep-learning face target model, and the scale parameters in the normalization layer are taken as the importance coefficients of each channel of each convolutional layer of the model; the batch normalization layer normalizes its input and then applies learnable, reconfigurable scale parameters; the input of the batch normalization layer is the output feature image of a convolutional layer, the feature image of each channel is treated as an independent neural module under a weight-sharing strategy, and each channel feature image has only one pair of reconstruction parameters, i.e., each scaling coefficient and each offset coefficient correspond one-to-one to an input feature image channel; the scaling parameters of the batch normalization layer serve as the channel importance coefficients required for model pruning;
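A single channel of such a batch normalization layer can be sketched as follows; gamma is the learnable per-channel scale that later doubles as the channel's importance score, and beta is the per-channel offset:

```python
import math

def batch_norm_channel(x, gamma, beta, eps=1e-5):
    """Normalize one channel's activations over the batch, then apply the
    learnable scale gamma and shift beta."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in x]
```

Because gamma multiplies everything the channel outputs, a gamma near zero means the channel contributes almost nothing, which is exactly why it can serve as a pruning criterion at no extra parameter cost.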
sparse iteration is then performed on the obtained channel importance coefficients: a constraint on the sum of absolute values of the scale parameters of all normalization layers is added to the loss cost function of the original model, so that the scale parameters become sparser and most of them approach 0; concretely, a sub-term related to the scale parameters is added to the iteration loss cost function of the original face model; this sub-term is essentially a penalty on the sum of absolute values of all scale parameter values, and the larger its coefficient, the greater the influence of the scale parameters on the loss; during model iteration the loss cost value decreases continuously, the sum of absolute scale parameter values decreases, more scale parameter values approach 0, and sparse iteration of the convolution-channel importance coefficients is realized; iteration stops when the model cost value no longer fluctuates significantly with the number of iterations and most scale parameter values are close to 0, yielding the model weights;
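The sparsity-regularized objective described above reduces to one line: the original detection loss plus an L1 penalty over all normalization-layer scale parameters. The penalty weight name `lam` is illustrative:

```python
def sparse_loss(base_loss, gammas, lam=0.01):
    """Original model loss plus an L1 penalty on all batch-norm scale
    parameters; a larger lam pushes more gammas toward zero."""
    return base_loss + lam * sum(abs(g) for g in gammas)
```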
the iteration loss cost function of the original model comprises four parts: the first part is the box-center coordinate loss, expressing the difference between the center coordinates of the predicted bounding box generated by the n-th candidate box of the m-th grid cell and the center coordinates of the labeled bounding box when that candidate box is responsible for the real target; the center coordinates, width and height predicted by the model, which are relative to the grid cell and the candidate box, are converted to real coordinates and real width and height on the image; the second part is the box width-and-height loss, expressing the difference between the predicted box size generated by the candidate box and the labeled box size of the real target; the third part is the confidence loss: in optical remote-sensing images most of the content contains no object to be detected, i.e., the cost contribution of the object-free cells would exceed that of the object cells and bias the model toward predicting that a cell contains no object, so this loss reduces the contribution weight of the object-free part; the fourth part is the class loss, expressing the difference between the predicted class probabilities generated by the candidate box and the labeled class probabilities of the real target;
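A simplified version of this four-part loss can be written over grid cells. The squared-error form, the dictionary field names and the weighting constants below are assumptions in the spirit of common one-stage detectors, not the patent's exact formulation:

```python
def detection_loss(pred, truth, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Four parts: center coordinates, box width/height, objectness
    confidence (down-weighted where no object), class probabilities."""
    l_xy = l_wh = l_conf = l_cls = 0.0
    for p, t, has_obj in zip(pred, truth, obj_mask):
        if has_obj:
            l_xy += (p["x"] - t["x"]) ** 2 + (p["y"] - t["y"]) ** 2
            l_wh += (p["w"] - t["w"]) ** 2 + (p["h"] - t["h"]) ** 2
            l_conf += (p["conf"] - 1.0) ** 2
            l_cls += sum((pc - tc) ** 2 for pc, tc in zip(p["cls"], t["cls"]))
        else:
            l_conf += lambda_noobj * p["conf"] ** 2   # object-free cells count less
    return lambda_coord * (l_xy + l_wh) + l_conf + l_cls
```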
convolutional-layer channel pruning is performed according to the scale parameter values after sparse iteration; after channel sparsification most scale parameter values are close to 0, and by the meaning of the normalization-layer scale parameter each channel of the layer's input feature image corresponds to one scale parameter value; feature-map channels whose importance falls below the pruning percentage are discarded, together with the convolution kernels corresponding to them, completing the channel pruning; the pruning percentage is a proportion of all scale parameters after sparse iteration, i.e., all scale parameters of the model are sorted from small to large, the feature-map channels corresponding to the smallest fraction given by the pruning percentage are cut, and the corresponding convolution kernels are discarded; when the pruning percentage is high, channel pruning may temporarily cause some loss of precision, but this is largely recovered by model fine-tuning in subsequent steps; specifically, for each convolutional layer it is checked whether the number of channels would be zero after pruning, and if so, the single filter channel whose scale parameter has the largest absolute value is forcibly retained, avoiding damage to the model structure from excessive pruning; convolutional layers not followed by a batch normalization layer are not pruned; for a shortcut layer, it is checked whether the two convolutional layers connected to it have the same number of channels after pruning; if not, each channel is marked 1 if retained and 0 if pruned, producing two one-dimensional binary vectors, and a bitwise OR of the two vectors gives a single vector: the channels of both convolutional layers at positions whose value is 1 are retained, and those at positions whose value is 0 are pruned; the pooling layer, the upsampling layer and the concatenation layer are not pruned; the max-pooling layer performs the maximum-pooling operation on the feature image of each channel, i.e., the feature image is cut without overlap into blocks of the pooling size, the maximum in each block is taken and the other values discarded while the planar structure is kept, giving the output face feature image; the shortcut layer performs channel-wise addition of two input convolutional feature images, which must have the same number of channels; the upsampling layer inserts new elements into the input feature image by linear interpolation in the two directions between pixel values; the concatenation layer stacks the input face feature images along the channel dimension in order, i.e., the number of output channels equals the sum of the numbers of input channels, and in the implementation the feature image arrays are directly concatenated along the channel dimension;
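Two steps of the pruning procedure are concrete enough to sketch: selecting channels by the pruning percentage (with the safeguard that at least one channel survives), and the bitwise OR that reconciles the keep-masks of the two layers feeding a shortcut. Function names are illustrative:

```python
def prune_masks(gammas, prune_ratio):
    """Per-channel keep-mask: the fraction prune_ratio of channels with
    the smallest |gamma| is cut; at least one channel is always kept."""
    ranked = sorted(abs(g) for g in gammas)
    k = int(len(ranked) * prune_ratio)
    thresh = ranked[k - 1] if k > 0 else float("-inf")
    mask = [1 if abs(g) > thresh else 0 for g in gammas]
    if sum(mask) == 0:                            # avoid destroying the layer
        mask[max(range(len(gammas)), key=lambda i: abs(gammas[i]))] = 1
    return mask

def merge_shortcut(mask_a, mask_b):
    """Bitwise OR so both layers feeding a shortcut keep the same channels."""
    return [a | b for a, b in zip(mask_a, mask_b)]
```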
after channel pruning, the face model is retrained on the same data set starting from the obtained model weights; the iteration loss cost function is the original model loss cost function used in the sparse iteration; training stops when the model cost value no longer fluctuates significantly with the number of iterations, yielding the final model weights; one iteration proceeds as follows: the input pictures of the iteration set are divided into grids, a predicted box is generated in each grid cell from candidate boxes of preset sizes, the loss cost function is computed from the predicted box parameters and the labeled ground-truth box parameters, and after all pictures of the iteration have been processed the loss value of the current iteration is obtained, completing one iteration.
Further, a processor, a sensor module, a lens, an MCU controller and a 5G module are arranged inside the binocular camera; the processor is connected to the sensor module, the lens, the MCU controller and the 5G module respectively; the MCU controller is connected to a number of peripheral devices; the 5G module uploads the image data captured by the camera over a 5G network.
Further, the peripheral equipment comprises one or more of a key, a buzzer, an indicator light and a USB interface.
Furthermore, the collection of the purchased commodity's label information is triggered by the completion of the first camera's collection of the user's first face information.
Further, when either the first face information or the second face information fails to match the prestored face information, the system automatically deletes the association between the purchased commodity and the account information.
Compared with the prior art, the invention has the beneficial effects that:
the invention adopts double cameras to collect double data for analysis and comparison and analysis, improves the face recognition accuracy and is safer to use.
The long-range dependencies of the global face features are learned jointly by a convolutional and dense residual network, assisting the modeling of the global face contour for the two face data acquisitions. Meanwhile, the additional face local enhancement neural network learns the correspondence between low-resolution local face patches and high-resolution accurate face patches, strengthening the modeling of local face features, especially the facial-feature regions. By combining the global face contour features and the whole-face local features, the global and local face features can be represented jointly and high-definition face analysis obtained. Finally, the joint representation of global and local facial features allows both to be extracted and integrated effectively, so the global face features are modeled more accurately and the analysis result is better.
The batch normalization layer improves the model's gradients, allows a larger learning rate, greatly accelerates iteration and reduces the strong dependence on initialization. Meanwhile, the scale parameters of the batch normalization layer already scale channels up or down, so introducing them as the coefficients for evaluating each channel's importance adds no extra parameters or computation to the model. In addition, adding a constraint on the sum of absolute values of the scale parameters of all normalization layers solves the problems that, in the original model, the normalization-layer scale parameters are not highly sparse and their values are dispersed, facilitating the subsequent channel pruning by importance coefficient. Finally, against the poor generality and large precision loss of existing deep-learning detection model refinement algorithms, the normalization-layer scale parameters are introduced as the coefficients evaluating the importance of the model's convolution channels, the model identifies redundant channel information automatically through sparse iteration, the redundant parameters can be removed safely without affecting generalization performance, and the precision loss caused by channel pruning is effectively compensated by fine-tuning iterations.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical means of the present invention more clearly understood, the present invention may be implemented in accordance with the content of the description, and in order to make the above and other objects, features, and advantages of the present invention more clearly understood, the following preferred embodiments are described in detail with reference to the accompanying drawings.
Drawings
Fig. 1 is a block diagram of the present embodiment.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description, and it should be noted that any combination of the embodiments or technical features described below can be used to form a new embodiment without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, the blockchain e-commerce platform transaction system based on a 5G network may be used anywhere face recognition is required, such as on a PC terminal, a mobile terminal or a physical vending machine.
The system mainly comprises a face image acquisition module, a face image analysis module, an image processing module and a cloud server, wherein the cloud server is used for storing data.
The face image acquisition module comprises a binocular camera; different face images are collected by its two cameras, face feature points are calibrated, three-dimensional features are constructed from the calibrated parameters and input into the face image analysis module for image processing, which solves the insufficient accuracy of existing face recognition.
Specifically, the binocular camera of this embodiment comprises a first camera and a second camera at the same horizontal position; data are collected by the two cameras separately, which prevents spoofing and improves system security.
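The three-dimensional features mentioned above rest on standard binocular geometry. The relation below is not stated in the patent; it is the classic pinhole stereo formula, included as an assumed sketch of why two calibrated cameras help reject a flat photo, whose depth map would be inconsistent with a real face:

```python
def stereo_depth(focal_px, baseline_mm, disparity_px):
    """Pinhole stereo relation: depth = focal length * baseline / disparity.
    A live face yields a plausible range of depths; a photo does not."""
    return focal_px * baseline_mm / disparity_px
```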
Specifically, the human face data acquisition steps are as follows:
1. acquire the prestored face information in the user's account information;
2. the first camera collects the user's first face information;
3. collect the label information of the commodity purchased by the user;
4. the second camera collects the user's second face information;
5. transmit the first face information, the second face information and the prestored face information to the face image analysis module for analysis, and then to the image processing module for comparison;
6. when both the first face information and the second face information match the prestored face information, the cloud server deducts the shopping amount according to the commodity label information and the user completes payment.
The analysis method of the face image analysis module is to construct a first face and a second face image data set, wherein the image data set comprises a plurality of face image paired data, namely, high-resolution and high-resolution face images and corresponding low-resolution face images; cutting all face image pairs in the image data set to obtain a local face image cut block;
inputting the obtained local human face image cutouts in batches, wherein the feedforward neural network comprises convolution calculation and a depth structure, and the feedforward neural network comprises convolution calculation and a depth structure and also comprises a human face global time cyclic neural network and a human face local reinforcing neural network, namely, the local human face image cutouts are respectively input into the human face global time cyclic neural network and the human face local reinforcing neural network for feature extraction;
in a human face global time cycle neural network, utilizing initialization convolution to map and correspond low-resolution local human face image cutout blocks from a human face image space to a system feature space to obtain initial basic human face features, extracting the basic human face features through a plurality of dense residual modules, concentrating the outputs of different residual modules, modeling the mutual corresponding relation between different stages and a space region through a recursion module, and learning to obtain human face global contour features from the initial basic human face features;
in the face-local enhancement neural network, the input image is sampled with a window of a specific size to obtain local face image patches at multiple resolutions; to obtain suitable local face image patches while preserving a certain amount of local structure, the input image is sampled without overlap using a window whose size is 1/i of the original input, yielding i low-resolution local face image patches;
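The non-overlapping sampling step above can be sketched as follows. This is a minimal illustration, not the patented implementation; it assumes the 1/i window means a window whose side lengths are 1/k of the input's, cropped on a regular grid:

```python
import numpy as np

def extract_patches(image: np.ndarray, k: int) -> list:
    """Split an H x W image into non-overlapping patches of size (H//k) x (W//k).

    A sketch of the non-overlapping sampling described above; `k` is a
    hypothetical per-side subdivision factor.
    """
    h, w = image.shape[:2]
    ph, pw = h // k, w // k
    patches = []
    for r in range(0, ph * k, ph):          # walk the grid row by row
        for c in range(0, pw * k, pw):      # then column by column
            patches.append(image[r:r + ph, c:c + pw])
    return patches
```

Because the windows do not overlap, each pixel of the (cropped) input appears in exactly one patch, preserving local structure within each patch.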
to strengthen the modeling of local feature associations, the local face features are extracted through a plurality of multi-path residual operations, and the correspondence between low-resolution and high-resolution face image patches is learned by combining face feature information from different paths and stages, yielding a feature expression based on the local face patches;
the obtained feature expressions of the local face patches are input into an upsampling layer, rearranged by sub-pixel convolution, and mapped back to the system's global face space to obtain a feature map at the original input face resolution, namely the complete local features of the whole face;
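The sub-pixel rearrangement used by the upsampling layer can be sketched in numpy; deep learning frameworks provide the same index permutation as a "pixel shuffle" layer. This is a sketch of the standard technique, not the patent's exact code:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    Sub-pixel convolution packs the r*r upsampled positions into channel
    groups; this function redistributes them into the spatial dimensions.
    """
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    out = x.reshape(c, r, r, h, w).transpose(0, 3, 1, 4, 2)
    return out.reshape(c, h * r, w * r)
```

Each output pixel (h*r+i, w*r+j) of channel c is read from input channel c*r*r + i*r + j at position (h, w), so upsampling costs only an index permutation.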
the obtained face global contour features and whole-face local features are combined using several convolutions, the combined features are input into an upsampling layer, and super-resolution analysis is performed with sub-pixel convolution to achieve a joint global-and-local expression of the face features; a face residual image corresponding to the original face image space is output, the regressed face residual image is added to the interpolated low-resolution face image, and the result is output as the final sharp face image.
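The final accumulation step (residual plus interpolated low-resolution image) can be sketched as follows. Nearest-neighbour repetition stands in for the interpolation here, and `residual` is assumed to already be at the target resolution:

```python
import numpy as np

def reconstruct(lr: np.ndarray, residual: np.ndarray, scale: int) -> np.ndarray:
    """Add the regressed residual to an interpolated low-resolution image.

    A sketch of the accumulation described above; a real system would use
    bilinear or bicubic interpolation instead of nearest-neighbour repeat.
    """
    upsampled = lr.repeat(scale, axis=0).repeat(scale, axis=1)
    return upsampled + residual
```

Predicting only the residual lets the network focus its capacity on high-frequency detail, since the interpolated image already carries the low-frequency content.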
The proposed dual-path deep-fusion face network is optimized by minimizing a cosine-similarity-based distance between the output sharp face and the original high-resolution face, realizing analysis of low-resolution contour faces; the optimization specifically comprises: using a super-resolution cost function to constrain the sharp face image generated by the network to be as close as possible to the original high-resolution face image, thereby optimizing the face analysis method through deep integration.
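The patent only names a cosine-similarity-based criterion; a plausible reading (an assumption, not the patented formula) is a cost of 1 minus the cosine similarity of the flattened images, so that minimizing the cost pushes the prediction toward the original:

```python
import numpy as np

def sr_cost(pred: np.ndarray, hr: np.ndarray) -> float:
    """Cosine-distance cost between the reconstruction and the original.

    Hypothetical form of the super-resolution cost function described
    above: 1 - cos(pred, hr) over flattened pixel vectors, zero when the
    two images point in the same direction in pixel space.
    """
    p, h = pred.ravel(), hr.ravel()
    cos = np.dot(p, h) / (np.linalg.norm(p) * np.linalg.norm(h) + 1e-12)
    return float(1.0 - cos)
```

A pixel-wise L1 or L2 loss would be the more common choice for super-resolution; the cosine form is scale-invariant, which may be why the description pairs it with an additional closeness constraint.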
The image processing module includes a method for improving the performance of the face target extraction network using face pictures; the specific method is as follows:
introduce the batch normalization layers of the deep-learning-based face target model, and use the scaling parameters in the normalization layers as importance scale coefficients for evaluating the contribution of each channel of each convolutional layer of the deep model to the model features; the batch normalization layer normalizes its input and then applies learnable reconstruction scaling parameters; the input of the batch normalization layer is a convolutional layer's output feature image, the feature image of each channel is treated as an independent neural module under a weight-sharing strategy, and each channel feature image has only one pair of reconstruction parameters, i.e. each scaling coefficient and each offset coefficient corresponds one-to-one to an input feature image channel; the scaling parameters of the batch normalization layer serve as the channel importance scale coefficients required for model pruning;
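The per-channel normalize-then-rescale operation above can be sketched in numpy for an (N, C, H, W) feature image. This is a generic batch-normalization sketch, not the patent's code; gamma is the scaling coefficient that later serves as the channel-importance score:

```python
import numpy as np

def batch_norm(x: np.ndarray, gamma: np.ndarray, beta: np.ndarray,
               eps: float = 1e-5) -> np.ndarray:
    """Batch-normalize an (N, C, H, W) feature image.

    Each channel is normalized over the batch and spatial axes, then
    rescaled by one (gamma, beta) pair per channel -- the pair of
    reconstruction parameters described above.
    """
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)
```

Because every value in channel c is multiplied by gamma[c], a gamma near zero means the channel contributes almost nothing downstream, which is what justifies using |gamma| as a pruning criterion.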
perform sparsification training according to the obtained importance scale coefficients of the model's convolution channels; specifically, a constraint on the sum of the absolute values of the scaling parameters of all normalization layers of the model (an L1 sparsity constraint) is added to the loss cost function of the original model, making the scaling parameters sparser so that most of them approach 0; concretely, a sub-term related to the scaling parameters is added to the training loss cost function of the original face model, this sub-term being essentially a penalty on the sum of the absolute values of all the model's scaling parameter values, and the larger its weight, the greater the influence of the scaling parameters on the loss; during training, the loss cost function keeps decreasing, the sum of the absolute values of the scaling parameters keeps decreasing, more scaling parameter values approach 0, and sparsification of the convolution channel importance scale coefficients is realized; training is stopped when the model cost value no longer fluctuates significantly with the number of iterations and most scaling parameter values approach 0, yielding the model weights;
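The penalized objective above can be written in one line; this sketch follows the network-slimming style the description matches, with `lam` a hypothetical penalty weight trading task accuracy against channel sparsity:

```python
import numpy as np

def slimming_loss(task_loss: float, gammas: list, lam: float) -> float:
    """Total loss = task loss + lam * sum(|gamma|) over all BN layers.

    A sketch of the sparsity sub-term described above: an L1 penalty on
    every normalization-layer scaling parameter, driving most toward 0.
    """
    l1 = sum(float(np.abs(g).sum()) for g in gammas)
    return task_loss + lam * l1
```

During training, gradient descent on this objective shrinks unimportant gammas toward zero while the task term keeps the useful ones large, producing the sparse importance profile the pruning step relies on.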
the training loss cost function of the original model comprises four parts: the first part is the bounding-box center coordinate loss, which expresses the difference between the center coordinates of the predicted bounding box generated by the n-th candidate box of the m-th grid cell and the center coordinates of the annotated bounding box of the real target, when that candidate box is responsible for the real target; the center coordinates, width, and height of the predicted bounding box output by the model, expressed relative to the grid cell and the candidate box, are converted into the real coordinates and real width and height of the predicted box in the image; the second part is the bounding-box width-height loss, which expresses the difference between the predicted box size generated by the candidate box and the annotated box size of the real target when the n-th candidate box of the m-th grid cell is responsible for the real target; the third part is the confidence probability loss; for an optical remote sensing image, most of the content does not contain the object to be detected, i.e. the cost contribution of the object-free part exceeds that of the part containing objects, which biases the model toward predicting that a cell contains no object, so the loss function reduces the contribution weight of the object-free part; the fourth part is the class loss, which expresses the difference between the predicted class probability generated by the candidate box and the annotated class probability of the real target when the n-th candidate box of the m-th grid cell is responsible for the real target;
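The four-part loss above follows the YOLO pattern; a minimal sketch under that assumption is shown below. The weights `lambda_coord` and `lambda_noobj` are hypothetical, standing in for the down-weighting of the object-free part described above:

```python
import numpy as np

def detection_loss(pred, target, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Four-part detection loss over grid cells.

    pred/target: shape (M, 5 + K) per cell: x, y, w, h, confidence, and K
    class probabilities; obj_mask marks cells responsible for a real target.
    A YOLO-style sketch, not the patented formula.
    """
    obj = obj_mask.astype(float)
    noobj = 1.0 - obj
    # part 1: box-center coordinate loss (only where an object is present)
    coord = lambda_coord * (obj * ((pred[:, 0] - target[:, 0]) ** 2 +
                                   (pred[:, 1] - target[:, 1]) ** 2)).sum()
    # part 2: width/height loss (sqrt damps the effect of large boxes)
    size = lambda_coord * (obj * ((np.sqrt(pred[:, 2]) - np.sqrt(target[:, 2])) ** 2 +
                                  (np.sqrt(pred[:, 3]) - np.sqrt(target[:, 3])) ** 2)).sum()
    # part 3: confidence loss, with the object-free part down-weighted
    conf = (obj * (pred[:, 4] - target[:, 4]) ** 2).sum() + \
           lambda_noobj * (noobj * (pred[:, 4] - target[:, 4]) ** 2).sum()
    # part 4: class-probability loss
    cls = (obj[:, None] * (pred[:, 5:] - target[:, 5:]) ** 2).sum()
    return coord + size + conf + cls
```

The `lambda_noobj < 1` factor is what keeps the many empty cells from dominating the gradient, addressing exactly the imbalance the third part describes.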
perform convolutional-layer channel pruning according to the scaling parameter values after sparsification training; after channel sparsification, most of the obtained scaling parameter values are close to 0, and, per the meaning of the normalization-layer scaling parameters, each channel of the layer's input feature image corresponds to one scaling parameter value; the feature-map channels whose importance falls below the pruning percentage are discarded, together with the convolution kernels corresponding to the discarded channels, completing the channel pruning process; the pruning percentage refers to a proportion of all scaling parameters after sparsification training, i.e. all the model's scaling parameters are sorted from small to large, the feature-map channels corresponding to the smallest scaling parameters up to the preset pruning percentage are pruned, and the convolution kernels corresponding to those channels are discarded at the same time; when the pruning percentage is high, channel pruning may temporarily cause some loss of precision, but this can largely be recovered by model fine-tuning in subsequent steps; specifically, for each convolutional layer, it is checked whether the number of channels would become zero after pruning; if so, the single filter channel corresponding to the scaling parameter with the largest absolute value is forcibly retained, preventing excessive pruning from destroying the model structure; convolutional layers not followed by a batch normalization layer are not channel-pruned; for a shortcut layer, it is checked whether the two convolutional layers connected to it have the same number of channels after pruning; if not, the channels of the two convolutional layers are marked 1 if retained and 0 if pruned, generating two one-dimensional binary vectors, and a bitwise OR of the two vectors yields a one-dimensional vector: the channels of both convolutional layers corresponding to vector positions containing 1 are retained, and those corresponding to positions containing 0 are pruned; the pooling layers, upsampling layers, and concatenation layers are not parameter-pruned; the max pooling layer applies a max pooling operation to the feature image of each channel: the feature image is cut without overlap into small blocks of the pooling size, the maximum value in each block is taken, the other nodes are discarded, and the original planar structure is retained to obtain the output face feature image; the shortcut layer performs channel-wise element addition of two input convolutional-layer feature images, which must have the same number of channels; the upsampling layer inserts new elements into the input feature image by bilinear interpolation, i.e. linear interpolation in each of the two directions between pixel values; the concatenation layer stacks the input face feature images along the channel dimension in order, i.e. the number of channels of the face feature image output by the concatenation layer equals the sum of the channel counts of the input feature images, and in the implementation the feature image arrays are concatenated directly along the channel dimension;
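The pruning rules above (global percentile threshold, keep-at-least-one safeguard, and OR-alignment of shortcut branches) can be sketched as follows. The layer names and the exact threshold indexing are illustrative assumptions:

```python
import numpy as np

def prune_masks(gammas: dict, percent: float) -> dict:
    """Per-layer keep-masks: prune channels whose |gamma| falls below the
    global pruning-percentage threshold, always keeping at least one channel.

    A sketch of the channel-pruning rule described above; the threshold is
    taken over all layers' sorted scaling parameters.
    """
    all_g = np.sort(np.concatenate([np.abs(g) for g in gammas.values()]))
    thresh = all_g[int(percent * len(all_g))]
    masks = {}
    for name, g in gammas.items():
        keep = np.abs(g) >= thresh
        if not keep.any():                      # avoid destroying the layer
            keep[int(np.argmax(np.abs(g)))] = True
        masks[name] = keep
    return masks

def align_shortcut(mask_a: np.ndarray, mask_b: np.ndarray) -> np.ndarray:
    """Bitwise OR of two keep-vectors so both branches of a shortcut retain
    the same channel set, as described above."""
    return mask_a | mask_b
```

The OR-alignment keeps element-wise addition at the shortcut valid: a channel survives if either branch considers it important, so the two summed feature maps always have matching channel counts.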
after channel pruning, retrain the face model on the same data set starting from the obtained model weights; the training loss cost function is the original model loss cost function used during sparsification training; when the model cost value no longer fluctuates significantly with the number of iterations, training is stopped and the model weights are obtained; each iteration specifically comprises: dividing each input training image into grid cells, generating predicted boxes in each cell from candidate boxes of preset sizes, computing the loss cost function from the predicted box parameters and the annotated ground-truth box parameters, and computing over all images in the iteration to obtain the current loss cost function value, thereby completing one iteration.
In addition, a processor, a sensor module, a lens, an MCU controller and a 5G module are arranged in the binocular camera; the processor is connected to the sensor module, the lens, the MCU controller and the 5G module respectively; the lens adopts an automatic zoom module, and the MCU controller is connected to various peripheral devices; the 5G module uploads the image data captured by the camera over a 5G network, the fifth-generation communication technology offering high data transmission speed and high transmission efficiency.
The peripheral devices comprise one or more of a key, a buzzer, an indicator light and a USB interface, facilitating operation, upgrade and maintenance; the buzzer and the indicator light warn of conditions such as faults.
The acquisition of the label information of the purchased commodity is triggered by the completion of the first camera's acquisition of the user's first face information; face information is more unique than user account information, and binding the face information to the commodity label information facilitates the management and tracking of system data.
Alternatively, when either the first face information or the second face information does not match the prestored face information, the system judges that the purchase has failed and that the account does not match the face; the system therefore automatically deletes the association between the purchased commodity and the account information, associates only the face information with the commodity label information, and extracts and stores this data separately for the management and tracking of system data.
The above embodiments are only preferred embodiments of the present invention and do not limit its protection scope; any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within the protection scope of the present invention.

Claims (8)

1. A block E-commerce platform transaction system based on a 5G network, characterized in that: the system comprises a face image acquisition module, a face image analysis module, an image processing module and a cloud server; the face image acquisition module comprises a binocular camera, the binocular camera comprises a first camera and a second camera located at the same horizontal position, and the face data acquisition steps are as follows:
1. acquiring prestored face information in account information of a user;
2. the first camera collects first face information of the user;
3. collecting label information of commodities purchased by a user;
4. the second camera acquires second face information of the user;
5. transmitting the first face information, the second face information and the prestored face information to the face image analysis module for analysis, and then transmitting the data to the image processing module for comparison;
6. when both the first face information and the second face information match the prestored face information, the cloud server deducts the shopping amount according to the commodity label information and the user completes payment.
2. The block e-commerce platform transaction system based on a 5G network as claimed in claim 1, wherein the analysis method of the face image analysis module is as follows: construct a first-face and second-face image data set, wherein the data set comprises a plurality of paired face images, namely high-resolution face images and their corresponding low-resolution face images; crop all face image pairs in the data set to obtain local face image patches;
input the obtained local face image patches in batches into a feedforward neural network that comprises convolution operations and a deep structure, and that further comprises a face-global recurrent neural network and a face-local enhancement neural network; that is, the local face image patches are input into the face-global recurrent neural network and the face-local enhancement neural network respectively for feature extraction;
in the face-global recurrent neural network, an initialization convolution maps the low-resolution local face image patches from the face image space to the system feature space to obtain initial basic face features; the basic face features are extracted through a plurality of dense residual modules, the outputs of the different residual modules are aggregated, the correspondence between different stages and spatial regions is modeled through a recursion module, and the face global contour features are learned from the initial basic face features;
in the face-local enhancement neural network, the input image is sampled with a window of a specific size to obtain local face image patches at multiple resolutions; to obtain suitable local face image patches while preserving a certain amount of local structure, the input image is sampled without overlap using a window whose size is 1/i of the original input, yielding i low-resolution local face image patches;
to strengthen the modeling of local feature associations, the local face features are extracted through a plurality of multi-path residual operations, and the correspondence between low-resolution and high-resolution face image patches is learned by combining face feature information from different paths and stages, yielding a feature expression based on the local face patches;
the obtained feature expressions of the local face patches are input into an upsampling layer, rearranged by sub-pixel convolution, and mapped back to the system's global face space to obtain a feature map at the original input face resolution, namely the complete local features of the whole face;
the obtained face global contour features and whole-face local features are combined using several convolutions, the combined features are input into an upsampling layer, and super-resolution analysis is performed with sub-pixel convolution to achieve a joint global-and-local expression of the face features; a face residual image corresponding to the original face image space is output, the regressed face residual image is added to the interpolated low-resolution face image, and the result is output as the final sharp face image.
3. The block e-commerce platform transaction system based on a 5G network as claimed in claim 2, wherein the proposed dual-path deep-fusion face network is optimized by minimizing a cosine-similarity-based distance between the output sharp face and the original high-resolution face, realizing analysis of low-resolution contour faces; the optimization specifically comprises: using a super-resolution cost function to constrain the sharp face image generated by the network to be as close as possible to the original high-resolution face image, thereby optimizing the face analysis method through deep integration.
4. The block e-commerce platform transaction system based on a 5G network as claimed in claim 1, wherein the image processing module adopts a face target detection model, which is optimized as follows:
introduce the batch normalization layers of the deep-learning-based face target model, and use the scaling parameters in the normalization layers as importance scale coefficients for evaluating the contribution of each channel of each convolutional layer of the deep model to the model features; the batch normalization layer normalizes its input and then applies learnable reconstruction scaling parameters; the input of the batch normalization layer is a convolutional layer's output feature image, the feature image of each channel is treated as an independent neural module under a weight-sharing strategy, and each channel feature image has only one pair of reconstruction parameters, i.e. each scaling coefficient and each offset coefficient corresponds one-to-one to an input feature image channel; the scaling parameters of the batch normalization layer serve as the channel importance scale coefficients required for model pruning;
perform sparsification training according to the obtained importance scale coefficients of the model's convolution channels; specifically, a constraint on the sum of the absolute values of the scaling parameters of all normalization layers of the model (an L1 sparsity constraint) is added to the loss cost function of the original model, making the scaling parameters sparser so that most of them approach 0; concretely, a sub-term related to the scaling parameters is added to the training loss cost function of the original face model, this sub-term being essentially a penalty on the sum of the absolute values of all the model's scaling parameter values, and the larger its weight, the greater the influence of the scaling parameters on the loss; during training, the loss cost function keeps decreasing, the sum of the absolute values of the scaling parameters keeps decreasing, more scaling parameter values approach 0, and sparsification of the convolution channel importance scale coefficients is realized; training is stopped when the model cost value no longer fluctuates significantly with the number of iterations and most scaling parameter values approach 0, yielding the model weights;
the training loss cost function of the original model comprises four parts: the first part is the bounding-box center coordinate loss, which expresses the difference between the center coordinates of the predicted bounding box generated by the n-th candidate box of the m-th grid cell and the center coordinates of the annotated bounding box of the real target, when that candidate box is responsible for the real target; the center coordinates, width, and height of the predicted bounding box output by the model, expressed relative to the grid cell and the candidate box, are converted into the real coordinates and real width and height of the predicted box in the image; the second part is the bounding-box width-height loss, which expresses the difference between the predicted box size generated by the candidate box and the annotated box size of the real target when the n-th candidate box of the m-th grid cell is responsible for the real target; the third part is the confidence probability loss; for an optical remote sensing image, most of the content does not contain the object to be detected, i.e. the cost contribution of the object-free part exceeds that of the part containing objects, which biases the model toward predicting that a cell contains no object, so the loss function reduces the contribution weight of the object-free part; the fourth part is the class loss, which expresses the difference between the predicted class probability generated by the candidate box and the annotated class probability of the real target when the n-th candidate box of the m-th grid cell is responsible for the real target;
perform convolutional-layer channel pruning according to the scaling parameter values after sparsification training; after channel sparsification, most of the obtained scaling parameter values are close to 0, and, per the meaning of the normalization-layer scaling parameters, each channel of the layer's input feature image corresponds to one scaling parameter value; the feature-map channels whose importance falls below the pruning percentage are discarded, together with the convolution kernels corresponding to the discarded channels, completing the channel pruning process; the pruning percentage refers to a proportion of all scaling parameters after sparsification training, i.e. all the model's scaling parameters are sorted from small to large, the feature-map channels corresponding to the smallest scaling parameters up to the preset pruning percentage are pruned, and the convolution kernels corresponding to those channels are discarded at the same time; when the pruning percentage is high, channel pruning may temporarily cause some loss of precision, but this can largely be recovered by model fine-tuning in subsequent steps; specifically, for each convolutional layer, it is checked whether the number of channels would become zero after pruning; if so, the single filter channel corresponding to the scaling parameter with the largest absolute value is forcibly retained, preventing excessive pruning from destroying the model structure; convolutional layers not followed by a batch normalization layer are not channel-pruned; for a shortcut layer, it is checked whether the two convolutional layers connected to it have the same number of channels after pruning; if not, the channels of the two convolutional layers are marked 1 if retained and 0 if pruned, generating two one-dimensional binary vectors, and a bitwise OR of the two vectors yields a one-dimensional vector: the channels of both convolutional layers corresponding to vector positions containing 1 are retained, and those corresponding to positions containing 0 are pruned; the pooling layers, upsampling layers, and concatenation layers are not parameter-pruned; the max pooling layer applies a max pooling operation to the feature image of each channel: the feature image is cut without overlap into small blocks of the pooling size, the maximum value in each block is taken, the other nodes are discarded, and the original planar structure is retained to obtain the output face feature image; the shortcut layer performs channel-wise element addition of two input convolutional-layer feature images, which must have the same number of channels; the upsampling layer inserts new elements into the input feature image by bilinear interpolation, i.e. linear interpolation in each of the two directions between pixel values; the concatenation layer stacks the input face feature images along the channel dimension in order, i.e. the number of channels of the face feature image output by the concatenation layer equals the sum of the channel counts of the input feature images, and in the implementation the feature image arrays are concatenated directly along the channel dimension;
after channel pruning, retrain the face model on the same data set starting from the obtained model weights; the training loss cost function is the original model loss cost function used during sparsification training; when the model cost value no longer fluctuates significantly with the number of iterations, training is stopped and the model weights are obtained; each iteration specifically comprises: dividing each input training image into grid cells, generating predicted boxes in each cell from candidate boxes of preset sizes, computing the loss cost function from the predicted box parameters and the annotated ground-truth box parameters, and computing over all images in the iteration to obtain the current loss cost function value, thereby completing one iteration.
5. The block e-commerce platform transaction system based on a 5G network as claimed in claim 1, wherein a processor, a sensor module, a lens, an MCU (microcontroller unit) controller and a 5G module are arranged in the binocular camera; the processor is connected to the sensor module, the lens, the MCU controller and the 5G module respectively; the MCU controller is connected to a plurality of peripheral devices; and the 5G module uploads image data captured by the camera over a 5G network.
6. The block e-commerce platform transaction system based on a 5G network as claimed in claim 5, wherein the peripheral devices comprise one or more of a key, a buzzer, an indicator light and a USB interface.
7. The block e-commerce platform transaction system based on a 5G network as claimed in claim 1, wherein the acquisition of the label information of the purchased commodity is triggered by the completion of the first camera's acquisition of the user's first face information.
8. The block e-commerce platform transaction system based on a 5G network as claimed in claim 1, wherein, when either the first face information or the second face information does not match the prestored face information, the system automatically deletes the association between the purchased commodity and the account information.
CN202110019818.8A 2021-01-07 2021-01-07 Block E-commerce platform transaction system based on 5G network Withdrawn CN113487374A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110019818.8A CN113487374A (en) 2021-01-07 2021-01-07 Block E-commerce platform transaction system based on 5G network

Publications (1)

Publication Number Publication Date
CN113487374A true CN113487374A (en) 2021-10-08

Family

ID=77933308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110019818.8A Withdrawn CN113487374A (en) 2021-01-07 2021-01-07 Block E-commerce platform transaction system based on 5G network

Country Status (1)

Country Link
CN (1) CN113487374A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117011963A (en) * 2023-10-07 2023-11-07 四川金投科技股份有限公司 Intelligent lock and intelligent door control system based on electronic key
CN117011963B (en) * 2023-10-07 2023-12-08 四川金投科技股份有限公司 Intelligent lock and intelligent door control system based on electronic key

CN116612382A (en) Urban remote sensing image target detection method and device
CN117011219A (en) Method, apparatus, device, storage medium and program product for detecting quality of article

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211008