CN114511909A - Face brushing payment intention identification method, device and equipment - Google Patents

Face brushing payment intention identification method, device and equipment

Info

Publication number
CN114511909A
CN114511909A (application CN202210180422.6A)
Authority
CN
China
Prior art keywords
face
image
brushing
candidate
face brushing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210180422.6A
Other languages
Chinese (zh)
Inventor
尹英杰
丁菁汀
李亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210180422.6A
Publication of CN114511909A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 - Payment architectures, schemes or protocols
    • G06Q 20/38 - Payment protocols; Details thereof
    • G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q 20/401 - Transaction verification
    • G06Q 20/4014 - Identity check for transactions
    • G06Q 20/40145 - Biometric identity checks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Accounting & Taxation (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computer Security & Cryptography (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of this specification disclose a method, apparatus, and device for recognizing the intention to pay by face-brushing. The scheme comprises: acquiring a face-brushing image and determining candidates to be identified in the face-brushing image; generating, for the region occupied by each candidate in the face-brushing image, a corresponding mask map that distinguishes that region from the other regions of the image; extracting features of the face-brushing image and obtaining fused features from those features and the mask map; and identifying, from the fused features, whether each candidate has the intention to pay by face-brushing. The scheme can improve the security of face-brushing payment.

Description

Face brushing payment intention identification method, device and equipment
Technical Field
This specification relates to the field of computer technology, and in particular to a method, apparatus, and device for recognizing the intention to pay by face-brushing.
Background
With the development of computer and Internet technologies, many services can be handled online, which has driven the growth of various online service platforms. Among them, face-brushing payment is a new payment mode built on technologies such as artificial intelligence, machine vision, 3D sensing, and big data. By using face recognition for identity authentication, it brings great convenience to users and is widely welcomed.
At present, in a face-brushing payment scenario, after face-brushing payment is started, the user to be charged needs to stand in front of a device with the face-brushing payment function for face recognition. During the face-brushing process, however, several users may be standing in front of the device, so that multiple users appear in the face-brushing image captured by the device. When the device performs face recognition on such an image, it is difficult to determine which user is the one currently paying, that is, which user actually intends to pay by face-brushing. In other words, only the user currently paying has the intention to pay by face-brushing, while the other users do not.
Based on this, face-brushing payment intention recognition is an important link in the security guarantee of a face-brushing payment system and helps improve the perceived security of face-brushing. If the device instead recognizes one of the other users, a mistaken face-brushing payment occurs, reducing the security of face-brushing payment.
A more secure identification scheme is therefore needed for face-brushing payment.
Disclosure of Invention
One or more embodiments of this specification provide a method, apparatus, device, and storage medium for recognizing the intention to pay by face-brushing, so as to solve the following technical problem: a more secure identification scheme is needed for face-brushing payment.
To solve the above technical problem, one or more embodiments of the present specification are implemented as follows:
One or more embodiments of this specification provide a method for recognizing the intention to pay by face-brushing, comprising:
acquiring a face-brushing image, and determining candidates to be identified in the face-brushing image;
generating, for the region occupied by each candidate in the face-brushing image, a corresponding mask map that distinguishes that region from the other regions of the face-brushing image;
extracting features of the face-brushing image, and obtaining fused features from the features of the face-brushing image and the mask map;
and identifying, from the fused features, whether each candidate has the intention to pay by face-brushing.
One or more embodiments of this specification provide an apparatus for recognizing the intention to pay by face-brushing, comprising:
an acquisition module, which acquires a face-brushing image and determines candidates to be identified in the face-brushing image;
a generating module, which generates, for the region occupied by each candidate in the face-brushing image, a corresponding mask map that distinguishes that region from the other regions of the face-brushing image;
an extraction module, which extracts features of the face-brushing image and obtains fused features from the features of the face-brushing image and the mask map;
and an identification module, which identifies, from the fused features, whether each candidate has the intention to pay by face-brushing.
One or more embodiments of this specification provide a device for recognizing the intention to pay by face-brushing, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a face-brushing image, and determining candidates to be identified in the face-brushing image;
generating, for the region occupied by each candidate in the face-brushing image, a corresponding mask map that distinguishes that region from the other regions of the face-brushing image;
extracting features of the face-brushing image, and obtaining fused features from the features of the face-brushing image and the mask map;
and identifying, from the fused features, whether each candidate has the intention to pay by face-brushing.
One or more embodiments of the present specification provide a non-volatile computer storage medium having stored thereon computer-executable instructions configured to:
acquiring a face-brushing image, and determining candidates to be identified in the face-brushing image;
generating, for the region occupied by each candidate in the face-brushing image, a corresponding mask map that distinguishes that region from the other regions of the face-brushing image;
extracting features of the face-brushing image, and obtaining fused features from the features of the face-brushing image and the mask map;
and identifying, from the fused features, whether each candidate has the intention to pay by face-brushing.
At least one of the technical schemes adopted in one or more embodiments of this specification can achieve the following beneficial effects:
By generating a corresponding mask map for the region occupied by each candidate in the face-brushing image, the feature information of each candidate is made clearer and the difference between having and not having the intention to pay by face-brushing is enlarged. Recognizing, from the fused features, whether each candidate has the intention to pay by face-brushing achieves an effect similar to image contrast enhancement and focuses attention on the candidates who intend to pay. Candidates with and without the intention to pay by face-brushing can thus be distinguished accurately in the face-brushing image, the payment intention of the candidates is recognized in a more targeted way, the perceived security of face-brushing is enhanced, and the availability and security of the face-brushing system are effectively guaranteed.
Drawings
In order to illustrate more clearly the embodiments of this specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments in this specification, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for recognizing the intention to pay by face-brushing, provided in one or more embodiments of this specification;
Fig. 2 is a schematic diagram of the framework of a face-brushing payment intention recognition system, provided in one or more embodiments of this specification;
Fig. 3 is a schematic flowchart of a method for recognizing the intention to pay by face-brushing based on end-to-end learning with a deep convolutional neural network, provided in one or more embodiments of this specification;
Fig. 4 is a schematic structural diagram of an apparatus for recognizing the intention to pay by face-brushing, provided in one or more embodiments of this specification;
Fig. 5 is a schematic structural diagram of a device for recognizing the intention to pay by face-brushing, provided in one or more embodiments of this specification.
Detailed Description
The embodiments of this specification provide a method, apparatus, device, and storage medium for recognizing the intention to pay by face-brushing.
To help those skilled in the art better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part, rather than all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this specification without creative effort shall fall within the scope of protection of this specification.
Fig. 1 is a schematic flowchart of a method for recognizing the intention to pay by face-brushing, provided in one or more embodiments of this specification. The flow can be executed by an electronic device with a face-brushing payment function. The electronic device can be a terminal with image data processing capability, for example a mobile terminal such as a mobile phone, tablet, or notebook, a fixed terminal such as a desktop computer, or a server. Some input parameters or intermediate results in the flow allow manual intervention and adjustment to help improve accuracy.
The process in fig. 1 may include the following steps:
s102: the face brushing method comprises the steps of obtaining a face brushing image, and determining a candidate to be identified in the face brushing image.
In one or more embodiments of this specification, after receiving a face-brushing payment instruction, the electronic device may acquire the face-brushing image through a pre-installed camera device; alternatively, the electronic device may generate the face-brushing payment instruction from a payment order and then acquire the face-brushing image through the camera device. The face-brushing image may be a single frame taken from a video, or a still image.
A candidate to be identified is a user who needs to pay a related fee. It should be noted that, in order to pay by face-brushing, a candidate must first register identity information on the corresponding client and enter face information, so that when the candidate starts a face-brushing payment and is recognized as having the intention to pay by face-brushing, the candidate can be authenticated against the pre-registered face information.
That is, the face-brushing image contains the face information of the candidates; whether a candidate has the intention to pay by face-brushing can be determined by recognizing this face information, and the candidate can then be authenticated using that face information.
There may be one or more candidates in front of the camera device; when there are several, the face-brushing image contains several candidates, and when there is one, it contains a single candidate. In addition to the candidates' face information, the face-brushing image also contains other feature information of the candidates, such as torso and limb information, as well as other objects that need not be identified, such as tables, chairs, and wall hangings in the candidates' surroundings.
In addition, under normal circumstances, when the electronic device executes a single face-brushing payment instruction, identity authentication is performed for the specific candidate who currently started the face-brushing payment, and this specific candidate generally has the intention to pay by face-brushing. That is, for a single face-brushing payment instruction, even if several candidates appear in the face-brushing image, not all of them intend to pay: only the specific candidate does. The payment intention of this specific candidate can be regarded as safe, while that of the other candidates is not.
For example, in public places, offline Internet of Things (IoT) face-brushing machines are often used for face-brushing payment. Such a machine is a device with a face-brushing function placed in public consumption scenarios such as supermarkets, convenience stores, restaurants, hotels, campuses, and medical facilities.
If candidate A taps face-brushing payment to start it, the IoT face-brushing machine receives a face-brushing payment instruction and acquires a face-brushing image through its camera. Since the machine is located in an open public place where several candidates may be queuing to pay, the acquired face-brushing image may contain several candidates. Among them, however, only candidate A actually intends to pay by face-brushing; only after recognizing that candidate A has the intention to pay by face-brushing should candidate A be authenticated against candidate A's face information. In other words, the other candidates do not actually intend to pay by face-brushing.
Further, if candidate B is queuing behind candidate A, the camera device may capture candidate B together with candidate A during face-brushing authentication, even though candidate B has no intention to pay, so that the captured face-brushing image contains both candidates A and B. If, while recognizing the face-brushing image, the electronic device treats candidate B rather than candidate A as the face-brushing user, it will authenticate candidate B directly without checking whether candidate B intends to pay by face-brushing, and after authentication succeeds it will charge candidate B's account. Candidate B's assets are thus debited by mistake and candidate B suffers a loss.
S104: and respectively generating corresponding mask images according to the located areas of the candidate persons in the face brushing image so as to distinguish the located areas from other areas in the face brushing image.
In one or more embodiments of the present specification, the region may include appearance feature information of the candidate, such as face information, torso information, and limb information, but in order to increase accuracy of the recognition result, the region mainly includes the face information of the candidate. Meanwhile, the located area can be determined according to the position information of the candidate in the face brushing image.
The masking operation of the image refers to recalculating the value of each pixel in the image through a mask kernel, describing the influence degree of the pixel points in the field on the new pixel value by the mask kernel, and simultaneously performing weighted average on the pixel points according to the weighting factors in the mask operator, wherein the image masking operation is commonly used in areas of image smoothing, edge detection, feature analysis and the like, so that the area of a candidate in the face brushing image and other areas in the face brushing image can be distinguished through the masking operation.
It should be noted that, a single candidate corresponds to a single mask map in a region of the face brushing image, that is, if there are multiple candidates in the face brushing image, a corresponding mask map is generated for each candidate, and multiple mask maps are finally obtained.
That is, in the single mask map, the located region can be distinguished from other regions, for example, the located region filling value is 1, and the other region filling value is 0 (other different filling values with higher distinguishing degrees may also be adopted). That is, by generating a mask map corresponding to each candidate, the feature information of the candidate can be made clearer, and the difference between the willingness to swipe a face and the willingness to swipe no face can be increased.
S106: and extracting the features of the face brushing image, and obtaining fusion features according to the features of the face brushing image and the mask image.
In one or more embodiments of the present specification, how to extract features of a brush face image is not limited herein, for example, features of the brush face image are extracted by a feature extraction model. The features may include face features, torso features, and extremity features of each candidate. The face features can be global features of faces of the candidate persons, and the accuracy of the recognition result can be improved by recognizing the faces through the global features.
Of course, after the mask map is obtained, the fused features may be obtained by inputting the features of the brush face image and the mask map into the fused feature extraction model.
Through the characteristic of drawing the face of brushing the face image, combine the characteristic of the face of brushing the face image with the mask picture, a passageway has newly been increased for the characteristic of the face of brushing the face image in other words, increase the passageway quantity of the characteristic of the face of brushing the face image to obtain the fusion characteristic, in the fusion characteristic, then pay attention to the face characteristic of the candidate who corresponds in the face of brushing the face image more, can more accurately distinguish the candidate who has the intention of brushing the face in the face of brushing the face image and the candidate who does not have the intention of brushing the face.
S108: and identifying whether each candidate has a face brushing willingness to pay according to the fusion characteristics.
In one or more embodiments of the present specification, face brushing image information is determined according to the fusion features, and the face brushing image information includes feature information of candidate persons. The feature information of the candidate corresponding to the mask map of the fusion feature is more concerned, and the feature information mainly comprises face information of the corresponding candidate. That is, when one mask map is generated, a recognition process is performed to recognize whether or not a candidate corresponding to the mask map has a desire to pay for face brushing.
It should be noted that, a preset rule may be combined to identify whether the corresponding candidate has a willingness to swipe a face, for example, if it is identified that the face region of the candidate is located in the middle region, the candidate is considered to have the willingness to swipe a face, or if it is identified that the face region of the candidate occupies a large part of the area of the face-swiped image, and the face angle meets a preset angle threshold, the candidate is considered to have the willingness to swipe a face.
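As an illustrative sketch only, such a preset rule check could be written as follows; the notion of the "central region", the area-ratio threshold, and the yaw-angle threshold values are assumptions, since the specification only names the kinds of rules, not their parameters.

```python
def rule_based_intention(face_box, face_yaw_deg, image_w, image_h,
                         area_ratio_thresh=0.08, yaw_thresh_deg=20.0):
    """Heuristic check: face roughly centered, large enough, facing the camera."""
    x1, y1, x2, y2 = face_box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0

    # Face center lies in the middle third of the image.
    in_middle = (image_w / 3 <= cx <= 2 * image_w / 3 and
                 image_h / 3 <= cy <= 2 * image_h / 3)

    # Face box occupies a large enough share of the image area.
    area_ratio = ((x2 - x1) * (y2 - y1)) / float(image_w * image_h)

    # Face angle (here: yaw) satisfies the preset angle threshold.
    facing_camera = abs(face_yaw_deg) <= yaw_thresh_deg

    return in_middle and area_ratio >= area_ratio_thresh and facing_camera
```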
Further, the fused features can be fed into a payment-intention recognition model; the model recognizes the face-brushing image information and outputs a processing result according to the preset rules, a face-brushing payment intention probability value is then generated from the processing result, and whether the candidate has the intention to pay by face-brushing can be judged from this probability value.
The processing result can be a vector, generated by the payment-intention recognition model, that represents the face-brushing payment intention probability value.
For example, if the probability value is greater than a preset probability threshold, the candidate can be considered to have the intention to pay by face-brushing, that is, the face-brushing payment instruction of the electronic device was generated after this candidate started the face-brushing payment. If the probability value is less than or equal to the preset probability threshold, the candidate can be considered not to have the intention to pay, that is, the face-brushing payment instruction was generated after another candidate, not this one, started the face-brushing payment.
Further, if more than one candidate has a probability value greater than the preset threshold, the face-brushing payment intention recognition result is not trustworthy and an authentication failure is reported. If no candidate has a probability value greater than the preset threshold, the recognition result is likewise not trustworthy and an authentication failure is reported.
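A minimal sketch of this decision rule, assuming each candidate has already been assigned an intention probability by the recognition model; the threshold value and function name are assumptions.

```python
def pick_paying_candidate(probabilities, threshold=0.5):
    """probabilities: {candidate_id: face-brushing payment intention probability}.

    Exactly one candidate above the threshold -> that candidate is authenticated.
    Zero or more than one above the threshold -> the result is not trustworthy
    and authentication fails.
    """
    above = [cid for cid, p in probabilities.items() if p > threshold]
    if len(above) == 1:
        return above[0]          # candidate to authenticate by face information
    return None                  # prompt authentication failure

# Example: candidate A started the payment, candidate B was captured in passing.
result = pick_paying_candidate({"A": 0.93, "B": 0.12})
```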
By the method of Fig. 1, a corresponding mask map is generated for the region occupied by each candidate in the face-brushing image, which makes the candidates' feature information clearer and enlarges the difference between having and not having the intention to pay by face-brushing. Recognizing from the fused features whether each candidate intends to pay achieves an effect similar to image contrast enhancement and focuses attention on the candidates who intend to pay, so that candidates with and without the intention to pay can be accurately distinguished in the face-brushing image, the payment intention of the candidates is recognized in a more targeted way, and the perceived security of face-brushing is enhanced.
Based on the flow of Fig. 1, this specification also provides some specific implementations and extensions of the flow, which are described below.
In one or more embodiments of this specification, the corresponding mask map is generated as follows.
Specifically, after a candidate is determined, the candidate's face region in the face-brushing image is extracted: for example, the candidate's face is first located by a face extraction model, and the face region is then determined from the position information of the face. The face region is then processed to determine a face-region selection box. The selection box can take various forms, for example a circular box, a rectangular box, or an irregular polygonal box, with one precondition: to ensure the accuracy of the recognition result, the selection box must completely enclose the candidate's face region.
After the face-region selection box is obtained, a first fill region of the mask map corresponding to the candidate is determined from the candidate's face-region selection box. The shape of the first fill region can likewise take various forms, which are not limited here, such as a circular, rectangular, or irregular polygonal region, again with one precondition: to ensure the accuracy of the recognition result, the first fill region should follow the selection box so as to match the actual face region as closely as possible.
After the first fill region is determined, a second fill region outside the first fill region is determined in the face-brushing image, and different fill values are assigned to the first and second fill regions.
So that the first fill region in the mask map coincides as closely as possible with the face region in the face-brushing image, the mask map is generated with the same resolution as the face-brushing image after the different fill values are assigned.
Further, since a face region is usually close to circular or elliptical, the first fill region is taken to be a circular region so that it fits the candidate's face region better.
Specifically, when the face region is processed and the face-region selection box is determined, the selection box is taken to be a rectangular box. After the rectangular face box is obtained, the width and height of the face box are computed from the position of the rectangular box in the face-brushing image, and the radius of the circular region is computed from the face box width and height.
When the radius of the circular region is computed, since the face region is roughly circular or elliptical and, when the rectangular box is initially generated, approximates the inscribed circle of that box, the radius is taken as the larger of half the width and half the height of the rectangular box, so that the face region is restored as far as possible and the first fill region covers the whole face region as far as possible.
Therefore, the center of the rectangular box is used as the circle center, half the length of the longest side of the rectangular box is taken as the radius, and the first fill region of the mask map corresponding to the candidate is determined as the circular region formed by this center and radius.
For example, assume that the position of the candidate's rectangular face box in the face-brushing image is (x1, y1, x2, y2), where x1 and x2 are the position coordinates of the box's width on the x axis, and y1 and y2 are the position coordinates of the box's height on the y axis.
The face box width is then computed as w = x2 - x1, where w is the face box width; note that the coordinate x1 is smaller than x2.
The face box height is computed as h = y2 - y1, where h is the face box height; note that the coordinate y1 is smaller than y2.
The radius of the circular region is determined as R = max(w, h) / 2, where R is the radius of the circular region.
On this basis, the position coordinates of the center of the rectangular box are ((x1 + x2) / 2, (y1 + y2) / 2), and R is max(x2 - x1, y2 - y1) / 2.
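As an illustrative sketch only (not the patent's reference implementation), the mask-map construction described above could be written as follows; the function name and the 0/1 fill values follow the example given in this section, and everything else is an assumption.

```python
import numpy as np

def make_circular_mask(image_h, image_w, face_box):
    """Build a mask map with the same resolution as the face-brushing image.

    face_box is (x1, y1, x2, y2), the rectangular face selection box of one
    candidate. The first fill region is the circle whose center is the box
    center and whose radius is half of the box's longest side; it is filled
    with 1, and the second fill region (everything else) is filled with 0.
    """
    x1, y1, x2, y2 = face_box
    w, h = x2 - x1, y2 - y1                      # face box width and height
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0    # circle center
    r = max(w, h) / 2.0                          # radius R = max(w, h) / 2

    ys, xs = np.ogrid[:image_h, :image_w]
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
    mask = np.zeros((image_h, image_w), dtype=np.float32)
    mask[inside] = 1.0                           # first fill region
    return mask

# One mask map per candidate: a face-brushing image with two candidates
# yields two mask maps.
mask_a = make_circular_mask(480, 640, (250, 120, 380, 280))
mask_b = make_circular_mask(480, 640, (40, 150, 150, 270))
```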
In one or more embodiments of this specification, the scheme is described in more detail and more intuitively with reference to Figs. 2 and 3.
Fig. 2 is a schematic diagram of the framework of a face-brushing payment intention recognition system, provided in one or more embodiments of this specification.
In one or more embodiments of this specification, in order to recognize candidates' face-brushing payment intention more accurately, end-to-end learning with a deep convolutional neural network is applied to the face-brushing image obtained during the face-brushing payment, which makes a security check of the candidates' payment intention possible. A candidate-region attention mechanism is introduced, so that the mask map is incorporated into the network's learning and the payment intention of candidates in the face-brushing image can be recognized in a more targeted way, thereby enhancing the perceived security of face-brushing.
As shown in Fig. 2, the face-brushing payment intention recognition system is implemented through end-to-end learning with a deep convolutional neural network and comprises the face-brushing image, the mask map generated from the region occupied by a candidate in the face-brushing image, a first convolutional network module, a candidate-region attention mechanism module, a third convolutional network module, and the network output.
It should be noted that the first and third convolutional networks have a specific correspondence: they are of related network types, which are not limited here, but they cannot be considered independently of each other; for example, the first and third convolutional networks are different parts of the same recognition network.
The face-brushing image and the generated mask map serve as the input data of the deep convolutional neural network, and the network output is the face-brushing payment intention probability values, namely the probability that the intention is safe and the probability that it is not.
On this basis, during payment intention recognition, the features of the face-brushing image are first extracted by the first convolutional network; the features of the face-brushing image and the mask map are then fed into the candidate-region attention mechanism module, which processes them and outputs the fused features. Finally, the fused features are fed into the third convolutional network, which processes them to obtain the processing result.
The following continues to describe how the candidate-region attention mechanism module processes the features of the face-brushing image and the mask map and outputs the fused features.
As shown in Fig. 2, the candidate-region attention mechanism module involves the features of the face-brushing image, the mask map after resolution reduction, a second convolutional network module, and the fused features.
Specifically, during payment intention recognition, the face-brushing image is fed into the first convolutional network of the first convolutional network module, which extracts the features of the face-brushing image; the mask map is down-sampled to obtain a reduced-resolution mask map that matches those features.
The features of the face-brushing image and the reduced-resolution mask map are then fed into the second convolutional network module, and the second convolutional network fuses them to obtain the fused features.
Recognition of candidates' face-brushing payment intention is thus achieved through end-to-end learning with a deep convolutional neural network, and the candidate-region attention mechanism module lets the network learn and judge the payment intention of candidates in the face-brushing image in a more targeted way. Especially in public places, this effectively prevents the situation in which candidate A starts a face-brushing payment but candidate B's assets are debited by mistake.
More intuitively, Fig. 3 is a schematic flowchart of a face-brushing payment intention recognition method based on end-to-end learning with a deep convolutional neural network, provided in one or more embodiments of this specification.
The flow in fig. 3 may include the following steps:
s302: the face brushing method comprises the steps of obtaining a face brushing image, and determining a candidate to be identified in the face brushing image.
S304: and respectively generating corresponding mask images according to the located areas of the candidate persons in the face brushing image so as to distinguish the located areas from other areas in the face brushing image.
S306: and extracting the characteristics of the brushing image through a first convolution network.
It should be noted that the first convolutional network is obtained through supervised training in advance.
S308: and performing resolution reduction processing on the mask image so as to adapt to the characteristics of the brushing face image.
For example, the resolution of the mask map is reduced by nearest neighbor sampling to generate a mask map having the same resolution as that of the brush face image, and the mask map can be adapted to the feature of the brush face image so that the second convolution network can perform processing.
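A minimal sketch of this down-sampling step, assuming PyTorch is used (the patent does not name a framework); the target size is simply whatever spatial size the first network's feature map happens to have.

```python
import torch
import torch.nn.functional as F

# features: output of the first convolutional network, shape (N, C, Hf, Wf)
# mask:     one candidate's mask map, shape (N, 1, H, W), filled with 0/1
def downsample_mask(mask: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
    # Nearest-neighbor sampling keeps the mask binary while matching the
    # spatial resolution of the feature map.
    return F.interpolate(mask, size=features.shape[-2:], mode="nearest")
```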
S310: and fusing the characteristics of the face brushing image and the mask image after resolution reduction processing through a second convolution network to obtain fused characteristics. It should be noted that the second convolutional network is obtained through supervised training in advance.
In one or more embodiments of the present specification, in the process of obtaining the fusion feature, the feature of the face brushing image and the mask image after the resolution reduction processing are connected according to the channel dimension, and the feature obtained by the connection is input to the second convolution network for processing, so as to obtain the fusion feature.
The number of convolution layers of the second convolution network is not specifically limited herein. That is, the number of convolution layers of the second convolutional network may be made up of 1 or more convolutional layers. Meanwhile, the resolution of the fusion features is the same as the features of the face brushing image, and the number of the feature channels of the fusion features is the same as the features of the face brushing image.
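As an illustrative sketch only, and again assuming PyTorch, a second convolutional network of this kind could look as follows; the class name, the single 3x3 convolution, and the activation are assumptions, chosen so that the fused features keep the same spatial size and channel count as the input features, as stated above.

```python
import torch
import torch.nn as nn

class RegionAttentionFusion(nn.Module):
    """Candidate-region attention: concatenate feature map and mask, then fuse."""

    def __init__(self, feat_channels: int):
        super().__init__()
        # One extra input channel for the mask map; the output channel count
        # equals the input feature channels, and the spatial size is preserved.
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_channels + 1, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, features: torch.Tensor, mask_small: torch.Tensor) -> torch.Tensor:
        x = torch.cat([features, mask_small], dim=1)  # concatenate along channels
        return self.fuse(x)                           # fused features
```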
S312: and inputting the fusion features into a third convolution network corresponding to the first convolution network for processing to obtain a processing result, wherein the first convolution network and the third convolution network are obtained by splitting the same convolution network in advance. It should be noted that the third convolutional network is obtained through supervised training in advance.
In one or more embodiments of this specification, the first and third convolutional networks are split in advance from the same convolutional network, for example ResNet, ShuffleNetV2, or the like.
During splitting, the first and third convolutional networks are taken as the front part and the rear part of that same convolutional network, respectively.
The split position can be determined by the resolution of the features of the face-brushing image. Even before feature extraction has started (assuming the model has not yet been built), there is an expected resolution for the features of the face-brushing image, referred to here as the target resolution (for example, some reduced resolution derived from the resolution of the face-brushing image), so that the convolutional network can be split accordingly and the model construction completed; the front part of the network (the first convolutional network) is exactly the part that outputs features of this resolution.
A convolutional layer matching the target resolution is then determined among the convolutional layers of that same convolutional network. Finally, using the matching convolutional layer as the split point, the network is split into a front part and a rear part, the front part serving as the first convolutional network and the rear part as the third convolutional network.
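A minimal sketch of such a split, assuming PyTorch and a torchvision ResNet-18 backbone purely for illustration (the patent only requires that both parts come from the same network); the split point "layer2" stands in for whichever layer's output resolution matches the target resolution.

```python
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(num_classes=2)  # two outputs: intention safe / not safe
children = list(backbone.named_children())

# Suppose the output of "layer2" has the target resolution; everything up to
# and including it becomes the first convolutional network, the rest (plus
# pooling and the classifier head) becomes the third convolutional network.
split_after = "layer2"
names = [name for name, _ in children]
idx = names.index(split_after) + 1

first_conv_net = nn.Sequential(*[m for _, m in children[:idx]])
third_conv_net = nn.Sequential(
    *[m for _, m in children[idx:-1]],
    nn.Flatten(),
    children[-1][1],  # the final fully connected layer
)
```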
S314: and generating a probability value according to the processing result to indicate whether the corresponding candidate has the willingness to pay by brushing the face.
For example, the probability value is compared with a set threshold probability, and if the probability value is greater than the set threshold probability, it is determined that the will of the candidate is safe, that is, the candidate has a will of face brushing payment. And if the probability value is smaller than or equal to the set threshold probability, determining that the willingness of the candidate is unsafe, namely the candidate does not have the willingness to pay by brushing the face.
In light of the foregoing, the following continues to describe how supervised training is performed for the first, second, and third convolutional networks.
In one or more embodiments of this specification, a training data set is established first, and network training is then performed on that data set.
Specifically, when the training data set is established, face-brushing sample images containing a user confirmed to be the face-brushing user are collected first: a candidate starts a face-brushing payment and the face-brushing image is captured by the camera device; regardless of how many candidates to be identified appear in that image, the candidate who started the payment is taken as the face-brushing user and the image is taken as a face-brushing sample image. For example, a face-brushing image is captured by the camera of an offline IoT face-brushing machine; if candidate A started the face-brushing payment, candidate A is taken as the face-brushing user in that image and the image is taken as a face-brushing sample image.
It should be noted that a face-brushing image is captured each time a face-brushing payment is started. For example, if candidate A starts a face-brushing payment, the IoT face-brushing machine captures an image of the nearby users and one face-brushing image is obtained; if candidate B then starts a face-brushing payment, the machine captures an image of the nearby users again and another face-brushing image is obtained.
After a face-brushing sample image is obtained, the face-brushing user is labeled as having the intention to pay by face-brushing and the corresponding mask map is generated, yielding a positive sample. To generate the mask map, the position of the face-brushing user's face selection box in the face-brushing image is determined, and the mask map is generated from that position. The face-brushing user can be labeled with an intention label, for example a label in {0, 1}, where 1 means having the intention to pay by face-brushing and 0 means not having it.
Since several candidates to be identified may appear in a face-brushing sample image, if the image also contains other users who were captured incidentally, those other users are labeled as not having the intention to pay by face-brushing and their corresponding mask maps are generated, yielding negative samples. For example, if candidate A started the face-brushing payment, candidate B is labeled in that face-brushing image as not having the intention to pay.
Finally, the first, second, and third convolutional networks are trained with supervision on the obtained positive and negative samples.
During supervised training, the positive and negative samples in the training data set are first sampled randomly to form training batches and their corresponding intention labels. Each training batch and its labels are then fed into the initial deep convolutional neural network, which consists of the untrained first, second, and third convolutional networks.
The initial deep convolutional neural network outputs probability values, a loss function is computed from these probability values and the corresponding labels, and the network is trained by continually optimizing the loss function with gradient descent, thereby completing the supervised training and obtaining the deep convolutional neural network. The rules for deciding whether a candidate has the intention to pay by face-brushing are obtained through this network training; for example, if the deep convolutional neural network finds that a candidate's face region lies in the central region of the image, the candidate is considered to have the intention to pay by face-brushing.
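The following illustrative sketch, assuming PyTorch and reusing the components sketched above (downsample_mask, RegionAttentionFusion, first_conv_net, third_conv_net), shows one way the supervised training described here could be wired up; the optimizer, learning rate, synthetic batch, and cross-entropy loss are assumptions standing in for the gradient-descent optimization in the text.

```python
import torch
import torch.nn as nn

fusion = RegionAttentionFusion(feat_channels=128)  # 128 = output channels of first_conv_net here
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    list(first_conv_net.parameters())
    + list(fusion.parameters())
    + list(third_conv_net.parameters()),
    lr=0.01,
)

# Synthetic training batch purely for illustration: 4 face-brushing sample
# images, one mask map per labeled candidate, and intention labels in {0, 1}
# (1 = has the intention to pay by face-brushing, 0 = does not).
images = torch.randn(4, 3, 224, 224)
masks = torch.randint(0, 2, (4, 1, 224, 224)).float()
labels = torch.randint(0, 2, (4,))

for step in range(10):                                # gradient-descent iterations
    features = first_conv_net(images)                 # features of the image
    masks_small = downsample_mask(masks, features)     # nearest-neighbor resize
    fused = fusion(features, masks_small)              # candidate-region attention
    logits = third_conv_net(fused)                     # (N, 2): intention unsafe / safe
    loss = criterion(logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```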
Based on the same idea, one or more embodiments of this specification further provide an apparatus and a device corresponding to the above method, as shown in Figs. 4 and 5.
Fig. 4 is a schematic structural diagram of an apparatus for recognizing the intention to pay by face-brushing, provided in one or more embodiments of this specification; the apparatus comprises:
an acquisition module 402, which acquires a face-brushing image and determines the candidates to be identified in the face-brushing image;
a generating module 404, which generates, for the region occupied by each candidate in the face-brushing image, a corresponding mask map that distinguishes that region from the other regions of the face-brushing image;
an extraction module 406, which extracts the features of the face-brushing image and obtains the fused features from the features of the face-brushing image and the mask map;
and an identification module 408, which identifies, from the fused features, whether each candidate has the intention to pay by face-brushing.
Optionally, the generating module 404 is configured to perform, for each determined candidate:
determining, from the candidate's face-region selection box, a first fill region of the mask map corresponding to the candidate and a second fill region outside the first fill region;
and generating the mask map, with the same resolution as the face-brushing image, by assigning different fill values to the first fill region and the second fill region.
Optionally, the face-region selection box is a rectangular box;
the generating module 404 determines the center of the rectangular box as the circle center and half the length of the longest side of the rectangular box as the radius;
and determines the circular region formed by this circle center and radius as the first fill region of the mask map corresponding to the candidate.
Optionally, the extraction module 406 is configured to extract the features of the face-brushing image through a first convolutional network;
down-sample the mask map so that it matches the features of the face-brushing image;
and fuse the features of the face-brushing image with the reduced-resolution mask map through a second convolutional network to obtain the fused features.
Optionally, the extraction module 406 concatenates the features of the face-brushing image and the reduced-resolution mask map along the channel dimension;
and feeds the concatenated features into the second convolutional network for processing to obtain the fused features.
Optionally, the identification module 408 feeds the fused features into a third convolutional network corresponding to the first convolutional network for processing to obtain a processing result, where the first and third convolutional networks are obtained in advance by splitting the same convolutional network;
and generates a probability value from the processing result to indicate whether the corresponding candidate has the intention to pay by face-brushing.
Optionally, the identification module 408 determines a target resolution as the resolution of the features of the face-brushing image;
determines, among the convolutional layers of that same convolutional network, the convolutional layer matching the target resolution;
and, using the matching convolutional layer as the split point, splits the network into a front part and a rear part, the front part serving as the first convolutional network and the rear part as the third convolutional network.
Optionally, the apparatus further comprises a supervised training module, which acquires face-brushing sample images containing a user confirmed to be the face-brushing user;
labels the face-brushing user as having the intention to pay by face-brushing and generates the corresponding mask map to obtain a positive sample;
if a face-brushing sample image also contains other users captured incidentally, labels those other users as not having the intention to pay by face-brushing and generates the corresponding mask maps to obtain negative samples;
and performs supervised training on the first, second, and third convolutional networks with the obtained samples.
Optionally, the face-brushing image contains at least two human faces.
Optionally, the apparatus is applied to an offline IoT face-brushing machine, and the face-brushing image is captured by the IoT face-brushing machine of the nearby users.
Fig. 5 is a schematic structural diagram of a device for recognizing the intention to pay by face-brushing, provided in one or more embodiments of this specification; the device comprises:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a face-brushing image, and determining candidates to be identified in the face-brushing image;
generating, for the region occupied by each candidate in the face-brushing image, a corresponding mask map that distinguishes that region from the other regions of the face-brushing image;
extracting features of the face-brushing image, and obtaining fused features from the features of the face-brushing image and the mask map;
and identifying, from the fused features, whether each candidate has the intention to pay by face-brushing.
Based on the same idea and corresponding to the above method, one or more embodiments of this specification further provide a non-volatile computer storage medium for recognizing the intention to pay by face-brushing, storing computer-executable instructions configured to:
acquiring a face-brushing image, and determining candidates to be identified in the face-brushing image;
generating, for the region occupied by each candidate in the face-brushing image, a corresponding mask map that distinguishes that region from the other regions of the face-brushing image;
extracting features of the face-brushing image, and obtaining fused features from the features of the face-brushing image and the mask map;
and identifying, from the fused features, whether each candidate has the intention to pay by face-brushing.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer integrates a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained simply by lightly programming the method flow into an integrated circuit with one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the memory's control logic. Those skilled in the art also know that, besides implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for performing the various functions can also be regarded as structures within the hardware component, or even as both software modules for performing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, each described separately. Of course, when implementing one or more embodiments of the present specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, the present specification embodiments may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the device, and the nonvolatile computer storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and for the relevant points, reference may be made to the partial description of the embodiments of the method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is merely one or more embodiments of the present specification and is not intended to limit the present specification. Various modifications and alterations to one or more embodiments of the present specification will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of one or more embodiments of the present specification shall be included in the scope of the claims of the present specification.

Claims (21)

1. A face brushing willingness-to-pay recognition method comprises the following steps:
acquiring a face brushing image, and determining candidates to be identified in the face brushing image;
generating, according to the region where each candidate is located in the face brushing image, a corresponding mask map for each candidate, so as to distinguish that region from other regions in the face brushing image;
extracting features of the face brushing image, and obtaining fusion features according to the features of the face brushing image and the mask maps;
and identifying, according to the fusion features, whether each candidate has a face brushing willingness to pay.
2. The method according to claim 1, wherein generating, according to the region where each candidate is located in the face brushing image, a corresponding mask map for each candidate specifically comprises:
performing, for each determined candidate, the following:
determining a first filling area of a mask image corresponding to the candidate and a second filling area outside the first filling area according to the face area selection box of the candidate;
generating the mask map with a resolution identical to that of the face brushing image by assigning different fill values to the first filling area and the second filling area.
3. The method of claim 2, wherein the face region selection box is a rectangular box;
the determining, according to the face area selection box of the candidate, a first filling area of the mask map corresponding to the candidate specifically comprises:
determining the center of the rectangular box as a circle center, and half the length of the longest edge of the rectangular box as a radius; and determining the circular area formed from the circle center and the radius as the first filling area of the mask map corresponding to the candidate.
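A minimal sketch of the mask map construction in claims 2 and 3, assuming the face region selection box is given as pixel coordinates (x1, y1, x2, y2) and assuming fill values of 1 and 0 for the first and second filling areas (the claims only require the two fill values to differ):

import numpy as np

def make_candidate_mask(image_h, image_w, face_box, fg_value=1.0, bg_value=0.0):
    """Mask map at the face brushing image's resolution: a circular first
    filling area derived from the rectangular face selection box, and a
    second filling area (everything else) with a different fill value."""
    x1, y1, x2, y2 = face_box                       # rectangular face region selection box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0       # center of the rectangle -> circle center
    radius = max(x2 - x1, y2 - y1) / 2.0            # half the longest edge -> radius
    ys, xs = np.ogrid[:image_h, :image_w]
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    mask = np.full((image_h, image_w), bg_value, dtype=np.float32)
    mask[inside] = fg_value                         # first filling area
    return mask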
4. The method according to claim 1, wherein extracting the features of the face brushing image and obtaining the fusion features according to the features of the face brushing image and the mask map specifically comprises:
extracting the features of the face brushing image through a first convolutional network;
performing resolution reduction processing on the mask map to adapt it to the features of the face brushing image;
and fusing, through a second convolutional network, the features of the face brushing image and the mask map after resolution reduction processing, to obtain the fusion features.
5. The method according to claim 4, wherein fusing, through the second convolutional network, the features of the face brushing image and the mask map after resolution reduction processing to obtain the fusion features specifically comprises:
concatenating, along the channel dimension, the features of the face brushing image and the mask map after resolution reduction processing; and inputting the concatenated features into the second convolutional network for processing to obtain the fusion features.
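The fusion step of claims 4 and 5 could look roughly like the following PyTorch sketch; bilinear down-sampling of the mask map and the tensor shapes noted in the comments are assumptions made here, not requirements of the claims.

import torch
import torch.nn.functional as F

def fuse_features(image_features, mask_map, second_conv_net):
    """image_features: (N, C, H, W) output of the first convolutional network;
    mask_map: (N, 1, H0, W0) at the original face brushing image resolution."""
    # Resolution reduction of the mask map to match the feature resolution.
    mask_small = F.interpolate(mask_map, size=image_features.shape[-2:],
                               mode="bilinear", align_corners=False)
    # Concatenate along the channel dimension, then apply the second network.
    stacked = torch.cat([image_features, mask_small], dim=1)
    return second_conv_net(stacked)  # fusion features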
6. The method according to claim 4, wherein identifying, according to the fusion features, whether each candidate has a face brushing willingness to pay comprises:
inputting the fusion features into a third convolutional network corresponding to the first convolutional network for processing to obtain a processing result, wherein the first convolutional network and the third convolutional network are obtained by splitting a same convolutional network in advance; and generating, according to the processing result, a probability value indicating whether the corresponding candidate has a face brushing willingness to pay.
7. The method according to claim 6, wherein the splitting specifically comprises:
determining a target resolution as the resolution of the features of the face brushing image;
determining, among the convolutional layers of the same convolutional network, a convolutional layer matching the target resolution;
and taking the matched convolutional layer as a splitting point, splitting the same convolutional network into a front part and a rear part, the front part serving as the first convolutional network and the rear part serving as the third convolutional network.
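One way to read the splitting of claim 7 is sketched below; treating the shared backbone as an ordered list of blocks and tracking the output resolution through each block's stride is an assumption of this sketch, not something the claim prescribes.

import torch.nn as nn

def split_backbone(backbone_layers, target_resolution, input_resolution):
    """backbone_layers: an ordered list of nn.Module blocks from one
    convolutional network; the block whose output spatial size first matches
    target_resolution becomes the splitting point."""
    resolution = input_resolution
    split_index = len(backbone_layers)
    for i, layer in enumerate(backbone_layers):
        stride = getattr(layer, "stride", 1)
        stride = stride[0] if isinstance(stride, tuple) else stride
        resolution = resolution // max(int(stride), 1)
        if resolution == target_resolution:
            split_index = i + 1
            break
    first_net = nn.Sequential(*backbone_layers[:split_index])  # front part: first convolutional network
    third_net = nn.Sequential(*backbone_layers[split_index:])  # rear part: third convolutional network
    return first_net, third_net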
8. The method according to claim 6, wherein before identifying, according to the fusion features, whether each candidate has a face brushing willingness to pay, the method further comprises:
acquiring a face brushing sample image containing a confirmed face brushing user;
marking the face brushing user as having a face brushing willingness to pay, and generating a corresponding mask map to obtain a positive sample;
if the face brushing sample image also contains other users who were incidentally captured, marking the other users as not having a face brushing willingness to pay, and generating corresponding mask maps to obtain negative samples;
and carrying out supervised training on the first convolutional network, the second convolutional network and the third convolutional network according to the obtained samples.
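A minimal illustration of how the positive and negative samples of claim 8 could be assembled, reusing the hypothetical make_candidate_mask helper from the earlier sketch; the labels 1 (has a face brushing willingness to pay) and 0 (does not) and the HxWxC image layout are assumed encodings.

def build_training_samples(face_brushing_sample_image, confirmed_payer_box, bystander_boxes):
    """Returns (image, mask map, label) triples: one positive sample for the
    confirmed face brushing user and one negative sample per incidentally
    captured user."""
    h, w = face_brushing_sample_image.shape[:2]
    samples = [(face_brushing_sample_image,
                make_candidate_mask(h, w, confirmed_payer_box), 1)]   # positive sample
    for box in bystander_boxes:
        samples.append((face_brushing_sample_image,
                        make_candidate_mask(h, w, box), 0))           # negative sample
    return samples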
9. The method according to any one of claims 1 to 8, wherein the face brushing image comprises at least two human faces.
10. The method according to any one of claims 1 to 8, applied to an offline IoT face brushing machine, the face brushing image being captured by the IoT face brushing machine for nearby users.
11. A face brushing willingness-to-pay recognition apparatus, comprising:
an acquisition module, configured to acquire a face brushing image and determine candidates to be identified in the face brushing image;
a generating module, configured to generate, according to the region where each candidate is located in the face brushing image, a corresponding mask map for each candidate, so as to distinguish that region from other regions in the face brushing image;
an extraction module, configured to extract features of the face brushing image and obtain fusion features according to the features of the face brushing image and the mask maps;
and an identification module, configured to identify, according to the fusion features, whether each candidate has a face brushing willingness to pay.
12. The apparatus of claim 11, wherein the generating module performs, for each of the determined candidates:
determining a first filling area of a mask image corresponding to the candidate and a second filling area outside the first filling area according to the face area selection box of the candidate;
generating the mask map with a resolution identical to that of the face brushing image by assigning different fill values to the first filling area and the second filling area.
13. The apparatus of claim 12, wherein the face region selection box is a rectangular box;
the generating module is configured to determine the center of the rectangular box as a circle center, and half the length of the longest edge of the rectangular box as a radius;
and to determine the circular area formed from the circle center and the radius as the first filling area of the mask map corresponding to the candidate.
14. The apparatus according to claim 11, wherein the extraction module is configured to extract features of the face brushing image through a first convolutional network;
perform resolution reduction processing on the mask map to adapt it to the features of the face brushing image;
and fuse, through a second convolutional network, the features of the face brushing image and the mask map after resolution reduction processing to obtain the fusion features.
15. The apparatus according to claim 14, wherein the extraction module is configured to concatenate, along the channel dimension, the features of the face brushing image and the mask map after resolution reduction processing;
and to input the concatenated features into the second convolutional network for processing to obtain the fusion features.
16. The apparatus according to claim 14, wherein the identification module is configured to input the fusion features into a third convolutional network corresponding to the first convolutional network for processing to obtain a processing result, the first convolutional network and the third convolutional network being obtained by splitting a same convolutional network in advance;
and to generate, according to the processing result, a probability value indicating whether the corresponding candidate has a face brushing willingness to pay.
17. The apparatus according to claim 16, wherein the identification module is configured to determine a target resolution as the resolution of the features of the face brushing image;
determine, among the convolutional layers of the same convolutional network, a convolutional layer matching the target resolution;
and take the matched convolutional layer as a splitting point to split the same convolutional network into a front part and a rear part, the front part serving as the first convolutional network and the rear part serving as the third convolutional network.
18. The apparatus according to claim 16, further comprising a supervised training module configured to acquire a face brushing sample image containing a confirmed face brushing user;
mark the face brushing user as having a face brushing willingness to pay, and generate a corresponding mask map to obtain a positive sample;
if the face brushing sample image also contains other users who were incidentally captured, mark the other users as not having a face brushing willingness to pay, and generate corresponding mask maps to obtain negative samples;
and perform supervised training on the first convolutional network, the second convolutional network, and the third convolutional network according to the obtained samples.
19. The apparatus according to any one of claims 11 to 16, wherein the face brushing image comprises at least two human faces.
20. The apparatus according to any one of claims 11 to 16, applied to an offline IoT face brushing machine, the face brushing image being captured by the IoT face brushing machine for nearby users.
21. A face brushing willingness-to-pay recognition device, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquire a face brushing image, and determine candidates to be identified in the face brushing image;
generate, according to the region where each candidate is located in the face brushing image, a corresponding mask map for each candidate, so as to distinguish that region from other regions in the face brushing image;
extract features of the face brushing image, and obtain fusion features according to the features of the face brushing image and the mask maps;
and identify, according to the fusion features, whether each candidate has a face brushing willingness to pay.
CN202210180422.6A 2022-02-25 2022-02-25 Face brushing payment intention identification method, device and equipment Pending CN114511909A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210180422.6A CN114511909A (en) 2022-02-25 2022-02-25 Face brushing payment intention identification method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210180422.6A CN114511909A (en) 2022-02-25 2022-02-25 Face brushing payment intention identification method, device and equipment

Publications (1)

Publication Number Publication Date
CN114511909A true CN114511909A (en) 2022-05-17

Family

ID=81552717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210180422.6A Pending CN114511909A (en) 2022-02-25 2022-02-25 Face brushing payment intention identification method, device and equipment

Country Status (1)

Country Link
CN (1) CN114511909A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016158748A1 (en) * 2015-03-31 2016-10-06 日本電気株式会社 Payment system, payment device, program, and payment method
US20200026917A1 (en) * 2017-03-30 2020-01-23 Beijing 7Invensun Technology Co., Ltd. Authentication method, apparatus and system
CN111292092A (en) * 2020-05-09 2020-06-16 支付宝(杭州)信息技术有限公司 Face brushing payment method and device and electronic equipment
CN112258193A (en) * 2019-08-16 2021-01-22 创新先进技术有限公司 Payment method and device
CN112418243A (en) * 2020-10-28 2021-02-26 北京迈格威科技有限公司 Feature extraction method and device and electronic equipment
CN112766176A (en) * 2021-01-21 2021-05-07 深圳市安软科技股份有限公司 Training method of lightweight convolutional neural network and face attribute recognition method
CN113553961A (en) * 2021-07-27 2021-10-26 北京京东尚科信息技术有限公司 Training method and device of face recognition model, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
才华, 肖普山 (CAI Hua, XIAO Pushan): "Exploration of the Application of Biometric Recognition Technology in the Financial Payment Field", 计算机应用与软件 (Computer Applications and Software), vol. 38, no. 04, 12 April 2021 (2021-04-12) *

Similar Documents

Publication Publication Date Title
CN110570200B (en) Payment method and device
US11463631B2 (en) Method and apparatus for generating face image
TWI753271B (en) Resource transfer method, device and system
CN111292092B (en) Face brushing payment method and device and electronic equipment
US20180068198A1 (en) Methods and Software for Detecting Objects in an Image Using Contextual Multiscale Fast Region-Based Convolutional Neural Network
KR102173123B1 (en) Method and apparatus for recognizing object of image in electronic device
KR20190129826A (en) Biometrics methods and apparatus, systems, electronic devices, storage media
US11263634B2 (en) Payment method and device
KR20190028349A (en) Electronic device and method for human segmentation in image
CN109670444B (en) Attitude detection model generation method, attitude detection device, attitude detection equipment and attitude detection medium
CN111523413A (en) Method and device for generating face image
CN114238904B (en) Identity recognition method, and training method and device of dual-channel hyper-resolution model
CN113012054A (en) Sample enhancement method and training method based on sectional drawing, system and electronic equipment thereof
CN115115959A (en) Image processing method and device
CN108596070A (en) Character recognition method, device, storage medium, program product and electronic equipment
CN115311178A (en) Image splicing method, device, equipment and medium
CN111199169A (en) Image processing method and device
CN111259757A (en) Image-based living body identification method, device and equipment
CN113392763B (en) Face recognition method, device and equipment
CN118194230A (en) Multi-mode video question-answering method and device and computer equipment
CN113610884A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110321821B (en) Human face alignment initialization method and device based on three-dimensional projection and storage medium
CN110059576A (en) Screening technique, device and the electronic equipment of picture
CN115546908A (en) Living body detection method, device and equipment
CN114511909A (en) Face brushing payment intention identification method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination