CN113128391A - Goods loading and taking method of unmanned vending machine based on face recognition - Google Patents


Info

Publication number
CN113128391A
CN113128391A (application CN202110405525.3A)
Authority
CN
China
Prior art keywords
data
vending machine
processing
processing result
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110405525.3A
Other languages
Chinese (zh)
Other versions
CN113128391B (en)
Inventor
龚庆祝
周梓荣
陈云
尹波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Convenisun Technology Co ltd
Original Assignee
Guangdong Convenisun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Convenisun Technology Co ltd filed Critical Guangdong Convenisun Technology Co ltd
Priority to CN202110405525.3A priority Critical patent/CN113128391B/en
Publication of CN113128391A publication Critical patent/CN113128391A/en
Application granted granted Critical
Publication of CN113128391B publication Critical patent/CN113128391B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F11/00Coin-freed apparatus for dispensing, or the like, discrete articles
    • G07F11/02Coin-freed apparatus for dispensing, or the like, discrete articles from non-movable magazines
    • G07F11/04Coin-freed apparatus for dispensing, or the like, discrete articles from non-movable magazines in which magazines the articles are stored one vertically above the other
    • G07F11/16Delivery means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Control Of Vending Devices And Auxiliary Devices For Vending Devices (AREA)

Abstract

The invention provides a goods loading and taking method for an unmanned vending machine based on face recognition, which comprises the following steps: acquiring a goods loading and taking instruction of the unmanned vending machine, and controlling the unmanned vending machine to collect a facial image of the user based on the instruction; analyzing and processing the collected facial image, and uploading the processing result to a cloud data processing end; matching the processing result against the face data stored at the cloud data processing end; after the matching succeeds, allowing the user to carry out the goods loading and taking operation on the unmanned vending machine, while the detection end of the unmanned vending machine judges whether the operation is finished; and if it is finished, automatically closing the goods loading and taking bin door of the unmanned vending machine. By acquiring and recognizing the user's facial image in response to the goods loading and taking instruction, the method completes the goods loading and taking operation of the unmanned vending machine; at the same time, by automatically closing the goods loading and taking door once the operation is finished, it improves the intelligence of the unmanned vending machine.

Description

Goods loading and taking method of unmanned vending machine based on face recognition
Technical Field
The invention relates to the technical field of face recognition, in particular to a goods loading and taking method of an unmanned vending machine based on face recognition.
Background
At present, with the change of consumption patterns, more and more vending machines appear in daily life; vending machines can compensate for the shortage of human resources and adapt to changes in the consumption environment and consumption patterns.
However, the existing unmanned vending machine offers no reliable guarantee of accuracy: there is no way to perform automatic goods loading and taking operations through face recognition, which is very inconvenient for the user; meanwhile, the machine cannot automatically recognize whether goods loading and taking are finished, so it lacks the intelligence needed to provide convenient service to the user.
Disclosure of Invention
The invention provides a goods loading and taking method for an unmanned vending machine based on face recognition, which acquires a facial image of the user in response to a goods loading and taking instruction and recognizes the image, thereby completing the goods loading and taking operation of the unmanned vending machine; at the same time, the goods loading and taking door of the unmanned vending machine is automatically closed after the operation is finished, which improves the intelligence of the unmanned vending machine.
A goods loading and taking method of an unmanned vending machine based on face recognition comprises the following steps:
Step 1: acquiring a goods loading and taking instruction of the unmanned vending machine, and controlling the unmanned vending machine to collect a facial image of the user based on the instruction;
Step 2: analyzing and processing the collected facial image, and uploading the processing result to a cloud data processing end;
Step 3: matching and identifying the processing result with stored face data in the cloud data processing terminal;
Step 4: after the matching succeeds, the user carries out the goods loading and taking operation on the unmanned vending machine, and meanwhile the detection end of the unmanned vending machine detects and judges whether the goods loading and taking operation is finished;
Step 5: if it is finished, controlling the goods loading and taking bin door of the unmanned vending machine to close automatically.
Preferably, in step 1, the specific working process of controlling the unmanned vending machine to collect the facial image of the user based on the goods loading and taking instruction includes:
acquiring the goods loading and taking instruction, controlling the unmanned vending machine to acquire a pixel array in an image range according to the goods loading and taking instruction, and analyzing the pixel array;
acquiring a feature matrix of the image range based on the analysis result, and acquiring the dimension of the feature matrix;
at the same time, determining a face extraction region of the user based on the dimension of the feature matrix;
acquiring the signal-to-noise ratio of the face extraction area, and performing discrete Fourier transform on the face area according to the signal-to-noise ratio of the face extraction area to acquire a structural network of the face extraction area;
carrying out region segmentation on the structural network of the face extraction region in a preset face image acquisition model to obtain a sub-region;
and carrying out smooth filtering processing on the sub-regions, and carrying out smooth connection on the processed sub-regions to obtain the face image of the user.
Preferably, in the goods loading and taking method of an unmanned vending machine based on face recognition, the specific working process of performing the smoothing filtering processing on the sub-regions includes:
acquiring the contour edge of the sub-region, and determining corresponding position data based on the contour edge;
determining the center of the sub-region according to the position data, and simultaneously acquiring a center-contour line of the sub-region by taking the center of the sub-region as a starting point and the contour edge as an end point;
acquiring the length of the center-contour line, calculating the area of the sub-region through a preset function, and meanwhile determining a smooth correction range according to the area of the sub-region;
based on the smooth correction range and according to a preset smooth coefficient, smoothing the sub-region to obtain a smooth sub-region;
acquiring edge information of the smooth sub-region, determining a filtering direction based on the edge information, and simultaneously determining a filtering factor of the smooth sub-region according to a preset standard;
and carrying out filtering processing on the smoothing subarea based on the filtering direction and the filtering factor.
Preferably, in step 2, a specific working process of analyzing and processing the collected facial image includes:
calculating an edge intensity value of the face image, and determining an effective edge range of the face image according to the edge intensity value;
acquiring a target processing image of the face image based on the effective edge range, and performing image graying processing on the target processing image to acquire a grayscale processing result;
acquiring a point pixel value corresponding to a pixel point of the target processing image according to the gray processing result;
acquiring a corresponding correction curve graph according to the point pixel points, and acquiring a correction coefficient of the target processing image based on the correction curve graph;
meanwhile, determining an adaptive area of the target processing image based on the correction curve graph;
correcting the target processing image according to the adaptive region based on the correction coefficient of the target processing image;
meanwhile, the resolution of the corrected target processing image is obtained;
comparing the resolution of the corrected target processing image with a preset resolution threshold, and judging whether the corrected target processing image needs to be subjected to definition processing or not;
when the resolution of the corrected target processing image is equal to or greater than the preset resolution threshold, judging that the corrected target processing image does not need to be subjected to definition processing;
otherwise, acquiring the gray scale gradient value of the corrected target processing image and determining the energy of the edge direction of the corrected target processing image;
meanwhile, performing definition processing on the corrected target processing image according to the gray gradient value and the energy of the edge direction;
acquiring original data of the face image after definition processing, and acquiring neural network configuration parameters based on the original data;
constructing a neural network data node based on the neural network configuration parameters, and generating a neural network based on the neural network data node and the neural network configuration parameters;
performing convolution operation on the original data according to the neural network, and acquiring an operation result;
and analyzing the operation result, and extracting the feature data of the image after the definition processing based on the analysis result, wherein the feature data is the analysis result of analyzing the face image.
Preferably, in step 2, the specific working process of uploading the processing result to the cloud data processing end includes:
carrying out timeliness detection on the processing result based on the inspection window of the cloud data processing end, and judging whether the processing result has timeliness;
when the processing result does not have timeliness, the cloud data processing terminal refuses to accept the upload of the processing result;
otherwise, acquiring the attribute type of the processing result, and determining the uploading mode of the processing result according to the attribute type of the processing result;
acquiring a data memory of the processing result, and determining uploading time for uploading the processing result based on the data memory;
and uploading the processing result to the cloud data processing terminal based on the uploading mode and the uploading time.
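The upload gate above can be sketched as a small decision function. This is a hypothetical illustration only: the 60-second timeliness window, the 1 MB size boundary, and the mode/slot names are invented for the sketch; the patent states only that timeliness, attribute type, and data memory determine the upload mode and time.

```python
# Hypothetical sketch of the cloud upload gate. The max_age window and the
# size threshold below are illustrative assumptions, not values from the patent.

def upload_decision(age_seconds, size_bytes, max_age=60):
    """Return ('rejected',) when timeliness fails, else ('upload', mode, slot)."""
    if age_seconds > max_age:                      # timeliness detection fails
        return ("rejected",)
    # Attribute/memory-driven choices: large results go as a batch off-peak,
    # small ones stream immediately (both choices are assumptions).
    if size_bytes > 1_000_000:
        return ("upload", "batch", "off_peak")
    return ("upload", "stream", "immediate")
```

A stale result is refused outright, while fresh results are routed by size; the design mirrors the order of checks in the text (timeliness first, then mode, then time).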
Preferably, in step 3, a specific working process of matching and identifying the processing result and stored facial data in the cloud data processing terminal includes:
acquiring a data field of the processing result, dividing the data field according to the facial features of the user, and acquiring a subdata field;
performing data coding on the subdata fields, and acquiring coded field identifiers;
acquiring a data identifier of stored face data in the cloud data processing terminal, and performing matching degree verification on the data identifier and a field identifier of a sub-data field;
when the matching degree of the data identifier and the field identifier of the sub-data field does not accord with the preset matching degree, judging that the matching identification verification between the processing result and the stored face data in the cloud data processing terminal fails;
otherwise, the processing result corresponds to the stored face data in the cloud data processing end, and matching and identification of the processing result and the stored face data in the cloud data processing end are completed.
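The field-identifier matching above can be sketched as follows. The hash-based encoding and the 0.8 matching-degree threshold are illustrative assumptions; the patent says only that sub-data fields are encoded to field identifiers and compared against the stored data identifiers for matching degree.

```python
# Hypothetical sketch of sub-data-field matching. The hash encoding and the
# preset threshold are invented for illustration.

def encode_fields(features):
    """Encode each facial-feature sub-data field to a short field identifier."""
    return {name: hash(value) % 10_000 for name, value in features.items()}

def match_degree(field_ids, stored_ids):
    """Fraction of field identifiers that agree with the stored data identifiers."""
    shared = [k for k in field_ids if stored_ids.get(k) == field_ids[k]]
    return len(shared) / max(1, len(field_ids))

def matches(field_ids, stored_ids, preset=0.8):
    """Matching succeeds only when the degree meets the preset matching degree."""
    return match_degree(field_ids, stored_ids) >= preset
```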
Preferably, in step 4, the specific working process of the detection end of the unmanned vending machine detecting and judging whether the goods loading and taking operation is finished includes:
S101: acquiring the corresponding data volume of the goods loading and taking according to the goods loading and taking instruction;
S102: acquiring dynamic detection data from the detection end of the unmanned vending machine;
S103: judging whether the dynamic detection data reaches the data volume of the goods loading and taking;
and when the dynamic detection data reaches the goods loading and taking data volume, judging that the unmanned vending machine completes the goods loading and taking operation.
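Steps S101 to S103 amount to accumulating detection readings until they reach the instruction's target data volume. The sketch below is an assumption about how "dynamic detection data" accumulates; the readings list stands in for whatever sensor stream the detection end produces.

```python
# Illustrative sketch of S101-S103: the instruction fixes a required data
# volume (S101); detection readings arrive dynamically (S102); the operation
# counts as finished once the cumulative data reaches the target (S103).

def operation_finished(required_volume, readings):
    total = 0
    for r in readings:              # dynamic detection data stream
        total += r
        if total >= required_volume:
            return True             # loading/taking operation is complete
    return False
```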
Preferably, in step 3, after matching and identifying the processing result and the stored facial data in the cloud data processing terminal, the method for loading and taking goods for the unmanned vending machine based on face recognition further includes:
acquiring a matching factor of the processing result and the stored face data, and calculating the matching degree of the processing result and the stored face data according to the matching factor; meanwhile, the accuracy of face recognition of the unmanned vending machine is calculated according to the matching degree, and the specific working process includes:
acquiring the matching time of the processing result and the stored face data;
meanwhile, determining a matching factor of the processing result and the stored face data based on the features of the processing result and the features of the stored face data;
calculating a matching degree of the processing result with the stored face data based on a matching time of the processing result with the stored face data and a matching factor of the processing result with the stored face data;
[Formula image BDA0003022148590000061: expression for the matching degree P, not reproduced in the text]
wherein P represents the matching degree of the processing result with the stored face data; μ represents a non-matching factor of the processing result and the stored face data, with value range (0.1, 0.2); V represents the matching processing speed at which the processing result is matched with the stored face data; T represents the matching processing time for matching the processing result with the stored face data; S represents the total amount of data of the processing result and the stored face data; δ represents the face recognition degree of the user in the processing result, with value range (0.023, 0.036); Q represents the sharpness of the facial image;
calculating the accuracy of face recognition of the unmanned vending machine based on the matching degree of the processing result and the stored face data;
[Formula image BDA0003022148590000062: expression for the accuracy Z, not reproduced in the text]
wherein Z represents the accuracy of face recognition of the unmanned vending machine; P₀ represents the nominal matching degree of the processing result with the stored face data; P represents the matching degree of the processing result with the stored face data; ζ represents an error factor, with value range (0.12, 0.25); Q represents the matching weight of the processing result and the stored face data; Q₀ represents the estimated matching weight of the processing result and the stored face data;
determining the face recognition performance of the unmanned vending machine based on its face recognition accuracy;
when the accuracy of face recognition of the unmanned vending machine is greater than the reference accuracy, judging that the face recognition performance of the unmanned vending machine is excellent;
when the accuracy of face recognition of the unmanned vending machine is equal to the reference accuracy, judging that the face recognition performance of the unmanned vending machine is good;
when the accuracy of face recognition of the unmanned vending machine is less than the reference accuracy, judging that the face recognition performance of the unmanned vending machine is poor;
and meanwhile, determining performance factors of face recognition of the unmanned vending machine, and performing performance optimization on the face recognition of the unmanned vending machine based on the performance factors until the accuracy of the face recognition of the unmanned vending machine is equal to or greater than the reference accuracy.
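The three-way grading above reduces to comparing the computed accuracy against the reference accuracy (reading the third branch as "less than", consistent with the first two). A minimal sketch, with the numeric values in the test invented for the demo:

```python
# Sketch of the performance grading: above, at, or below the reference
# accuracy maps to excellent, good, or poor (the "poor" branch is taken
# when accuracy falls below the reference).

def grade_performance(accuracy, reference):
    if accuracy > reference:
        return "excellent"
    if accuracy == reference:
        return "good"
    return "poor"   # below reference: performance optimization is needed
```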
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a goods loading and picking method of an unmanned vending machine based on face recognition in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
the embodiment provides a goods loading and taking method for an unmanned vending machine based on face recognition, as shown in fig. 1, the method includes:
Step 1: acquiring a goods loading and taking instruction of the unmanned vending machine, and controlling the unmanned vending machine to collect a facial image of the user based on the instruction;
Step 2: analyzing and processing the collected facial image, and uploading the processing result to a cloud data processing end;
Step 3: matching and identifying the processing result with stored face data in the cloud data processing terminal;
Step 4: after the matching succeeds, the user carries out the goods loading and taking operation on the unmanned vending machine, and meanwhile the detection end of the unmanned vending machine detects and judges whether the goods loading and taking operation is finished;
Step 5: if it is finished, controlling the goods loading and taking bin door of the unmanned vending machine to close automatically.
In this embodiment, the goods loading and taking instruction controls the unmanned vending machine to capture the facial image of the user, which may proceed as follows: after the unmanned vending machine acquires the goods loading and taking instruction, it automatically turns on an image collection device and collects the user's facial image through it, wherein the image collection device may be a camera or a similar device.
In this embodiment, the analysis processing of the collected facial image may be to process the eyes, nose, and mouth of the user respectively, so as to obtain the characteristics of the user, thereby facilitating the facial recognition.
The beneficial effects of the above technical scheme are: the facial image of the user is acquired and recognized in response to the goods loading and taking instruction, thereby completing the goods loading and taking operation of the unmanned vending machine; meanwhile, the goods loading and taking door of the unmanned vending machine is automatically closed after the operation is finished, which improves the intelligence of the unmanned vending machine.
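The five steps of Example 1 can be sketched as a small control loop. Everything below is a hypothetical mock: the MockMachine/MockCloud stubs, their attributes, and the string return values are invented for illustration, since the patent does not specify any API.

```python
# Minimal mock of the Example 1 flow. All names here are illustrative
# assumptions, not the patent's implementation.

class MockMachine:
    def __init__(self):
        self.door_open = False
        self.remaining_items = 2        # items still to load/take

    def capture_face(self):
        return "face-image"             # stand-in for camera output (step 1)

    def operation_complete(self):
        self.remaining_items -= 1       # simulated detection-end readings
        return self.remaining_items <= 0

class MockCloud:
    def __init__(self, known):
        self.known = known              # stored face data

    def match(self, processed):
        return processed in self.known  # step 3: matching identification

def load_and_pick(machine, cloud):
    image = machine.capture_face()      # step 1: collect facial image
    processed = image.upper()           # step 2: stand-in for analysis
    if not cloud.match(processed):      # step 3: match against stored data
        return "rejected"
    machine.door_open = True            # step 4: open for loading/taking
    while not machine.operation_complete():
        pass                            # step 4: detection end polls
    machine.door_open = False           # step 5: auto-close the bin door
    return "completed"
```

Running the flow with a registered face closes the door automatically once the simulated detection end reports completion; an unregistered face is rejected before the door ever opens.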
Example 2:
On the basis of embodiment 1, this embodiment provides a goods loading and taking method of an unmanned vending machine based on face recognition; in step 1, the specific working process of controlling the unmanned vending machine to collect the facial image of the user based on the goods loading and taking instruction includes:
acquiring the goods loading and taking instruction, controlling the unmanned vending machine to acquire a pixel array in an image range according to the goods loading and taking instruction, and analyzing the pixel array;
acquiring a feature matrix of the image range based on the analysis result, and acquiring the dimension of the feature matrix;
at the same time, determining a face extraction region of the user based on the dimension of the feature matrix;
acquiring the signal-to-noise ratio of the face extraction area, and performing discrete Fourier transform on the face area according to the signal-to-noise ratio of the face extraction area to acquire a structural network of the face extraction area;
carrying out region segmentation on the structural network of the face extraction region in a preset face image acquisition model to obtain a sub-region;
and carrying out smooth filtering processing on the sub-regions, and carrying out smooth connection on the processed sub-regions to obtain the face image of the user.
In this embodiment, the pixel array is determined from the acquired image range in order to determine the characteristics of that range; for example, the image range may include a subject (the user) and objects (such as roadside signs).
In this embodiment, the face extraction area may be a region that filters out objects in the image range and extracts only the subject, the face of the user, in the image range.
In this embodiment, the structural network may be a network in which the face of the user is displayed, so as to facilitate the division of the face extraction area.
In this embodiment, the division of the face extraction region is performed to perform the smoothing filter process for each sub-region.
The beneficial effects of the above technical scheme are: obtaining the pixel array of the image range facilitates extraction of the user's facial image, and performing smoothing filtering on every sub-region yields a more accurate face extraction region, which greatly improves the precision of facial image acquisition.
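The extraction-and-segmentation idea of Example 2 can be sketched on a toy pixel array. The sketch assumes the "face extraction region" is a bounding box of high-intensity pixels and splits it into horizontal sub-regions; the threshold and grid size are invented for the example, and the DFT/structural-network step is omitted.

```python
# Illustrative sketch of Example 2: locate a face extraction region in a
# pixel array, then segment it into sub-regions. Threshold and part count
# are assumptions, not values from the patent.

def face_extraction_region(pixels, threshold=128):
    """Return (row_min, row_max, col_min, col_max) of above-threshold pixels."""
    coords = [(r, c) for r, row in enumerate(pixels)
              for c, v in enumerate(row) if v > threshold]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), max(rows), min(cols), max(cols)

def segment(region, parts=2):
    """Split the region's row span into `parts` horizontal sub-regions."""
    r0, r1, c0, c1 = region
    step = max(1, (r1 - r0 + 1) // parts)
    return [(r, min(r + step - 1, r1), c0, c1)
            for r in range(r0, r1 + 1, step)]
```

On a 4x4 array with a bright 2x2 block, the region is the block's bounding box and segmentation yields one sub-region per row of the block.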
Example 3:
on the basis of embodiment 2, this embodiment provides a method for goods loading and pickup of an unmanned vending machine based on face recognition, and a specific working process of performing smoothing filtering processing on the sub-region includes:
acquiring the contour edge of the sub-region, and determining corresponding position data based on the contour edge;
determining the center of the sub-region according to the position data, and simultaneously acquiring a center-contour line of the sub-region by taking the center of the sub-region as a starting point and the contour edge as an end point;
acquiring the length of the center-contour line, calculating the area of the sub-region through a preset function, and meanwhile determining a smooth correction range according to the area of the sub-region;
based on the smooth correction range and according to a preset smooth coefficient, smoothing the sub-region to obtain a smooth sub-region;
acquiring edge information of the smooth sub-region, determining a filtering direction based on the edge information, and simultaneously determining a filtering factor of the smooth sub-region according to a preset standard;
and carrying out filtering processing on the smoothing subarea based on the filtering direction and the filtering factor.
In this embodiment, the position data may be data obtained according to a specific position of the contour edge.
In this embodiment, the smoothing coefficient may be determined according to the smoothing correction range, and is used as a parameter for performing the smoothing processing on the sub-region.
In this embodiment, the filter factor may be a constant that affects the smoothing subarea in the filtering process, and the value range of the constant is (0.3, 0.6).
In this embodiment, the edge information may be a gray-scale change feature along the edge (for example, the gray level changing quickly on both sides of the edge), or a location where the texture structure changes abruptly in attributes such as the gray value or color components.
In this embodiment, the filtering direction is determined based on the edge information, for example: when the texture structure in the edge information changes suddenly, the change trend of the texture is the filtering direction.
The beneficial effects of the above technical scheme are: the sub-region is smoothed by the smoothing coefficient, and the smoothed sub-region is accurately filtered through the filtering direction and the filtering factor, which improves the efficiency of sub-region processing.
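The smooth-then-filter step of Example 3 can be sketched on a 1-D slice of a sub-region. The 3-sample moving average and the 0.5 damping factor are illustrative choices; the patent states only that the filter factor lies in (0.3, 0.6) and that a smoothing coefficient and filtering direction are applied.

```python
# Hypothetical sketch of Example 3: smooth a sub-region slice, then apply
# a directional filter whose factor falls in the patent's (0.3, 0.6) range.

def smooth(values):
    """3-sample moving average; the two edge samples keep their values."""
    out = list(values)
    for i in range(1, len(values) - 1):
        out[i] = (values[i - 1] + values[i] + values[i + 1]) / 3
    return out

def directional_filter(values, factor=0.5):
    """Damp sample-to-sample change along the filtering direction."""
    out = [values[0]]
    for v in values[1:]:
        out.append(out[-1] + factor * (v - out[-1]))
    return out
```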
Example 4:
on the basis of embodiment 1, this embodiment provides a method for goods loading and pickup of an unmanned vending machine based on face recognition, and in step 2, a specific working process of analyzing and processing a collected face image includes:
calculating an edge intensity value of the face image, and determining an effective edge range of the face image according to the edge intensity value;
acquiring a target processing image of the face image based on the effective edge range, and performing image graying processing on the target processing image to acquire a grayscale processing result;
acquiring a point pixel value corresponding to a pixel point of the target processing image according to the gray processing result;
acquiring a corresponding correction curve graph according to the point pixel points, and acquiring a correction coefficient of the target processing image based on the correction curve graph;
meanwhile, determining an adaptive area of the target processing image based on the correction curve graph;
correcting the target processing image according to the adaptive region based on the correction coefficient of the target processing image;
meanwhile, the resolution of the corrected target processing image is obtained;
comparing the resolution of the corrected target processing image with a preset resolution threshold, and judging whether the corrected target processing image needs to be subjected to definition processing or not;
when the resolution of the corrected target processing image is equal to or greater than the preset resolution threshold, judging that the corrected target processing image does not need to be subjected to definition processing;
otherwise, acquiring the gray scale gradient value of the corrected target processing image and determining the energy of the edge direction of the corrected target processing image;
meanwhile, performing definition processing on the corrected target processing image according to the gray gradient value and the energy of the edge direction;
acquiring original data of the image after definition processing, and acquiring a neural network configuration parameter based on the original data;
constructing a neural network data node based on the neural network configuration parameters, and generating a neural network based on the neural network data node and the neural network configuration parameters;
performing convolution operation on the original data according to the neural network, and acquiring an operation result;
and analyzing the operation result, and extracting the feature data of the image after the definition processing based on the analysis result, wherein the feature data is the analysis result of analyzing the face image.
In this embodiment, the effective edge range may be the face edge of the user's face image.
In this embodiment, the correction coefficient of the target processing image may be obtained from the correction curve graph by taking sub-slope values of the correction curve and calculating their variance; this variance is the correction coefficient, a parameter for correcting the target processing image whose value range is (0.25, 0.36).
In this embodiment, the adaptive region may be an optimal region for correcting the target processing image.
In this embodiment, the preset resolution threshold may be the minimum requirement for performing sharpness recognition on the facial image; when the resolution falls below this threshold, the facial image cannot be analyzed.
In this embodiment, the neural network configuration parameter may be a necessary factor for acquiring the neural network node, which is beneficial to accurately acquiring the neural network node.
In this embodiment, the convolution operation is to acquire feature data of the face image.
The beneficial effects of the above technical scheme are: obtaining the effective edge range of the facial image allows the target processing image to be accurately acquired; gray processing of the target processing image effectively obtains the pixel points, from which the correction curve is determined and the target processing image accurately corrected; judging the resolution of the target processing image ensures the accuracy of recognition of the facial image; and performing the convolution operation on the image data through the obtained neural network is favorable for accurately acquiring the characteristic parameters of the facial image.
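The front of this pipeline (graying, resolution check, and sharpness processing) can be sketched as below. This is an illustrative sketch only: the luminance weights, the 96-pixel threshold, and the gradient-energy sharpening step are assumptions standing in for the patent's unspecified details.

```python
import numpy as np

def preprocess_face(image_rgb, resolution_threshold=96):
    """Sketch of the embodiment's preprocessing: image graying, a
    resolution check against a preset threshold, and gradient-based
    definition (sharpness) processing when the check fails."""
    # Image graying: luminance-weighted combination of the channels.
    gray = (0.299 * image_rgb[..., 0] +
            0.587 * image_rgb[..., 1] +
            0.114 * image_rgb[..., 2])

    # Resolution comparison with the preset resolution threshold.
    height, width = gray.shape
    if min(height, width) >= resolution_threshold:
        return gray  # no definition processing needed

    # Definition processing: boost edges by the energy of the edge
    # direction, computed from the gray gradient values.
    gy, gx = np.gradient(gray)
    edge_energy = np.hypot(gx, gy)
    return np.clip(gray + 0.5 * edge_energy, 0, 255)
```

The sharpened result would then be fed to the convolutional network for feature extraction.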
Example 5:
on the basis of embodiment 1, this embodiment provides a method for goods loading and pickup of an unmanned vending machine based on face recognition, and in step 2, a specific working process of uploading a processing result to a cloud data processing end includes:
carrying out timeliness detection on the processing result based on the inspection window of the cloud data processing end, and judging whether the processing result has timeliness;
when the processing result does not have timeliness, the cloud data processing terminal refuses the processing result to upload;
otherwise, acquiring the attribute type of the processing result, and determining the uploading mode of the processing result according to the attribute type of the processing result;
acquiring a data memory of the processing result, and determining uploading time for uploading the processing result based on the data memory;
and uploading the processing result to the cloud data processing terminal based on the uploading mode and the uploading time.
In this embodiment, the timeliness of the processing result is judged to ensure that the data is not leaked and that the user's face information is protected.
In this embodiment, the attribute type of the processing result may be determined from its data type: for example, when the data type of the processing result is int, its attribute type is integer; when the data type is float, its attribute type is floating point.
The beneficial effects of the above technical scheme are: judging the timeliness of the processing result protects the user's facial information; determining the data memory allows the uploading time to be accurately calculated; and uploading the processing result by the determined uploading mode and uploading time greatly improves the accuracy and safety of uploading.
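The upload flow of this embodiment can be sketched as a small planning function. The 30-second timeliness window, the bandwidth figure used to derive the upload time from the data memory, and the binary/text mode split are all illustrative assumptions; the patent specifies none of these values.

```python
import time

def plan_upload(result, produced_at, now=None,
                max_age_s=30.0, bandwidth_bps=1_000_000):
    """Sketch of the embodiment: timeliness detection, upload mode from
    the result's attribute type, upload time from its data memory."""
    now = time.time() if now is None else now
    # Timeliness detection: the cloud end refuses stale results.
    if now - produced_at > max_age_s:
        return None
    # Upload mode determined by the attribute (data) type of the result.
    mode = "binary" if isinstance(result, (bytes, bytearray)) else "text"
    # Upload time estimated from the data memory (size in bytes).
    upload_time_s = len(result) * 8 / bandwidth_bps
    return {"mode": mode, "upload_time_s": upload_time_s}
```

A result produced one second ago uploads normally; one produced minutes ago is rejected before any transfer begins.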
Example 6:
on the basis of embodiment 1, this embodiment provides a method for goods loading and pickup of an unmanned vending machine based on face recognition, and in step 3, a specific working process of performing matching recognition on the processing result and stored face data in the cloud data processing terminal includes:
acquiring a data field of the processing result, dividing the data field according to the facial features of the user, and acquiring a subdata field;
performing data coding on the subdata fields, and acquiring coded field identifiers;
acquiring a data identifier of stored face data in the cloud data processing terminal, and performing matching degree verification on the data identifier and a field identifier of a sub-data field;
when the matching degree of the data identifier and the field identifier of the sub-data field does not accord with the preset matching degree, judging that the matching identification verification between the processing result and the stored face data in the cloud data processing terminal fails;
otherwise, the processing result corresponds to the stored face data in the cloud data processing end, and matching and identification of the processing result and the stored face data in the cloud data processing end are completed.
In this embodiment, the field identifier may be determined by the first address of the data field.
In this embodiment, the data identifier may be determined according to the face data of the user, and may be, for example, an eye data identifier of a, a nose data identifier of b, a mouth data identifier of c, or the like.
The beneficial effects of the above technical scheme are: by matching the field identifier of the sub-data field with the data identifier, whether the matching is successful can be accurately determined, and therefore the accuracy of face recognition is improved.
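The field-identifier matching of this embodiment can be sketched as follows. The hashing scheme used for "data coding", the per-feature field names, and the 0.8 preset matching degree are assumptions made only to give the example concrete form; the patent does not specify the encoding.

```python
import hashlib

def encode_field(name, values):
    """Data-code one sub-data field (e.g. quantised eye features) into a
    short field identifier. The SHA-256 scheme is an assumption."""
    payload = name.encode() + b":" + bytes(values)
    return hashlib.sha256(payload).hexdigest()[:8]

def match_faces(probe_fields, stored_identifiers, required_ratio=0.8):
    """Verify the matching degree between the probe's field identifiers
    and the stored data identifiers against a preset matching degree."""
    ids = {name: encode_field(name, vals)
           for name, vals in probe_fields.items()}
    hits = sum(1 for name, ident in ids.items()
               if stored_identifiers.get(name) == ident)
    return hits / max(len(ids), 1) >= required_ratio
```

When enough sub-data fields carry matching identifiers, the processing result is taken to correspond to the stored face data.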
Example 7:
on the basis of embodiment 1, this embodiment provides a method for goods loading and pickup of an unmanned vending machine based on face recognition, and in step 4, the specific working process of determining, through detection by the detection end of the unmanned vending machine, whether the goods loading and taking operation is completed includes:
s101: acquiring corresponding data volume of goods loading and taking according to the goods loading and taking instruction;
s102: acquiring dynamic detection data of a detection end of the unmanned vending machine;
s103: judging whether the dynamic detection data reach the data volume of the goods loading and taking;
and when the dynamic detection data reaches the goods loading and taking data volume, judging that the unmanned vending machine completes the goods loading and taking operation.
The beneficial effects of the above technical scheme are: by acquiring the goods loading and taking data volume and comparing the dynamic detection data against it, whether the unmanned vending machine has completed the goods loading and taking operation can be effectively determined, thereby greatly improving the intelligence of the unmanned vending machine.
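Steps S101 to S103 reduce to a simple comparison, which can be sketched as below; treating the detection end's output as a list of incremental sensor readings is an assumption for illustration.

```python
def dispensing_complete(required_amount, sensor_readings):
    """Sketch of S101-S103: compare the dynamic detection data from the
    vending machine's detection end with the data volume required by the
    goods loading and taking instruction."""
    detected = sum(sensor_readings)  # accumulated dynamic detection data
    return detected >= required_amount
```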
Example 8:
on the basis of embodiment 1, this embodiment provides a method for goods loading and pickup of an unmanned vending machine based on face recognition, and in step 3, after matching and recognizing the processing result and the stored face data in the cloud data processing terminal, the method further includes:
acquiring a matching factor of the processing result and the stored face data, calculating the matching degree of the processing result and the stored face data according to the matching factor, and meanwhile, calculating the specific working process of the accuracy of face recognition of the unmanned vending machine according to the matching degree, wherein the specific working process comprises the following steps:
acquiring the matching time of the processing result and the stored face data;
meanwhile, determining a matching factor of the processing result and the stored face data based on the features of the processing result and the features of the stored face data;
calculating a matching degree of the processing result with the stored face data based on a matching time of the processing result with the stored face data and a matching factor of the processing result with the stored face data;
[Formula for the matching degree P, rendered as image BDA0003022148590000151 in the original publication]
wherein P represents a degree of matching of the processing result with the stored face data; mu represents a non-matching factor of the processing result and the stored face data, and the value range is (0.1, 0.2); v represents a matching processing speed at which the processing result is matched with the stored face data; t represents a matching processing time for matching the processing result with the stored face data; s represents a total amount of data of the processing result and the stored face data; delta represents the face recognition degree of the user in the processing result, and the value range is (0.023, 0.036); q represents the sharpness of the face image;
calculating the accuracy of face recognition of the unmanned vending machine based on the matching degree of the processing result and the stored face data;
[Formula for the face-recognition accuracy Z, rendered as image BDA0003022148590000152 in the original publication]
wherein Z represents the accuracy rate of face recognition of the unmanned vending machine; p represents a nominal degree of matching of the processing result with the stored face data; p represents a degree of matching of the processing result with the stored face data; zeta represents an error factor and has a value range of (0.12, 0.25); q represents a matching weight of the processing result and the stored face data; q represents an estimated match weight of the processing result and the stored facial data;
determining the face recognition performance of the unmanned vending machine based on the accuracy of the unmanned vending machine after face recognition;
when the accuracy of face recognition of the unmanned vending machine is greater than the reference accuracy, judging that the face recognition performance of the unmanned vending machine is excellent;
when the accuracy of face recognition of the unmanned vending machine is equal to the reference accuracy, judging that the face recognition performance of the unmanned vending machine is good;
when the accuracy of face recognition of the unmanned vending machine is less than the reference accuracy, judging that the face recognition performance of the unmanned vending machine is poor;
and meanwhile, determining performance factors of face recognition of the unmanned vending machine, and performing performance optimization on the face recognition of the unmanned vending machine based on the performance factors until the accuracy of the face recognition of the unmanned vending machine is equal to or greater than the reference accuracy.
In this embodiment, the matching weight of the processing result and the stored face data may be a proportion of data in the processing result to the stored face data in the matching process.
In this embodiment, the estimated matching weight of the processing result and the stored face data may be a proportion of the data in the processing result to the stored face data in a theoretical or ideal state.
In this embodiment, the reference accuracy is generally equal to or greater than 80%.
In this embodiment, the performance factors of face recognition of the unmanned vending machine may include, for example, the accuracy of the face image, an image processing factor, an instruction control factor, and the like.
The beneficial effects of the above technical scheme are: by acquiring the matching factor of the processing result and the stored facial data, the matching degree between them can be accurately calculated, so that the accuracy of face recognition of the unmanned vending machine can be effectively computed; analyzing this accuracy then guides the optimization of the unmanned vending machine and improves the use efficiency of its face recognition.
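The performance judgment at the end of this embodiment can be sketched as a grading function; it assumes that an accuracy below the reference grades as poor (the natural reading of the three-way comparison) and uses the stated 80% baseline as the default reference.

```python
def grade_recognition(accuracy, reference_accuracy=0.8):
    """Sketch of the embodiment's performance judgment: grade the
    face-recognition accuracy against the reference accuracy."""
    if accuracy > reference_accuracy:
        return "excellent"
    if accuracy == reference_accuracy:
        return "good"
    # Below reference: performance-factor optimisation is triggered
    # until the accuracy reaches or exceeds the reference.
    return "poor"
```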
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A goods loading and taking method of an unmanned vending machine based on face recognition is characterized by comprising the following steps:
step 1: acquiring a goods loading and taking instruction of the unmanned vending machine, and controlling the unmanned vending machine to collect facial images of a user based on the goods loading and taking instruction;
step 2: analyzing and processing the collected face image, and uploading a processing result to a cloud data processing end;
and step 3: matching and identifying the processing result with stored face data in the cloud data processing terminal;
and 4, step 4: after the matching is successful, the user carries out goods loading and taking operation on the unmanned vending machine, and meanwhile, whether the goods loading and taking operation is finished or not is judged through detection of a detection end of the unmanned vending machine;
and 5: and if the goods are finished, controlling the automatic closing of the goods loading and taking bin door of the unmanned vending machine.
2. The method for goods loading and pickup of the vending machine based on the face recognition as claimed in claim 1, wherein in step 1, the specific work process of the vending machine for facial image acquisition of the user based on the goods loading and pickup instruction is controlled, and the method comprises:
acquiring the goods loading and taking instruction, controlling the unmanned vending machine to acquire a pixel array in an image range according to the goods loading and taking instruction, and analyzing the pixel array;
acquiring a feature matrix of the image range based on the analysis result, and acquiring the dimension of the feature matrix;
at the same time, determining a face extraction region of the user based on the dimension of the feature matrix;
acquiring the signal-to-noise ratio of the face extraction area, and performing discrete Fourier transform on the face area according to the signal-to-noise ratio of the face extraction area to acquire a structural network of the face extraction area;
carrying out region segmentation on the structural network of the face extraction region in a preset face image acquisition model to obtain a sub-region;
and carrying out smooth filtering processing on the sub-regions, and carrying out smooth connection on the processed sub-regions to obtain the face image of the user.
3. The method for goods loading and pickup of the vending machine based on the face recognition as claimed in claim 2, wherein the specific working process of performing the smoothing filtering process on the sub-area comprises:
acquiring the contour edge of the sub-region, and determining corresponding position data based on the contour edge;
determining the center of the sub-region according to the position data, and simultaneously acquiring a center-contour line of the sub-region by taking the center of the sub-region as a starting point and the contour edge as an end point;
acquiring the length of the center-contour line, calculating the area of the sub-region through a preset function, and meanwhile determining a smooth correction range according to the area of the sub-region;
based on the smooth correction range and according to a preset smooth coefficient, smoothing the sub-region to obtain a smooth sub-region;
acquiring edge information of the smooth sub-region, determining a filtering direction based on the edge information, and simultaneously determining a filtering factor of the smooth sub-region according to a preset standard;
and carrying out filtering processing on the smoothing subarea based on the filtering direction and the filtering factor.
4. The method for goods loading and picking up of the vending machine based on the face recognition as claimed in claim 1, wherein the specific working process of analyzing and processing the collected face image in the step 2 comprises:
calculating an edge intensity value of the face image, and determining an effective edge range of the face image according to the edge intensity value;
acquiring a target processing image of the face image based on the effective edge range, and performing image graying processing on the target processing image to acquire a grayscale processing result;
acquiring a point pixel value corresponding to a pixel point of the target processing image according to the gray processing result;
acquiring a corresponding correction curve graph according to the point pixel values, and acquiring a correction coefficient of the target processing image based on the correction curve graph;
meanwhile, determining an adaptive area of the target processing image based on the correction curve graph;
correcting the target processing image according to the adaptive region based on the correction coefficient of the target processing image;
meanwhile, the resolution of the corrected target processing image is obtained;
comparing the resolution of the corrected target processing image with a preset resolution threshold, and judging whether the corrected target processing image needs to be subjected to definition processing or not;
when the resolution of the corrected target processing image is equal to or greater than the preset resolution threshold, judging that the corrected target processing image does not need to be subjected to definition processing;
otherwise, acquiring the gray scale gradient value of the corrected target processing image and determining the energy of the edge direction of the corrected target processing image;
meanwhile, performing definition processing on the corrected target processing image according to the gray gradient value and the energy of the edge direction;
acquiring original data of the image after definition processing, and acquiring a neural network configuration parameter based on the original data;
constructing a neural network data node based on the neural network configuration parameters, and generating a neural network based on the neural network data node and the neural network configuration parameters;
performing convolution operation on the original data according to the neural network, and acquiring an operation result;
and analyzing the operation result, and extracting the feature data of the image after the definition processing based on the analysis result, wherein the feature data is the analysis result of analyzing the face image.
5. The vending machine goods loading and pickup method based on face recognition as claimed in claim 1, wherein in the step 2, the specific working process of uploading the processing result to the cloud data processing end includes:
carrying out timeliness detection on the processing result based on the inspection window of the cloud data processing end, and judging whether the processing result has timeliness;
when the processing result does not have timeliness, the cloud data processing terminal refuses the processing result to upload;
otherwise, acquiring the attribute type of the processing result, and determining the uploading mode of the processing result according to the attribute type of the processing result;
acquiring a data memory of the processing result, and determining uploading time for uploading the processing result based on the data memory;
and uploading the processing result to the cloud data processing terminal based on the uploading mode and the uploading time.
6. The method for goods loading and picking up of the vending machine based on the face recognition as claimed in claim 1, wherein in step 3, the specific working process of matching and recognizing the processing result and the stored face data in the cloud data processing terminal includes:
acquiring a data field of the processing result, dividing the data field according to the facial features of the user, and acquiring a subdata field;
performing data coding on the subdata fields, and acquiring coded field identifiers;
acquiring a data identifier of stored face data in the cloud data processing terminal, and performing matching degree verification on the data identifier and a field identifier of a sub-data field;
when the matching degree of the data identifier and the field identifier of the sub-data field does not accord with the preset matching degree, judging that the matching identification verification between the processing result and the stored face data in the cloud data processing terminal fails;
otherwise, the processing result corresponds to the stored face data in the cloud data processing end, and matching and identification of the processing result and the stored face data in the cloud data processing end are completed.
7. The method for goods loading and pickup of the unmanned vending machine based on the face recognition as claimed in claim 1, wherein the specific working process of determining whether the detection of the detection end of the unmanned vending machine completes the goods loading and pickup operation in the step 4 comprises:
s101: acquiring corresponding data volume of goods loading and taking according to the goods loading and taking instruction;
s102: acquiring dynamic detection data of a detection end of the unmanned vending machine;
s103: judging whether the dynamic detection data reach the data volume of the goods loading and taking;
and when the dynamic detection data reaches the goods loading and taking data volume, judging that the unmanned vending machine completes the goods loading and taking operation.
8. The method for goods delivery and pickup of the vending machine based on the face recognition as claimed in claim 1, wherein in step 3, after performing matching recognition on the processing result and the stored face data in the cloud data processing terminal, the method further comprises:
acquiring a matching factor of the processing result and the stored face data, calculating the matching degree of the processing result and the stored face data according to the matching factor, and meanwhile, calculating the specific working process of the accuracy of face recognition of the unmanned vending machine according to the matching degree, wherein the specific working process comprises the following steps:
acquiring the matching time of the processing result and the stored face data;
meanwhile, determining a matching factor of the processing result and the stored face data based on the features of the processing result and the features of the stored face data;
calculating a matching degree of the processing result with the stored face data based on a matching time of the processing result with the stored face data and a matching factor of the processing result with the stored face data;
[Formula for the matching degree P, rendered as image FDA0003022148580000051 in the original publication]
wherein P represents a degree of matching of the processing result with the stored face data; mu represents a non-matching factor of the processing result and the stored face data, and the value range is (0.1, 0.2); v represents a matching processing speed at which the processing result is matched with the stored face data; t represents a matching processing time for matching the processing result with the stored face data; s represents a total amount of data of the processing result and the stored face data; delta represents the face recognition degree of the user in the processing result, and the value range is (0.023, 0.036); q represents the sharpness of the face image;
calculating the accuracy of face recognition of the unmanned vending machine based on the matching degree of the processing result and the stored face data;
[Formula for the face-recognition accuracy Z, rendered as image FDA0003022148580000052 in the original publication]
wherein Z represents the accuracy rate of face recognition of the unmanned vending machine; p represents a nominal degree of matching of the processing result with the stored face data; p represents a degree of matching of the processing result with the stored face data; zeta represents an error factor and has a value range of (0.12, 0.25); q represents a matching weight of the processing result and the stored face data; q represents an estimated match weight of the processing result and the stored facial data;
determining the face recognition performance of the unmanned vending machine based on the accuracy of the unmanned vending machine after face recognition;
when the accuracy of face recognition of the unmanned vending machine is greater than the reference accuracy, judging that the face recognition performance of the unmanned vending machine is excellent;
when the accuracy of face recognition of the unmanned vending machine is equal to the reference accuracy, judging that the face recognition performance of the unmanned vending machine is good;
when the accuracy of face recognition of the unmanned vending machine is less than the reference accuracy, judging that the face recognition performance of the unmanned vending machine is poor;
and meanwhile, determining performance factors of face recognition of the unmanned vending machine, and performing performance optimization on the face recognition of the unmanned vending machine based on the performance factors until the accuracy of the face recognition of the unmanned vending machine is equal to or greater than the reference accuracy.
CN202110405525.3A 2021-04-15 2021-04-15 Face recognition-based method for loading and picking goods of vending machine Active CN113128391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110405525.3A CN113128391B (en) 2021-04-15 2021-04-15 Face recognition-based method for loading and picking goods of vending machine


Publications (2)

Publication Number Publication Date
CN113128391A true CN113128391A (en) 2021-07-16
CN113128391B CN113128391B (en) 2024-02-06

Family

ID=76776719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110405525.3A Active CN113128391B (en) 2021-04-15 2021-04-15 Face recognition-based method for loading and picking goods of vending machine

Country Status (1)

Country Link
CN (1) CN113128391B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913241A (en) * 2016-04-01 2016-08-31 袁艳荣 Application method of customer authentication system based on image identification
CN107958444A (en) * 2017-12-28 2018-04-24 江西高创保安服务技术有限公司 A kind of face super-resolution reconstruction method based on deep learning
CN108446642A (en) * 2018-03-23 2018-08-24 四川意高汇智科技有限公司 A kind of Distributive System of Face Recognition
CN109191682A (en) * 2018-11-16 2019-01-11 广州批霸电子商务有限公司 A kind of self-service machine and its method based on artificial intelligence technology
WO2019165895A1 (en) * 2018-03-02 2019-09-06 北京京东尚科信息技术有限公司 Automatic vending method and system, and vending device and vending machine
CN110245894A (en) * 2019-06-03 2019-09-17 杭州小伊智能科技有限公司 A kind of self-service machine based on recognition of face is got in stocks the method and device of picking
CN111145431A (en) * 2019-11-19 2020-05-12 嘉善百格休闲用品有限公司 Intelligent vending machine with face recognition payment function
CN111161281A (en) * 2019-12-25 2020-05-15 广州杰赛科技股份有限公司 Face region identification method and device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHANG-JUN CHEN ET AL.: "Smart Vending Machine System Prototyped with Deep- and Machine-Learning Technologies", 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), p. 79 *
李东海 (LI Donghai): "Research on Data Generation and Object Recognition Algorithms in an Unmanned Retail Environment", China Master's Theses Full-text Database, Information Science and Technology, no. 2021 *
林筑英 (LIN Zhuying): "Multimedia Technology", Chongqing: Chongqing University Press, p. 214 *

Also Published As

Publication number Publication date
CN113128391B (en) 2024-02-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant