CN113243804A - Automatic paper fetching method and device, readable storage medium and computer equipment - Google Patents


Info

Publication number
CN113243804A
Authority
CN
China
Prior art keywords
paper
sample
requester
face organ
fetching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110621813.2A
Other languages
Chinese (zh)
Other versions
CN113243804B (en)
Inventor
孙震
刘新
朱光升
Current Assignee
Shandong Youjing Media Technology Co ltd
Original Assignee
Shandong Zhongxin Youjing Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Zhongxin Youjing Intelligent Technology Co ltd
Priority claimed from application CN202110621813.2A
Publication of CN113243804A
Application granted
Publication of CN113243804B
Legal status: Active


Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47K SANITARY EQUIPMENT NOT OTHERWISE PROVIDED FOR; TOILET ACCESSORIES
    • A47K 10/00 Body-drying implements; Toilet paper; Holders therefor
    • A47K 10/24 Towel dispensers, e.g. for piled-up or folded textile towels; Toilet-paper dispensers; Dispensers for piled-up or folded textile towels provided or not with devices for taking-up soiled towels as far as not mechanically driven
    • A47K 10/32 Dispensers for paper towels or toilet-paper
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47K SANITARY EQUIPMENT NOT OTHERWISE PROVIDED FOR; TOILET ACCESSORIES
    • A47K 10/00 Body-drying implements; Toilet paper; Holders therefor
    • A47K 10/24 Towel dispensers, e.g. for piled-up or folded textile towels; Toilet-paper dispensers; Dispensers for piled-up or folded textile towels provided or not with devices for taking-up soiled towels as far as not mechanically driven
    • A47K 10/32 Dispensers for paper towels or toilet-paper
    • A47K 10/34 Dispensers for paper towels or toilet-paper dispensing from a web, e.g. with mechanical dispensing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/374 Thesaurus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07F COIN-FREED OR LIKE APPARATUS
    • G07F 17/00 Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F 17/18 Coin-freed apparatus for hiring articles; Coin-freed facilities or services for washing or drying persons
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47K SANITARY EQUIPMENT NOT OTHERWISE PROVIDED FOR; TOILET ACCESSORIES
    • A47K 10/00 Body-drying implements; Toilet paper; Holders therefor
    • A47K 10/24 Towel dispensers, e.g. for piled-up or folded textile towels; Toilet-paper dispensers; Dispensers for piled-up or folded textile towels provided or not with devices for taking-up soiled towels as far as not mechanically driven
    • A47K 10/32 Dispensers for paper towels or toilet-paper
    • A47K 2010/3226 Dispensers for paper towels or toilet-paper collecting data of usage


Abstract

The invention discloses an automatic paper fetching method and device, a readable storage medium, and computer equipment. The method comprises the following steps: acquiring a target face image of a paper fetching requester; identifying each face organ of the target face image separately to obtain a target feature value of each face organ of the requester; looking up the target feature values in a pre-stored fused feature dictionary library to determine whether the requester has fetched paper within a preset time; if the requester has not fetched paper within the preset time, controlling the paper dispenser to output paper; and if the requester has fetched paper within the preset time, controlling the paper dispenser not to output paper. The invention prevents users from fetching excessive paper, requires no code scanning with a mobile phone, and is more convenient to use.

Description

Automatic paper fetching method and device, readable storage medium and computer equipment
Technical Field
The invention relates to the field of computer technology, and in particular to an automatic paper fetching method and device, a readable storage medium, and computer equipment.
Background
With rising living standards and growing attention to personal hygiene, toilets in densely populated places such as hotels, hospitals, stations, tourist attractions, and offices see increasingly heavy use, and paper dispensers are commonly provided in public places for users to use freely. However, some individual users fetch excessive amounts of paper, which leaves less paper for others.
In the prior art, solutions exist that dispense paper after mobile-phone code scanning or face recognition. However, code scanning is inconvenient for people unfamiliar with smartphones (such as the elderly), and although face recognition is more convenient, it collects the user's full facial information, which poses a hidden risk to the user's information security.
Disclosure of Invention
Therefore, an object of the present invention is to provide an automatic paper fetching method that improves information security while keeping paper fetching convenient for users.
The invention provides an automatic paper fetching method, comprising the following steps:
acquiring a target face image of a paper fetching requester;
identifying each face organ of the target face image separately to obtain a target feature value of each face organ of the requester;
looking up the target feature values in a pre-stored fused feature dictionary library to determine whether the requester has fetched paper within a preset time;
if the requester has not fetched paper within the preset time, controlling the paper dispenser to output paper;
and if the requester has fetched paper within the preset time, controlling the paper dispenser not to output paper.
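The decision logic of these steps can be sketched as follows. This is a minimal illustration, assuming the fused feature dictionary library is modelled as a mapping from a per-organ feature-value combination to the time of that requester's last fetch; the function and parameter names are ours, not the patent's:

```python
def should_dispense(store: dict, combo: tuple, now: float,
                    window_seconds: float = 3600.0) -> bool:
    """Return True if the dispenser should output paper for this requester.

    `store` maps a feature-value combination to the time of the last fetch;
    `window_seconds` is the "preset time" (here assumed to be 1 hour).
    """
    last = store.get(combo)
    if last is not None and now - last < window_seconds:
        return False              # fetched within the preset time: no paper
    store[combo] = now            # record this fetch for later lookups
    return True
```

For example, a requester whose combination is `("ear 1", "hair 1", "eye 1")` would receive paper on a first request, be refused ten minutes later, and receive paper again once the window has elapsed.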
According to the automatic paper fetching method provided by the invention, a target face image of the paper fetching requester is first acquired; each face organ of the target face image is then identified separately to obtain a target feature value of each face organ; the target feature values are looked up in a pre-stored fused feature dictionary library to determine whether the requester has fetched paper within a preset time; if not, the paper dispenser is controlled to output paper; if so, the paper dispenser is controlled not to output paper. In this way excessive paper fetching is avoided, and the user does not need to scan a code with a mobile phone, making the method more convenient to use.
In addition, the automatic paper fetching method according to the present invention may further have the following additional technical features:
further, the method further comprises:
acquiring a sample face image of a sample crowd shot by a camera;
respectively identifying each face organ in the sample face image to respectively obtain a sample characteristic value of each face organ;
respectively clustering the sample characteristic values of each human face organ to obtain sample characteristic data of each human face organ;
and fusing all sample feature data to obtain a fused feature dictionary library.
Further, the step of clustering the sample feature values of each face organ separately to obtain the sample feature data of each face organ specifically comprises:
clustering the sample feature values of each face organ separately using a clustering algorithm with a fixed K value, so as to obtain the sample feature data of each face organ.
Further, the method further comprises:
acquiring the system sensitivity of the paper dispenser, wherein the system sensitivity reflects the paper-output probability: the higher the system sensitivity, the higher the probability of outputting paper;
if the system sensitivity is less than a sensitivity threshold, reducing the K value;
and if the system sensitivity is greater than the sensitivity threshold, increasing the K value.
Further, the face organ comprises at least one of auricle, hair, eye shape, mouth shape, and nose shape.
Another object of the present invention is to provide an automatic paper fetching device that improves information security while keeping paper fetching convenient for users.
The invention provides an automatic paper fetching device, comprising:
a first acquisition module, configured to acquire a target face image of a paper fetching requester;
a first identification module, configured to identify each face organ of the target face image separately to obtain a target feature value of each face organ of the requester;
a comparison and determination module, configured to look up the target feature values in a pre-stored fused feature dictionary library to determine whether the requester has fetched paper within a preset time;
a first control module, configured to control the paper dispenser to output paper if the requester has not fetched paper within the preset time;
and a second control module, configured to control the paper dispenser not to output paper if the requester has fetched paper within the preset time.
According to the automatic paper fetching device provided by the invention, a target face image of the paper fetching requester is first acquired; each face organ of the target face image is then identified separately to obtain a target feature value of each face organ; the target feature values are looked up in a pre-stored fused feature dictionary library to determine whether the requester has fetched paper within a preset time; if not, the paper dispenser is controlled to output paper; if so, the paper dispenser is controlled not to output paper. In this way excessive paper fetching is avoided, and the user does not need to scan a code with a mobile phone, making the device more convenient to use.
In addition, the automatic paper fetching device according to the present invention may further have the following additional features:
further, the apparatus further comprises:
the second acquisition module is used for acquiring sample face images of sample crowds shot by the camera;
the second identification module is used for carrying out individual identification on each human face organ in the sample human face image so as to respectively obtain a sample characteristic value of each human face organ;
the cluster acquisition module is used for respectively clustering the sample characteristic values of each human face organ to acquire sample characteristic data of each human face organ;
and the fusion module is used for fusing all sample feature data to obtain a fusion feature dictionary library.
Further, the cluster acquisition module is specifically configured to:
cluster the sample feature values of each face organ separately using a clustering algorithm with a fixed K value, so as to obtain the sample feature data of each face organ.
Further, the apparatus further comprises:
a third acquisition module, configured to acquire the system sensitivity of the paper dispenser, wherein the system sensitivity reflects the paper-output probability: the higher the system sensitivity, the higher the probability of outputting paper;
a decreasing module, configured to reduce the K value if the system sensitivity is less than a sensitivity threshold;
and an increasing module, configured to increase the K value if the system sensitivity is greater than the sensitivity threshold.
Further, the face organ comprises at least one of auricle, hair, eye shape, mouth shape, and nose shape.
The invention also proposes a readable storage medium on which a computer program is stored; when executed by a processor, the program carries out the steps of the above method.
The invention also proposes computer equipment comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the above method when executing the program.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of embodiments of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of an automatic paper fetching method according to an embodiment of the present invention;
FIG. 2 is a flow chart of establishing the fused feature dictionary library;
FIG. 3 is a block diagram of an automatic paper fetching device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an automatic paper fetching method according to an embodiment of the present invention includes steps S101 to S105.
S101, acquiring a target face image of a paper fetching requester.
In a specific implementation, the face of the paper fetching requester can be photographed by a camera to obtain the target face image.
S102, identifying each face organ of the target face image separately to obtain a target feature value of each face organ of the requester.
Here, the face organ comprises at least one of auricle, hair, eye shape, mouth shape, and nose shape. The following takes auricle, hair, and eye shape as an example. By separately identifying the requester's auricle, hair, and eye shape in the target face image, a target feature value of the auricle, a target feature value of the hair, and a target feature value of the eye shape can be obtained. A feature value is specifically a vector: the target feature value of the auricle may describe one or more of the shape and size of the auricle; that of the hair, one or more of the color and shape of the hair; and that of the eye shape, one or more of the shape of the eyes, the presence or absence of double eyelids, and the color of the eyes.
For example, the target feature values obtained for the requester's auricle, eye shape, and hair are denoted ear i, eye j, and hair k respectively, where i, j, and k are serial numbers.
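A serial number of this kind could be assigned by quantizing each organ's raw feature vector to its nearest cluster centre, for example as below. The centroids are invented toy values; the patent does not specify the feature space or the distance measure:

```python
import math

def nearest_serial(feature: tuple, centroids: list) -> int:
    """Return the 1-based serial number of the centroid closest to `feature`."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(feature, centroids[i])) + 1

# Hypothetical 2-D cluster centres standing in for "ear 1", "ear 2", "ear 3"
ear_centroids = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
i = nearest_serial((0.9, 1.1), ear_centroids)  # this raw ear feature maps to "ear 2"
```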
S103, looking up the target feature values in the pre-stored fused feature dictionary library to determine whether the requester has fetched paper within a preset time.
Referring to fig. 2, the fused feature dictionary library is established in advance through steps S201 to S204:
S201, acquiring sample face images of a sample population captured by a camera;
Here, sample face images of a large sample population need to be collected first.
S202, identifying each face organ in the sample face images separately to obtain a sample feature value of each face organ;
S203, clustering the sample feature values of each face organ separately to obtain sample feature data of each face organ;
Specifically, the sample feature values of each face organ are clustered by a clustering algorithm with a fixed K value, so that sample feature values of the same type are grouped together, yielding the sample feature data of each face organ.
S204, fusing all the sample feature data to obtain the fused feature dictionary library.
After the sample feature values of each face organ are clustered, a large amount of sample feature data is obtained: for example, the sample feature data of the auricle comprises ear 1, ear 2, …, ear n; that of the hair comprises hair 1, hair 2, …, hair n; and that of the eye shape comprises eye 1, eye 2, …, eye n. The fused feature dictionary library is then obtained by expansion through a Cartesian product, that is, all possible combinations of sample feature values are enumerated exhaustively: for example, the first combination is (ear 1, hair 1, eye 1), the second is (ear 2, hair 1, eye 1), the third is (ear 3, hair 1, eye 1), the fourth is (ear 1, hair 2, eye 2), the fifth is (ear 3, hair 4, eye 1), and so on.
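The Cartesian-product expansion described above is straightforward to express; here the per-organ serial lists are toy examples with three, two, and two entries rather than the patent's n each:

```python
from itertools import product

ears = ["ear 1", "ear 2", "ear 3"]
hairs = ["hair 1", "hair 2"]
eyes = ["eye 1", "eye 2"]

# Every possible (ear, hair, eye) combination becomes a key of the
# fused feature dictionary library: 3 * 2 * 2 = 12 combinations here.
fused_keys = list(product(ears, hairs, eyes))
```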
It will be appreciated that different people correspond to different combinations, which makes identification of the user possible. When a new user wants to fetch paper, the combination of feature values corresponding to that user, such as (ear 6, hair 2, eye 3), is obtained and stored in the fused feature dictionary library, together with a time limit such as 1 hour; once the limit is exceeded, the user's feature values are deleted from the library. In this way it can be determined whether the user has fetched paper within the last hour: if the user's feature values exist in the fused feature dictionary library, the user has fetched paper within the last hour; otherwise, the user has not.
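The expiry behaviour described above (feature values deleted after the time limit) could be sketched as follows; the store maps a feature-value combination to the time it was recorded, and the 1-hour limit is the example value from the text:

```python
TTL_SECONDS = 3600.0  # the example time limit of 1 hour

def purge_expired(store: dict, now: float) -> None:
    """Delete every feature-value combination older than the time limit."""
    for combo in [c for c, t in store.items() if now - t >= TTL_SECONDS]:
        del store[combo]

def has_fetched_recently(store: dict, combo: tuple, now: float) -> bool:
    """True if this combination is still in the library, i.e. the user
    has fetched paper within the last hour."""
    purge_expired(store, now)
    return combo in store
```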
S104, if the requester has not fetched paper within the preset time, controlling the paper dispenser to output paper.
S105, if the requester has fetched paper within the preset time, controlling the paper dispenser not to output paper.
Further, as a specific example, the method further comprises:
acquiring the system sensitivity of the paper dispenser, wherein the system sensitivity reflects the paper-output probability: the higher the system sensitivity, the higher the probability of outputting paper;
if the system sensitivity is less than a sensitivity threshold, reducing the K value;
and if the system sensitivity is greater than the sensitivity threshold, increasing the K value.
Through the above steps, the system sensitivity can be adjusted automatically, avoiding situations in which paper is output too readily (i.e. a person who has fetched paper is identified as one who has not) or with too much difficulty (i.e. a person who has not fetched paper is identified as one who has).
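The adjustment rule itself reduces to a comparison against the threshold. The step size, lower bound, and threshold value below are illustrative assumptions; the patent only specifies the direction of each adjustment:

```python
def adjust_k(k: int, sensitivity: float, threshold: float = 0.5,
             step: int = 1, k_min: int = 2) -> int:
    """Apply the rule from the text: reduce K when the system sensitivity is
    below the threshold, increase K when it is above, else leave K unchanged."""
    if sensitivity < threshold:
        return max(k_min, k - step)
    if sensitivity > threshold:
        return k + step
    return k
```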
In summary, according to the automatic paper fetching method provided by this embodiment, a target face image of the paper fetching requester is first acquired; each face organ of the target face image is then identified separately to obtain a target feature value of each face organ; the target feature values are looked up in a pre-stored fused feature dictionary library to determine whether the requester has fetched paper within a preset time; if not, the paper dispenser is controlled to output paper; if so, the paper dispenser is controlled not to output paper. In this way excessive paper fetching is avoided, and the user does not need to scan a code with a mobile phone, making the method more convenient to use.
Referring to fig. 3, an automatic paper fetching device according to an embodiment of the present invention comprises:
a first acquisition module, configured to acquire a target face image of a paper fetching requester;
a first identification module, configured to identify each face organ of the target face image separately to obtain a target feature value of each face organ of the requester;
a comparison and determination module, configured to look up the target feature values in a pre-stored fused feature dictionary library to determine whether the requester has fetched paper within a preset time;
a first control module, configured to control the paper dispenser to output paper if the requester has not fetched paper within the preset time;
and a second control module, configured to control the paper dispenser not to output paper if the requester has fetched paper within the preset time.
In this embodiment, the device further comprises:
a second acquisition module, configured to acquire sample face images of a sample population captured by a camera;
a second identification module, configured to identify each face organ in the sample face images separately to obtain a sample feature value of each face organ;
a cluster acquisition module, configured to cluster the sample feature values of each face organ separately to obtain sample feature data of each face organ;
and a fusion module, configured to fuse all the sample feature data to obtain the fused feature dictionary library.
In this embodiment, the cluster acquisition module is specifically configured to:
cluster the sample feature values of each face organ separately using a clustering algorithm with a fixed K value, so as to obtain the sample feature data of each face organ.
In this embodiment, the device further comprises:
a third acquisition module, configured to acquire the system sensitivity of the paper dispenser, wherein the system sensitivity reflects the paper-output probability: the higher the system sensitivity, the higher the probability of outputting paper;
a decreasing module, configured to reduce the K value if the system sensitivity is less than a sensitivity threshold;
and an increasing module, configured to increase the K value if the system sensitivity is greater than the sensitivity threshold.
In this embodiment, the face organ comprises at least one of auricle, hair, eye shape, mouth shape, and nose shape.
According to the automatic paper fetching device provided by this embodiment, a target face image of the paper fetching requester is first acquired; each face organ of the target face image is then identified separately to obtain a target feature value of each face organ; the target feature values are looked up in a pre-stored fused feature dictionary library to determine whether the requester has fetched paper within a preset time; if not, the paper dispenser is controlled to output paper; if so, the paper dispenser is controlled not to output paper. In this way excessive paper fetching is avoided, and the user does not need to scan a code with a mobile phone, making the device more convenient to use.
Furthermore, an embodiment of the present invention also proposes a readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the above-mentioned method.
Furthermore, an embodiment of the present invention also provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the steps of the above method when executing the program.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. An automatic paper fetching method, characterized in that the method comprises:
acquiring a target face image of a paper fetching requester;
the method comprises the steps of identifying each face organ of a target face image independently to obtain a target characteristic value of each face organ of a paper-taking requester;
comparing the target characteristic value in a pre-stored fusion characteristic dictionary library to determine whether the paper fetching requester fetches paper within a preset time;
if the paper-taking requester does not take the paper within the preset time, controlling the paper-taking machine to output the paper;
and if the paper taking requester takes the paper within the preset time, controlling the paper taking machine not to output the paper.
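The decision flow of claim 1 can be sketched in code. This is an illustrative sketch only: the structure of the fused feature dictionary library (one cluster index per face organ, fused into a lookup key), the organ list, the matching rule, and the preset time value are assumptions for illustration, not the patented implementation.

```python
import time

# Hypothetical face organs (cf. claim 5) and preset time window.
ORGANS = ["auricle", "hair", "eye", "mouth", "nose"]
PRESET_SECONDS = 600  # assumed value, not from the patent

def fuse_key(target_features, codebooks):
    """Quantize each organ's target feature value to its nearest
    cluster centre and fuse the indices into one dictionary key."""
    key = []
    for organ in ORGANS:
        centres = codebooks[organ]
        value = target_features[organ]
        idx = min(range(len(centres)), key=lambda i: abs(centres[i] - value))
        key.append(idx)
    return tuple(key)

def should_dispense(target_features, codebooks, fetch_log, now=None):
    """Return True (output paper) if this requester has not fetched
    paper within the preset time; otherwise return False."""
    now = time.time() if now is None else now
    key = fuse_key(target_features, codebooks)
    last = fetch_log.get(key)
    if last is not None and now - last < PRESET_SECONDS:
        return False          # fetched recently: do not output paper
    fetch_log[key] = now      # record this fetch
    return True
```

Keying the log on the fused per-organ cluster indices, rather than a full face embedding, is what lets the lookup stay a constant-time dictionary access.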
2. The automatic paper fetching method of claim 1, further comprising:
acquiring sample face images of a sample population captured by a camera;
identifying each face organ in the sample face images individually to obtain a sample feature value of each face organ;
clustering the sample feature values of each face organ separately to obtain sample feature data of each face organ; and
fusing all of the sample feature data to obtain the fused feature dictionary library.
3. The automatic paper fetching method according to claim 2, wherein clustering the sample feature values of each face organ separately to obtain the sample feature data of each face organ specifically comprises:
clustering the sample feature values of each face organ separately using a clustering algorithm with a fixed K value to obtain the sample feature data of each face organ.
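The per-organ clustering of claims 2 and 3 can be sketched as plain K-means with a fixed K. This is a minimal sketch under assumptions: scalar feature values per organ, standard Lloyd-style K-means, and a fixed iteration count; the patent does not specify these details.

```python
import numpy as np

def kmeans_fixed_k(values, k, iters=20, seed=0):
    """Plain K-means with a fixed K over one organ's scalar sample
    feature values; returns the sorted cluster centres."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    # Initialize centres from k distinct samples.
    centres = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each sample to its nearest centre.
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        # Move each centre to the mean of its assigned samples.
        for j in range(k):
            members = values[labels == j]
            if members.size:
                centres[j] = members.mean()
    return np.sort(centres)

def build_feature_data(samples_by_organ, k):
    """Cluster each organ's sample feature values separately
    (claim 3), yielding the per-organ sample feature data."""
    return {organ: kmeans_fixed_k(vals, k)
            for organ, vals in samples_by_organ.items()}
```

Clustering each organ independently keeps the per-organ codebooks small; the fused dictionary key space is then the product of the K clusters of each organ.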
4. The automatic paper fetching method of claim 3, further comprising:
acquiring a system sensitivity of the paper taking machine, wherein the system sensitivity reflects the paper output probability, and a higher system sensitivity corresponds to a higher paper output probability;
if the system sensitivity is less than a sensitivity threshold, decreasing the K value; and
if the system sensitivity is greater than the sensitivity threshold, increasing the K value.
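The K adjustment of claim 4 can be sketched as follows; the threshold, step size, and lower bound are illustrative values, not from the patent.

```python
def adjust_k(k, sensitivity, threshold=0.5, step=1, k_min=2):
    """Adjust the fixed K value from the machine's system sensitivity
    (claim 4). Lower sensitivity -> fewer, coarser clusters; higher
    sensitivity -> more, finer clusters."""
    if sensitivity < threshold:
        return max(k_min, k - step)
    if sensitivity > threshold:
        return k + step
    return k
```

The direction makes intuitive sense: a larger K partitions faces more finely, so a new requester is less likely to collide with someone who already fetched paper, raising the paper output probability.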
5. The automatic paper fetching method of claim 1, wherein the face organs comprise at least one of the auricle, hair, eye shape, mouth shape, and nose shape.
6. An automatic paper fetching device, characterized in that the device comprises:
a first acquisition module, configured to acquire a target face image of a paper fetching requester;
a first identification module, configured to identify each face organ of the target face image individually to obtain a target feature value of each face organ of the paper fetching requester;
a comparison and determination module, configured to compare the target feature values against a pre-stored fused feature dictionary library to determine whether the paper fetching requester has fetched paper within a preset time;
a first control module, configured to control the paper taking machine to output paper if the paper fetching requester has not fetched paper within the preset time; and
a second control module, configured to control the paper taking machine not to output paper if the paper fetching requester has fetched paper within the preset time.
7. The automatic paper fetching device of claim 6, further comprising:
a second acquisition module, configured to acquire sample face images of a sample population captured by a camera;
a second identification module, configured to identify each face organ in the sample face images individually to obtain a sample feature value of each face organ;
a cluster acquisition module, configured to cluster the sample feature values of each face organ separately to obtain sample feature data of each face organ; and
a fusion module, configured to fuse all of the sample feature data to obtain the fused feature dictionary library.
8. The automatic paper fetching device of claim 7, wherein the cluster acquisition module is specifically configured to:
cluster the sample feature values of each face organ separately using a clustering algorithm with a fixed K value to obtain the sample feature data of each face organ.
9. A readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-5.
10. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1-5 when executing the program.
CN202110621813.2A 2021-06-03 2021-06-03 Automatic paper fetching method and device, readable storage medium and computer equipment Active CN113243804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110621813.2A CN113243804B (en) 2021-06-03 2021-06-03 Automatic paper fetching method and device, readable storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN113243804A true CN113243804A (en) 2021-08-13
CN113243804B CN113243804B (en) 2022-11-22

Family

ID=77186396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110621813.2A Active CN113243804B (en) 2021-06-03 2021-06-03 Automatic paper fetching method and device, readable storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN113243804B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140050404A1 (en) * 2012-08-17 2014-02-20 Apple Inc. Combining Multiple Image Detectors
CN103824052A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Multilevel semantic feature-based face feature extraction method and recognition method
WO2015197029A1 (en) * 2014-06-27 2015-12-30 北京奇虎科技有限公司 Human face similarity recognition method and system
CN106384119A (en) * 2016-08-23 2017-02-08 重庆大学 Improved K-means clustering algorithm capable of determining value of K by using variance analysis
CN106503686A (en) * 2016-10-28 2017-03-15 广州炒米信息科技有限公司 The method and system of retrieval facial image
GB201711775D0 (en) * 2017-07-21 2017-09-06 Spirit Aerosystems (Europe) Ltd Method and apparatus for curing a composite article
CN108416336A (en) * 2018-04-18 2018-08-17 特斯联(北京)科技有限公司 A kind of method and system of intelligence community recognition of face
CN108514366A (en) * 2018-06-01 2018-09-11 成都博云启初科技有限公司 A kind of face recognition takes paper machine and paper amount is taken to limit method automatically
CN109284675A (en) * 2018-08-13 2019-01-29 阿里巴巴集团控股有限公司 A kind of recognition methods of user, device and equipment
CN109726749A (en) * 2018-12-21 2019-05-07 齐鲁工业大学 A kind of Optimal Clustering selection method and device based on multiple attribute decision making (MADM)
CN110598535A (en) * 2019-07-31 2019-12-20 广西大学 Face recognition analysis method used in monitoring video data
CN110889433A (en) * 2019-10-29 2020-03-17 平安科技(深圳)有限公司 Face clustering method and device, computer equipment and storage medium
CN111291822A (en) * 2020-02-21 2020-06-16 南京航空航天大学 Equipment running state judgment method based on fuzzy clustering optimal k value selection algorithm
CN111368858A (en) * 2018-12-25 2020-07-03 中国移动通信集团广东有限公司 User satisfaction evaluation method and device


Also Published As

Publication number Publication date
CN113243804B (en) 2022-11-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221031

Address after: Room 405, No. 238, Shuangzhu Road, Huangdao District, Qingdao, Shandong 266400

Applicant after: Shandong Youjing Media Technology Co.,Ltd.

Address before: 266100 Room 501, East unit, building 1, Qingdao high level talent entrepreneurship center, No. 153, Zhuzhou Road, Laoshan District, Qingdao, Shandong

Applicant before: Shandong Zhongxin Youjing Intelligent Technology Co.,Ltd.

GR01 Patent grant