CN116152890B - Medical fee self-service payment system - Google Patents


Info

Publication number
CN116152890B
CN116152890B (application CN202211693502.8A)
Authority
CN
China
Prior art keywords
information
control computer
main control
sampling information
payment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211693502.8A
Other languages
Chinese (zh)
Other versions
CN116152890A (en)
Inventor
何晓俊
Current Assignee
Beijing Rongwei Zhongbang Electronic Technology Co ltd
Original Assignee
Beijing Rongwei Zhongbang Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Rongwei Zhongbang Electronic Technology Co ltd
Priority to CN202211693502.8A
Publication of CN116152890A
Application granted
Publication of CN116152890B
Legal status: Active
Anticipated expiration

Classifications

    • G06V40/168 — Feature extraction; face representation (G06V40/16 Human faces)
    • G06N3/08 — Learning methods (G06N3/02 Neural networks; G06N3/00 Computing arrangements based on biological models)
    • G06Q20/18 — Payment architectures involving self-service terminals [SST], vending machines, kiosks or multimedia terminals
    • G06Q20/40145 — Biometric identity checks (G06Q20/401 Transaction verification; G06Q20/40 Authorisation)
    • G06V10/761 — Proximity, similarity or dissimilarity measures (pattern matching in feature spaces)
    • G06V10/764 — Image or video recognition using classification, e.g. of video objects
    • G06V10/806 — Fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
    • G06V10/82 — Image or video recognition using neural networks
    • G06V40/172 — Classification, e.g. identification (human faces)

Abstract

The invention relates to the medical field, and in particular to a medical fee self-service payment system, comprising: an imaging unit for capturing sampling information; a signal processor for processing the sampling information captured by the imaging unit and transmitting the result to the main control computer; a main control computer for confirming the information from the signal processor and transmitting the final confirmation result to the payment unit; a payment unit for executing a payment instruction according to the final confirmation result transmitted by the main control computer; and a touch screen display for displaying the confirmation result output by the main control computer. The signal processor fuses a channel attention module into the network and outputs features through a residual connection, minimizing the loss of feature information and strengthening learning ability. Anti-noise performance is improved through adaptive adjustment of an adjustable factor, which raises the accuracy of face-information matching, the payment efficiency of the self-service payment system, and the security of the payment system.

Description

Medical fee self-service payment system
Technical Field
The invention relates to the medical field, in particular to a medical fee self-service payment system.
Background
Traditional medical payment requires patients to spend a great deal of time waiting in queues, delaying treatment. With advances in automation, modern information technology can be applied to hospital payment, so an automated self-service hospital payment system is urgently needed: it is convenient and fast, eases the difficulty of seeing a doctor, saves patients' time, and improves the efficiency of medical visits. With the popularization of electronic commerce, digital payment is favored by more and more consumers and merchants; consumers need no cash, merchants need not make change, and the transaction process is simplified. One such payment means identifies the payer by face recognition. Face recognition is a biometric technology; because each person's face is unique and is the most natural and common identification feature in human vision, identification systems based on it are widely used. However, existing face recognition technology cannot fully extract and distinguish facial features, and the extracted face information contains a large amount of noise, which degrades the matching decision, reduces payment efficiency, and poses potential security risks for a payment system.
Disclosure of Invention
The invention aims to overcome the defects identified in the background art by providing a medical fee self-service payment system.
The technical scheme adopted by the invention is as follows:
Provided is a medical fee self-service payment system, comprising:
an imaging unit, connected to the signal processor, for capturing sampling information;
a signal processor, connected to the main control computer, for processing the sampling information captured by the imaging unit and transmitting the processing result to the main control computer;
a main control computer, connected to the payment unit and the touch screen display, for confirming the information from the signal processor and transmitting the final confirmation result to the payment unit and the touch screen display;
a payment unit for executing the payment instruction according to the final confirmation result transmitted by the main control computer;
a touch screen display for displaying the confirmation result output by the main control computer.
As a preferred technical scheme of the invention: in the signal processor, a sample information sample data set is stored through a distributed storage data grid.
As a preferred technical scheme of the invention: the processing steps of the signal processor comprise preprocessing of the sampled information and feature recognition.
As a preferred technical scheme of the invention: the preprocessing step comprises the steps of sampling information calibration, sampling information normalization and sampling information enhancement.
As a preferred technical scheme of the invention: the feature recognition includes feature information analysis and processing and feature matching.
As a preferred technical scheme of the invention: in the characteristic information analysis and processing step, a convolutional neural network is constructed to learn the sampling information, and a channel attention module is added to reduce the loss of the characteristic information and strengthen the learning ability; the input characteristic is F epsilon R H×W×C Wherein H×W is the feature map size, and C is the channel number; the global attention and the local attention of the channel are acquired through global maximization pooling and local maximization pooling, the channel information is fused by two branch points through point-by-point convolution, and the channel information of the image at each spatial position is stored:
wherein F is 1 To local channel attention, F 2 For global channel attention, W c/r Indicating that the convolution kernel of the first layer has a size of C/r, W c The second layer convolution kernel size is denoted as C, where r is the downsampling multiple used to control the rate of channel compression, σ is the Relu activation function,representing the features after maximum pooling, < >>Representing the characteristics after local maximum pooling, wherein the pooled convolution kernel size is 7×7;
adding the global channel attention and the local channel attention element by element, normalizing by using a Sigmoid function, generating attention weight represented by delta, and multiplying the attention weight by an input feature to obtain an output feature:
X=δ(F 1 +F 2 )·F
adding the input features and the output features in a jump connection manner to obtain final output features:
x=X+F
where x is the final output characteristic.
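The fused channel attention module described above (global and local max pooling, shared pointwise convolutions, Sigmoid weighting, residual connection) can be sketched in NumPy as follows; the random weights and r = 4 are placeholders for illustration, not trained parameters:

```python
import numpy as np

def channel_attention(F, r=4, rng=None):
    """Fused global+local channel attention with a residual connection.

    Two 1x1-conv layers (C -> C/r -> C), shared by a global-max-pool branch
    and a 7x7 local-max-pool branch; outputs are summed, squashed with a
    Sigmoid, multiplied onto the input, and added back via a skip connection.
    """
    H, W, C = F.shape
    rng = np.random.default_rng(0) if rng is None else rng
    W1 = rng.standard_normal((C, C // r)) * 0.1    # pointwise conv, C -> C/r
    W2 = rng.standard_normal((C // r, C)) * 0.1    # pointwise conv, C/r -> C

    relu = lambda z: np.maximum(z, 0.0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # global branch: max over all spatial positions -> (C,)
    g = F.max(axis=(0, 1))
    F2 = relu(g @ W1) @ W2                         # global channel attention

    # local branch: 7x7 max pooling, stride 1, then the same pointwise convs
    pad = 3
    Fp = np.pad(F, ((pad, pad), (pad, pad), (0, 0)), constant_values=-np.inf)
    L = np.empty_like(F)
    for i in range(H):
        for j in range(W):
            L[i, j] = Fp[i:i + 7, j:j + 7].max(axis=(0, 1))
    F1 = relu(L @ W1) @ W2                         # local channel attention

    X = sigmoid(F1 + F2) * F                       # attention-weighted output
    return X + F                                   # residual (skip) connection
```

Because the local branch keeps its H×W grid while the global branch broadcasts a single (C,) vector, the sum preserves per-position channel information, as the text requires.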
As a preferred technical scheme of the invention: in the feature matching step, sampling information identification is performed through a self-adaptive loss function with anti-noise performance:
extracting feature vector x over a network i ∈R H×W×C The class being y i The representation is made of a combination of a first and a second color,
classification probability P for ith sample information i Expressed as:
the cross entropy function is expressed as:
wherein y is a category label,the final classification probability of the ith sampling information is N is the total number of sampling information samples, and s is the sampling informationThe number of positive examples in the samples; />The method meets the following conditions:
beta is in the form of single-heat encoding,for the introduced adaptive modulation factor, wherein ∈>τ is an adjustable factor by which anti-noise performance is improved.
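A hedged sketch of the loss above: the softmax probability and mean cross entropy follow the text directly, while the adaptive modulation is stood in for by simple temperature scaling with τ, since the exact modulation formula is an assumption here:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def adaptive_ce_loss(logits, labels, tau=1.0):
    """Cross entropy over tempered softmax probabilities.

    `tau` stands in for the adjustable factor: tau > 1 flattens the
    distribution, softening the penalty on noisy labels. This is a simple
    proxy for the patent's adaptive modulation, not its actual formula.
    """
    P = softmax(logits / tau)                 # classification probability P_i
    N = len(labels)
    p_true = P[np.arange(N), labels]          # probability of the labelled class
    return -np.mean(np.log(p_true + 1e-12))   # mean cross entropy
```

With confident logits, raising τ flattens the probabilities and increases the loss, which is the damping behaviour the adjustable factor is meant to provide.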
As a preferred technical scheme of the invention: in the feature matching step, the similarity between the sampling information and the data set in the database is measured through cosine distance, a similarity threshold is set for matching the sampling information, and when the similarity is larger than the threshold, the matching is successful, otherwise, the matching is failed.
As a preferred technical scheme of the invention: and the signal processor performs feature matching on the sampling information, and transmits the distributed data grid information in the database corresponding to the matching to the main control computer after the matching is successful.
As a preferred technical scheme of the invention: the main control computer confirms that the sampling information is matched and outputs a payment instruction to the payment unit, the payment unit executes the payment instruction and returns an instruction execution result to the main control computer, and the main control computer automatically adjusts distributed data grid information which is matched with the information in the database according to the instruction execution result and re-stores the distributed data grid information in the database.
Compared with the prior art, the medical fee self-service payment system provided by the invention has the following beneficial effects:
the signal processor fuses a channel attention module into the network and outputs features through a residual connection, minimizing the loss of feature information and strengthening learning ability; anti-noise performance is improved through adaptive adjustment of the adjustable factor, which raises the accuracy of face-information matching, the payment efficiency of the self-service payment system, and the security of the payment system.
Drawings
Fig. 1 is a system block diagram of a preferred embodiment of the present invention.
The meaning of each label in the figure is: 1. an imaging unit; 2. a signal processor; 3. a main control computer; 4. a payment unit; 5. a touch screen display.
Detailed Description
It should be noted that, in the absence of conflict, the embodiments and the features within them may be combined with each other. The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
Referring to fig. 1, a preferred embodiment of the present invention provides a medical fee self-service payment system, comprising:
an imaging unit 1, connected to the signal processor 2, for capturing sampling information;
a signal processor 2, connected to the main control computer 3, for processing the sampling information captured by the imaging unit 1 and transmitting the processing result to the main control computer 3;
a main control computer 3, connected to the payment unit 4 and the touch screen display 5, for confirming the information from the signal processor 2 and transmitting the final confirmation result to the payment unit 4 and the touch screen display 5;
a payment unit 4 for executing the payment instruction according to the final confirmation result transmitted by the main control computer 3;
a touch screen display 5 for displaying the confirmation result output by the main control computer 3.
In the signal processor 2, a sample data set of sampling information is stored in a distributed storage data grid.
The processing performed by the signal processor 2 comprises preprocessing of the sampling information and feature recognition.
The preprocessing comprises sampling-information calibration, normalization, and enhancement.
The feature recognition comprises feature-information analysis and processing, and feature matching.
In the feature-information analysis and processing step, a convolutional neural network is constructed to learn the sampling information, and a channel attention module is added to reduce the loss of feature information and strengthen learning ability. The input feature is F ∈ R^(H×W×C), where H×W is the feature-map size and C is the number of channels. Global channel attention and local channel attention are obtained through global max pooling and local max pooling respectively; the two branches each fuse channel information through point-by-point convolution, preserving the channel information of the image at each spatial position:

F₁ = W_C(σ(W_{C/r}(P_l(F))))
F₂ = W_C(σ(W_{C/r}(P_g(F))))

where F₁ is the local channel attention, F₂ is the global channel attention, W_{C/r} denotes the first point-by-point convolution layer with C/r output channels, W_C denotes the second point-by-point convolution layer with C output channels, r is the down-sampling multiple controlling the rate of channel compression, σ is the ReLU activation function, P_g(F) denotes the feature after global max pooling, and P_l(F) denotes the feature after local max pooling with a 7×7 pooling kernel.

The global channel attention and the local channel attention are added element by element and normalized with a Sigmoid function to generate the attention weight, denoted δ, which is multiplied by the input feature to obtain the output feature:

X = δ(F₁ + F₂) · F

The input feature is then added to the output feature through a skip connection to obtain the final output feature:

x = X + F

where x is the final output feature.
In the feature matching step, sampling-information identification is performed through an adaptive loss function with anti-noise capability.

A feature vector x_i ∈ R^(H×W×C) is extracted by the network, with its class denoted y_i. The classification probability P_i of the i-th sampling information is the softmax of the classifier outputs for x_i. The cross-entropy loss is

L = -(1/N) Σ_{i=1}^{N} y log P̂_i

where y is the category label, P̂_i is the final classification probability of the i-th sampling information, N is the total number of sampling-information samples, and s is the number of positive examples among them. P̂_i is obtained by modulating P_i with β, the one-hot encoding of the label, and an introduced adaptive modulation factor; τ is an adjustable factor within that modulation through which anti-noise performance is improved.
In the feature matching step, the similarity between the sampling information and the data sets in the database is measured by cosine distance; a similarity threshold is set, and when the similarity exceeds the threshold the matching succeeds, otherwise it fails.
The signal processor 2 performs feature matching on the sampling information and, after a successful match, transmits the corresponding distributed-data-grid information from the database to the main control computer 3.
The main control computer 3 confirms that the sampling information is matched and outputs a payment instruction to the payment unit 4; the payment unit 4 executes the instruction and returns the execution result to the main control computer 3; the main control computer 3 then updates the matched distributed-data-grid information in the database according to the execution result and stores it again.
In the present embodiment, the imaging unit 1 collects face information and transmits it to the signal processor 2. The signal processor 2 preprocesses and recognizes the face information uploaded by the imaging unit 1: the collected face information is calibrated to remove non-face content, normalized so that it suits the convolutional neural network, and enhanced to facilitate later recognition. The preprocessed face information then undergoes feature-information analysis and processing: a convolutional neural network is constructed to learn the face information, and a channel attention module is added to reduce the loss of feature information and strengthen learning ability. The input feature is F ∈ R^(H×W×C), where H×W is the feature-map size and C is the number of channels. Global channel attention and local channel attention are obtained through global max pooling and local max pooling respectively; the two branches each fuse channel information through point-by-point convolution, preserving the channel information of the image at each spatial position:

F₁ = W_C(σ(W_{C/r}(P_l(F))))
F₂ = W_C(σ(W_{C/r}(P_g(F))))

where F₁ is the local channel attention, F₂ is the global channel attention, W_{C/r} denotes the first point-by-point convolution layer with C/r output channels, W_C denotes the second point-by-point convolution layer with C output channels, r is the down-sampling multiple controlling the rate of channel compression, σ is the ReLU activation function, P_g(F) denotes the feature after global max pooling, and P_l(F) denotes the feature after local max pooling with a 7×7 pooling kernel.

The global channel attention and the local channel attention are added element by element and normalized with a Sigmoid function to generate the attention weight, denoted δ, which is multiplied by the input feature to obtain the output feature:

X = δ(F₁ + F₂) · F

The input feature is then added to the output feature through a skip connection to obtain the final output feature:

x = X + F

where x is the final output feature.
By fusing in the channel attention module and outputting features through the residual idea, the loss of feature information is minimized and learning ability is strengthened.
Feature matching is then performed, and face-information identification is carried out through the adaptive loss function with anti-noise capability: a feature vector x_i ∈ R^(H×W×C) is extracted by the network, with its class denoted y_i. The classification probability P_i of the i-th face information is the softmax of the classifier outputs for x_i, and the cross-entropy loss is

L = -(1/N) Σ_{i=1}^{N} y log P̂_i

where y is the category label, P̂_i is the final classification probability of the i-th sampling information, N is the total number of sampling-information samples, and s is the number of positive examples among them. P̂_i is obtained by modulating P_i with β, the one-hot encoding of the label, and an introduced adaptive modulation factor; τ is an adjustable factor within that modulation through which anti-noise performance is improved.

Let the classification set of face features be Q = [Q_1, Q_2, …, Q_N]. After the classification probability P_i of the i-th face information is obtained, the data set of the i-th face information is Q_i = [Q_{i1}, Q_{i2}, …, Q_{iN}]; the distance D_i between the face to be detected and the faces in the database is then calculated, a smaller value of D_i indicating a smaller degree of similarity between the two.

Anti-noise performance is improved through adaptive adjustment of the adjustable factor, and the accuracy of face-information matching is improved.
After the signal processor 2 matches successfully, the information stored in the corresponding distributed data grid in the database is transmitted to the main control computer 3. The main control computer 3 extracts the information from the distributed data grid, outputs the medical items and fee information to the touch screen display 5, and sends a payment instruction to the payment unit 4. The payment unit 4 automatically deducts the medical fee through wireless networking, and the touch screen display 5 automatically updates and displays the payment status of the payment unit 4. After successful payment, the main control computer 3 updates the matched distributed-data-grid information in the database according to the instruction execution result and stores it in the database again.
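The overall flow of the embodiment (capture, feature extraction, cosine matching, payment, record update) can be sketched as a small orchestration function; every name here is hypothetical, with the feature network and payment backend injected as callables rather than the patent's actual interfaces:

```python
def self_service_payment(frame, extract, database, pay, threshold=0.6):
    """End-to-end sketch of the embodiment's flow.

    `extract` maps a camera frame to a feature vector; `database` maps
    patient ids to (feature_vector, record) pairs; `pay` executes the
    deduction and returns True on success. All names are illustrative.
    """
    feature = extract(frame)                        # signal-processor output
    best_id, best_sim = None, -1.0
    for pid, (vec, _record) in database.items():    # cosine matching
        num = sum(x * y for x, y in zip(feature, vec))
        den = (sum(x * x for x in feature) ** 0.5) * (sum(y * y for y in vec) ** 0.5)
        sim = num / den if den else 0.0
        if sim > best_sim:
            best_id, best_sim = pid, sim
    if best_sim <= threshold:                       # main control computer rejects
        return {"status": "no-match", "similarity": best_sim}
    record = database[best_id][1]
    result = pay(best_id, record["fee"])            # payment unit executes
    record["paid"] = result                         # record updated and re-stored
    return {"status": "paid" if result else "failed", "patient": best_id}
```

The main control computer's role corresponds to the threshold check and the record update after the payment unit reports its result.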
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only. The specification should be taken as a whole, and the technical solutions in the embodiments may be suitably combined to form other implementations that will be apparent to those skilled in the art.

Claims (6)

1. A medical fee self-service payment system, characterized by comprising:
an imaging unit (1), connected to the signal processor (2), for capturing sampling information;
a signal processor (2), connected to the main control computer (3), for processing the sampling information captured by the imaging unit (1) and transmitting the processing result to the main control computer (3);
a main control computer (3), connected to the payment unit (4) and the touch screen display (5), for confirming the information from the signal processor (2) and transmitting the final confirmation result to the payment unit (4) and the touch screen display (5);
a payment unit (4) for executing the payment instruction according to the final confirmation result transmitted by the main control computer (3);
a touch screen display (5) for displaying the confirmation result output by the main control computer (3);
the processing steps of the signal processor (2) comprise preprocessing of sampling information and feature recognition, wherein the feature recognition comprises feature information analysis and processing and feature matching;
in the characteristic information analysis and processing step, a convolutional neural network is constructed to learn the sampling information, and a channel attention module is added to reduce the loss of the characteristic information and strengthen the learning ability; the input characteristic is F epsilon R H×W×C Wherein H×W is the feature map size, and C is the channel number; the global attention and the local attention of the channel are acquired through global maximization pooling and local maximization pooling, the channel information is fused by two branch points through point-by-point convolution, and the channel information of the image at each spatial position is stored:
wherein F is 1 To local channel attention, F 2 For global channel attention, W C/r Indicating that the convolution kernel of the first layer has a size of C/r, W C The second layer convolution kernel size is denoted as C, where r is the downsampling multiple used to control the rate of channel compression, σ is the Relu activation function,representing the features after maximum pooling, < >>Representing the characteristics after local maximum pooling, wherein the pooled convolution kernel size is 7×7;
the global channel attention and the local channel attention are added element by element and normalized by the Sigmoid function δ to generate the attention weight, which is multiplied with the input feature to obtain the output feature:

X = δ(F1 + F2) · F

the input feature and the output feature are then added through a skip connection to obtain the final output feature:

x = X + F

where x is the final output feature;
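The two-branch channel attention described above can be sketched in NumPy as follows; the module structure (global and local max pooling, a shared two-layer point-wise convolution with reduction ratio r, Sigmoid normalisation, and a skip connection back to the input) follows the description, while the weight shapes and the function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_max_pool(F, k=7):
    """k x k local max pooling with 'same' padding over an H x W x C feature map."""
    H, W, C = F.shape
    pad = k // 2
    Fp = np.pad(F, ((pad, pad), (pad, pad), (0, 0)), constant_values=-np.inf)
    out = np.empty_like(F)
    for i in range(H):
        for j in range(W):
            out[i, j] = Fp[i:i + k, j:j + k].max(axis=(0, 1))
    return out

def channel_attention(F, W1, W2):
    """Two-branch channel attention: global + 7x7 local max pooling, a shared
    two-layer point-wise (1x1) convolution W1 (C -> C/r) and W2 (C/r -> C),
    Sigmoid-normalised weights, and a skip connection back to the input."""
    # global branch: global max pooling collapses H x W to one C-vector
    f_gmax = F.max(axis=(0, 1))          # (C,)
    F2 = relu(f_gmax @ W1) @ W2          # global channel attention, (C,)
    # local branch: local max pooling keeps per-position channel information
    f_lmax = local_max_pool(F)           # (H, W, C)
    F1 = relu(f_lmax @ W1) @ W2          # local channel attention, (H, W, C)
    delta = sigmoid(F1 + F2)             # element-wise add (broadcast), then Sigmoid
    X = delta * F                        # output feature, weighted by attention
    return X + F                         # x = X + F, the skip connection
```

A point-wise (1×1) convolution acting on a pooled channel vector reduces to a matrix multiply, which is why `W1` and `W2` appear here as plain `(C, C/r)` and `(C/r, C)` matrices shared by both branches.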
in the feature matching step, sampling information is identified through an adaptive loss function with anti-noise performance:

a feature vector x_i ∈ R^(H×W×C) is extracted through the network, and its class is denoted y_i;

the classification probability P_i of the i-th sampling information is expressed as a softmax over the class scores, and the cross-entropy loss is expressed as:

L = -(1/N) Σ_{i=1}^{N} y log(P̂_i)

where y is the category label, P̂_i is the final classification probability of the i-th sampling information, N is the total number of sampling information samples, and s is the number of positive samples among the sampling information samples; P̂_i is obtained by weighting the classification probability with the adaptive modulation factor;

β is the one-hot encoding and β̂ is the introduced adaptive modulation factor, in which τ is an adjustable factor through which the anti-noise performance is improved.
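A minimal sketch of a modulated cross-entropy of this general shape follows; the softmax classifier and the particular choice `beta_hat = p_true ** tau` are illustrative assumptions, since the exact form of the adaptive modulation factor is not reproduced in the text here.

```python
import numpy as np

def softmax(scores):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def modulated_cross_entropy(scores, labels, tau=1.0):
    """Cross-entropy with a per-sample adaptive modulation factor.
    beta is the one-hot encoding of the labels; beta_hat down-weights
    low-confidence (likely mislabelled/noisy) samples via the adjustable
    factor tau. The beta_hat = p_true**tau form is an assumed stand-in,
    not the claim's exact factor."""
    N, K = scores.shape
    P = softmax(scores)
    beta = np.eye(K)[labels]            # one-hot encoding of the labels
    p_true = (P * beta).sum(axis=1)     # probability assigned to the true class
    beta_hat = p_true ** tau            # adaptive modulation factor (assumed form)
    return -(beta_hat * np.log(p_true + 1e-12)).mean()
```

With `tau = 0` this reduces to plain cross-entropy; larger `tau` shrinks the gradient contribution of samples the network is unsure about, which is the anti-noise intent described above.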
2. The medical fee self-service payment system of claim 1, wherein: in the signal processor (2), the sampling information sample data set is stored by means of a distributed storage data grid.
3. The medical fee self-service payment system according to claim 2, wherein: the preprocessing step comprises the steps of sampling information calibration, sampling information normalization and sampling information enhancement.
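The three preprocessing steps of claim 3 can be illustrated minimally as follows; the concrete operations chosen (crop-based calibration, zero-mean/unit-variance normalization, horizontal-flip enhancement) are common choices assumed for illustration, not operations the claim specifies.

```python
import numpy as np

def calibrate(img, box):
    """Sampling information calibration: crop to the detected region (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = box
    return img[y0:y1, x0:x1]

def normalize(img):
    """Sampling information normalization: zero mean, unit variance."""
    return (img - img.mean()) / (img.std() + 1e-8)

def enhance(img):
    """Sampling information enhancement: a simple horizontal flip,
    one common way to augment the sample set."""
    return img[:, ::-1]
```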
4. The medical fee self-service payment system of claim 1, wherein: in the feature matching step, the similarity between the sampling information and the data set in the database is measured by the cosine distance, and a similarity threshold is set for matching the sampling information; when the similarity is greater than the threshold the matching succeeds, otherwise the matching fails.
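The threshold test of claim 4 can be illustrated in a few lines using cosine similarity (one minus the cosine distance); the default threshold of 0.8 and the function names are assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(sample_feat, db_feats, threshold=0.8):
    """Return (index, similarity) of the best database entry if it clears
    the similarity threshold; otherwise (None, best similarity)."""
    sims = [cosine_similarity(sample_feat, f) for f in db_feats]
    best = int(np.argmax(sims))
    return (best, sims[best]) if sims[best] > threshold else (None, sims[best])
```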
5. The medical fee self-service payment system of claim 4, wherein: the signal processor (2) performs feature matching on the sampling information and, after a successful match, transmits the corresponding distributed data grid information in the database to the main control computer (3).
6. The medical fee self-service payment system of claim 5, wherein: the main control computer (3) confirms that the sampling information is matched and outputs a payment instruction to the payment unit (4); the payment unit (4) executes the payment instruction and returns the execution result to the main control computer (3); the main control computer (3) then automatically adjusts the matched distributed data grid information in the database according to the execution result and stores it back in the database.
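The confirm → pay → write-back loop of claims 5 and 6 can be sketched as plain functions; all field names, and the in-memory dict standing in for the distributed data grid database, are illustrative assumptions rather than the patented components.

```python
def execute_payment(record):
    """Stand-in for the payment unit (4): debit the account in the matched
    record and report the instruction execution result back."""
    if record["balance"] >= record["amount_due"]:
        record["balance"] -= record["amount_due"]
        return {"ok": True, "paid": record["amount_due"]}
    return {"ok": False, "paid": 0}

def main_control_loop(grid_info, database):
    """Stand-in for the main control computer (3): take the confirmed match
    from the signal processor (2), issue the payment instruction, then adjust
    the matched grid information according to the execution result and store
    it back into the database (claim 6)."""
    record = dict(grid_info)                   # confirmed matched grid info
    result = execute_payment(record)
    if result["ok"]:
        record["amount_due"] = 0               # adjust after successful payment
    database[record["patient_id"]] = record    # re-store into the database
    return result
```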
CN202211693502.8A 2022-12-28 2022-12-28 Medical fee self-service payment system Active CN116152890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211693502.8A CN116152890B (en) 2022-12-28 2022-12-28 Medical fee self-service payment system


Publications (2)

Publication Number Publication Date
CN116152890A CN116152890A (en) 2023-05-23
CN116152890B true CN116152890B (en) 2024-01-26

Family

ID=86353635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211693502.8A Active CN116152890B (en) 2022-12-28 2022-12-28 Medical fee self-service payment system

Country Status (1)

Country Link
CN (1) CN116152890B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206097325U (en) * 2016-10-13 2017-04-12 秦皇岛市妇幼保健院 Self -service payment systems of hospital based on face identification
CN207302235U (en) * 2017-10-01 2018-05-01 尤伟 A kind of hospital self-service payment system based on recognition of face
CN207489088U (en) * 2017-10-01 2018-06-12 遵义医学院附属医院 A kind of therapeutic medical recognition of face intelligence payment system
CN112257647A (en) * 2020-11-03 2021-01-22 徐州工程学院 Human face expression recognition method based on attention mechanism
CN112287940A (en) * 2020-10-30 2021-01-29 西安工程大学 Semantic segmentation method of attention mechanism based on deep learning
CN112784856A (en) * 2021-01-29 2021-05-11 长沙理工大学 Channel attention feature extraction method and identification method of chest X-ray image
CN114067171A (en) * 2021-10-29 2022-02-18 南京付联微网络科技有限公司 Image recognition precision improving method and system for overcoming small data training set
CN114676733A (en) * 2022-01-17 2022-06-28 中国人民解放军海军工程大学 Fault diagnosis method for complex supply and delivery mechanism based on sparse self-coding assisted classification generation type countermeasure network
CN114964781A (en) * 2022-05-31 2022-08-30 广西大学 Intelligent diagnosis method for train bearing fault
KR20220129463A (en) * 2021-03-16 2022-09-23 삼성전자주식회사 Method and apparatus of face recognition
CN115294038A (en) * 2022-07-25 2022-11-04 河北工业大学 Defect detection method based on joint optimization and mixed attention feature fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-branch convolutional network with channel-layer attention mechanism for depression recognition; Sun Haohao et al.; Journal of Image and Graphics; Vol. 27, No. 11; 3292-3302 *


Similar Documents

Publication Publication Date Title
CN111860573B (en) Model training method, image category detection method and device and electronic equipment
EP3637317A1 (en) Method and apparatus for generating vehicle damage information
CN110660484B (en) Bone age prediction method, device, medium, and electronic apparatus
CN111539942A (en) Method for detecting face depth tampered image based on multi-scale depth feature fusion
CN108197592B (en) Information acquisition method and device
CN108388889B (en) Method and device for analyzing face image
CN112464803A (en) Image comparison method and device
WO2024011835A1 (en) Image processing method and apparatus, device, and readable storage medium
CN112215831B (en) Method and system for evaluating quality of face image
CN113420690A (en) Vein identification method, device and equipment based on region of interest and storage medium
CN111814821A (en) Deep learning model establishing method, sample processing method and device
WO2019128362A1 (en) Human facial recognition method, apparatus and system, and medium
CN112883980A (en) Data processing method and system
CN113256605A (en) Breast cancer image identification and classification method based on deep neural network
CN111047590A (en) Hypertension classification method and device based on fundus images
CN116152890B (en) Medical fee self-service payment system
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN113781462A (en) Human body disability detection method, device, equipment and storage medium
CN113066223A (en) Automatic invoice verification method and device
CN111325282A (en) Mammary gland X-ray image identification method and device suitable for multiple models
CN116433970A (en) Thyroid nodule classification method, thyroid nodule classification system, intelligent terminal and storage medium
CN108154107B (en) Method for determining scene category to which remote sensing image belongs
CN114170221A (en) Method and system for confirming brain diseases based on images
CN114764948A (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN112560700A (en) Information association method and device based on motion analysis and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant