CN111599460A - Telemedicine method and system - Google Patents

Telemedicine method and system

Info

Publication number
CN111599460A
CN111599460A (application CN202010476964.9A)
Authority
CN
China
Prior art keywords
target
image
face image
terminal
face
Prior art date
Legal status
Withdrawn
Application number
CN202010476964.9A
Other languages
Chinese (zh)
Inventor
詹俊鲲 (Zhan Junkun)
Current Assignee
Shenzhen Jieda Medical Instrument Co.,Ltd.
Original Assignee
詹俊鲲 (Zhan Junkun)
Priority date
Filing date
Publication date
Application filed by 詹俊鲲 (Zhan Junkun)
Priority to CN202010476964.9A
Publication of CN111599460A
Current legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/02 - Reservations, e.g. for tickets, services or events
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Abstract

The present application provides a telemedicine method comprising the following steps: a terminal acquires an original image of a target object and performs contour recognition on the original image to obtain a face image of the target object; the terminal performs enhancement processing on the face image to obtain a target image, and performs face recognition processing on the target image to obtain a first identity of the target object; the terminal extracts medical registration reservation information from log information, the medical registration reservation information comprising a reserved patient, a reservation time and a reserved doctor; and when the terminal determines that the current time has reached the reservation time and that the first identity is the reserved patient, the terminal initiates a telemedicine request to the reserved doctor to establish telemedicine communication. The technical solution provided by the present application has the advantage of offering a high degree of medical service experience.

Description

Telemedicine method and system
Technical Field
The application relates to the field of medical treatment, in particular to a remote medical treatment method and system.
Background
Telemedicine refers to the remote diagnosis, treatment and consultation of the sick and wounded in remote areas, on islands or on ships with poor medical conditions, relying on computer, remote-sensing, telemetry and remote-control technology to make full use of the medical expertise and equipment of large hospitals or specialized medical centers. It aims to improve the level of diagnosis and treatment, reduce medical expenses and meet the health care needs of the general public. Telemedicine technology has developed from the initial television monitoring and telephone-based remote diagnosis to the comprehensive transmission of digital data, images and voice over high-speed networks, enabling real-time voice and high-definition image communication and thereby providing a broader development space for the application of modern medicine.
Existing telemedicine is based on the patient operating a smart device. Because real-time medical examination cannot be carried out remotely, telemedicine is of limited use to some young people; for the elderly, moreover, whose faculties have declined and who are unfamiliar with smart devices, operating the device independently is often not possible, which reduces the service experience of telemedicine.
Disclosure of Invention
The purpose of the invention is to provide a telemedicine method and system; the technical solution can improve the telemedicine experience of elderly users.
In a first aspect, there is provided a telemedicine method comprising the steps of:
a terminal obtains an original image of a target object, and performs contour recognition on the original image to obtain a face image of the target object;
the terminal performs enhancement processing on the face image to obtain a target image, and performs face recognition processing on the target image to obtain a first identity of the target object;
the terminal extracts medical registration reservation information from log information, the medical registration reservation information comprising: a reserved patient, a reservation time and a reserved doctor;
and when the terminal determines that the current time has reached the reservation time and that the first identity is the reserved patient, the terminal initiates a telemedicine request to the reserved doctor to establish telemedicine communication.
In a second aspect, a terminal is provided, which includes:
an acquisition unit configured to acquire an original image of a target object;
the processing unit is configured to perform contour recognition on the original image to obtain a face image of the target object; perform enhancement processing on the face image to obtain a target image, and perform face recognition processing on the target image to obtain a first identity of the target object; and extract medical registration reservation information from log information, the medical registration reservation information comprising: a reserved patient, a reservation time and a reserved doctor;
and the communication unit is configured to initiate a telemedicine request to the reserved doctor to establish telemedicine communication when the current time reaches the reservation time and the first identity is determined to be the reserved patient.
In a third aspect, a computer-readable storage medium storing a computer program for electronic data exchange is provided, wherein the computer program causes a computer to perform the method provided in the first aspect.
According to the technical solution, the terminal acquires an original image of a target object and performs contour recognition on the original image to obtain a face image of the target object; the terminal performs enhancement processing on the face image to obtain a target image, and performs face recognition processing on the target image to obtain a first identity of the target object; the terminal extracts medical registration reservation information from the log information, the medical registration reservation information comprising a reserved patient, a reservation time and a reserved doctor; and when the terminal determines that the current time has reached the reservation time and that the first identity is the reserved patient, the terminal initiates a telemedicine request to the reserved doctor to establish telemedicine communication. The patient therefore does not need to operate the device manually to start the consultation, which improves the telemedicine experience, in particular for elderly patients.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal according to the present invention.
Fig. 2 is a schematic flow chart of a telemedicine method provided by the present invention.
Fig. 3 is a schematic structural diagram of a terminal provided in the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiments of the present application will be described below with reference to the drawings.
The term "and/or" in this application merely describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein indicates that the associated objects before and after it are in an "or" relationship.
"A plurality of" in the embodiments of the present application means two or more. The terms "first", "second" and so on are used only to distinguish the described objects and imply neither an order nor a limit on the number of devices; they do not limit the embodiments of the present application in any way. The term "connect" refers to any connection manner, such as a direct or indirect connection, that enables communication between devices, which is not limited in the embodiments of the present application.
A terminal in the embodiments of the present application may refer to various forms of user equipment (UE), access terminal, subscriber unit, subscriber station, mobile station (MS), remote station, remote terminal, mobile device, user terminal, terminal device (terminal equipment), wireless communication device, user agent or user equipment. The terminal device may also be a cellular phone, a cordless phone, a SIP (Session Initiation Protocol) phone, a WLL (wireless local loop) station, a PDA (personal digital assistant), a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network, or a terminal device in a future evolved PLMN (public land mobile network), and the like, which is not limited in this embodiment.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal disclosed in an embodiment of the present application. The terminal 100 includes a storage and processing circuit 110 and a sensor 170 connected to the storage and processing circuit 110. The sensor 170 may include a camera, a distance sensor, a gravity sensor and the like. The electronic device may include two transparent display screens, arranged on the back side and the front side of the electronic device, and some or all of the components between the two transparent display screens may also be transparent, so that in terms of visual effect the electronic device can be a transparent electronic device; if only some of the components are transparent, the electronic device can be a hollowed-out electronic device. Wherein:
the terminal 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may be a memory, such as a hard drive memory, a non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. Processing circuitry in the storage and processing circuitry 110 may be used to control the operation of the terminal 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the terminal 100, such as an Internet browsing application, a Voice Over Internet Protocol (VOIP) telephone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on a touch sensor, functionality associated with displaying information on multiple (e.g., layered) display screens, operations associated with performing wireless communication functionality, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the terminal 100, to name a few, embodiments of the present application are not limited.
The terminal 100 may include an input-output circuit 150. The input-output circuit 150 may be used to enable the terminal 100 to input and output data, i.e. to allow the terminal 100 to receive data from external devices and to output data to external devices. The input-output circuit 150 may further include the sensor 170. The sensor 170 may include a vein recognition module, and may also include an ambient light sensor, an optical or capacitive proximity sensor, a fingerprint recognition module, a touch sensor (for example an optical touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display screen or may be used independently as a separate touch sensor structure), an acceleration sensor, a camera and other sensors. The camera may be a front camera or a rear camera, and the fingerprint recognition module may be integrated below the display screen to collect fingerprint images; the fingerprint recognition module may be, for example, an optical fingerprint module, which is not limited herein. The front camera may be arranged below the front display screen, and the rear camera may be arranged below the rear display screen. Of course, the front camera or the rear camera need not be integrated with the display screen, and in practical applications they may also be of a pop-up structure.
Input-output circuit 150 may also include one or more display screens, and when multiple display screens are provided, such as 2 display screens, one display screen may be provided on the front of the electronic device and another display screen may be provided on the back of the electronic device, such as display screen 130. The display 130 may include one or a combination of liquid crystal display, transparent display, organic light emitting diode display, electronic ink display, plasma display, and display using other display technologies. The display screen 130 may include an array of touch sensors (i.e., the display screen 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The terminal 100 can also include an audio component 140. Audio component 140 may be used to provide audio input and output functionality for terminal 100. The audio components 140 in the terminal 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sound.
The communication circuit 120 can be used to provide the terminal 100 with the capability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The terminal 100 may further include a battery, a power management circuit, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through input-output circuitry 150 to control operation of terminal 100 and may use output data of input-output circuitry 150 to enable receipt of status information and other outputs from terminal 100.
Referring to fig. 2, fig. 2 provides a telemedicine method which is performed by the terminal shown in fig. 1. The terminal may specifically include a smart phone, a smart television and the like. As shown in fig. 2, the method includes the following steps:
step S201, a terminal acquires an original image of a target object, and carries out contour recognition on the original image to obtain a face image of the target object;
the above-mentioned contour recognition algorithm can be implemented by using an existing algorithm, and the application does not limit the specific implementation manner of the above-mentioned contour recognition algorithm.
Step S202, the terminal performs enhancement processing on the face image to obtain a target image, and performs face recognition processing on the target image to obtain a first identity of a target object;
the corresponding algorithm of the enhancement processing can be at least one of the following: image sharpening, gray stretching, histogram equalization, and the like, which are not limited herein.
Of course, in practical application, other enhancement processing methods may also be adopted, and the specific method may include:
the method comprises the steps that a terminal calculates corresponding JND (just distinguishable difference) for each pixel point of a face image, divides the pixel points of the face image into a first pixel point set and a second pixel point set according to the JND (the pixel points larger than a threshold th are the second pixel point set, and the pixel points smaller than or equal to the threshold th are the first pixel point set), executes image enhancement processing on the first pixel point set according to a formula (1) to obtain a first enhanced pixel point set, and executes image enhancement processing on the second pixel points according to a formula (2) to obtain a second enhanced pixel point set; and combining the first enhanced pixel point set and the second enhanced pixel point set to obtain the target image.
[Formula (1), shown as an image in the original publication, defines the compensated target gray value TG(x, y) in terms of the original gray value OG(x, y), the scaling factor k and JND(i).]
where JND(i) denotes the JND at background gray value i, k is a target contrast-resolution compensation scaling factor (which may be set according to user experience), OG(x, y) denotes the original gray value before compensation at coordinates (x, y) of the face image, and TG(x, y) denotes the target gray value after compensation at coordinates (x, y) of the face image; k is a real number greater than 0.
TG(x, y) = TG(x, y)_th + a × [OG(x, y) - TG(x, y)_th]    Formula (2)
where TG(x, y)_th is the compensated target gray value at the threshold th (which can be calculated by formula (1)), a is a target linear stretching adjustment coefficient (which may be set by the user, for example 1.5 or 1.55), and OG(x, y) and TG(x, y) respectively denote the original gray value before compensation and the target gray value after compensation at pixel coordinates (x, y).
The method for calculating the JND may specifically include:
[The JND formula, shown as an image in the original publication, defines JND(i) as a piecewise function of the background gray value i.]
where i is the background gray value; for i ∈ [0, 47] (scotopic vision) the just-distinguishable gray difference of the human eye varies exponentially, and for i ∈ (47, 255] (photopic vision) the just-distinguishable difference is 1.17 to 1.75 gray levels.
The image enhancement method provided by the present application yields a higher-quality target image and improves the accuracy of face recognition.
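To make the piecewise scheme concrete, here is a minimal sketch in Python/NumPy. Because formula (1) and the JND formula appear only as images in the original publication, the jnd() lookup and the first-set compensation (OG + k × JND) below are assumptions that follow the textual description only; formula (2) is implemented as stated.

import numpy as np

def jnd(gray):
    # Approximate per-pixel just noticeable difference. The exact JND formula
    # is given only as an image in the original publication; this lookup merely
    # follows the text: an exponentially varying JND for scotopic gray values
    # [0, 47] and roughly 1.17-1.75 gray levels for photopic values (47, 255].
    gray = np.asarray(gray, dtype=np.float64)
    scotopic = 1.75 + 10.0 * np.exp(-gray / 16.0)            # assumed exponential decay
    photopic = 1.17 + (1.75 - 1.17) * (gray - 48.0) / 207.0  # assumed gentle ramp
    return np.where(gray <= 47, scotopic, photopic)

def enhance_face(og, k=1.0, a=1.5, th=128):
    # og: 2-D uint8 face image (original gray values OG(x, y)).
    # Pixel points with gray value <= th form the first set and receive a
    # JND-scaled compensation (a stand-in for formula (1)); pixel points > th
    # form the second set and receive the linear stretch of formula (2):
    # TG = TG_th + a * (OG - TG_th).
    og = og.astype(np.float64)
    tg_th = th + k * jnd(th)           # assumed formula-(1) value at the threshold
    first = og + k * jnd(og)           # stand-in compensation for the first set
    second = tg_th + a * (og - tg_th)  # formula (2) for the second set
    tg = np.where(og <= th, first, second)
    return np.clip(tg, 0, 255).astype(np.uint8)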
Step S203, the terminal extracts medical registration reservation information from the log information, the medical registration reservation information comprising: a reserved patient, a reservation time and a reserved doctor;
Step S204, when the terminal determines that the current time has reached the reservation time and that the first identity is the reserved patient, the terminal initiates a telemedicine request to the reserved doctor to establish telemedicine communication.
According to the technical solution, the terminal acquires an original image of a target object and performs contour recognition on the original image to obtain a face image of the target object; the terminal performs enhancement processing on the face image to obtain a target image, and performs face recognition processing on the target image to obtain a first identity of the target object; the terminal extracts medical registration reservation information from the log information, the medical registration reservation information comprising a reserved patient, a reservation time and a reserved doctor; and when the terminal determines that the current time has reached the reservation time and that the first identity is the reserved patient, the terminal initiates a telemedicine request to the reserved doctor to establish telemedicine communication.
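To make the reservation-matching logic of steps S203 and S204 concrete, a minimal sketch follows; the log-entry field names ("type", "patient_id", "doctor_id", "start_time") and the initiate_request callback are hypothetical, since the application does not specify the format of the log information.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reservation:
    patient_id: str       # identity of the reserved patient
    doctor_id: str        # identity of the reserved doctor
    start_time: datetime  # reservation time

def extract_reservation(log_entries):
    # Step S203 sketch: pull the medical registration reservation record out
    # of the terminal's log information (field names are assumptions).
    for entry in log_entries:
        if entry.get("type") == "medical_registration":
            return Reservation(
                patient_id=entry["patient_id"],
                doctor_id=entry["doctor_id"],
                start_time=datetime.fromisoformat(entry["start_time"]),
            )
    return None

def maybe_start_telemedicine(first_identity, log_entries, now, initiate_request):
    # Step S204 sketch: if the recognized first identity is the reserved patient
    # and the reservation time has arrived, initiate the telemedicine request.
    reservation = extract_reservation(log_entries)
    if reservation is None:
        return False
    if now >= reservation.start_time and first_identity == reservation.patient_id:
        initiate_request(reservation.doctor_id)  # e.g. open the remote video session
        return True
    return False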
The step S202 of performing the face recognition processing on the target image to obtain the first identity of the target object may specifically include:
e1, acquiring a target face image of the target image;
e2, verifying the target face image;
e3, when the target face image passes the verification, determining that the target object has the first identity corresponding to a preset face template.
In a specific implementation, a preset face template may be stored in the electronic device in advance, and the original image of the target object may be obtained through the camera. When the electronic device successfully matches the target face image with the preset face template, the first identity of the target object is determined; otherwise it is not. In this way the identity of the target object can be recognized, so that the terminal can judge whether the first identity is the reserved patient and prevent other people from starting telemedicine.
Further, in a possible example, the verification of the target face image in step E2 may include the following steps:
e21, performing region segmentation on the target face image to obtain a target face region, wherein the target face region is a region image only of a face;
e22, performing binarization processing on the target face area to obtain a binarized face image;
e23, dividing the binary face image into a plurality of regions, wherein the areas of the regions are the same and the area size is larger than a preset area value;
e24, extracting the characteristic points of the binary face image to obtain a plurality of characteristic points;
e25, determining the distribution density of the feature points corresponding to each of the plurality of areas according to the plurality of feature points to obtain a plurality of distribution densities of the feature points;
e26, determining a target mean square error according to the distribution densities of the plurality of feature points;
e27, determining a target quality evaluation value corresponding to the target mean square error according to a preset mapping relation between the mean square error and the quality evaluation value;
e28, when the target quality evaluation value is smaller than the preset quality evaluation value, performing image enhancement processing on the target face image, and matching the target face image subjected to the image enhancement processing with a preset face template to obtain a matching value;
e29, when the matching value is larger than a preset threshold value, determining that the target face image is verified.
In a specific implementation, the preset threshold and the preset area value may be set by the user or by system default, and the preset face template may be stored in the electronic device in advance. The electronic device may perform region segmentation on the target face image to obtain a target face region, i.e. a region that contains only the face and no background. Binarization processing may then be performed on the target face region to obtain a binarized face image, which reduces the image complexity. The binarized face image is divided into a plurality of regions of equal area, each larger than the preset area value. Further, feature point extraction may be performed on the binarized face image to obtain a plurality of feature points; the feature extraction algorithm may be at least one of the following: scale-invariant feature transform (SIFT), SURF, pyramid-based features, Harris corner detection and the like, which are not limited herein.
Further, the electronic device may determine, according to the plurality of feature points, the feature point distribution density of each of the plurality of regions to obtain a plurality of feature point distribution densities, and determine a target mean square error from these densities. The electronic device may store in advance a mapping relationship between mean square error and quality evaluation value, and determine the target quality evaluation value corresponding to the target mean square error according to this mapping; the smaller the mean square error, the larger the quality evaluation value. When the target quality evaluation value is greater than the preset quality evaluation value, the target face image is matched directly with the preset face template; when the matching value between them is greater than the preset threshold, the target face image is determined to have passed verification, and otherwise it is determined to have failed verification.
Further, when the target quality evaluation value is smaller than the preset quality evaluation value, the terminal may perform image enhancement processing on the target face image and match the enhanced target face image with the preset face template; if the matching value is greater than the preset threshold, the target face image is determined to have passed verification, and otherwise it is determined to have failed verification.
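A minimal sketch of this quality check and matching flow (steps E22 to E29) is shown below. ORB is used as a stand-in feature detector (the application lists SIFT, SURF, pyramid features and Harris corners), and both the mapping from mean square error to quality value and the correlation-based template matcher are assumptions, since the application does not define them.

import cv2
import numpy as np

def quality_from_keypoint_spread(face_gray, grid=4):
    # E22-E27 sketch: binarize the target face region, split it into equal
    # regions, count feature points per region, and turn the spread (mean
    # square error of the region densities) into a quality evaluation value.
    _, binary = cv2.threshold(face_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # E22
    orb = cv2.ORB_create()                                          # stand-in detector (E24)
    keypoints = orb.detect(binary, None)

    h, w = binary.shape
    counts = np.zeros((grid, grid), dtype=np.float64)               # E23: equal-area regions
    for kp in keypoints:
        x, y = kp.pt
        counts[min(int(y * grid / h), grid - 1),
               min(int(x * grid / w), grid - 1)] += 1

    densities = counts / ((h / grid) * (w / grid))                  # E25: per-region density
    mse = float(np.mean((densities - densities.mean()) ** 2))       # E26: target mean square error
    return 1.0 / (1.0 + mse)                                        # E27: assumed monotone mapping

def verify_face(face_gray, template_gray, quality_floor=0.5, match_threshold=0.8,
                enhance=lambda img: img):
    # E28/E29 sketch: enhance first if the quality value is low, then match the
    # face against the preset face template; the correlation matcher is assumed.
    if quality_from_keypoint_spread(face_gray) < quality_floor:
        face_gray = enhance(face_gray)                              # e.g. the JND enhancement above
    probe = cv2.resize(face_gray, template_gray.shape[::-1]).astype(np.float64)
    ref = template_gray.astype(np.float64)
    match = float(np.corrcoef(probe.ravel(), ref.ravel())[0, 1])    # normalized correlation
    return match > match_threshold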
Referring to fig. 3, the present application also provides a terminal, including:
an acquisition unit 301 for acquiring an original image of a target object;
the processing unit 302 is configured to perform contour recognition on the original image to obtain a face image of the target object; perform enhancement processing on the face image to obtain a target image, and perform face recognition processing on the target image to obtain a first identity of the target object; and extract medical registration reservation information from the log information, the medical registration reservation information comprising: a reserved patient, a reservation time and a reserved doctor;
and the communication unit is configured to initiate a telemedicine request to the reserved doctor to establish telemedicine communication when the current time reaches the reservation time and the first identity is determined to be the reserved patient.
For the specific implementation of the processing unit, reference may be made to the refinement of step S202 shown in fig. 2, which is not repeated here.
The embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (6)

1. A telemedicine method, comprising the steps of:
a terminal obtains an original image of a target object, and performs contour recognition on the original image to obtain a face image of the target object;
the terminal performs enhancement processing on the face image to obtain a target image, and performs face recognition processing on the target image to obtain a first identity of the target object;
the terminal extracts medical registration reservation information from log information, the medical registration reservation information comprising: a reserved patient, a reservation time and a reserved doctor;
and when the terminal determines that the current time has reached the reservation time and that the first identity is the reserved patient, the terminal initiates a telemedicine request to the reserved doctor to establish telemedicine communication.
2. The method according to claim 1, wherein the step of the terminal performing enhancement processing on the face image to obtain the target image specifically comprises:
the terminal calculates a corresponding JND (just noticeable difference) for each pixel point of the face image, divides the pixel points of the face image into a first pixel point set and a second pixel point set according to the JND, performs image enhancement processing on the first pixel point set according to formula 1 to obtain a first enhanced pixel point set, and performs image enhancement processing on the second pixel point set according to formula 2 to obtain a second enhanced pixel point set; and combines the first enhanced pixel point set and the second enhanced pixel point set to obtain the target image;
[Formula 1, shown as an image in the original claim, defines the compensated target gray value TG(x, y) in terms of OG(x, y), k and JND(i).]
where JND(i) denotes the JND at background gray value i, k is a target contrast-resolution compensation scaling factor (which may be set according to user experience), OG(x, y) denotes the original gray value before compensation at coordinates (x, y) of the face image, and TG(x, y) denotes the target gray value after compensation at coordinates (x, y) of the face image; k is a real number greater than 0.
TG(x, y) = TG(x, y)_th + a × [OG(x, y) - TG(x, y)_th]    Formula 2
wherein TG(x, y)_th is the compensated target gray value at the threshold, and a is a target linear stretching adjustment coefficient;
[The JND formula, shown as an image in the original claim, defines JND(i) as a piecewise function of the background gray value i.]
where i is the background gray value.
3. The method of claim 1, wherein the performing the face recognition processing on the target image to obtain the first identity of the target object specifically comprises:
acquiring a target face image of a target image;
verifying the target face image;
and when the target face image passes the verification, determining that the target object has the first identity corresponding to a preset face template.
4. The method according to claim 3, wherein the verifying the target face image specifically comprises:
performing region segmentation on the target face image to obtain a target face region, wherein the target face region is a region image only of a face;
carrying out binarization processing on the target face area to obtain a binarized face image;
dividing the binarized face image into a plurality of regions, wherein the regions have equal areas and each area is larger than a preset area value;
extracting feature points of the binarized face image to obtain a plurality of feature points;
determining the distribution density of the characteristic points corresponding to each of the plurality of areas according to the plurality of characteristic points to obtain a plurality of distribution densities of the characteristic points;
determining a target mean square error according to the distribution densities of the plurality of feature points;
determining a target quality evaluation value corresponding to the target mean square error according to a preset mapping relation between the mean square error and the quality evaluation value;
when the target quality evaluation value is smaller than the preset quality evaluation value, performing image enhancement processing on the target face image, and matching the target face image subjected to the image enhancement processing with a preset face template to obtain a matching value;
and when the matching value is larger than a preset threshold value, determining that the target face image is verified to be passed.
5. A terminal, characterized in that the terminal comprises:
an acquisition unit configured to acquire an original image of a target object;
the processing unit is configured to perform contour recognition on the original image to obtain a face image of the target object; perform enhancement processing on the face image to obtain a target image, and perform face recognition processing on the target image to obtain a first identity of the target object; and extract medical registration reservation information from log information, the medical registration reservation information comprising: a reserved patient, a reservation time and a reserved doctor;
and the communication unit is configured to initiate a telemedicine request to the reserved doctor to establish telemedicine communication when the current time reaches the reservation time and the first identity is determined to be the reserved patient.
6. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-4.
CN202010476964.9A 2020-05-29 2020-05-29 Telemedicine method and system Withdrawn CN111599460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010476964.9A CN111599460A (en) 2020-05-29 2020-05-29 Telemedicine method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010476964.9A CN111599460A (en) 2020-05-29 2020-05-29 Telemedicine method and system

Publications (1)

Publication Number Publication Date
CN111599460A true CN111599460A (en) 2020-08-28

Family

ID=72187105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010476964.9A Withdrawn CN111599460A (en) 2020-05-29 2020-05-29 Telemedicine method and system

Country Status (1)

Country Link
CN (1) CN111599460A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183293A (en) * 2020-09-23 2021-01-05 深圳市奋达智能技术有限公司 Body temperature detection method and device of wearable device
CN112418996A (en) * 2020-11-30 2021-02-26 珠海采筑电子商务有限公司 Recommendation method and system for elevator suppliers
CN112992305A (en) * 2021-03-15 2021-06-18 深圳市南山区慢性病防治院 Health record management system and method
CN113362429A (en) * 2021-08-09 2021-09-07 景昱医疗器械(长沙)有限公司 Image processing apparatus, method, device, and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101957985A (en) * 2010-10-15 2011-01-26 重庆医科大学 Automatic self-adaptive optimum compensation method of human vision contrast resolution
CN102186054A (en) * 2011-03-25 2011-09-14 谢丹玫 Adaptive bottom video online mining system and method based on contrast resolution compensation
KR20130118510A (en) * 2012-04-20 2013-10-30 경희대학교 산학협력단 A system and the method for providing medical picture conversation
CN106295103A (en) * 2015-05-28 2017-01-04 青岛海尔智能技术研发有限公司 A kind of tele-medicine scheduled visits method and system
CN110808041A (en) * 2019-09-24 2020-02-18 深圳市火乐科技发展有限公司 Voice recognition method, intelligent projector and related product
CN111163442A (en) * 2019-12-27 2020-05-15 咻享智能(深圳)有限公司 Route planning method and related device for wireless Internet of things

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101957985A (en) * 2010-10-15 2011-01-26 重庆医科大学 Automatic self-adaptive optimum compensation method of human vision contrast resolution
CN102186054A (en) * 2011-03-25 2011-09-14 谢丹玫 Adaptive bottom video online mining system and method based on contrast resolution compensation
KR20130118510A (en) * 2012-04-20 2013-10-30 경희대학교 산학협력단 A system and the method for providing medical picture conversation
CN106295103A (en) * 2015-05-28 2017-01-04 青岛海尔智能技术研发有限公司 A kind of tele-medicine scheduled visits method and system
CN110808041A (en) * 2019-09-24 2020-02-18 深圳市火乐科技发展有限公司 Voice recognition method, intelligent projector and related product
CN111163442A (en) * 2019-12-27 2020-05-15 咻享智能(深圳)有限公司 Route planning method and related device for wireless Internet of things

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨桦 等 (Yang Hua et al.): 《低照度图像的分段非线性化处理算法》 [Piecewise nonlinear processing algorithm for low-illuminance images], 《河南理工大学学报(自然科学版)》 [Journal of Henan Polytechnic University (Natural Science)] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183293A (en) * 2020-09-23 2021-01-05 深圳市奋达智能技术有限公司 Body temperature detection method and device of wearable device
CN112418996A (en) * 2020-11-30 2021-02-26 珠海采筑电子商务有限公司 Recommendation method and system for elevator suppliers
CN112992305A (en) * 2021-03-15 2021-06-18 深圳市南山区慢性病防治院 Health record management system and method
CN113362429A (en) * 2021-08-09 2021-09-07 景昱医疗器械(长沙)有限公司 Image processing apparatus, method, device, and readable storage medium

Similar Documents

Publication Publication Date Title
CN111599460A (en) Telemedicine method and system
CN110286738B (en) Fingerprint acquisition method and related product
US10061969B2 (en) Fingerprint unlocking method and terminal
WO2020034710A1 (en) Fingerprint recognition method and related product
CN110706179B (en) Image processing method and electronic equipment
CN108427873B (en) Biological feature identification method and mobile terminal
CN107480496A (en) Solve lock control method and Related product
CN107749046B (en) Image processing method and mobile terminal
CN110688973A (en) Equipment control method and related product
EP4206983A1 (en) Fingerprint identification method and electronic device
CN111881813B (en) Data storage method and system of face recognition terminal
WO2022247762A1 (en) Electronic device, and fingerprint unlocking method and fingerprint unlocking apparatus therefor
CN110298274B (en) Optical fingerprint parameter upgrading method and related product
CN110245483B (en) Biometric identification method and related product
CN108647566B (en) Method and terminal for identifying skin characteristics
WO2021164730A1 (en) Fingerprint image processing method and electronic device
CN110221696B (en) Eyeball tracking method and related product
CN110263757B (en) Fingerprint identification method and related product
CN109104522B (en) Face recognition method and mobile terminal
CN110244848B (en) Reading control method and related equipment
CN109819331B (en) Video call method, device and mobile terminal
CN111489395B (en) Image signal direction judging method and related equipment
CN114140655A (en) Image classification method and device, storage medium and electronic equipment
CN111145083B (en) Image processing method, electronic equipment and computer readable storage medium
CN110278305B (en) Pattern recognition method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210428

Address after: 3 / F, building 5, Changyuan new material port engineering technology center, Gaoxin Middle Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Jieda Medical Instrument Co.,Ltd.

Address before: 410000 Hunan province Changsha Kaifu District, Xiangya Road No. 26

Applicant before: Zhan Junkun

TA01 Transfer of patent application right
WW01 Invention patent application withdrawn after publication

Application publication date: 20200828

WW01 Invention patent application withdrawn after publication