CN115414025A - Screening method, apparatus, storage medium, and program product


Info

Publication number: CN115414025A
Application number: CN202110603113.0A
Authority: CN (China)
Prior art keywords: user, interactive, expectoration, information, sputum
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 许培达, 李靖
Current assignee: Huawei Technologies Co., Ltd.
Original assignee: Huawei Technologies Co., Ltd.
Application filed by: Huawei Technologies Co., Ltd.
Priority application: CN202110603113.0A
Related PCT application: PCT/CN2022/085099 (published as WO2022252803A1)

Classifications

    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
            • A61B 5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
              • A61B 5/0823 Detecting or evaluating cough events
            • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
              • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
                • A61B 5/113 Measuring movement of the entire body or parts thereof occurring during breathing
            • A61B 5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
              • A61B 5/6801 Arrangements specially adapted to be attached to or worn on the body surface
                • A61B 5/6802 Sensor mounted on worn items
                  • A61B 5/6803 Head-worn items, e.g. helmets, masks, headphones or goggles
                  • A61B 5/681 Wristwatch-type devices
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
                • G06F 3/0484 Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                  • G06F 3/04842 Selection of displayed objects or displayed text elements
                • G06F 3/0487 Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
                  • G06F 3/0488 Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Pathology (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physiology (AREA)
  • Pulmonology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Telephone Function (AREA)

Abstract

The application discloses a screening method, device, storage medium, and program product, belonging to the field of breath analysis. The method includes: upon detecting a cough sound, sensing an interactive operation of a user through an interaction device; obtaining expectoration information corresponding to the interactive operation; and outputting reminder information according to the expectoration information. The detected cough sound serves as a trigger to acquire expectoration information, which then assists respiratory infection screening. With the embodiments of the application, expectoration information can be acquired in a timely manner, screening accuracy is improved, and operability is high.

Description

Screening method, apparatus, storage medium, and program product
Technical Field
The present application relates to the field of breath analysis, and more particularly, to a screening method, apparatus, storage medium, and program product.
Background
Respiratory tract infection is a common and frequently occurring disease with complex etiologies and a year-by-year rising incidence; lower respiratory tract infections are the most serious and carry a high mortality rate. COVID-19 (novel coronavirus pneumonia) is a type of lower respiratory tract infection; owing to its infectivity and harmfulness, its prevention and control are critical, making respiratory infection screening highly significant.
Existing screening for infectious diseases typically requires a blood test or testing of another body fluid (e.g., saliva), which may take several hours to perform, and typically needs to be carried out at a central location where the blood analysis equipment is located. Conventional screening techniques are also not suitable for individuals to perform their own screening tests, requiring the collection, handling, transport, and analysis of samples by care workers. There is thus a great unmet demand for better screening procedures, especially during outbreaks and epidemics of disease.
Disclosure of Invention
The present application provides a screening method suitable for self-screening by non-professionals, offering both timeliness and broad applicability.
In a first aspect, the present application provides a screening method comprising: when a cough sound is detected, sensing an interactive operation of a user through an interaction device; acquiring expectoration information corresponding to the interactive operation; and outputting reminder information according to the expectoration information.
Here, a cough is a common respiratory symptom caused by inflammation, foreign matter, or physical or chemical stimulation of the trachea, bronchial mucosa, or pleura. During a cough the glottis closes, the respiratory muscles contract, and intrapulmonary pressure rises; the glottis then opens and the air in the lungs is expelled, usually accompanied by a sound. The cough sound can therefore be recorded when a user coughs.
In this embodiment of the application, the detected cough sound serves as a trigger to acquire expectoration information, which in turn assists respiratory infection screening. Expectoration information can thus be acquired in a timely manner, screening accuracy is improved, and the method is highly operable. It is suitable for a non-professional to perform the test on himself or herself and obtain results quickly. The user can optionally perform a rapid off-line analysis of the sample, which avoids cross-contamination between test subjects, protects health care workers from infection, and is convenient to reuse, as when screening large populations or repeating tests on a single subject.
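For illustration, a minimal Python sketch of this trigger flow follows; the energy-threshold detector, the fixed questionnaire answers, and the reminder wording are invented placeholders for the device's actual audio classifier, interaction device, and notification output, not the patent's design:

    # Minimal sketch of the cough-triggered screening flow; all names and
    # thresholds are illustrative placeholders.

    def detect_cough(frame: list, threshold: float = 0.5) -> bool:
        # Placeholder detector: flag frames with high short-time energy.
        energy = sum(x * x for x in frame) / len(frame)
        return energy > threshold

    def collect_expectoration_info() -> dict:
        # Stands in for sensing the user's interactive operation
        # (a voice dialog or on-screen options).
        return {"frequency": "occasional", "color": "yellow", "viscosity": "thick"}

    def build_reminder(info: dict) -> str:
        # Invented mapping from sputum features to reminder text.
        risky = info["color"] in ("yellow", "green") or info["viscosity"] == "thick"
        return ("Sputum features may indicate infection; consider a check-up."
                if risky else "No abnormal sputum features were reported.")

    audio_frames = [[0.9, -0.8, 0.7], [0.01, 0.02, -0.01]]  # toy audio frames
    for frame in audio_frames:
        if detect_cough(frame):  # the cough sound acts as the trigger
            print(build_reminder(collect_expectoration_info()))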
Optionally, the interaction device includes a voice interaction device, and sensing the user's interactive operation through the interaction device includes: outputting first voice data through the voice interaction device, where the first voice data is used to interact with the user; and receiving an interactive operation input by the user after the voice interaction device outputs the first voice data, where the interactive operation includes voice input.
Optionally, the first voice data includes guide phrasing set according to the expectoration information, the guide phrasing including: universal-answer guided dialogs, verification guided dialogs, selective guided dialogs, and query guided dialogs.
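By way of a hedged illustration only, the four guided-dialog types might correspond to prompts such as the following; the wording is invented, not taken from the application:

    # Invented example prompts for the four guided-dialog types; in practice
    # the guide phrasing would be configured according to the expectoration
    # information being sought.
    GUIDE_PROMPTS = {
        "universal_answer": "Did you cough up any sputum just now?",
        "verification":     "You reported yellow sputum earlier; is that still the case?",
        "selective":        "Was the sputum white, yellow, or blood-tinged?",
        "query":            "Please describe the color and thickness of the sputum.",
    }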
Optionally, the interaction device may further include a display device; the display device displays text data converted from the first voice data together with interactive elements; and the interactive operation further includes a touch operation acting on an interactive element.
Optionally, the interaction device includes a display device, and sensing the user's interactive operation through the interaction device includes: presenting an interactive interface through the display device, the interactive interface including an interactive element for interacting with the user; and receiving the interactive operation input by the user on the interactive interface.
Optionally, the interactive element comprises a first option set according to expectoration information; the interaction operation comprises a touch operation on the first option.
Optionally, the interactive element further includes a sputum picture and an interactive control associated with the sputum picture, and the interactive operation includes a touch operation acting on the interactive control.
Optionally, the interactive element includes a viewfinder for capturing a picture and a shooting control, and the interactive operation includes a touch operation applied to the shooting control.
Optionally, the expectoration information includes: expectoration frequency, sputum color, and sputum viscosity.
Optionally, outputting the reminder information according to the expectoration information includes: acquiring physiological indexes of the user; and outputting the reminder information according to the physiological indexes and the expectoration information.
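A minimal sketch of such a combined decision follows; the thresholds and the two-sign rule are assumptions for illustration, not values disclosed in the application:

    # Illustrative decision combining physiological indexes with expectoration
    # information; thresholds and the two-sign rule are assumed.

    def should_remind(heart_rate: float, body_temp_c: float, spo2: float,
                      sputum_color: str) -> bool:
        signs = 0
        signs += heart_rate > 100                       # resting tachycardia
        signs += body_temp_c >= 37.3                    # low-grade fever or above
        signs += spo2 < 94.0                            # reduced oxygen saturation (%)
        signs += sputum_color in ("yellow", "green", "rusty")
        return signs >= 2                               # remind on two or more signs

    print(should_remind(heart_rate=108, body_temp_c=37.8,
                        spo2=96.0, sputum_color="yellow"))  # True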
In a second aspect, the present application further provides an electronic device, where the structure of the electronic device includes a processor and a memory, where the memory is used to store a program that supports the electronic device to execute the screening method provided in the first aspect and the optional implementation manner thereof, and to store data involved in implementing the screening method provided in the first aspect and the optional implementation manner thereof. The processor executes the program stored in the memory to perform the method provided by the foregoing first aspect and its optional implementation. The electronic device may further comprise a communication bus for establishing a connection between the processor and the memory.
In a third aspect, the present application also provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the screening method of the first aspect and its optional implementation.
In a fourth aspect, the present application also provides a computer program product comprising computer program code which, when executed by a computer, causes the computer to perform the screening method of the first aspect and its optional implementation.
The technical effects obtained by the second, third and fourth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
The beneficial effects brought by the technical solutions provided in this application include at least the following:
The detected cough sound serves as a trigger to acquire expectoration information, which in turn assists respiratory infection screening; expectoration information can thus be acquired in a timely manner, screening accuracy is improved, and operability is high. The method is suitable for a non-professional to perform the test on himself or herself and obtain results quickly. The user can optionally perform a rapid off-line analysis of the sample, which avoids cross-contamination between test subjects, protects health care workers from infection, and is convenient to reuse, as when screening large populations or repeating tests on a single subject.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a mobile phone according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a smart watch provided in an embodiment of the present application;
FIG. 4 is a schematic flow chart of a screening method provided by an embodiment of the present application;
fig. 5A to 5E are schematic diagrams illustrating a cough sound recording according to an embodiment of the present application;
FIGS. 6A-6F are schematic diagrams of an interactive interface provided by an embodiment of the present application;
FIGS. 7A-7C are schematic diagrams of another interactive interface provided by embodiments of the present application;
8A-8C are schematic diagrams of another interactive interface provided by embodiments of the present application;
FIG. 9A is a schematic diagram of a prompt message provided in an embodiment of the present application;
fig. 9B is a schematic diagram of another prompt message provided in the embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, an application scenario related to the embodiments of the present application will be described.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 may include: one or more sound pickup apparatuses 10, one or more interaction apparatuses 11, one or more processors 12, a memory 13, one or more application programs (not shown), and one or more computer programs 14, which may be connected via one or more communication buses.
The sound pickup apparatus 10 is used for picking up audio and may be a microphone array composed of a plurality of microphones; physically, a microphone array is a system of multiple acoustic sensors (typically microphones) used to sample and process the spatial characteristics of a sound field.
The interaction device 11 is used for interacting with a user to realize human-computer interaction. The interaction device 11 may be a voice interaction device, such as a home voice assistant or a smart speaker; a visual interaction device, such as a smart screen or a smart television; or a smart appliance, such as a smart air conditioner or a smart washing machine. Besides these devices suited to home scenarios, it may also be an interaction device 11 suited to work scenarios such as factories, enterprises, or hospitals; no limitation is imposed here. A voice interaction device interacts through speech, while a visual interaction device interacts through an interactive interface.
The one or more computer programs 14 are stored in the memory 13 and configured to be executed by the one or more processors 12, the one or more computer programs 14 comprising instructions that may be used to perform the steps in the embodiments described below. For all relevant contents of those steps, reference may be made to the functional description of the corresponding entity device, which is not repeated here. The instructions constitute a software product that is loaded into the memory 13. When executed, the instructions cause the electronic device 100 to perform a screening operation based on the cough sound, in particular the method provided by the embodiments described below. It will be appreciated that the software product is programmed directly according to the method of the embodiments of the present application.
The above-described electronic apparatus 100 may be an apparatus for screening respiratory infections, and may be installed in an electronic device such as a mobile phone, a smart watch, a bracelet, a tablet PC, a desktop computer, a laptop computer, etc., or in a medical device used in a dedicated medical institution. In addition, the apparatus for screening for respiratory infection may be manufactured as a separate hardware device, such as a wearable device worn on a subject, and examples of the wearable device include wrist-watch type wearable device, bracelet type wearable device, wrist-band type wearable device, loop type wearable device, glasses type wearable device, headband type wearable device, and the like, but the wearable device is not limited thereto.
Taking an electronic device as a mobile phone as an example, please refer to fig. 2, and fig. 2 is a schematic structural diagram of a mobile phone according to an embodiment of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through the I2S interface, so as to implement a function of receiving a call through a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The internal memory 121 may include one or more Random Access Memories (RAMs) and one or more non-volatile memories (NVMs).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM), such as fifth generation DDR SDRAM generally referred to as DDR5 SDRAM, and the like;
the nonvolatile memory may include a magnetic disk storage device, flash memory (flash memory).
The flash memory may include NOR flash, NAND flash, 3D NAND flash, etc. according to the operating principle; single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), etc. according to the level order of the memory cell; and universal flash storage (UFS), embedded multimedia memory card (eMMC), etc. according to the storage specification.
The random access memory may be read and written directly by the processor 110, may be used to store executable programs (e.g., machine instructions) of an operating system or other programs in operation, and may also be used to store data of users and applications, etc.
The nonvolatile memory may also store executable programs, data of users and application programs, and the like, and may be loaded in advance into the random access memory for the processor 110 to directly read and write.
The external memory interface 120 may be used to connect an external nonvolatile memory to extend the storage capability of the electronic device 100. The external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are saved in an external nonvolatile memory.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into analog audio signals for output, and also used to convert analog audio inputs into digital audio signals. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal to the microphone 170C by uttering a voice signal close to the microphone 170C through the mouth of the user. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many kinds of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations applied to the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon, an instruction to create a new short message is executed.
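A minimal sketch of this two-threshold mapping, assuming a normalized pressure reading; the threshold value and the instruction names are placeholders:

    # Two-threshold touch mapping following the example above; the threshold
    # and returned instruction names are placeholders.
    FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized pressure value

    def handle_message_icon_touch(pressure: float) -> str:
        if pressure < FIRST_PRESSURE_THRESHOLD:
            return "view_message"   # lighter press: view the short message
        return "new_message"        # firmer press: create a new short message

    print(handle_message_icon_touch(0.3))  # view_message
    print(handle_message_icon_touch(0.8))  # new_message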
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover via the magnetic sensor 180D. Features such as automatic unlocking upon flip opening can then be set according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for identifying the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and the like.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, the electronic device 100 may utilize the distance sensor 180F to range to achieve fast focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from a nearby object. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid abnormal shutdown of the electronic device 100 due to low temperature. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
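A sketch of such a temperature processing strategy follows; the three threshold values are assumptions for illustration, not values from the application:

    # Sketch of the temperature processing strategy described above; the
    # thresholds are assumed values.
    OVERHEAT_C = 45.0            # above this: throttle the nearby processor
    HEAT_BATTERY_BELOW_C = 0.0   # below this: heat battery 142
    BOOST_BELOW_C = -10.0        # below this: also boost battery output voltage

    def thermal_policy(temp_c: float) -> list:
        actions = []
        if temp_c > OVERHEAT_C:
            actions.append("reduce performance of processor near sensor 180J")
        if temp_c < HEAT_BATTERY_BELOW_C:
            actions.append("heat battery 142")
        if temp_c < BOOST_BELOW_C:
            actions.append("boost battery output voltage")
        return actions

    print(thermal_policy(50.0))   # throttling only
    print(thermal_policy(-15.0))  # heat battery and boost voltage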
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch screen includes the touch sensor 180K and the display screen 194, which is also referred to as a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone block vibrated by the sound part obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
In this embodiment of the application, sensors may be added to the electronic device 100 to obtain physiological data, from which physiological indexes of the user can be measured; the physiological indexes include heart rate, body temperature, blood oxygen, respiration rate, and the like.
In one possible implementation, the electronic device 100 includes a blood oxygen module comprising at least two red light sensors, at least two infrared sensors, and a green light sensor. The red light sensors acquire at least two red light signals; the infrared sensors acquire at least two infrared signals; and the green light sensor acquires a green light signal. The processor determines red light direct-current (DC) data and the component signals of the red light alternating-current (AC) signal from the at least two red light signals, and determines infrared DC data and the component signals of the infrared AC signal from the at least two infrared signals; the component signals include arterial signals. The processor further determines red light AC data from the component signals of the red light AC signal and the green light signal, and infrared AC data from the component signals of the infrared AC signal and the green light signal. Finally, the processor determines the blood oxygen saturation based on the red light DC data, the red light AC data, the infrared DC data, and the infrared AC data.
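The DC/AC decomposition described above matches the classical ratio-of-ratios approach to pulse oximetry; a simplified sketch under that assumption follows (the linear calibration constants are common textbook defaults, not values from this application):

    def spo2_ratio_of_ratios(red_dc: float, red_ac: float,
                             ir_dc: float, ir_ac: float,
                             a: float = 110.0, b: float = 25.0) -> float:
        # Classical ratio-of-ratios: R = (AC_red / DC_red) / (AC_ir / DC_ir),
        # then SpO2 (%) is approximately a - b * R. The constants a and b
        # come from per-device calibration; these defaults are textbook
        # values only.
        r = (red_ac / red_dc) / (ir_ac / ir_dc)
        return a - b * r

    print(spo2_ratio_of_ratios(red_dc=1.0, red_ac=0.012,
                               ir_dc=1.0, ir_ac=0.024))  # 97.5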
In one possible implementation, the electronic device 100 includes a heart rate module that, during heart rate measurement, collects data on how the user's blood vessels change with the pulse, for the processor to calculate a heart rate value through a heart rate algorithm. The heart rate module may include a control chip implementing the heart rate algorithm, a PPG sensor, an acceleration sensor, and the like.
In one possible implementation, the electronic device 100 is provided with a photoplethysmography (PPG) sensor to measure heart rate and an electrocardiogram (ECG) sensor to record an electrocardiogram.
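As an illustration of how a heart rate value can be calculated from a PPG waveform, a simple peak-interval sketch follows; a production heart rate algorithm would be considerably more robust, e.g. using the acceleration sensor to reject motion artifacts:

    import math

    def heart_rate_bpm(ppg: list, fs: float) -> float:
        # Count local maxima above the signal mean as pulse peaks, then
        # convert the mean peak-to-peak interval to beats per minute.
        mean = sum(ppg) / len(ppg)
        peaks = [i for i in range(1, len(ppg) - 1)
                 if ppg[i] > mean and ppg[i - 1] < ppg[i] >= ppg[i + 1]]
        if len(peaks) < 2:
            return 0.0
        intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
        return 60.0 / (sum(intervals) / len(intervals))

    # Synthetic 1.2 Hz pulse sampled at 50 Hz, i.e. about 72 bpm
    ppg = [math.sin(2 * math.pi * 1.2 * n / 50.0) for n in range(250)]
    print(round(heart_rate_bpm(ppg, fs=50.0)))  # ~72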
In one possible implementation, the electronic device 100 includes a temperature sensor by which the user's body temperature is measured.
In one possible implementation, the electronic device 100 may use a standard non-contact microphone to generate a raw signal representing the airflow sounds of breathing, analyze the raw signal to determine one or more breathing parameters of a first subset, and derive one or more estimated breathing parameters of a second subset that are not normally directly detectable in the raw signal. The first subset includes the active breath time (duration of active exhalation) and the breath period (time between successive breaths); the second subset includes the inspiration time. From these, the breathing rate, i.e. the breathing frequency, can be calculated.
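A small sketch of the derivation step, assuming breath onsets have already been located in the raw microphone signal; the onset times here are synthetic:

    def breathing_rate_bpm(breath_onsets_s: list) -> float:
        # Breath period = time between successive breaths; the breathing
        # frequency (rate) follows directly from the mean period.
        periods = [b - a for a, b in zip(breath_onsets_s, breath_onsets_s[1:])]
        return 60.0 / (sum(periods) / len(periods))

    def estimated_inspiration_s(period_s: float, active_exhale_s: float) -> float:
        # Second-subset parameters are derived rather than directly detected:
        # here inspiration time is estimated from the measured period and the
        # active breath (exhalation) time, ignoring any pause between breaths.
        return period_s - active_exhale_s

    onsets = [0.0, 4.1, 8.0, 12.2, 16.0]          # one breath every ~4 s
    print(round(breathing_rate_bpm(onsets), 1))    # 15.0 breaths per minute
    print(estimated_inspiration_s(4.0, 2.5))       # 1.5 s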
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration prompts as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
Taking a smart watch as an example of the electronic device, please refer to fig. 3, which is a schematic structural diagram of a smart watch according to an embodiment of the present application.
As shown in fig. 3, the smart watch may include: a processor 201, a memory 202, a sensor 203, at least one receiver 204, at least one microphone 205, a power source 206, and a bluetooth device 207.
The memory 202 may be used to store application program code, such as code for pairing the smart watch with the second electronic device. The processor 201 may execute the application program code implementing the screening method of the embodiments of the present application, so that the smart watch performs the respiratory infection screening function described herein.
The memory 202 may also store a bluetooth address used to uniquely identify the smart watch. Additionally, the memory 202 may store connection data for electronic devices that have previously been successfully paired with the smart watch; for example, the connection data may be the bluetooth address of such a device. Based on the connection data, the smart watch can automatically pair with that electronic device without configuring the connection again, for example without repeating legitimacy verification. The bluetooth address may be a Media Access Control (MAC) address.
The sensors 203 may include a distance sensor, a proximity light sensor, a gravity sensor, a gyroscope, and/or the like. The processor 201 of the smart watch may determine through the sensors 203 whether the watch is worn by the user, and may also recognize certain actions of the user while the watch is worn, such as a wrist-raising action. For example, the processor 201 may use the proximity light sensor to detect whether an object is near the smart watch and thereby determine whether the watch is being worn; upon determining that the smart watch is worn, the processor 201 may turn on the receiver 204, as sketched below. In some embodiments, the smart watch may further include a bone conduction sensor, which may be integrated into a bone conduction headset. The bone conduction sensor can acquire the vibration signal of the bone that vibrates when the user speaks, and the processor 201 parses the corresponding voice signal to carry out the matching control function. In other embodiments, the smart watch may further include a touch sensor or a pressure sensor for detecting the user's touch and press operations, respectively. In other embodiments, the smart watch may further include a fingerprint sensor for detecting the user's fingerprint, identifying the user, and the like. In other embodiments, the smart watch may further include an ambient light sensor, with which the processor 201 may adaptively adjust certain parameters, such as volume, according to the sensed brightness of the ambient light.
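A minimal sketch of the worn-detection logic just described, assuming a hypothetical `watch` handle that exposes receiver controls; a real implementation would likely combine several of the sensors 203.

```python
def update_receiver_state(is_object_near: bool, watch) -> None:
    """Keep the receiver 204 on only while the watch appears to be worn.

    `is_object_near` would come from the proximity light sensor, and
    `watch` is a hypothetical handle exposing receiver controls.
    """
    if is_object_near:
        watch.enable_receiver()   # worn: receiver may play audio
    else:
        watch.disable_receiver()  # not worn: keep the receiver off
```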
The bluetooth apparatus 207 is used to establish a bluetooth connection with another electronic device (e.g., the second electronic device), so that short-distance data interaction can be performed between the smart watch and the other electronic device.
The receiver 204, which may also be referred to as an "earpiece," may be used to convert an audio electrical signal into a sound signal and play it. For example, when the smart watch serves as the audio output device of the second electronic device, the receiver 204 may convert a received audio electrical signal into a sound signal and play it.
The microphone 205, which may also be referred to as a "mic," is used to convert sound signals into audio electrical signals. For example, when the smart watch serves as the audio input device of the second electronic device and the user speaks (e.g., makes a call or sends a voice message), the microphone 205 may collect the user's voice signal and convert it into an audio electrical signal.
A power supply 206 may be used to supply power to the various components included in the smart watch. In some embodiments, the power source 206 may be a battery, such as a rechargeable battery.
In the embodiments of the present application, sensors may be added to the smart watch to acquire physiological data, from which the user's physiological indexes can be measured; the physiological indexes include heart rate, body temperature, blood oxygen, respiratory rate, and the like.
It is to be understood that the illustrated structure of this embodiment does not constitute a specific limitation on the smart watch. It may have more or fewer components than shown in fig. 3, may combine two or more components, or may have a different configuration of components. For example, the smart watch may further include an indicator light (which may indicate a status such as battery level), a dust screen (which may be used with the earpiece), and other components. The components shown in fig. 3 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing or application-specific integrated circuits.
The screening method provided in the embodiments of the present application will be described next.
Flow charts are used herein to illustrate the operations performed by an apparatus according to the embodiments of the present application. It should be understood that these operations are not necessarily performed in the exact order shown. Rather, the various steps may be processed in a different order or simultaneously, as appropriate. Meanwhile, other operations may be added to these flows, or one or more steps may be removed from them.
Referring to fig. 4, fig. 4 is a schematic flow chart of a screening method according to an embodiment of the present disclosure. The screening method may be performed by an electronic device, the method comprising the steps of:
Step S401: record the cough sound.
In this embodiment, taking an electronic device that includes a smart watch and/or a mobile phone as an example, a sound pickup device (e.g., a microphone) on the smart watch or the mobile phone may collect the sound signal of the external environment, analyze the sound characteristics of the collected signal, and record the cough sound. The environmental sound signal may be collected through a microphone configured in the electronic device and processed by a processor configured in the electronic device; it may also be collected by a microphone in a headset and processed by a processor in the headset. This application does not specifically limit this. When processing the environmental sound signal, a speech processing algorithm may be used to recognize the collected environmental sound, identify the cough sound within it, and extract that cough sound; the details are not repeated here, but a minimal sketch follows.
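A minimal sketch of extracting cough sounds from ambient audio, assuming energy-based burst detection followed by an injected cough/non-cough classifier. The frame length and threshold are illustrative choices; the patent leaves the speech processing algorithm unspecified.

```python
import numpy as np

def extract_cough_segments(audio: np.ndarray, fs: int, is_cough) -> list:
    """Find loud bursts in ambient audio and keep those the classifier
    accepts as cough. `is_cough(burst, fs) -> bool` stands in for any
    trained cough/non-cough model."""
    frame = int(0.05 * fs)  # 50 ms analysis frames (illustrative)
    n = len(audio) // frame
    energy = np.array([np.sum(audio[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    threshold = energy.mean() + 2 * energy.std()  # loud-burst threshold
    segments, i = [], 0
    while i < n:
        if energy[i] > threshold:          # a burst starts here
            j = i
            while j < n and energy[j] > threshold:
                j += 1                     # extend to the burst's end
            burst = audio[i * frame:j * frame]
            if is_cough(burst, fs):        # keep only cough-like bursts
                segments.append(burst)
            i = j
        else:
            i += 1
    return segments
```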
In this embodiment of the present application, the screening method may be implemented as an APP or as a function within an APP. For example, it may be implemented as a respiratory infection screening function in an exercise health APP, or as a standalone respiratory infection screening APP; this application does not specifically limit this.
An exercise health APP installed on the smart watch or mobile phone may provide the respiratory infection screening function; alternatively, a dedicated respiratory infection screening APP installed on the smart watch or mobile phone may provide it. In practical applications, depending on who initiates the screening, an active screening scenario and a passive screening scenario can be distinguished. In the active screening scenario, the user actively invokes the respiratory infection screening function in the APP, i.e., the user can initiate screening at any time to learn his or her respiratory infection risk. In the passive screening scenario, after the user authorizes the APP, the APP automatically invokes its respiratory infection screening function to screen for respiratory infection.
The recording process of the cough sound in the active screening scenario is described below.
Referring to fig. 5A to 5E, the user opens the exercise health APP or the respiratory infection screening APP and enters the respiratory infection screening interface 50 shown in fig. 5A. A data frame 501 and a start detection button 502 are displayed on the respiratory infection screening interface 50; previously measured data are shown in the data frame 501, and if there are no measurement data, "no data today" is displayed. When the start detection button 502 shown in fig. 5A is clicked and the user is a first-time user, the guidance interface 51 shown in fig. 5B is presented. The guidance interface 51 may include a measurement tutorial 511, measurement tips 512, and the start detection button 502. The measurement tutorial 511 includes: step 1, record the cough sound; step 2, collect physiological indexes. The measurement tips 512 include: the measurement takes about 1 minute and 30 seconds, and the whole process can follow the measurement tutorial. 1. Recording the cough sound: after detection starts, cough 3 times toward the microphone and keep the surrounding environment relatively quiet. 2. Wear the wearable device, keep still, and collect physiological data for 1 minute. After the user clicks the start detection button 502 shown in fig. 5B (or, for a returning user, the start detection button 502 shown in fig. 5A), the test interface 52 shown in fig. 5C is presented, which includes a recording button 521. After the user clicks the recording button 521, the recording interface 53 shown in fig. 5D is presented, on which the recording duration is shown. In response to the click on the recording button 521, the sound pickup device of the electronic device starts to pick up sound; at this point the user should cough forcibly (not a natural cough), typically 3-5 consecutive coughs, within a window of about 20 seconds. To ensure that the recorded sound is indeed a cough, cough sound recognition is performed; if the recorded sound is not a cough or the cough recording quality is poor, the electronic device may prompt the user to record again (see the retry sketch below). While recording, the electronic device may also prompt the user to keep the mouth at an angular offset from the sound pickup device, so that the sound is not aimed directly at it; this avoids distortion from a sound amplitude that exceeds the pickup device's upper limit. After recording ends, the electronic device stores the cough sound collected by the sound pickup device, and when a cough sound is detected, the following steps S402 to S406 are executed.
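The record-and-validate loop described above might look like the following sketch, in which `prompt`, `record_audio`, `looks_like_cough`, and `quality_ok` are all hypothetical injected helpers standing in for the device's UI prompt, recorder, cough recognizer, and quality check.

```python
def record_cough_with_validation(prompt, record_audio, looks_like_cough,
                                 quality_ok, max_attempts: int = 3):
    """Record-and-validate loop from the interface description.

    All four callables are hypothetical injected helpers: the UI prompt,
    the recorder, the cough recognizer, and the quality check.
    """
    for _ in range(max_attempts):
        prompt("Please cough 3-5 times toward the microphone, keeping "
               "your mouth slightly off-axis to avoid clipping.")
        audio = record_audio(seconds=20)   # ~20 s recording window
        if not looks_like_cough(audio):
            prompt("No cough detected; please record again.")
        elif not quality_ok(audio):
            prompt("The recording quality is poor; please record again.")
        else:
            return audio                   # proceed to steps S402-S406
    return None                            # give up after repeated failures
```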
In one possible implementation, other physiological data of the user may also be collected after or while recording the cough sound. For example, after recording ends, a physiological data collection interface 54 may be presented, as shown in fig. 5E, prompting the user to wear a wearable device (such as a smart watch), keep quiet, and sit comfortably. Physiological data such as pulse wave and body temperature are then collected through sensors installed on the wrist-worn smart watch, and the user's physiological indexes are derived from the collected physiological data.
In one possible implementation, the electronic device in the embodiments of the present application may comprise several different devices, for example a smart watch and a mobile phone: the cough sound is recorded by the mobile phone, and the user's physiological data are collected by the smart watch.
In one possible implementation, a single device may do both. For example, a mobile phone may collect and record the user's cough sound and also collect the user's physiological data to obtain the physiological indexes; alternatively, the smart watch may collect and record the cough sound and collect the physiological data to obtain the physiological indexes.
The recording process of the cough sound in the passive screening scenario is described below.
An exercise health APP or respiratory infection screening APP installed on the smart watch or mobile phone may include an automatic screening function (background screening, non-perceptible screening). That is, after the user authorizes and agrees, the sound pickup device of the smart watch or mobile phone can continuously record the user's sound data and judge, from the sound data over a period of time, whether the recorded sound is a cough sound; if it is, the cough sound is recorded. The cough sound recorded here is a cough that occurs naturally and physiologically, whereas the cough sound the user makes in the active screening scenario is mostly a deliberately produced cough. Illustratively, the user authorizes the exercise health APP or the respiratory infection screening APP, which may then record the sounds of the external environment in real time, recognize them, identify any cough sound, and perform the following steps S402 to S406 when a cough sound is detected.
In this embodiment of the application, if the user can anticipate an imminent cough, the user may autonomously use the respiratory infection screening function of the exercise health APP or the respiratory infection screening APP, follow the guidance shown in fig. 5A to 5D, and cough, so that the electronic device records the cough sound; when the recorded sound is judged to be a cough sound, the respiratory infection screening function of the APP is triggered and the screening method of this embodiment is executed. Alternatively, the user authorizes the APP to collect and record environmental sound in real time; if the user wants to check his or her respiratory infection condition, the user can cough deliberately, so that the electronic device records the cough sound and, upon judging the recorded sound to be a cough, triggers the respiratory infection screening function and executes the screening method of this embodiment.
Step S402: process the cough sound and judge whether it is a wet cough.
In this embodiment of the present application, the recorded cough sound may be input into a cough sound recognition model, which determines whether the cough is dry or wet. The cough sound recognition model is built by collecting the cough sounds of people who normally do not cough, of people who cough without phlegm, and of people who cough with phlegm, and then training a deep learning model or a shallow learning model on the collected audio data; the trained model is the cough sound recognition model. A minimal training sketch follows.
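A sketch under stated assumptions: mean MFCCs as the feature set and a support vector machine as the shallow model. The patent itself fixes neither the features nor the model family.

```python
import numpy as np
import librosa                   # assumed feature-extraction library
from sklearn.svm import SVC      # a shallow model, as the text allows

def mfcc_features(audio: np.ndarray, fs: int) -> np.ndarray:
    """Summarize one recording as its mean MFCC vector (assumed features)."""
    mfcc = librosa.feature.mfcc(y=audio.astype(float), sr=fs, n_mfcc=13)
    return mfcc.mean(axis=1)

def train_cough_model(recordings, labels, fs: int) -> SVC:
    """Fit the cough sound recognition model.

    `recordings` are audio clips from the three collected groups, and
    `labels` marks each as 0 = non-cough sound (people who do not cough),
    1 = cough without phlegm (dry), 2 = cough with phlegm (wet).
    """
    X = np.stack([mfcc_features(a, fs) for a in recordings])
    model = SVC(probability=True)
    model.fit(X, labels)
    return model

# At screening time: model.predict([mfcc_features(new_cough, fs)])
```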
In the embodiment of the present application, step S402 may be omitted: step S403 may be executed directly once a cough sound is detected, without judging whether the cough is wet. In that case, the detection of a cough sound itself serves as the trigger event for respiratory infection screening, and the screening method of this embodiment is executed whenever a cough sound is detected.
If it is determined in step S402 that the cough sound is a wet cough, step S403 is performed. If it is determined in step S402 that the cough sound is not a wet cough, step S406 is performed.
Step S403: sense the user's interactive operation through the interactive device.
In the embodiment of the present application, the detection of a wet cough may serve as the trigger event for respiratory infection screening, and the screening method of this embodiment is executed when a wet cough sound is detected.
In the embodiment of the application, the interactive device is communicatively connected with the electronic device and may be part of the electronic device. The interactive device may be a voice interaction device, such as a home voice assistant or a smart speaker; a visual interaction device, such as a smart screen, a smart television, or a touch screen; or a smart household appliance, such as a smart air conditioner or a smart washing machine. Besides these devices suited to home scenarios, it may also be an interactive device suited to work scenarios such as factories, enterprises, or hospitals; no limitation is placed on this.
In the embodiment of the present application, three cases are distinguished. In the first case, the interactive device includes only a voice interaction device. In the second case, the interactive device includes a voice interaction device and a display device. In the third case, the interactive device includes only a display device.
In the first case, the interactive device may illustratively include a voice interaction device that carries out voice interaction with the user. That is, the voice interaction device may include a sound pickup device, a voice playing device, and a processor: the sound pickup device collects environmental sound to obtain the cough sound and also collects the user's voice during interaction; the processor processes the user's voice collected by the sound pickup device, outputs the corresponding first voice data according to that processing, and plays the first voice data through the voice playing device.
In the embodiment of the application, first voice data can be output through the voice interaction device, where the first voice data is used to interact with the user, and an interactive operation input by the user after the voice interaction device outputs the first voice data is received, the interactive operation including a voice input. The first voice data includes a guidance script set according to the expectoration information, the guidance script types including: universal-reply guidance, verification guidance, selection guidance, and query guidance.
In this embodiment of the application, the guidance script set according to the expectoration information is used to guide the user so as to obtain expectoration information from the user, including whether the cough brings up sputum, whether the sputum is transparent, whether the sputum is sticky, and so on.
Universal-reply guidance indicates that the electronic device returns only one sentence of guidance, without multiple rounds of dialog interaction; for example, "Sorry, I didn't hear that. Could you say it again?" This type of interaction suits situations where the user's intent cannot be recognized from the user's voice.
Verification guidance indicates that the electronic device asks the user to confirm a guess and expects the user to answer "yes" or "no" in the next turn of the dialog; for example, "I didn't quite hear. Did you just cough?" This type suits situations where the user's intent is inferred from the user's voice and only a few choices are fed back to the user.
Selection guidance indicates that the electronic device feeds back several choices to the user and expects the user to pick one in the next turn of the dialog; for example, "What color is the sputum? A: transparent, B: rust-colored?" This type suits situations where the user's intent is inferred from the user's voice and more choices are fed back to the user.
Query guidance indicates that the electronic device asks the user for a slot and expects an answer, where a slot is a keyword in the user's voice related to the user's intent; for example, in "Which of Zhou Jielun's songs do you want to listen to?", the slot is "Zhou Jielun". This type suits situations where the user's intent can be determined but needs further clarification from the user's voice. A minimal sketch of selecting among these guidance types follows.
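A sketch, under stated assumptions, of choosing among the four guidance types from a hypothetical intent recognizer's output; the 0.6 confidence threshold and the wording are illustrative, not taken from the patent.

```python
def choose_guidance(intent, confidence: float, options: list) -> str:
    """Pick a guidance style from a hypothetical recognizer's output.

    The mapping mirrors the four behavior types described above; the
    confidence threshold is an illustrative assumption.
    """
    if intent is None:
        # Intent unrecognizable: universal-reply guidance, one sentence.
        return "Sorry, I didn't hear that. Could you say it again?"
    if confidence < 0.6 and len(options) <= 2:
        # Low confidence, few choices: verification (yes/no) guidance.
        return f"I didn't quite hear. Did you mean: {options[0]}? (yes/no)"
    if confidence < 0.6:
        # Low confidence, several choices: selection guidance.
        lettered = ", ".join(f"{chr(65 + i)}: {o}"
                             for i, o in enumerate(options))
        return f"Which one do you mean? {lettered}"
    # Intent clear but a slot is still missing: query guidance.
    return f"Understood. Please tell me the missing detail for '{intent}'."
```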
Voice interaction through the voice interaction device proceeds specifically as follows. When the electronic device detects a cough sound, or determines that the cough sound is a wet cough, the processor in the voice interaction device controls the voice playing device to output the first voice data "XXX, hello, did you just cough?". The sound pickup device then collects the user's voice input, which is processed; if the user is judged to have answered "no", the interaction ends. If the user answered "yes", the processor controls the voice playing device to output the first voice data "Does your cough bring up sputum?". The user's voice input is again collected and processed; if the answer is "no", the interaction ends. If the answer is "yes", the voice playing device outputs the first voice data "Is the sputum sticky?"; the user's voice input is collected, processed, and recorded: "sputum is sticky" for "yes", "sputum is not sticky" for "no". After receiving the user's input, the voice playing device outputs the first voice data "What color is the sputum?"; the user's voice input is collected, processed, and the stated information is recorded. The user may answer transparent, rust-colored, brick-red, golden-yellow purulent, yellow-green purulent, and so on; the voice is recognized and the stated color is recorded. In one possible implementation, the voice playing device instead outputs the first voice data "What color is the sputum? A: transparent, B: rust-colored, C: brick-red, D: golden-yellow purulent, E: yellow-green purulent"; the user's choice is then determined, and a choice of A, for example, is recorded as "sputum is transparent". After receiving the user's input, the voice playing device outputs the first voice data "Have you been expectorating a lot recently?"; the user's answer is collected, processed, and recorded: "a lot" is recorded as a larger expectoration amount, "a little" as a smaller one. Finally, the voice playing device is controlled to output the first voice data "Thank you for your feedback". A sketch of this question script follows.
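The multi-round exchange above can be viewed as a small question script with early exits. The following sketch assumes an injected `ask` callable that plays a prompt and returns the user's speech-recognized answer as text; the field names are illustrative.

```python
QUESTIONS = [
    ("Did you just cough?",                  "coughed",       True),
    ("Does your cough bring up sputum?",     "has_sputum",    True),
    ("Is the sputum sticky?",                "sputum_sticky", True),
    ("What color is the sputum? (transparent / rust-colored / brick-red / "
     "golden-yellow purulent / yellow-green purulent)", "sputum_color", False),
    ("Have you been expectorating a lot recently?",     "sputum_amount", False),
]

def run_sputum_dialog(ask):
    """Walk the question script; `ask(prompt)` plays the prompt and
    returns the user's answer as text. A 'no' to either of the first
    two questions ends the dialog, as described above."""
    record = {}
    for i, (prompt_text, key, yes_no) in enumerate(QUESTIONS):
        answer = ask(prompt_text).strip().lower()
        if yes_no:
            record[key] = (answer == "yes")
            if i < 2 and not record[key]:
                return None            # no cough / no sputum: stop early
        else:
            record[key] = answer       # free-text answer, recorded as-is
    ask("Thank you for your feedback.")
    return record
```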
In the second case, the interactive device may include a voice interaction device and a display device. The first voice data is played through the voice playing device, while the display device shows the text converted from the first voice data and may also show a voice waveform. The display device may further show interactive elements alongside the converted text, and the user's interaction with the electronic device may then be a touch operation on an interactive element. That is, when the interactive device includes both a voice interaction device and a display device, the user may interact by voice through the voice interaction device or may interact on the display device.
Illustratively, referring to fig. 6A, while the first voice data "XXX, hello, did you just cough?" is played, the converted text "XXX, hello, did you just cough?" is displayed on the display device together with interactive elements, which may be answer options to the question, such as the options "yes" and "no". The user's interactive operation is then detected: either a voice input or an operation on the display device (a touch operation such as a click or a long press). If the user clicks the "no" option, the interaction ends; if the user clicks "yes", the cough is recorded. The voice playing device then outputs the first voice data "Does your cough bring up sputum?"; referring to fig. 6B, the converted text and the options "yes" and "no" are displayed. A click on "no" ends the interaction; a click on "yes" records a cough with phlegm. The voice playing device outputs the first voice data "Is the sputum sticky?"; referring to fig. 6C, the converted text and the options "yes" and "no" are displayed. A click on "no" records "sputum is not sticky"; a click on "yes" records "sputum is sticky", and the voice playing device outputs the first voice data "What color is the sputum? A: transparent, B: rust-colored, C: brick-red, D: golden-yellow purulent, E: yellow-green purulent". Referring to fig. 6D, the corresponding converted text is displayed; if the user selects B, the sputum is recorded as rust-colored. The voice playing device then outputs the first voice data "Have you been expectorating a lot recently?"; referring to fig. 6E, the converted text is displayed together with option A: a lot, B: not much, C: a little. Selecting "a lot" records a larger expectoration amount; selecting "a little" records a smaller one. Finally, the voice playing device is controlled to output "Thank you for your feedback", which, referring to fig. 6F, is also displayed on the display device.
In the third case, the interactive device includes only a display device, and sensing the user's interactive operation through the interactive device includes: presenting an interactive interface through the display device, the interactive interface including interactive elements for interacting with the user; and receiving the interactive operations (such as clicks, long presses, and text input) the user enters on the interactive interface. That is, the user can interact with the electronic device via the interactive interface. The interactive elements include a first option set according to the expectoration information, and the interactive operation includes a touch operation acting on the first option.
The questions may be presented across several interactive interfaces, or all questions may be presented on one interactive interface at once, with the user scrolling or dragging the page to answer them.
Illustratively, referring to fig. 7A, question 1 is presented on the interactive interface 70: "Which of the following best matches the expectoration of your current cough?", with the first option set "A: no phlegm, B: transparent, C: rust-colored, D: brick-red, E: golden-yellow purulent, F: yellow-green purulent, G: other input". Question 2 reads: "Have you been expectorating a lot recently?", with the first option set "A: a lot, B: not much, C: a little". The user's interactive operation on the first options on the interactive interface 70 is monitored. For question 1, a click on option A records that the cough has no phlegm; option B, that the sputum is transparent; option C, rust-colored; option D, brick-red; option E, golden-yellow purulent; and option F, yellow-green purulent. If the user clicks option G, the user may directly enter related information as text, such as "yellow", or upload pictures of the coughed-up sputum.
In one possible implementation, referring to fig. 7B, fig. 7B differs from fig. 7A in that the question "Did you just cough?" is added, with the first options "yes" and "no". If the user selects "yes", the following questions continue; if the user selects "no", the questioning ends.
In the embodiment of the present application, if the current scenario is detected to be active screening, the interactive interface 70 shown in fig. 7A may be displayed. If the current scenario is passive screening, the interactive interface 70 shown in fig. 7B may be displayed; asking whether the user himself coughed avoids mistaking coughs from the surrounding environment or from other people, so the question of whether the user actually coughed needs to be considered.
In one possible implementation, if the interactive interface is displayed through the display device, the user's attention may be drawn to the display device by vibration, lighting up the screen, a prompt sound, and the like. In an automatic screening scenario where the user is already looking at the display device, the interaction may proceed directly through the interactive interface, letting the user view the corresponding questions; voice need not be output at that moment. This application does not specifically limit this.
In this embodiment of the application, the interactive elements may further include sputum pictures and interactive controls associated with the sputum pictures, in which case the interactive operation includes a touch operation acting on an interactive control.
The sputum pictures may depict different expectoration conditions: no phlegm, transparent sputum, rust-colored sputum, brick-red sputum, golden-yellow purulent sputum, yellow-green purulent sputum, and so on. Illustratively, referring to fig. 7C, the interactive interface 70 presents the question together with corresponding pictures: the picture for option A shows the no-phlegm condition, the picture for option B shows transparent sputum, option C rust-colored sputum, option D brick-red sputum, option E golden-yellow purulent sputum, and option F yellow-green purulent sputum.
The interactive control associated with a sputum picture may be an option button under the picture, such as the options "A", "B", "C", "D", "E" in fig. 7C; the user clicks an option to select the corresponding sputum picture. The interactive control may also be a frame around the sputum picture, in which case the user clicks the picture or its frame to select it.
In the embodiment of the present application, each sputum picture represents different expectoration information. If the picture corresponding to option A in fig. 7C is selected, "no phlegm" is recorded for this cough. If the picture corresponding to option B is selected, transparent sputum is recorded; option C, rust-colored sputum; option D, brick-red sputum; option E, golden-yellow purulent sputum; and option F, yellow-green purulent sputum.
Fig. 7C differs from fig. 7A in that each option is shown in picture form, which makes it easier for the user to decide quickly.
In one possible implementation, the interactive elements include a viewfinder frame for a picture and a shooting control, and the interactive operation includes a touch operation acting on the shooting control.
Illustratively, referring to fig. 8A, the interactive interface 70 presents a popup containing a question for the user: "An expectoration picture helps accurate screening of respiratory infection and helps you further understand the risk and take corresponding measures. Please confirm whether to allow taking a photo to obtain the expectoration information." Three option buttons are also included on the interactive interface 70: A: no sputum, no need to take a photo; B: maybe next time; C: agree to take a photo and upload the information. If the user selects option A or B, a thank-you for the feedback may be shown. If option C is selected, the shooting interface 80 is entered; referring to fig. 8B, a preview pane 81, a close control 82, a shooting control 83, and a gallery control 84 are displayed, and the user may be prompted by text or by voice to align the square frame in the preview pane 81 with the sputum. Clicking the close control 82 exits the shooting interface. Clicking the shooting control 83 takes the photo; once taken, the picture is stored and uploaded to the cloud server or kept locally. The user may also take pictures with the camera APP of the electronic device, then click the gallery control 84 to enter the gallery and select the corresponding sputum picture. Referring to fig. 8C, the interactive interface 70 then shows "image capture complete". The interactive interface 70 may further present a popup asking for other expectoration information, such as whether the sputum is thick and whether the amount is large; as in fig. 6E, the question "Have you been expectorating a lot recently?" with options A: a lot, B: not much, C: a little may be asked to acquire more expectoration information.
In the embodiment of the application, expectoration information can also be obtained by applying an image recognition algorithm to the sputum picture taken by the user; for example, the shape and color of the sputum can be analyzed to derive the expectoration information. Illustratively, a sputum recognition model may be established: pictures with no sputum, transparent sputum, rust-colored sputum, brick-red sputum, golden-yellow purulent sputum, yellow-green purulent sputum, larger and smaller sputum amounts, and sticky and non-sticky sputum are collected, and a learning model is trained on them; the trained learning model is the sputum recognition model. The sputum picture uploaded by the user is input into the sputum recognition model for recognition, finally yielding the corresponding expectoration information. A minimal sketch follows.
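A sketch of such a sputum recognition model using transfer learning; the ResNet-18 backbone, the input size, and the class list are illustrative assumptions, since the patent only specifies that a learning model is trained on labeled pictures.

```python
import torch
import torch.nn as nn
from torchvision import models

SPUTUM_CLASSES = ["no sputum", "transparent", "rust-colored", "brick-red",
                  "golden-yellow purulent", "yellow-green purulent"]

def build_sputum_recognizer(num_classes: int = len(SPUTUM_CLASSES)) -> nn.Module:
    """A small transfer-learning classifier over labeled sputum photos."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head
    return model

def recognize(model: nn.Module, image: torch.Tensor) -> str:
    """Map one preprocessed image tensor (1x3x224x224) to a label."""
    model.eval()
    with torch.no_grad():
        logits = model(image)
    return SPUTUM_CLASSES[int(logits.argmax(dim=1))]
```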
It can be understood that, after the image information is obtained, the sputum color and consistency can be recognized from the image; a recognition model can be constructed by segmenting the image and comparing it with real expectoration pictures to obtain the corresponding information. This image recognition may run locally on the device or on the cloud server; this application does not specifically limit this.
In the embodiment of the application, in the active screening scenario the user has already focused attention on the display device of the smart watch or mobile phone, so the user is prompted directly through the interactive interface to authorize taking and uploading a photo. In the passive screening scenario, a probable wet cough (a cough producing sputum) or a cough sound is recognized, but the cough is not necessarily the user's own; it may come from the user's surroundings or from other people. It is therefore also necessary to consider whether the user actually coughed, and to attract the user's attention to the display device for interaction by means of vibration, lighting up the screen, a prompt sound, and the like, for example asking by voice or in a text popup: "Did you just cough?".
Step S404: acquire the expectoration information corresponding to the interactive operation.
In the embodiment of the present application, the expectoration information includes: expectoration frequency, sputum color, sputum viscosity, sputum amount, and the like. The expectoration frequency may be the number of expectorations within one day or within a preset time; it can be determined by monitoring whether cough sounds are detected and by combining whether each cough is dry or wet. The sputum color may be transparent, rust-colored, brick-red, golden-yellow, or yellow-green. The sputum may be viscous or non-viscous. The sputum amount may be the amount within a preset time, such as the amount per expectoration or the amount per day. These fields can be carried in a single record, as sketched below.
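A minimal record of these fields; the field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExpectorationInfo:
    """One screening's expectoration record (the fields of step S404)."""
    frequency: Optional[int] = None    # expectorations per day / preset time
    color: Optional[str] = None        # transparent, rust-colored, ...
    sticky: Optional[bool] = None      # viscous or not
    amount: Optional[str] = None       # e.g. "a lot" / "a little"

info = ExpectorationInfo(frequency=6, color="rust-colored",
                         sticky=True, amount="a lot")
```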
Step S405: output reminder information according to the user's physiological indexes and the expectoration information.
In the embodiment of the application, the expectoration information can be input into a respiratory infection screening algorithm, which processes the information to judge whether the user's respiratory tract is abnormal, i.e., whether the user has a respiratory infection.
In the embodiment of the present application, other inputs may optionally be fused in to assist in screening respiratory infection risk, including but not limited to: physiological indexes (body temperature, respiration rate, heart rate, blood oxygen, etc.), the user's medical history, and audio data (cough sounds, deep breathing sounds, normal breathing sounds, lung sounds, forced breathing sounds, etc.). The cough sound among these inputs is itself used for screening: respiratory infection can be screened from information such as the pitch, loudness, and frequency of the cough sound. Illustratively, the electronic device detects a cough, records the user's cough sound, starts executing the screening method of this application, acquires the user's expectoration information, acquires the other inputs (such as the user's physiological indexes, the medical history entered by the user, and cough sounds, deep breathing sounds, normal breathing sounds, lung sounds, and forced breathing sounds entered again by the user), and feeds the acquired information into the respiratory infection screening algorithm for processing.
In one possible implementation, the user's physiological indexes and expectoration information can be input into the respiratory infection screening algorithm. The goal of the algorithm is to help the user judge, with additional auxiliary information such as physiological indexes and audio data, the current respiratory infection risk, and to guide the user toward self-intervention. Some research even suggests that warning can be given before the user's illness fully develops, so that the user pays attention to and controls the course of the disease. This is significant for special populations. A typical example: an elderly person with chronic obstructive pulmonary disease (COPD) who contracts an upper respiratory tract infection may, if the condition is not controlled in time, develop severe COPD and severe, life-threatening respiratory complications; if attention is paid at an earlier stage (by the elderly person or by family members), the ultimate risk may be controlled and mortality reduced. On the other hand, in epidemic prevention and control, if the screened infection degree is low, the patient need not go to hospital, avoiding exposure to other epidemic diseases there; if it is high, the patient should be hospitalized as soon as possible to avoid aggravating the infection.
The respiratory infection screening algorithm collects big data on the normal and abnormal presentations of the various inputs, covering normal populations and various disease populations, and constructs judgment rules to determine a fixed machine learning model. For example, the risk of a common cold may be judged from a rising body temperature, a rising respiratory rate, and the coughing frequency; by the typical symptoms of the various diseases, a common cold usually does not produce fever, so such a presentation indicates a lower risk of ordinary infection. Through such a design and big-data modeling, a model (i.e., an algorithm) may be determined to assess the user's respiratory infection risk based on sensor data or user input, as sketched below.
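A toy judgment rule in the spirit of this paragraph, fusing physiological indexes and expectoration information into a coarse risk level. Every threshold and weight here is an illustrative assumption; the patent describes a model fitted on big data rather than hand-set rules.

```python
def screen_respiratory_infection(body_temp_c: float, resp_rate_bpm: float,
                                 coughs_per_hour: float,
                                 sputum_color=None) -> str:
    """Map fused inputs to a coarse risk level (illustrative thresholds)."""
    score = 0
    score += int(body_temp_c >= 37.3)      # elevated body temperature
    score += int(resp_rate_bpm > 20)       # elevated respiratory rate
    score += int(coughs_per_hour >= 5)     # frequent coughing
    # Characteristically colored sputum carries extra risk weight,
    # per the discussion of sputum color under step S406 below.
    if sputum_color in ("rust-colored", "brick-red",
                        "golden-yellow purulent", "yellow-green purulent"):
        score += 2
    if score >= 3:
        return "high risk"
    if score >= 1:
        return "low/medium risk"
    return "no abnormality"
```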
Step S406: output reminder information according to the user's physiological indexes and the cough sound.
In the embodiment of the application, if there is no expectoration information, the user's physiological indexes and cough sound can be input into the respiratory infection screening algorithm to determine whether the user's respiratory tract is abnormal. The cough sound recorded in step S401 and the user's physiological indexes may be input into the algorithm, or the physiological indexes together with a newly recorded cough sound may be input; additional auxiliary information may also be input, such as physiological indexes (body temperature, respiratory rate, heart rate, blood oxygen, etc.), the user's medical history, and audio data (cough sounds, deep breathing sounds, normal breathing sounds, lung sounds, forced breathing sounds, etc.).
In the embodiment of the application, the expectoration information has risk-indicating significance for some serious diseases. For example, in pneumonia, bacterial infections are often accompanied by expectoration: rust-colored sputum often indicates Streptococcus pneumoniae infection, brick-red sputum Klebsiella pneumoniae infection, golden-yellow purulent sputum Staphylococcus aureus infection, and yellow-green purulent sputum Pseudomonas aeruginosa infection. Atypical pathogens such as Mycoplasma pneumoniae, Chlamydia pneumoniae, and Legionella pneumophila often manifest as a dry cough with little sputum. The occurrence of expectoration of a specific color and viscosity therefore has risk-indicating significance, which helps improve the algorithm's final screening precision.
In the embodiment of the present application, the reminder information may include, but is not limited to: whether the user is infected, the risk level, the risk of a specific disease, an interpretation of the condition, suggestions, the recorded cough sound history, and current or recent physiological index data. "Whether infected" means whether the respiratory tract is infected. The risk level may be expressed as "no abnormality" or "abnormality" (i.e., whether the respiratory tract is abnormal), or as "no abnormality", "low risk", "medium risk", or "high risk" of the respiratory tract. The risk of a specific disease covers, for example, the likelihood or risk of flu, the common cold, pharyngitis, laryngitis, pneumonia, bronchitis, and so on. The condition interpretation and suggestions read the user's recent data, while the recorded information includes the cough sound history and the physiological index data over the current or a recent period, such as average body temperature and respiratory rate.
Referring to fig. 9A, the respiratory infection screening interface 50 presents a risk rating of "abnormal", with the interpretation and recommendation: "Analysis of your recent measurements shows a higher risk of respiratory tract infection; please seek medical attention in time …". Three segments of recorded cough audio are shown, and the acquired physiological indexes are an average temperature of 37.1 °C and a respiratory rate of 21 breaths/minute. Referring to fig. 9B, the respiratory infection screening interface 50 presents a risk rating of "no abnormality", with the interpretation and recommendation: "Analysis of your recent measurements shows a low risk of respiratory tract infection; please keep up your good living habits …". Again three segments of recorded cough audio are shown, with an average temperature of 37.1 °C and a respiratory rate of 21 breaths/minute.
When the APP continuously and repeatedly identifies that the user's condition is serious, it can ask, when the user opens the APP, whether the user is willing to accept a follow-up visit, so as to obtain assistance and answers from experts. If the user authorizes and agrees, the expectoration picture and the corresponding color and viscosity information can be uploaded to the cloud, where a doctor determines an intervention according to the user's overall situation and offers help; the interventions include, but are not limited to, telephone follow-up, text chat, short-message notes, and video intercom. The doctor's follow-up visit is not mandatory, but as digital medicine deepens, the "intelligent terminal + sensor + remote doctor diagnosis" model keeps maturing, and the expectoration information can be used within it.
In the embodiment of the application, the method is mainly used to fit into existing respiratory infection screening scenarios: it takes the cough sound as the driving event, acquires the user's expectoration information in a timely way, and combines that information with other inputs to screen for respiratory infection, ensuring the validity and timeliness of the expectoration information and improving the overall screening accuracy.
The following introduces the active screening scenario of the embodiment of the present application from the perspective of the user:
The user wears a smart watch (or band) that includes at least a microphone, a physiological-data acquisition sensor, a speaker, a processor, a display screen, and a camera, and installs the respiratory infection screening APP on it for personal respiratory infection screening. The user clicks the respiratory infection screening function in the APP, hoping to learn his or her respiratory infection risk quickly. The APP prompts the user to sit still and wear the watch correctly, and the smart watch collects the user's physiological data through the physiological-data acquisition sensor to obtain the physiological indexes. The APP then prompts the user to take one deep inhalation and cough at least once, and the user follows the prompt. The watch's microphone records the cough sound and the processor processes it: if the sound is not a cough, the flow ends and the user is prompted to record again; otherwise a dry or wet cough is recognized. If it is a dry cough, there is no expectoration information; the cough sound and the absence of expectoration information are input to the respiratory infection screening algorithm, and the flow ends. Otherwise it is a wet cough and the user has sputum, so the watch interacts with the user through its display screen, or through the microphone and speaker, to acquire the user's expectoration information, focusing on expectoration frequency, color, viscosity, and the like, and inputs this information to the respiratory infection screening algorithm. The selectable interaction modes include, but are not limited to, the following. Mode 1: ask the user by voice questions such as "Is there sputum?", "Is the sputum sticky?", "What color is the sputum?", and convert the user's spoken answers into answer information to obtain the expectoration information. Mode 2: present on the watch's display typical sputum pictures, or sputum color pictures with high screening validity, so that the user can select the one that best matches the current situation, with a no-sputum picture also provided as an option. Mode 3: prompt the user to photograph the sputum and obtain the expectoration information from the photograph via an image recognition algorithm. The expectoration information is input to the respiratory infection screening algorithm, optionally fused with other inputs to assist in screening the respiratory infection risk and issuing reminder information; the other inputs may include, but are not limited to: physiological characteristics (body temperature, respiratory rate, heart rate, blood oxygen, etc.), the user's medical history, and audio data (cough sounds, deep breathing sounds, normal breathing sounds, lung sounds, forced breathing sounds, etc.).
In the active screening scenario, operability is strong: acquiring the expectoration information is embedded in the whole screening flow and adds essentially no time cost. The user's expectoration information is obtained at the first moment and used promptly, accurately, and directly for respiratory infection screening, which can improve the screening precision.
The passive screening scenario of the present application embodiment is introduced from the perspective of the user as follows:
The user wears a smart watch (or band) that contains at least a microphone, a physiological-data acquisition sensor, a speaker, a processor, a display screen, and a camera, and installs the respiratory infection screening APP for personal respiratory infection screening. The user authorizes the respiratory infection screening APP, so that it can run in the background and record external environmental sound in real time to capture cough sounds. The user clicks the respiratory infection screening function in the APP, hoping to learn his or her respiratory infection risk quickly. The smart watch monitors through the microphone whether the user coughs. If the user coughs, the watch's microphone records the cough sound, which is processed to recognize a dry or wet cough. If it is a dry cough, there is no expectoration information; the cough sound and the absence of expectoration information are input to the screening algorithm, and the flow ends. Otherwise the user has a wet cough. The smart watch attracts the user's attention to the display screen through vibration, lighting the screen, a prompt sound, and the like, and pops up the question "Did you just cough?"; asking whether the user coughed mainly avoids recording cases in which someone else nearby coughed. If the user answers that he or she did not cough, the flow ends. Otherwise the user has sputum, and the watch acquires the user's expectoration information, focusing on expectoration frequency, color, viscosity, and the like, and inputs this information to the respiratory infection screening algorithm. The selectable interaction modes include, but are not limited to, the following. Mode 1: ask the user by voice questions such as "Is there sputum?", "Is the sputum sticky?", "What color is the sputum?", and convert the user's spoken answers into answer information to obtain the expectoration information. Mode 2: present on the watch's display typical sputum pictures, or sputum color pictures with high screening validity, so that the user can select the one that best matches the current situation, with a no-sputum picture also provided as an option. Mode 3: prompt the user to photograph the sputum and obtain the expectoration information from the photograph via an image recognition algorithm. The expectoration information is input to the respiratory infection screening algorithm, optionally fused with other inputs to assist in screening the respiratory infection risk and issuing reminder information; the other inputs may include, but are not limited to: physiological characteristics (body temperature, respiratory rate, heart rate, blood oxygen, etc.), the user's medical history, and audio data (cough sounds, deep breathing sounds, normal breathing sounds, lung sounds, forced breathing sounds, etc.).
In the passive screening scenario the scheme is likewise highly operable: acquisition of expectoration information is embedded in the overall screening process, adding essentially no time cost, and the user's expectoration information is obtained at the first opportunity and used promptly, accurately, and directly for respiratory infection screening, which can improve screening precision. Unlike the active screening scenario, the process is largely imperceptible to the user: the device monitors in the background and prompts the user to cooperate in providing information only when needed, so the scheme applies to more scenarios, and the background monitoring mode further improves the timeliness of screening.
In the embodiment of the present application, after the user's cough sound is acquired, dry cough and wet cough are distinguished automatically, without requiring the user's own judgment, so the method and device are suitable for continuous-monitoring scenarios and the timeliness of screening is maximized. For a wet cough, the system interacts with the user promptly and records the user's expectoration information at the first opportunity through various means, acquiring it naturally, in sequence, and in keeping with the scenario, and making it convenient for the user to cooperate in providing first-hand expectoration information. This reduces interference from details being forgotten or from the sputum deteriorating while left standing, so the latest information is acquired accurately. The expectoration information is combined in time with other inputs to screen the user for respiratory infection: adding it can improve screening accuracy while having essentially no effect on the overall screening time and cost, and the whole process remains natural. The screening result helps the user discover the condition early and intervene as soon as possible, and the method is suitable for scenarios requiring timely screening and alerting.
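As one way to picture the fusion of expectoration information with other inputs, the sketch below combines the three sputum attributes with a few physiological indexes into a single risk score. Every weight, encoding, and threshold here is an assumption invented for illustration; the application does not disclose a concrete screening algorithm or scoring rule.

```python
from typing import Optional

def screen_respiratory_infection(expectoration: Optional[dict],
                                 physiology: dict) -> str:
    """Toy fusion of expectoration information and physiological indexes."""
    score = 0.0
    if expectoration is not None:
        # The three sputum attributes named in the application:
        if expectoration.get("frequency") == "frequent":
            score += 1.0
        if expectoration.get("color") in ("yellow", "green", "yellow-green"):
            score += 1.5
        if expectoration.get("viscosity") == "thick":
            score += 1.0
    # Physiological indexes mentioned as optional fused inputs:
    if physiology.get("body_temperature_c", 36.5) >= 37.3:
        score += 1.5
    if physiology.get("respiratory_rate", 16) > 20:
        score += 1.0
    if physiology.get("blood_oxygen", 98) < 94:
        score += 2.0
    return ("elevated risk: output reminding information"
            if score >= 3.0 else "low risk")

# Example: frequent, thick, yellow sputum plus a mild fever.
print(screen_respiratory_infection(
    {"frequency": "frequent", "color": "yellow", "viscosity": "thick"},
    {"body_temperature_c": 37.6, "respiratory_rate": 18, "blood_oxygen": 97}))
```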
The descriptions of the flows corresponding to the above figures each have their own emphasis; for parts not described in detail in one flow, reference may be made to the related descriptions of the other flows.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be implemented wholly or partially in the form of a computer program product. A computer program product for carrying out the screening includes one or more computer program instructions which, when loaded and executed on a computer, produce, in whole or in part, the processes or functions described in the embodiments of the present application and illustrated in fig. 4.
The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital versatile disc (DVD)), or a semiconductor medium (e.g., solid state drive (SSD)), among others.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above-mentioned embodiments are provided by way of example and should not be construed as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (13)

1. A screening method, comprising:
detecting a cough sound, and sensing an interactive operation of a user through an interactive device;
obtaining expectoration information corresponding to the interactive operation;
and outputting reminding information according to the expectoration information.
2. The method of claim 1, wherein the interactive device comprises a voice interaction device, and the sensing the interactive operation of the user through the interactive device comprises:
outputting first voice data through the voice interaction device, wherein the first voice data is used for interacting with the user;
and receiving an interactive operation input by the user after the voice interaction device outputs the first voice data, wherein the interactive operation comprises a voice input.
3. The method of claim 2, wherein the first voice data comprises a guided dialog set according to the expectoration information, the guided dialog comprising: universal-answer guided dialogs, verification guided dialogs, selection guided dialogs, and query guided dialogs.
4. The method of claim 2 or 3, wherein the interactive device further comprises a display device;
the display device displays text data converted from the first voice data and an interactive element;
the interactive operation further comprises a touch operation acting on the interactive element.
5. The method of claim 1, wherein the interactive device comprises a display device, and the sensing the interactive operation of the user through the interactive device comprises:
presenting, by the display device, an interactive interface, the interactive interface including an interactive element, wherein the interactive element is used for interacting with the user;
and receiving the interactive operation input on the interactive interface by the user.
6. The method of claim 5, wherein the interactive element comprises a first option set according to the expectoration information;
the interactive operation comprises a touch operation acting on the first option.
7. The method of claim 5 or 6, wherein the interactive element further comprises a sputum picture and an interactive control associated with the sputum picture;
the interactive operation comprises a touch operation acting on the interactive control.
8. The method of any one of claims 5 to 7, wherein the interactive element comprises a picture viewfinder frame and a shooting control;
the interactive operation comprises a touch operation acting on the shooting control.
9. The method of any one of claims 1 to 7, wherein the expectoration information comprises: expectoration frequency, sputum color, and sputum viscosity.
10. The method of any one of claims 1 to 9, wherein the outputting reminding information according to the expectoration information comprises:
acquiring a physiological index of a user;
and outputting reminding information according to the physiological indexes and the expectoration information.
11. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program stored by the memory, the processor being adapted to perform the method of any of claims 1 to 10 when the computer program is executed.
12. A computer-readable storage medium comprising computer instructions which, when executed on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 10.
13. A computer program product, characterized in that it comprises computer program code which, when executed by a computer, causes the computer to perform the method of any one of claims 1 to 10.
CN202110603113.0A 2021-05-31 2021-05-31 Screening method, apparatus, storage medium, and program product Pending CN115414025A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110603113.0A CN115414025A (en) 2021-05-31 2021-05-31 Screening method, apparatus, storage medium, and program product
PCT/CN2022/085099 WO2022252803A1 (en) 2021-05-31 2022-04-02 Screening method, device, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110603113.0A CN115414025A (en) 2021-05-31 2021-05-31 Screening method, apparatus, storage medium, and program product

Publications (1)

Publication Number Publication Date
CN115414025A true CN115414025A (en) 2022-12-02

Family

ID=84230501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110603113.0A Pending CN115414025A (en) 2021-05-31 2021-05-31 Screening method, apparatus, storage medium, and program product

Country Status (2)

Country Link
CN (1) CN115414025A (en)
WO (1) WO2022252803A1 (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009047756A2 (en) * 2007-10-11 2009-04-16 Bioview Ltd. Methods and kits for diagnosing lung cancer
US8758262B2 (en) * 2009-11-25 2014-06-24 University Of Rochester Respiratory disease monitoring system
AU2013239327B2 (en) * 2012-03-29 2018-08-23 The University Of Queensland A method and apparatus for processing patient sounds
CN202723829U (en) * 2012-04-06 2013-02-13 肖遥 Intelligent cough monitoring and evaluation system
EP3727148A4 (en) * 2017-12-21 2021-03-03 Queensland University Of Technology A method for analysis of cough sounds using disease signatures to diagnose respiratory diseases
CN109008992A (en) * 2018-07-03 2018-12-18 秦昊宇 A kind of sign monitoring and control recording system
CN109009129B (en) * 2018-08-20 2019-06-04 南京农业大学 Sow respiratory disease early warning system based on acoustic analysis
CN111681756A (en) * 2020-05-29 2020-09-18 吾征智能技术(北京)有限公司 Disease symptom prediction system based on sputum character cognition

Also Published As

Publication number Publication date
WO2022252803A1 (en) 2022-12-08

Similar Documents

Publication Publication Date Title
US11191432B2 (en) Wearable health monitoring system
KR20170033641A (en) Electronic device and method for controlling an operation thereof
CN112577611A (en) Human body temperature measuring method, electronic equipment and computer readable storage medium
CN108431764A (en) Electronic equipment and the method operated for control electronics
CN110070863A (en) A kind of sound control method and device
KR20170033025A (en) Electronic device and method for controlling an operation thereof
CN112783330A (en) Electronic equipment operation method and device and electronic equipment
CN113892920A (en) Wearable device wearing detection method and device and electronic device
US20220148597A1 (en) Local artificial intelligence assistant system with ear-wearable device
WO2022100407A1 (en) Intelligent eye mask, terminal device, and health management method and system
WO2022068650A1 (en) Auscultation position indication method and device
CN115702993A (en) Rope skipping state detection method and electronic equipment
Richer et al. Novel human computer interaction principles for cardiac feedback using google glass and Android wear
WO2023001165A1 (en) Exercise guidance method and related apparatus
US20240164725A1 (en) Physiological detection signal quality evaluation method, electronic device, and storage medium
WO2022252803A1 (en) Screening method, device, storage medium, and program product
WO2022237598A1 (en) Sleep state testing method and electronic device
WO2022100597A1 (en) Adaptive action evaluation method, electronic device, and storage medium
CN113509145B (en) Sleep risk monitoring method, electronic device and storage medium
EP4088287A1 (en) Systems and methods including ear-worn devices for vestibular rehabilitation exercises
CN113539487A (en) Data processing method and device and terminal equipment
WO2024088049A1 (en) Sleep monitoring method and electronic device
WO2022206641A1 (en) Hypertension risk measurement method and related apparatus
US11998305B2 (en) Systems and methods for using a wearable health monitor
WO2021239079A1 (en) Data measurement method and related apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination