WO2018098716A1 - Stethoscope data processing method, apparatus, electronic device and cloud server - Google Patents

Stethoscope data processing method, apparatus, electronic device and cloud server

Info

Publication number
WO2018098716A1
WO2018098716A1 PCT/CN2016/108108 CN2016108108W
Authority
WO
WIPO (PCT)
Prior art keywords
data
detection object
target
detection
result
Prior art date
Application number
PCT/CN2016/108108
Other languages
English (en)
French (fr)
Inventor
骆磊
黄晓庆
郭潮波
刘澎
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to CN201680002855.5A (CN107077531B)
Priority to PCT/CN2016/108108 (WO2018098716A1)
Publication of WO2018098716A1

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 - ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/67 - ICT specially adapted for the management or operation of medical equipment or devices for remote operation
    • G06F19/34
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Definitions

  • the present disclosure relates to the field of signal processing technologies, and in particular, to a stethoscope data processing method, apparatus, electronic device, and cloud server.
  • Stethoscopes are the most commonly used diagnostic tools. Most current stethoscopes are traditional air-conducting stethoscopes that use a hollow tube to deliver sound directly to the ear.
  • with the existing stethoscope, the collected signal is judged manually by the listener, so the detection result is strongly influenced by human factors.
  • moreover, the human auditory system cannot detect or judge signals in the infrasound band.
  • the position where the stethoscope is placed directly affects the signal collected by the stethoscope, thereby affecting the accuracy of the detection result.
  • the stethoscope in the related art does not guarantee the accuracy of the detection.
  • the present disclosure provides a stethoscope data processing method, apparatus, electronic device, and cloud server, which are mainly used to overcome the problems in the related art.
  • a first aspect of the present disclosure provides a stethoscope data processing method, including:
  • receiving a target detection object input by a user; determining, according to detection data collected by a stethoscope, whether the auscultation location of the stethoscope corresponds to the target detection object; and when the auscultation location does not correspond to the target detection object, outputting prompt information.
  • a second aspect of the present disclosure provides a stethoscope data processing apparatus, including:
  • a receiving module configured to receive a target detection object input by a user
  • a determining module configured to determine, according to the detection data collected by the stethoscope, whether the auscultation location of the stethoscope corresponds to the target detection object
  • the prompt information output module is configured to output the prompt information when the auscultation location does not correspond to the target detection object.
  • a third aspect of the present disclosure provides a computer program product comprising a computer program executable by a programmable device, the computer program having a code portion for performing the above stethoscope data processing method when executed by the programmable device.
  • a fourth aspect of the present disclosure provides a non-transitory computer readable storage medium, the non-transitory computer readable storage medium including one or more programs for executing the above stethoscope data processing method.
  • a fifth aspect of the present disclosure provides an electronic device, including:
  • the non-transitory computer readable storage medium described above, and one or more processors for executing the programs in the non-transitory computer readable storage medium.
  • a sixth aspect of the present disclosure provides a cloud server, including:
  • the non-transitory computer readable storage medium described above, and one or more processors for executing the programs in the non-transitory computer readable storage medium.
  • the electronic device or the cloud server guides the auscultation process so that an accurate auscultation location is determined, and matches the detection data against stored data to obtain a detection result, which enables combined diagnosis of multiple organs.
  • the processing of the detection data by the electronic device or the cloud server enables an ordinary user to know the detection result accurately; on the other hand, the data of the infrasound band and the audible band can be combined for diagnosis at the same time, so that accuracy can be further improved;
  • between the electronic device and the stethoscope, wired and wireless connections can be used together, so that the stethoscope can be powered over the cable at any time, and data transmission can continue seamlessly over the wireless link when the cable needs to be disconnected temporarily (for example, for convenience of operation), enhancing the user experience.
  • the electronic device and the cloud server can update the stored data and continuously learn to improve the accuracy of the detection.
  • FIG. 1 is a schematic diagram of a network architecture according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flow chart of a method for processing auscultation data according to an embodiment of the present disclosure
  • FIG. 3 is a schematic flow chart of acquisition of detection results according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of interaction between an electronic device and a cloud server according to an embodiment of the present disclosure
  • FIG. 5 is a block diagram of a stethoscope data processing apparatus according to an embodiment of the present disclosure
  • FIG. 6 is a block diagram of an apparatus for the stethoscope data processing method, according to an exemplary embodiment.
  • FIG. 1 shows a schematic diagram of a network architecture according to an embodiment of the present disclosure.
  • the stethoscope 100, the electronic device 200, and the cloud server 300 are communicatively coupled by a network 400.
  • the detection data collected by the stethoscope 100 can be transmitted to the electronic device 200 or directly to the cloud server 300, or can first be transmitted by the stethoscope 100 to the electronic device 200, which then forwards it to the cloud server 300. The processing of the detection data by the electronic device 200 and the cloud server 300 will be described in detail later.
  • the stethoscope 100 includes an auscultation head, a sound output module, a signal processing module, a communication module, and the like.
  • the auscultation head includes a microphone.
  • the signal processing module comprises: a low-pass filter, a noise reduction circuit, an amplifying circuit, a high-pass filter circuit, an AD conversion circuit, an infrasound up-conversion circuit, a communication interface, a wireless transmission module, a volume adjustment module, a switch, a battery, and the like.
  • the microphone is used for collecting the detection data of the detection object at the auscultation location; the detection data includes audible sound data (that is, data in the audible band) and infrasound data (that is, data in the infrasound band).
  • the sound output module is used to output the detection data collected by the microphone, and the sound output module may include an in-ear earphone.
  • the communication interface can be a USB interface or other interface capable of transmitting digital signals, and is used for connecting to the electronic device via the connection line, thereby transmitting the detection data to the electronic device.
  • the wireless transmission module can be a Bluetooth module, a WIFI module, or the like, for wirelessly transmitting detection data to an electronic device, a cloud server, or the like.
  • the AD conversion circuit is configured to convert the infrasound data and the audible sound data into digital sound signals, generating either two lossless PCM streams (one per band) or a single lossless PCM stream for transmission.
  • the signal collected by the microphone passes through the noise reduction circuit and the low-pass and high-pass filter circuits before the signal in the lossless PCM stream format is generated.
  • the signal of the lossless PCM stream format can be transmitted to the electronic device through the communication interface or transmitted to the electronic device or the cloud server through the wireless transmission module.
  • when the stethoscope is in an active state, the wireless transmission module maintains the connected state, thereby ensuring uninterrupted signal transmission through the wireless transmission module when transmission over the communication interface is interrupted.
  • the signal transmission mode can be preset. For example, when the stethoscope and the electronic device have both a wireless connection and a wired connection, the wired connection is preferred for signal transmission; when the wired connection is disconnected, signals are transmitted via the wireless connection.
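The preference logic above can be sketched as follows (a minimal illustration; the function name and the boolean link-state inputs are assumptions, not part of the disclosure):

```python
def choose_link(wired_connected: bool, wireless_connected: bool) -> str:
    """Select the transmission path for the stethoscope's data stream.

    The wired connection is preferred whenever it is up; the wireless
    link, which stays connected while the stethoscope is active, takes
    over when the cable is disconnected.
    """
    if wired_connected:
        return "wired"
    if wireless_connected:
        return "wireless"
    return "none"
```

With both links up, `choose_link(True, True)` returns `"wired"`; unplugging the cable makes the next call fall back to `"wireless"` without interrupting the stream.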
  • the electronic device can also power the stethoscope through the communication interface.
  • the infrasound up-conversion circuit is used for up-converting the signals in the infrasound band of the detection data so that those signals can be heard.
  • in other words, the sound heard after up-conversion is not the original sound.
  • the infrasound frequency band generated by human organs is in the range of 5 Hz to 10 Hz.
  • the frequency is raised by a factor of 10, or another multiple, to 50 Hz-100 Hz, which makes it audible to the human ear.
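One way to realize this up-conversion is time compression (an assumption consistent with multiplying the frequency by 10; the names are illustrative): replaying the captured samples at ten times the original rate raises every frequency component by ten, so a 5 Hz-10 Hz band lands at 50 Hz-100 Hz.

```python
import numpy as np

def upconvert_infrasound(samples: np.ndarray, sample_rate: int, factor: int = 10):
    """Shift an infrasound recording into the audible band by time
    compression: the samples are unchanged, but tagging the stream with
    a playback rate `factor` times higher raises every frequency
    component by `factor` (e.g. 5 Hz becomes 50 Hz)."""
    return samples, sample_rate * factor

# A 5 Hz tone recorded at 1 kHz becomes a 50 Hz tone at 10 kHz playback.
rate = 1000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 5 * t)
shifted, playback_rate = upconvert_infrasound(tone, rate)
```

Note that the content also plays `factor` times faster, which is one reason the up-converted sound is not the original sound.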
  • the battery in the stethoscope 100 is used to power each module.
  • the switch is used to turn a function on or off for one or more modules in the stethoscope. For example, when the switch is turned on, the microphone collects the signal, and when the switch is turned off, the microphone stops collecting the signal.
  • the stethoscope data processing method of the embodiment of the present disclosure processes the detection data collected by the stethoscope through an electronic device or a cloud server, helping the user confirm the auscultation location and obtaining a more accurate detection result; acquiring the detection result from both the infrasound data and the audible sound data improves the accuracy and comprehensiveness of the detection.
  • the stethoscope data processing method is applied to an electronic device or a cloud server, and includes the following steps:
  • in step 201, a target detection object input by the user is received.
  • the user can input the target detection object through the electronic device.
  • the target detection object can be an organ such as a heart, a lung, a stomach, or the like.
  • the electronic device can transmit the target detection object input by the user to the cloud server, so that the cloud server can receive the target detection object input by the user.
  • in step 202, it is determined whether the auscultation position of the stethoscope corresponds to the target detection object based on the detection data collected by the stethoscope.
  • the detection data collected by the stethoscope can be transmitted to the electronic device through wired or wireless means after internal signal processing, and the electronic device determines whether the auscultation position corresponds to the target detection object.
  • the detection data collected by the stethoscope from the auscultation location may include sound signals of one or more detection objects; by sorting the sound pressure levels of these sound signals, it can be determined whether the auscultation position corresponds to the target detection object.
  • the electronic device or the cloud server may determine, according to the stored organ standard sound signal library and the detection data, the sound pressure level order of each detected object in the superimposed signal; for example, the sound pressure levels may be ordered, from highest to lowest, as stomach, heart, lungs, and so on.
  • when the sound pressure level of the target detection object is the highest, the auscultation position corresponds to the target detection object; if the sound pressure level of the target detection object is not the highest, the auscultation position does not correspond to the target detection object.
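This placement check can be sketched as follows (illustrative; in practice the per-object sound pressure levels would be estimated by matching the superimposed signal against the stored organ standard sound signal library):

```python
def auscultation_position_ok(spl_db: dict, target: str) -> bool:
    """Rank detected objects by sound pressure level, highest first,
    and report whether the target object dominates the signal."""
    ranking = sorted(spl_db, key=spl_db.get, reverse=True)
    return ranking[0] == target

# Example ordering from the text: stomach, then heart, then lungs.
spl = {"stomach": 62.0, "heart": 55.0, "lung": 48.0}
```

Here `auscultation_position_ok(spl, "stomach")` succeeds, while asking for `"heart"` would trigger the prompt to move the auscultation head.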
  • This step can be performed by an electronic device or a cloud server.
  • in step 203, when the auscultation position does not correspond to the target detection object, the prompt information is output.
  • in order to assist the user in placing the auscultation head in the correct position to accurately detect the target detection object, the prompt information is output when the auscultation position does not correspond to the target detection object.
  • the prompt information may include the sound pressure level ordering result and/or a prompt indicating a placement error, and the like.
  • when this step is performed by the electronic device, the electronic device may directly output the prompt information; when the step is performed by the cloud server, the cloud server may send the prompt information to the electronic device for display.
  • the user can adjust the auscultation location.
  • for example, if the current sound pressure level ordering is stomach, then heart, and finally lung, and the target detection object input by the user is the stomach, there is no need to output a prompt message for prompting the user to adjust the auscultation position; the detection data of the current auscultation location will be continuously sent to the electronic device or the cloud server for subsequent judgment of the detection result.
  • if the target detection object input by the user is the heart, the position of the auscultation head is inaccurate and offset, and prompt information for prompting the user to adjust the auscultation position is output; the user adjusts the auscultation position according to the prompt information; after the position of the stethoscope is adjusted, the sound pressure levels of the newly collected detection data are re-ordered, and whether the adjusted auscultation position corresponds to the target detection object is determined according to the new ordering.
  • the sound pressure level ordering of the detected data is updated in real time.
  • the target detection object input by the user may be one or more.
  • when there are at least two target detection objects, it is determined whether their sound pressure levels occupy the top positions of the sorting result. For example, if the user selects two target detection objects, the placement of the stethoscope is determined to be correct when the two targets occupy the first and second positions of the sorting result, in either order.
  • alternatively, after inputting the target detection object, the user may directly determine whether the auscultation position is correct by checking, in the output sound pressure level ranking of the detection data, whether the sound pressure level of the target detection object is the highest.
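The multi-target variant above can be sketched the same way (illustrative names; the order among the top positions is deliberately ignored, as in the text):

```python
def multi_target_position_ok(spl_db: dict, targets: list) -> bool:
    """Placement is correct when the selected target objects occupy the
    top len(targets) ranks of the sound pressure level ordering,
    in any order."""
    ranking = sorted(spl_db, key=spl_db.get, reverse=True)
    return set(ranking[:len(targets)]) == set(targets)

spl = {"heart": 60.0, "lung": 58.0, "stomach": 40.0}
```

With these levels, selecting heart and lung (in either order) passes, while selecting heart and stomach fails because the stomach ranks third.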
  • the determination of the detection result may be performed based on the detection data.
  • FIG. 3 is a schematic flowchart of obtaining a detection result according to an embodiment of the present disclosure.
  • in step 301, when the auscultation location corresponds to the target detection object, similarity matching is performed between the detection data including the target detection object and the first pre-stored data.
  • the similarity matching can be performed according to the detection data.
  • the first pre-stored data, which records correspondences between a plurality of detection data and detection results, is stored in advance. After the target detection object is determined, similarity matching may be performed between the detection data corresponding to the target detection object and the first pre-stored data.
  • the similarity match can include the following:
  • Method 1: the first pre-stored data is in the form of a waveform, and similarity matching is performed with the detection data, which is also in waveform form. The similarity matching can use similarity matching methods from image processing.
  • a waveform similarity threshold can be set, so that it can be determined whether the similarity is above or below the preset threshold.
  • Method 2: the first pre-stored data is a set of numerical values, and numerical features are extracted from the detection data to compare the similarity between the two sets of values.
  • a data fluctuation range may be set, so that, depending on whether the fluctuation of the detection data falls within the preset range, the similarity between the two can be determined to be above or below the preset threshold.
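Both methods can be sketched as follows (a rough illustration; normalized cross-correlation stands in for the unspecified image-processing similarity measure, and the numeric feature extraction is assumed to have happened upstream):

```python
import numpy as np

def waveform_similarity(detected: np.ndarray, stored: np.ndarray) -> float:
    """Method 1: score two equal-length waveforms with normalized
    cross-correlation; the result lies in [-1, 1], with 1 meaning
    identical shape."""
    d = (detected - detected.mean()) / (detected.std() + 1e-12)
    s = (stored - stored.mean()) / (stored.std() + 1e-12)
    return float(np.mean(d * s))

def within_fluctuation_range(values, low, high) -> bool:
    """Method 2: numeric features extracted from the detection data are
    matched by checking that they stay inside the preset fluctuation
    range."""
    return all(low <= v <= high for v in values)
```

Either score is then compared against the preset threshold to decide whether the match condition is met.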
  • in step 302, when the result of the similarity matching meets the preset condition, the detection result of the target detection object is acquired according to the information corresponding to the first pre-stored data.
  • the detection data includes the infrasound data and the audible sound data.
  • if the similarities between both the infrasound data and the audible sound data and the first pre-stored data are below a preset threshold, the first detection result is used as the detection result.
  • the first detection result may be that the user is in a healthy state.
  • the first target data may be, for example, data of a disease A. That is, if the similarity between the infrasound data and the disease A data exceeds a preset threshold, and the similarity between the audible sound data and the disease A data also exceeds a preset threshold (the two preset thresholds may be the same value or different values), a first probability is obtained from the infrasound similarity and a second probability from the audible sound similarity, and the two probabilities are weighted to obtain a third probability.
  • weight values may be set for the first probability and the second probability, for example, the weight values are 0.4 and 0.6, respectively.
  • the final detection result is that the probability that the user has disease A (i.e., the information corresponding to the first target data) is the third probability.
  • if the similarity between one of the infrasound data and the audible sound data and the second target data in the first pre-stored data exceeds the preset threshold, while the similarity between the other and the first pre-stored data is below the preset threshold, the information corresponding to the second target data is used as the detection result.
  • for example, the detection result is then disease A (i.e., the information corresponding to the second target data).
  • when the similarity between the infrasound data and the third target data in the first pre-stored data is higher than a preset threshold, and the similarity between the audible sound data and the fourth target data in the first pre-stored data is higher than the preset threshold, the information corresponding to the third target data and the information corresponding to the fourth target data are together used as the detection result. That is, if the similarity between the infrasound data and disease A exceeds a preset threshold, and the similarity between the audible sound data and disease B exceeds a preset threshold, disease A (the information corresponding to the third target data) and disease B (the information corresponding to the fourth target data) are together used as the detection result.
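The cases above, together with the example 0.4/0.6 weighting, can be sketched as follows (illustrative; how a per-band similarity is turned into a per-band probability is not specified in the text and is assumed to be given):

```python
def combined_probability(p_infrasound: float, p_audible: float,
                         w_infrasound: float = 0.4, w_audible: float = 0.6) -> float:
    """Weight the first probability (from the infrasound match) and the
    second probability (from the audible match) into the reported third
    probability. The 0.4/0.6 defaults follow the example in the text."""
    return w_infrasound * p_infrasound + w_audible * p_audible

def detection_result(sim_infrasound: dict, sim_audible: dict, threshold: float):
    """Map per-band similarity scores against stored disease data to a
    result: if no band matches any entry, report healthy; otherwise
    report every matched disease (a disease matched by both bands is
    reported once)."""
    matched = {d for d, s in sim_infrasound.items() if s > threshold}
    matched |= {d for d, s in sim_audible.items() if s > threshold}
    return sorted(matched) if matched else ["healthy"]
```

For instance, an infrasound match on disease A plus an audible match on disease B yields both diseases together, as described above.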
  • when the detection result of the target detection object needs to be combined with the detection data of another detection object associated with the target detection object, prompt information including the information of the associated detection object is displayed.
  • the user moves the stethoscope to the corresponding position according to the information of the associated detection object; when the sound pressure level of the associated detection object is the highest, similarity matching is performed on its detection data, and the detection result of the associated detection object is acquired.
  • in step 303, the detection result is displayed.
  • the detection result can be displayed by the electronic device. If the matching process is performed by the cloud server, the cloud server can send the detection result to the electronic device for display.
  • the auscultation process can be guided by the electronic device or the cloud server so that an accurate auscultation location is determined, and the detection data can be matched with the stored data by the electronic device or the cloud server to obtain the detection result.
  • the detection data can be processed through the electronic device or the cloud server, so that an ordinary user can accurately know the detection result; on the other hand, the data of the infrasound band and the audible band can be combined for diagnosis at the same time, so that accuracy can be further improved. Between the electronic device and the stethoscope, wired and wireless connections can be used together, which keeps the stethoscope powered at any time and allows data transmission to continue seamlessly over the wireless link when the cable is temporarily disconnected (for example, for convenience of operation), enhancing the user experience.
  • the electronic device or the cloud server may receive confirmation information returned by the user according to the displayed detection result, the confirmation information including the diagnosis result of the target detection object.
  • if the diagnosis result is inconsistent with the detection result, the diagnosis result is associated with the detection data of the target detection object and stored. Thereby, the data stored by the electronic device and the cloud server can be updated and continuously learned from, improving the accuracy of subsequent detection.
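A minimal sketch of this feedback step (the data structures are assumptions; a real system would fold the corrected example back into the pre-stored matching library):

```python
def record_confirmed_diagnosis(store: dict, detection_data, detection_result, diagnosis):
    """When the user's confirmed diagnosis disagrees with the automatic
    detection result, file the detection data under the diagnosis so
    future similarity matching can draw on the corrected example."""
    if diagnosis != detection_result:
        store.setdefault(diagnosis, []).append(detection_data)
    return store

library = {}
record_confirmed_diagnosis(library, [0.1, 0.4, 0.2], "healthy", "disease A")
```

A diagnosis that agrees with the detection result leaves the store untouched; only disagreements add new training examples.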
  • the electronic device can perform waveform display in real time after processing the detection data of the target detection object.
  • the infrasound data can also be up-converted and output through a preset sound output device (for example, a speaker) so that the user can hear the detection data of the infrasound band.
  • when processing is performed by the cloud server, the processed data is sent to the electronic device for real-time waveform display, and the infrasound data is up-converted by the cloud server and then sent to the electronic device, to be output from the speaker of the electronic device so that the user can hear the detection data of the infrasound band.
  • the stethoscope data processing method of the embodiment of the present disclosure may be executed by an electronic device or a cloud server. When executed by the cloud server, the cloud server may send information for prompting and information for display to the electronic device for prompting and display.
  • an embodiment of the present disclosure further provides a stethoscope data processing apparatus.
  • the device can be applied to an electronic device or a server.
  • the cloud server can send information for prompting and information for display to the electronic device for prompting and displaying.
  • the stethoscope data processing apparatus 500 includes:
  • the receiving module 501 is configured to receive a target detection object input by the user
  • the determining module 502 is configured to determine, according to the detection data collected by the stethoscope, whether the auscultation location of the stethoscope corresponds to the target detection object;
  • the prompt information output module 503 is configured to output prompt information when the auscultation location does not correspond to the target detection object.
  • the determining module 502 includes:
  • a sorting sub-module configured to sort sound pressure levels of sound signals of one or more detection objects included in the detection data
  • the auscultation location determining sub-module is configured to determine whether the auscultation location corresponds to the target detection object based on the sorted result.
  • the apparatus further includes:
  • the matching module 504 is configured to perform similarity matching between the detection data including the target detection object and the first pre-stored data when the auscultation location corresponds to the target detection object;
  • the detection result obtaining module 505 is configured to acquire the detection result of the target detection object according to the information corresponding to the first pre-stored data when the result of the similarity matching meets the preset condition;
  • the display module 506 is configured to display the detection result.
  • the detection data of the target detection object includes: infrasound wave data and audible sound wave data.
  • the detection result acquisition sub-module 505 is configured to: if the similarities between both the infrasound data and the audible sound data and the first pre-stored data are below a preset threshold, use the first detection result as the detection result.
  • the first detection result is that the user is in a healthy state.
  • the detection result acquisition sub-module 505 is configured to: if the similarities between both the infrasound data and the audible sound data and the first target data in the first pre-stored data exceed a preset threshold, determine a first probability according to the similarity between the infrasound data and the first target data, determine a second probability according to the similarity between the audible sound data and the first target data, weight the first probability and the second probability to obtain a third probability, and obtain the detection result according to the third probability and the information corresponding to the first target data.
  • the detection result acquisition sub-module 505 is configured to: if the similarity between one of the infrasound data and the audible sound data and the second target data in the first pre-stored data exceeds the preset threshold, while the similarity between the other and the first pre-stored data is below the preset threshold, use the information corresponding to the second target data as the detection result.
  • the detection result acquisition sub-module 505 is configured to: if the similarity between the infrasound data and the third target data in the first pre-stored data is higher than a preset threshold, and the similarity between the audible sound data and the fourth target data in the first pre-stored data is higher than the preset threshold, use the information corresponding to the third target data and the information corresponding to the fourth target data as the detection result.
  • the apparatus 500 further includes:
  • the prompting module 507 is configured to, when the detection result of the target detection object needs to be combined with the detection data of a detection object associated with the target detection object, display prompt information including the information of the associated detection object.
  • the apparatus 500 further includes:
  • the receiving module 508 is configured to receive confirmation information returned by the user according to the detection result, where the confirmation information includes: a diagnosis result of the target detection object;
  • the storage module 509 is configured to associate and store the diagnosis result with the detection data of the target detection object if the diagnosis result is inconsistent with the detection result.
  • the apparatus 500 further includes:
  • the waveform display module 510 is configured to perform real-time waveform display after the detection data of the target detection object is processed.
  • the apparatus 500 further includes:
  • the up-conversion module 511 is configured to up-convert the infrasound data and output it through a preset sound output device.
  • FIG. 6 is a block diagram of an apparatus 600 for the stethoscope data processing method, which may be an electronic device or a server, according to an exemplary embodiment.
  • the apparatus 600 can include a processor 601, a memory 602, and a communication component 605.
  • the processor 601 is configured to control the overall operation of the apparatus 600 to complete all or part of the steps in the above stethoscope data processing method.
  • the memory 602 is used to store an operating system and various types of data supporting operation of the apparatus 600, which may include instructions for any application or method operating on the apparatus 600, as well as application-related data.
  • the memory 602 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
  • Communication component 605 is used for wired or wireless communication between the device 600 and other devices.
  • the wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G or 4G, or a combination of one or more of them; accordingly, the communication component 605 can include a Wi-Fi module, a Bluetooth module, and an NFC module.
  • the apparatus 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above stethoscope data processing method.
  • the multimedia component 603 can include a screen and an audio component.
  • the screen may be, for example, a touch screen, and the audio component is used to output and/or input an audio signal.
  • the audio component can include a microphone for receiving an external audio signal.
  • the received audio signal may be further stored in memory 602 or transmitted via communication component 605.
  • the audio component also includes at least one speaker for outputting an audio signal.
  • the I/O interface 604 provides an interface between the processor 601 and other interface modules.
  • the other interface modules may be keyboards, mice, buttons, and the like. These buttons can be virtual buttons or physical buttons.
  • a computer program product comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described stethoscope data processing method when executed by the programmable apparatus.
  • a non-transitory computer-readable storage medium comprising instructions, such as the memory 602 comprising instructions executable by the processor 601 of the apparatus 600 to perform the above-described stethoscope data processing method.
  • the non-transitory computer-readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
  • any process or method description in a flowchart or otherwise described in the embodiments of the present disclosure may be understood to represent a module, segment, or portion of code that includes one or more executable instructions for implementing particular logical functions or steps of the process, and the scope of the embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in reverse order depending on the functions involved, as should be understood by those skilled in the technical field of the embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computing Systems (AREA)
  • Pathology (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

Disclosed are a stethoscope data processing method and apparatus, an electronic device, and a cloud server. The method comprises: receiving a target detection object input by a user; determining, according to detection data collected by the stethoscope, whether the auscultation position of the stethoscope corresponds to the target detection object; and outputting prompt information when the auscultation position does not correspond to the target detection object. In the present disclosure, the detection data collected by the stethoscope is processed by an electronic device or a cloud server, helping the user confirm the auscultation position and obtain a more accurate detection result; moreover, the detection result can be obtained from both infrasound data and audible sound data, improving the accuracy and comprehensiveness of detection.

Description

Stethoscope data processing method and apparatus, electronic device, and cloud server — Technical Field
The present disclosure relates to the field of signal processing technologies, and in particular, to a stethoscope data processing method and apparatus, an electronic device, and a cloud server.
Background
The stethoscope is the most commonly used diagnostic instrument. Most current stethoscopes are traditional air-conduction stethoscopes, which transmit sound directly to the ears through hollow tubes.
However, with existing stethoscopes, the signals collected by the stethoscope are judged by a human, and the result is therefore strongly affected by human factors; for example, the human auditory system cannot detect or judge signals in the infrasound band. On the other hand, the position where the stethoscope is placed directly affects the signals the stethoscope collects and, consequently, the accuracy of the detection result.
In summary, stethoscopes in the related art cannot guarantee detection accuracy.
Summary
The present disclosure provides a stethoscope data processing method and apparatus, an electronic device, and a cloud server, mainly to overcome the problems existing in the related art.
A first aspect of the present disclosure provides a stethoscope data processing method, including:
receiving a target detection object input by a user;
determining, according to detection data collected by the stethoscope, whether the auscultation position of the stethoscope corresponds to the target detection object; and
outputting prompt information when the auscultation position does not correspond to the target detection object.
A second aspect of the present disclosure provides a stethoscope data processing apparatus, including:
a receiving module configured to receive a target detection object input by a user;
a determining module configured to determine, according to detection data collected by the stethoscope, whether the auscultation position of the stethoscope corresponds to the target detection object; and
a prompt information output module configured to output prompt information when the auscultation position does not correspond to the target detection object.
A third aspect of the present disclosure provides a computer program product, which contains a computer program executable by a programmable apparatus, the computer program having code portions for performing the above auscultation data processing method when executed by the programmable apparatus.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium, which includes one or more programs for performing the above auscultation data processing method.
A fifth aspect of the present disclosure provides an electronic device, including:
the above non-transitory computer-readable storage medium; and
one or more processors configured to execute the programs in the non-transitory computer-readable storage medium.
A sixth aspect of the present disclosure provides a cloud server, including:
the above non-transitory computer-readable storage medium; and
one or more processors configured to execute the programs in the non-transitory computer-readable storage medium.
In the present disclosure, an electronic device or a cloud server guides the auscultation process and determines an accurate auscultation position, and the electronic device or cloud server can match the detection data against stored data to obtain a detection result, enabling joint diagnosis of multiple organs. By having the electronic device or cloud server process the detection data, ordinary users can also accurately obtain detection results. Furthermore, data in the infrasound band and the audible band can be combined for simultaneous diagnosis, further improving accuracy. Between the electronic device and the stethoscope, wired and wireless connections are used together, so that the stethoscope side remains powered at all times, and when the cable temporarily needs to be disconnected (for example, for ease of operation), data transmission continues seamlessly over the wireless link, improving the user experience. The electronic device and the cloud server can update the stored data and keep learning, improving detection accuracy.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Other features and advantages of the present disclosure will be described in detail in the following detailed description.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
FIG. 1 is a schematic diagram of a network architecture according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of an auscultation data processing method according to an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of obtaining a detection result according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of interaction between an electronic device and a cloud server according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a stethoscope data processing apparatus according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an apparatus for an auscultation data processing method according to an exemplary embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are merely intended to illustrate and explain the present disclosure, and are not intended to limit it.
Referring to FIG. 1, which is a schematic diagram of a network architecture according to an embodiment of the present disclosure.
In the embodiments of the present disclosure, the stethoscope 100, the electronic device 200, and the cloud server 300 are communicatively connected through a network 400. The detection data collected by the stethoscope 100 can be transmitted to the electronic device 200 or to the cloud server 300, or can first be transmitted from the stethoscope 100 to the electronic device 200 and then from the electronic device 200 to the cloud server 300. The processing of the detection data by the electronic device 200 and the cloud server 300 is described in detail later.
Referring to FIG. 1, the stethoscope 100 includes a chest piece, a sound output module, a signal processing module, a communication module, and the like, where the chest piece includes a microphone. In one embodiment, the signal processing module includes a low-pass filter, a noise-reduction circuit, an amplification circuit, a high-pass filter circuit, an AD conversion circuit, an infrasound up-conversion circuit, a communication interface, a wireless transmission module, a volume adjustment module, a switch, a battery, and the like.
The microphone collects detection data of the detection object at the auscultation position; the detection data includes audible sound data (i.e., data in the audible band) and infrasound data (i.e., data in the infrasound band). After the audible sound data in the detection data is amplified by the amplification circuit, the sound output module outputs the detection data collected by the microphone; the sound output module may include in-ear earphones.
The communication interface may be a USB interface or another interface capable of transmitting digital signals; it connects to the electronic device via a cable, so that the detection data can be transmitted to the electronic device. The wireless transmission module may be a Bluetooth module, a Wi-Fi module, or the like, and wirelessly transmits the detection data to the electronic device, the cloud server, and so on.
The AD conversion circuit converts the infrasound data and the audible sound data into digital sound signals, generating either two lossless PCM streams or a single lossless PCM stream for transmission. In addition, before the lossless PCM stream is generated, the signal collected by the microphone is also denoised by the noise-reduction circuit and filtered by the low-pass and high-pass filter circuits. The lossless PCM stream can be transmitted to the electronic device through the communication interface, or to the electronic device or the cloud server through the wireless transmission module.
In an embodiment of the present disclosure, when the stethoscope is in the working state, the wireless transmission module remains connected, ensuring uninterrupted signal transmission through the wireless transmission module if transmission over the communication interface is interrupted. Since the stethoscope can transmit signals over either a wired or a wireless link, the transmission mode can be preset; for example, when both a wireless and a wired connection exist between the stethoscope and the electronic device, the wired link is preferred, and when the wired connection is disconnected, signals are transmitted over the wireless connection. In one embodiment, the electronic device can also supply power to the stethoscope through the communication interface.
The infrasound up-conversion circuit up-converts the signals in the infrasound band of the detection data so that they can be heard. In one embodiment, for the infrasound band, a non-original sound can be heard by means of up-conversion. The infrasound produced by human organs lies in the 5 Hz-10 Hz band; by shortening the time domain, its frequency is raised by a factor of 10 (or some other factor) to 50 Hz-100 Hz, which the human ear can hear.
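The time-domain shortening described above can be sketched as follows. This is an illustrative reconstruction rather than the disclosed circuit; the 1 kHz sampling rate and the 7 Hz test tone are assumed values:

```python
import numpy as np

def upshift_by_time_compression(x, factor):
    # Keep every `factor`-th sample and play the result back at the
    # original rate: the signal lasts 1/factor as long and every
    # frequency component is multiplied by `factor`.
    return x[::factor]

fs = 1000                       # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)    # 10 s of signal
x = np.sin(2 * np.pi * 7 * t)   # 7 Hz tone standing in for organ infrasound

y = upshift_by_time_compression(x, 10)

# Find the dominant frequency of the compressed signal via an FFT.
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
peak = freqs[np.argmax(spec)]
print(int(peak))  # 70: the 7 Hz tone now lies in the audible band
```

Raising 5 Hz-10 Hz by a factor of 10 lands the band at 50 Hz-100 Hz, consistent with the range given in the text; in a real device the compressed stream would be sent to the speaker rather than an FFT.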
The battery in the stethoscope 100 supplies power to the modules. The switch turns the functions of one or more modules in the stethoscope on or off; for example, when the switch is on, the microphone collects signals, and when the switch is off, the microphone stops collecting signals.
In the stethoscope data processing method of the embodiments of the present disclosure, the detection data collected by the stethoscope is processed by an electronic device or a cloud server, helping the user confirm the auscultation position and obtain a more accurate detection result; moreover, the detection result can be obtained from both infrasound data and audible sound data, improving the accuracy and comprehensiveness of detection.
Referring to FIG. 2, which is a schematic flowchart of an auscultation data processing method according to an embodiment of the present disclosure. The auscultation data processing method is applied to an electronic device or a cloud server and includes the following steps:
In step 201, a target detection object input by a user is received.
The user can input the target detection object through the electronic device; for example, the target detection object may be an organ such as the heart, lungs, or stomach. The electronic device can send the target detection object input by the user to the cloud server, so that the cloud server receives it.
In step 202, it is determined, according to the detection data collected by the stethoscope, whether the auscultation position of the stethoscope corresponds to the target detection object.
As described above, after internal signal processing, the detection data collected by the stethoscope can be transmitted to the electronic device over a wired or wireless link, and the electronic device determines whether the auscultation position corresponds to the target detection object.
In the embodiments of the present disclosure, the detection data collected by the stethoscope from the auscultation position (i.e., the position where the chest piece is placed) may include sound signals of one or more detection objects; by sorting the sound pressure levels of these sound signals, it can be determined whether the auscultation position corresponds to the target detection object. In one embodiment, the electronic device or the cloud server can determine, based on a stored library of standard organ sound signals and the detection data, the sound-pressure-level ranking of each detection object in the superimposed signal; for example, the ranking may be stomach, heart, lungs, and so on.
In one embodiment, when the sound pressure level of the target detection object is the highest, the auscultation position corresponds to the target detection object; if the sound pressure level of the target detection object is not the highest, the auscultation position does not correspond to the target detection object.
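The ranking decision can be sketched as below. The separation of the collected signal into per-organ components (modeled here as a dictionary of already-separated signals) and the example amplitudes are assumptions for illustration; the disclosure only specifies that sound pressure levels are sorted and that the target must rank highest:

```python
import numpy as np

def spl_db(x, ref=1.0):
    # Sound pressure level of a signal, in dB relative to `ref` (RMS).
    rms = np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms / ref)

def position_matches_target(organ_signals, target):
    # Sort detection objects by descending sound pressure level; the
    # auscultation position corresponds to the target only if the
    # target ranks highest.
    ranking = sorted(organ_signals,
                     key=lambda name: spl_db(organ_signals[name]),
                     reverse=True)
    return ranking, ranking[0] == target

rng = np.random.default_rng(0)
signals = {  # hypothetical separated components; the stomach is loudest
    "stomach": 0.5 * rng.standard_normal(1000),
    "heart": 0.2 * rng.standard_normal(1000),
    "lung": 0.1 * rng.standard_normal(1000),
}
ranking, ok = position_matches_target(signals, "heart")
print(ranking, ok)  # the stomach outranks the heart, so a prompt is needed
```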
This step may be performed by the electronic device or by the cloud server.
In step 203, when the auscultation position does not correspond to the target detection object, prompt information is output.
In the embodiments of the present disclosure, to help the user place the chest piece at the correct position for accurate detection of the target detection object, prompt information is output when the auscultation position does not correspond to the target detection object. In one embodiment, the prompt information may include the sound-pressure-level ranking result and/or a notice that the placement position is wrong.
When this step is performed by the electronic device, the electronic device can directly output the prompt information; when it is performed by the cloud server, the cloud server can send the prompt information to the electronic device for display.
The user can adjust the auscultation position according to the output prompt information. In one embodiment, if the highest current sound pressure level belongs to the stomach, followed by the heart and then the lungs, and the target detection object input by the user is the stomach, there is no need to output prompt information asking the user to adjust the auscultation position; the detection data at the current auscultation position will continue to be sent to the electronic device or the cloud server for subsequent determination of the detection result. If, however, the target detection object input by the user is the heart, the chest piece has been placed inaccurately and is offset, and prompt information asking the user to adjust the auscultation position is output. The user adjusts the auscultation position according to the prompt information; after the position of the stethoscope is adjusted, the sound pressure levels of the collected detection data are sorted again, and it is determined from the new ranking whether the adjusted auscultation position corresponds to the target detection object.
In the embodiments of the present disclosure, the sound-pressure-level ranking of the detection data is updated in real time. The user may input one or more target detection objects; when at least two target detection objects are input, it suffices to determine whether the sound pressure levels of the at least two target detection objects occupy the corresponding top positions in the ranking. For example, if the user selects two target detection objects and their ranking results are first and second (in either order), the placement position of the stethoscope is determined to be correct.
In some embodiments, the user may also skip inputting a target detection object and instead determine whether the auscultation position is correct directly from the output sound-pressure-level ranking, by checking whether the sound pressure level of the target detection object is the highest.
In the embodiments of the present disclosure, once the auscultation position is determined to be correct, the detection result can be determined from the detection data.
Referring to FIG. 3, which is a schematic flowchart of obtaining a detection result according to an embodiment of the present disclosure.
In step 301, when the auscultation position corresponds to the target detection object, similarity matching is performed between the detection data including the target detection object and first pre-stored data.
When the auscultation position corresponds to the target detection object and the detection data has stabilized, similarity matching can be performed on the detection data.
In the embodiments of the present disclosure, the electronic device or the cloud server stores first pre-stored data that includes correspondences between multiple pieces of detection data and detection results. Once the target detection object is determined, the detection data corresponding to the target can be matched for similarity against the first pre-stored data.
In one embodiment, the similarity matching may be done in the following ways:
Approach 1: the first pre-stored data is in waveform form, and similarity matching is performed against the detection data, which is likewise in waveform form. The similarity matching can use similarity-matching methods from image processing. In one embodiment, a waveform similarity threshold can be set, so that it can be determined whether the similarity is above or below the preset threshold.
Approach 2: the first pre-stored data consists of multiple numerical values; numerical values are extracted from the detection data, and the similarity between the two sets of values is compared. In one embodiment, a fluctuation range can be set for the data, so that whether the similarity of the two is above or below the preset threshold can be determined from whether the fluctuation range of the detection data falls within the preset range.
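The waveform-matching approach can be sketched with a normalized cross-correlation peak, one common similarity measure borrowed from image and signal matching; the metric itself and the 0.8 threshold are illustrative assumptions, since the text does not fix a specific formula:

```python
import numpy as np

def waveform_similarity(a, b):
    # Peak of the normalized cross-correlation of two equal-length
    # waveforms; 1.0 means a perfect (possibly time-shifted) match.
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

t = np.linspace(0, 1, 500)
template = np.sin(2 * np.pi * 5 * t)        # stored reference waveform
measured = np.sin(2 * np.pi * 5 * t + 0.3)  # slightly shifted measurement

THRESHOLD = 0.8  # assumed similarity threshold
sim = waveform_similarity(measured, template)
print(sim > THRESHOLD)  # the shifted copy still matches the template
```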
In step 302, when the result of the similarity matching satisfies a preset condition, the detection result of the target detection object is obtained according to the information corresponding to the first pre-stored data.
As described above, in the embodiments of the present disclosure, the detection data includes infrasound data and audible sound data.
In one embodiment, if the similarities of both the infrasound data and the audible sound data to the first pre-stored data are below the preset threshold, a first detection result is taken as the detection result. The first detection result may be that the user is in a healthy state.
In one embodiment, a first probability is determined from the similarity between the infrasound data and a first target datum in the first pre-stored data, and a second probability is determined from the similarity between the audible sound data and the first target datum; the first probability and the second probability are weighted to obtain a third probability; and the detection result is obtained from the third probability and the information corresponding to the first target datum. In one embodiment, if the similarities of both the infrasound data and the audible sound data to the first target datum in the first pre-stored data exceed the preset thresholds, that is, the first target datum may be the data of a certain disease A, the similarity of the infrasound data to disease A exceeds a preset threshold, and the similarity of the audible sound data to disease A also exceeds a preset threshold (this threshold may be the same as or different from the threshold used for the infrasound data), then the first probability, obtained from the infrasound similarity, and the second probability, obtained from the audible-sound similarity, are weighted to obtain a third probability. Different weights can be assigned to the first and second probabilities during weighting, for example 0.4 and 0.6. The final detection result is that the probability of the user having disease A (i.e., the information corresponding to the first target datum) is the third probability.
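The weighting step reduces to a convex combination of the two per-band probabilities. A minimal sketch, using the 0.4/0.6 example weights from the text; the input probabilities are hypothetical:

```python
def fuse_probabilities(p_infra, p_audible, w_infra=0.4, w_audible=0.6):
    # Third probability = weighted sum of the probability derived from
    # infrasound similarity and the one derived from audible-sound
    # similarity (example weights 0.4 and 0.6 from the text).
    return w_infra * p_infra + w_audible * p_audible

# e.g. infrasound matching suggests disease A with probability 0.7 and
# audible-sound matching suggests it with probability 0.9:
p = fuse_probabilities(0.7, 0.9)
print(round(p, 2))  # 0.82
```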
In one embodiment, if the similarity of one of the infrasound data and the audible sound data to a second target datum in the first pre-stored data is above the preset threshold, while the similarities of the other to the first pre-stored data are all below the preset threshold, the information corresponding to the second target datum is taken as the detection result. That is, if one of the infrasound data and the audible sound data indicates no disease (its similarities to the first pre-stored data are all below the preset threshold) while the other indicates disease A (its similarity to the second target datum exceeds the preset threshold), the detection result is disease A (the information corresponding to the second target datum).
In one embodiment, if the similarity of the infrasound data to a third target datum in the first pre-stored data is above the preset threshold, and the similarity of the audible sound data to a fourth target datum in the first pre-stored data is above the preset threshold, the information corresponding to the third target datum and the information corresponding to the fourth target datum are together taken as the detection result. That is, if the similarity of the infrasound data to disease A exceeds the preset threshold and the similarity of the audible sound data to disease B exceeds the preset threshold, disease A (the information corresponding to the third target datum) and disease B (the information corresponding to the fourth target datum) are both taken as the detection result.
In one embodiment, when detection of the target detection object needs to be combined with the detection data of another detection object associated with the target detection object, prompt information including the information of the detection object associated with the target detection object is displayed. Based on the information of the associated detection object, the user moves the stethoscope to the corresponding position; when the sound pressure level of the associated detection object is the highest, similarity matching is performed on the detection data to obtain the detection result of the associated detection object. Accordingly, the step of obtaining the detection result of the target detection object according to the result of the similarity matching includes:
obtaining the detection result of the target detection object according to the result of matching the detection data of the target detection object against the first pre-stored data and the result of matching the detection data of the detection object associated with the target detection object against second pre-stored data.
In step 303, the detection result is displayed.
The detection result can be displayed by the electronic device. If the above matching process is performed by the cloud server, the cloud server can send the detection result to the electronic device for display.
In an embodiment of the present disclosure, since the electronic device or the cloud server can guide the auscultation process and determine an accurate auscultation position, and the electronic device or cloud server can match the detection data against the stored data to obtain a detection result, joint diagnosis of multiple organs is possible; by having the electronic device or cloud server process the detection data, ordinary users can also accurately obtain detection results; furthermore, data in the infrasound band and the audible band can be combined for simultaneous diagnosis, further improving accuracy; and between the electronic device and the stethoscope, wired and wireless connections are used together, so that the stethoscope side remains powered at all times, and when the cable temporarily needs to be disconnected (for example, for ease of operation), data transmission continues seamlessly over the wireless link, improving the user experience.
In one embodiment, to correct the detection results of the electronic device and the cloud server, confirmation information returned by the user according to the displayed detection result is received; the confirmation information includes the confirmed diagnosis of the target detection object. When the confirmed diagnosis is inconsistent with the detection result, the confirmed diagnosis is associated with the detection data of the target detection object and stored. In this way, the data stored by the electronic device and the cloud server can be updated for continual learning, improving the accuracy of subsequent detection.
In one embodiment, the electronic device can process the detection data of the target detection object and then display the waveform in real time. It can also up-convert the infrasound data and output it through a preset sound output device (for example, a speaker), so that the user can hear the detection data in the infrasound band. It should be understood that the cloud server may also process the detection data of the target detection object and send the processed data to the electronic device for real-time waveform display, and the cloud server may up-convert the infrasound data and send it to the electronic device to be output by the electronic device's speaker, so that the user can hear the detection data in the infrasound band. Referring to FIG. 4, the auscultation data processing method of the embodiments of the present disclosure can be performed by an electronic device or a cloud server; when performed by a cloud server, the cloud server can send the information for prompting and the information for display to the electronic device for prompting and display.
Referring to FIG. 5, correspondingly, an embodiment of the present disclosure further provides a stethoscope data processing apparatus. The apparatus can be applied to an electronic device or a server; when the method is performed by a cloud server, the cloud server can send the information for prompting and the information for display to the electronic device for prompting and display. The auscultation data processing apparatus 500 includes:
a receiving module 501 configured to receive a target detection object input by a user;
a determining module 502 configured to determine, according to the detection data collected by the stethoscope, whether the auscultation position of the stethoscope corresponds to the target detection object; and
a prompt information output module 503 configured to output prompt information when the auscultation position does not correspond to the target detection object.
In one embodiment, the determining module 502 includes:
a sorting submodule configured to sort the sound pressure levels of the sound signals of one or more detection objects included in the detection data; and
an auscultation position determining submodule configured to determine, according to the result of the sorting, whether the auscultation position corresponds to the target detection object.
In one embodiment, the apparatus further includes:
a matching module 504 configured to perform, when the auscultation position corresponds to the target detection object, similarity matching between the detection data including the target detection object and first pre-stored data;
a detection result obtaining module 505 configured to obtain, when the result of the similarity matching satisfies a preset condition, the detection result of the target detection object according to the information corresponding to the first pre-stored data; and
a display module 506 configured to display the detection result.
The detection data of the target detection object includes infrasound data and audible sound data.
In one embodiment, the detection result obtaining module 505 is configured to take a first detection result as the detection result if the similarities of both the infrasound data and the audible sound data to the first pre-stored data are below the preset threshold; the first detection result is that the user is in a healthy state.
In one embodiment, the detection result obtaining module 505 is configured to: if the similarities of both the infrasound data and the audible sound data to a first target datum in the first pre-stored data exceed the preset thresholds, determine a first probability from the similarity between the infrasound data and the first target datum in the first pre-stored data, and determine a second probability from the similarity between the audible sound data and the first target datum; weight the first probability and the second probability to obtain a third probability; and obtain the detection result from the third probability and the information corresponding to the first target datum.
In one embodiment, the detection result obtaining module 505 is configured to take the information corresponding to a second target datum in the first pre-stored data as the detection result if the similarity of one of the infrasound data and the audible sound data to the second target datum is above the preset threshold while the similarities of the other to the first pre-stored data are all below the preset threshold.
In one embodiment, the detection result obtaining module 505 is configured to take the information corresponding to a third target datum and the information corresponding to a fourth target datum as the detection result if the similarity of the infrasound data to the third target datum in the first pre-stored data is above the preset threshold and the similarity of the audible sound data to the fourth target datum in the first pre-stored data is above the preset threshold.
In one embodiment, the apparatus 500 further includes:
a prompt module 507 configured to display, when the detection result of the target detection object needs to be combined with the detection data of a detection object associated with the target detection object, prompt information including the information of the detection object associated with the target detection object.
In one embodiment, the apparatus 500 further includes:
a receiving module 508 configured to receive confirmation information returned by the user according to the detection result, the confirmation information including the confirmed diagnosis of the target detection object; and
a storage module 509 configured to associate the confirmed diagnosis with the detection data of the target detection object and store them if the confirmed diagnosis is inconsistent with the detection result.
In one embodiment, the apparatus 500 further includes:
a waveform display module 510 configured to process the detection data of the target detection object and then display the waveform in real time.
In one embodiment, the apparatus 500 further includes:
an up-conversion module 511 configured to up-convert the infrasound data and output it through a preset sound output device.
Regarding the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
FIG. 6 is a block diagram of an apparatus 600 for an auscultation data processing method according to an exemplary embodiment; the apparatus 600 may be an electronic device or a server. As shown, the apparatus 600 may include a processor 601, a memory 602, and a communication component 605.
The processor 601 controls the overall operation of the apparatus 600 to complete all or part of the steps of the above stethoscope data processing method. The memory 602 stores the operating system and various types of data to support the operation of the apparatus 600; such data may include, for example, instructions for any application or method operating on the apparatus 600, as well as application-related data. The memory 602 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The communication component 605 provides wired or wireless communication between the apparatus 600 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 605 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above stethoscope data processing method.
When the apparatus 600 is an electronic device, it further includes a multimedia component 603 and an I/O interface 604. The multimedia component 603 may include a screen and an audio component; the screen may be, for example, a touch screen, and the audio component outputs and/or inputs audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may be further stored in the memory 602 or sent through the communication component 605. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 604 provides an interface between the processor 601 and other interface modules, which may be a keyboard, a mouse, buttons, and the like; the buttons may be virtual buttons or physical buttons.
In another exemplary embodiment, a computer program product is also provided; the computer program product contains a computer program executable by a programmable apparatus, the computer program having code portions for performing the above stethoscope data processing method when executed by the programmable apparatus.
In another exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 602 including instructions, which are executable by the processor 601 of the apparatus 600 to complete the above stethoscope data processing method. Illustratively, the non-transitory computer-readable storage medium may be a ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, or the like.
Any process or method description in a flowchart, or otherwise described in the embodiments of the present disclosure, may be understood to represent a module, segment, or portion of code that includes one or more executable instructions for implementing particular logical functions or steps of the process, and the scope of the embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in reverse order depending on the functions involved, as should be understood by those skilled in the technical field of the embodiments of the present disclosure.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the present disclosure. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (23)

  1. A stethoscope data processing method, characterized by comprising:
    receiving a target detection object input by a user;
    determining, according to detection data collected by the stethoscope, whether an auscultation position of the stethoscope corresponds to the target detection object; and
    outputting prompt information when the auscultation position does not correspond to the target detection object.
  2. The method according to claim 1, wherein the step of determining, according to the detection data collected by the stethoscope, whether the auscultation position of the stethoscope corresponds to the target detection object comprises:
    sorting sound pressure levels of sound signals of one or more detection objects included in the detection data; and
    determining, according to a result of the sorting, whether the auscultation position corresponds to the target detection object.
  3. The method according to claim 2, wherein the step of determining, according to the result of the sorting, whether the auscultation position corresponds to the target detection object comprises:
    determining that the auscultation position corresponds to the target detection object when the sound pressure level of the target detection object is the highest.
  4. The method according to any one of claims 1 to 3, further comprising:
    when the auscultation position corresponds to the target detection object, performing similarity matching between detection data including the target detection object and first pre-stored data;
    when a result of the similarity matching satisfies a preset condition, obtaining a detection result of the target detection object according to information corresponding to the first pre-stored data; and
    displaying the detection result.
  5. The method according to claim 4, wherein the detection data of the target detection object comprises infrasound data and audible sound data.
  6. The method according to claim 5, wherein the step of obtaining, when the result of the similarity matching satisfies the preset condition, the detection result of the target detection object according to the information corresponding to the first pre-stored data comprises:
    determining a first probability according to a similarity between the infrasound data and a first target datum in the first pre-stored data, and determining a second probability according to a similarity between the audible sound data and the first target datum;
    weighting the first probability and the second probability to obtain a third probability; and
    obtaining the detection result according to the third probability and information corresponding to the first target datum.
  7. The method according to claim 5, wherein the step of obtaining, when the result of the similarity matching satisfies the preset condition, the detection result of the target detection object according to the information corresponding to the first pre-stored data comprises:
    if a similarity between one of the infrasound data and the audible sound data and a second target datum in the first pre-stored data is above a preset threshold, while similarities between the other and the first pre-stored data are all below the preset threshold, taking information corresponding to the second target datum as the detection result.
  8. The method according to claim 5, wherein the step of obtaining, when the result of the similarity matching satisfies the preset condition, the detection result of the target detection object according to the information corresponding to the first pre-stored data comprises:
    if a similarity between the infrasound data and a third target datum in the first pre-stored data is above a preset threshold, and a similarity between the audible sound data and a fourth target datum in the first pre-stored data is above a preset threshold, taking information corresponding to the third target datum and information corresponding to the fourth target datum as the detection result.
  9. The method according to claim 5, further comprising:
    when detection of the target detection object needs to be combined with detection data of another detection object associated with the target detection object, displaying prompt information including information of the detection object associated with the target detection object; and
    obtaining the detection result of the target detection object according to a result of matching the detection data of the target detection object against the first pre-stored data and a result of matching the detection data of the detection object associated with the target detection object against second pre-stored data.
  10. The method according to claim 5, further comprising:
    receiving confirmation information returned by the user according to the detection result, the confirmation information comprising a confirmed diagnosis of the target detection object; and
    if the confirmed diagnosis is inconsistent with the detection result, associating the confirmed diagnosis with the detection data of the target detection object and storing them.
  11. The method according to claim 5, further comprising:
    processing the detection data of the target detection object and then displaying a waveform in real time.
  12. The method according to claim 5, further comprising:
    up-converting the infrasound data and then outputting it through a preset sound output device.
  13. A stethoscope data processing apparatus, characterized by comprising:
    a receiving module configured to receive a target detection object input by a user;
    a determining module configured to determine, according to detection data collected by the stethoscope, whether an auscultation position of the stethoscope corresponds to the target detection object; and
    a prompt information output module configured to output prompt information when the auscultation position does not correspond to the target detection object.
  14. The apparatus according to claim 13, wherein the determining module comprises:
    a sorting submodule configured to sort sound pressure levels of sound signals of one or more detection objects included in the detection data; and
    an auscultation position determining submodule configured to determine, according to a result of the sorting, whether the auscultation position corresponds to the target detection object.
  15. The apparatus according to claim 13 or 14, further comprising:
    a matching module configured to perform, when the auscultation position corresponds to the target detection object, similarity matching between detection data including the target detection object and first pre-stored data;
    a detection result obtaining module configured to obtain, when a result of the similarity matching satisfies a preset condition, a detection result of the target detection object according to information corresponding to the first pre-stored data; and
    a display module configured to display the detection result.
  16. The apparatus according to claim 13, further comprising:
    a prompt module configured to display, when the detection result of the target detection object needs to be combined with detection data of another detection object associated with the target detection object, prompt information including information of the detection object associated with the target detection object.
  17. The apparatus according to claim 13, further comprising:
    a receiving module configured to receive confirmation information returned by the user according to the detection result, the confirmation information comprising a confirmed diagnosis of the target detection object; and
    a storage module configured to associate the confirmed diagnosis with the detection data of the target detection object and store them if the confirmed diagnosis is inconsistent with the detection result.
  18. The apparatus according to claim 13, further comprising:
    a waveform display module configured to process the detection data of the target detection object and then display a waveform in real time.
  19. The apparatus according to claim 13, further comprising:
    an up-conversion module configured to up-convert the infrasound data and then output it through a preset sound output device.
  20. A computer program product, characterized in that the computer program product contains a computer program executable by a programmable apparatus, the computer program having code portions for performing the method according to any one of claims 1 to 12 when executed by the programmable apparatus.
  21. A non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium includes one or more programs for performing the method according to any one of claims 1 to 12.
  22. An electronic device, characterized by comprising:
    the non-transitory computer-readable storage medium according to claim 21; and
    one or more processors configured to execute the programs in the non-transitory computer-readable storage medium.
  23. A cloud server, characterized by comprising:
    the non-transitory computer-readable storage medium according to claim 21; and
    one or more processors configured to execute the programs in the non-transitory computer-readable storage medium.
PCT/CN2016/108108 2016-11-30 2016-11-30 听诊器数据处理方法、装置、电子设备及云服务器 WO2018098716A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680002855.5A CN107077531B (zh) 2016-11-30 2016-11-30 听诊器数据处理方法、装置、电子设备及云服务器
PCT/CN2016/108108 WO2018098716A1 (zh) 2016-11-30 2016-11-30 听诊器数据处理方法、装置、电子设备及云服务器

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/108108 WO2018098716A1 (zh) 2016-11-30 2016-11-30 听诊器数据处理方法、装置、电子设备及云服务器

Publications (1)

Publication Number Publication Date
WO2018098716A1 true WO2018098716A1 (zh) 2018-06-07

Family

ID=59623763

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108108 WO2018098716A1 (zh) 2016-11-30 2016-11-30 听诊器数据处理方法、装置、电子设备及云服务器

Country Status (2)

Country Link
CN (1) CN107077531B (zh)
WO (1) WO2018098716A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492135A (zh) * 2018-10-27 2019-03-19 平安科技(深圳)有限公司 一种基于数据处理的数据审核方法以及装置
USD889385S1 (en) 2018-10-16 2020-07-07 Bridgestone Corporation Tire tread
US20220354451A1 (en) * 2021-05-06 2022-11-10 Eko Devices, Inc. Systems and methods for electronic stethoscope wireless auscultation

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108962389A (zh) * 2018-06-21 2018-12-07 上海掌门科技有限公司 用于风险提示的方法及系统
CN112515698B (zh) * 2020-11-24 2023-03-28 英华达(上海)科技有限公司 听诊系统及其控制方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1994231A (zh) * 2006-01-06 2007-07-11 财团法人工业技术研究院 消除杂音的听诊装置及方法
WO2015065988A1 (en) * 2013-10-28 2015-05-07 Smith Clive L Stethoscope and electronic device structure
CN105266842A (zh) * 2015-11-03 2016-01-27 江苏物联网研究发展中心 听诊模式切换自动检测数字听诊器及听诊模式切换检测方法
CN105286911A (zh) * 2015-12-04 2016-02-03 上海拓萧智能科技有限公司 一种健康监测系统和健康监测方法
CN105708489A (zh) * 2016-01-26 2016-06-29 卓效医疗有限公司 一种电子听诊器的远程听诊实现方法及系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR200389343Y1 (ko) * 2005-02-25 2005-07-14 이병훈 휴대폰-청진기
CN203776933U (zh) * 2014-01-28 2014-08-20 郑州恒之杰电子科技有限公司 一种医用波形可视听辅助诊断分析仪

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1994231A (zh) * 2006-01-06 2007-07-11 财团法人工业技术研究院 消除杂音的听诊装置及方法
WO2015065988A1 (en) * 2013-10-28 2015-05-07 Smith Clive L Stethoscope and electronic device structure
CN105266842A (zh) * 2015-11-03 2016-01-27 江苏物联网研究发展中心 听诊模式切换自动检测数字听诊器及听诊模式切换检测方法
CN105286911A (zh) * 2015-12-04 2016-02-03 上海拓萧智能科技有限公司 一种健康监测系统和健康监测方法
CN105708489A (zh) * 2016-01-26 2016-06-29 卓效医疗有限公司 一种电子听诊器的远程听诊实现方法及系统

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD889385S1 (en) 2018-10-16 2020-07-07 Bridgestone Corporation Tire tread
CN109492135A (zh) * 2018-10-27 2019-03-19 平安科技(深圳)有限公司 一种基于数据处理的数据审核方法以及装置
CN109492135B (zh) * 2018-10-27 2024-03-19 平安科技(深圳)有限公司 一种基于数据处理的数据审核方法以及装置
US20220354451A1 (en) * 2021-05-06 2022-11-10 Eko Devices, Inc. Systems and methods for electronic stethoscope wireless auscultation

Also Published As

Publication number Publication date
CN107077531A (zh) 2017-08-18
CN107077531B (zh) 2021-04-30

Similar Documents

Publication Publication Date Title
WO2018098716A1 (zh) 听诊器数据处理方法、装置、电子设备及云服务器
US9973847B2 (en) Mobile device-based stethoscope system
US9918171B2 (en) Online hearing aid fitting
KR101268829B1 (ko) 청진 훈련 장치 및 관련 방법
US20150124985A1 (en) Device and method for detecting change in characteristics of hearing aid
US9064428B2 (en) Auscultation training device and related methods
US11889273B2 (en) System for configuring a hearing device
KR101348331B1 (ko) 스마트폰과 기능적으로 연결된 청진기
CN109565629A (zh) 分布式音频捕获和混合控制
US20200121277A1 (en) Systems and methods for detecting physiological information using a smart stethoscope
WO2020245631A1 (en) Sound modification based on frequency composition
CN113746983B (zh) 助听方法及装置、存储介质、智能终端
CN111526467A (zh) 声学收听区域制图和频率校正
US10716479B2 (en) Signal synchronization device, as well as stethoscope, auscultation information output system and symptom diagnosis system capable of signal synchronization
JP2020034542A (ja) 情報処理方法、情報処理装置及びプログラム
CN104274209A (zh) 一种基于移动智能终端的新型胎心仪
US11227423B2 (en) Image and sound pickup device, sound pickup control system, method of controlling image and sound pickup device, and method of controlling sound pickup control system
US20220369053A1 (en) Systems, devices and methods for fitting hearing assistance devices
JP6589042B1 (ja) 音声分析装置、音声分析方法、音声分析プログラム及び音声分析システム
US9355648B2 (en) Voice input/output device, method and programme for preventing howling
AU2014243717A1 (en) Service aware software architecture in wireless device ecosystem
US10505879B2 (en) Communication support device, communication support method, and computer program product
US20240015462A1 (en) Voice processing system, voice processing method, and recording medium having voice processing program recorded thereon
JP2019140503A (ja) 情報処理装置、情報処理方法、及び情報処理プログラム
US20230260505A1 (en) Information processing method, non-transitory recording medium, information processing apparatus, and information processing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16922777

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/10/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16922777

Country of ref document: EP

Kind code of ref document: A1