CN109984770B - Method and system for collecting and processing sound in human body

Method and system for collecting and processing sound in human body

Info

Publication number
CN109984770B
Authority
CN
China
Prior art keywords
sound
abdominal
organs
abdomen
signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910187047.6A
Other languages
Chinese (zh)
Other versions
CN109984770A (en)
Inventor
Liu Bing (刘兵)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Hounify Technology Co ltd
Original Assignee
Chongqing Hounify Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Hounify Technology Co ltd filed Critical Chongqing Hounify Technology Co ltd
Priority to CN201910187047.6A priority Critical patent/CN109984770B/en
Publication of CN109984770A publication Critical patent/CN109984770A/en
Application granted granted Critical
Publication of CN109984770B publication Critical patent/CN109984770B/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 7/00: Instruments for auscultation
    • A61B 7/02: Stethoscopes
    • A61B 7/04: Electric stethoscopes

Landscapes

  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention provides a method and a system for collecting and processing sound in a human body. The method comprises the following steps: collecting sound signal samples of the abdominal organs; extracting features of the abdominal organ sound signals from the collected samples; performing deep learning on these features to establish a standard acoustic model for each abdominal organ; collecting the abdominal organ sound signals of the subject; extracting features of the subject's abdominal organ sound signals; establishing an individual acoustic model for each abdominal organ from those features; and comparing, organ by organ, each individual acoustic model with the corresponding standard acoustic model, then obtaining and outputting the comparison result for each organ. By collecting the sounds emitted by the internal organs, the invention assists in diagnosing and predicting the user's diseases, thereby improving people's quality of life and strengthening the ability to treat and prevent disease.

Description

Method and system for collecting and processing sound in human body
Technical Field
The invention relates to the field of electronics, in particular to a method and a system for collecting and processing sound in a human body.
Background
The human body produces many sounds that carry signals about the body's state, such as the respiratory sounds of the lungs or the fetal heart sounds of a pregnant woman. These sounds can be used to judge whether something is wrong, for example constipation, gastritis, or even visceral canceration. However, some subtle lesions, especially early-stage lesions, attract no attention or cannot be felt, and the sound differences between regions of the internal organs are such that only a doctor with long-term clinical experience can detect them.
At present, there is no device dedicated to collecting the internal sounds of a human body and processing them accordingly. There is therefore a need for a dedicated device and system that, by collecting internal body sounds, monitors organ sounds from the whole population down to the individual and thereby assists in disease diagnosis and prediction, so as to improve people's quality of life and strengthen the ability to treat and prevent disease.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present invention provides a method and a system for collecting and processing sound in a human body, so as to solve the above-mentioned technical problems.
The invention provides a method for collecting and processing sound in a human body, which comprises the following steps:
collecting sound signal samples of the abdominal organs;
extracting features of the abdominal organ sound signals from the collected samples;
performing deep learning on the features of the abdominal organ sound signals to establish a standard acoustic model for each abdominal organ;
collecting the abdominal organ sound signals of the subject;
extracting features of the subject's abdominal organ sound signals;
establishing an individual acoustic model for each abdominal organ from the features of the subject's abdominal organ sound signals;
and comparing, organ by organ, the individual acoustic model of each abdominal organ with the corresponding standard acoustic model, then obtaining and outputting the comparison result for each organ.
Optionally, time-frequency analysis is performed on the collected abdominal organ sound signal samples to obtain the sound spectrum of each organ;
and the features of each abdominal organ's sound signals are extracted from its sound spectrum to form a population organ sound signal training set.
Optionally, the population organ sound signal training set is trained with a convolutional neural network to construct the standard acoustic model of each abdominal organ;
the subject's abdominal organ sound signals are input into the standard acoustic models, the signals are recognized, and the recognition result is output;
and an individual acoustic model for each abdominal organ is established from the recognition result.
Optionally, a wearable device acquires the subject's abdominal organ sound signals in real time, and when an abnormality is detected in those signals the wearable device raises an alarm.
The invention also provides a system for collecting and processing sound in a human body, which comprises: a wearable detection device, a terminal device and a server,
the server includes:
the first acquisition module is used for collecting sound signal samples of the abdominal organs;
the signal processing module is used for extracting features of the abdominal organ sound signals from the collected samples;
the model training module is used for performing deep learning on the features of the abdominal organ sound signals and establishing a standard acoustic model for each abdominal organ;
the wearable detection device includes:
the second acquisition module is used for collecting the abdominal organ sound signals of the subject;
the terminal device includes: a communication module connected with the second acquisition module and used for transmitting the subject's abdominal organ sound signals to the server;
the features of the subject's abdominal organ sound signals are extracted by the signal processing module;
individual acoustic models of the abdominal organs are established from the features of the subject's abdominal organ sound signals;
and the individual acoustic model of each abdominal organ is compared, organ by organ, with the corresponding standard acoustic model; the comparison results for the abdominal organs are obtained and output to the wearable detection device, and the individual acoustic models and the comparison results are displayed by the display module.
Optionally, the model training module is a convolutional neural network, and the population organ sound signal training set is trained with the convolutional neural network to construct the standard acoustic model of each abdominal organ;
the subject's abdominal organ sound signals are input into the standard acoustic models, the signals are recognized, and the recognition result is output;
optionally, the second collecting module includes a plurality of directional micro-electromechanical structures, and the plurality of directional micro-electromechanical structures form a collecting array.
Optionally, the wearable detection device further comprises an alarm module which raises an alarm when an abnormality is detected in the subject's abdominal organ sound signals.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the above.
The present invention also provides an electronic terminal, comprising: a processor and a memory;
the memory is adapted to store a computer program and the processor is adapted to execute the computer program stored by the memory to cause the terminal to perform the method as defined in any one of the above.
The invention has the following beneficial effects: by collecting the sounds emitted by the internal organs, the method and system for collecting and processing sound in a human body can monitor organ sounds from the whole population down to the individual. This unobtrusive collection and monitoring, which requires no change to the user's living or working habits, assists in diagnosing and predicting diseases, thereby improving people's quality of life and strengthening the ability to treat and prevent disease.
Drawings
Fig. 1 is a schematic flow chart of a human body sound collection processing method in an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a human body internal sound collection and processing system according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a wearable device in a human body internal sound collection and processing system in an embodiment of the invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention, however, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details, and in other embodiments, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present invention.
As shown in fig. 1, the method for collecting and processing sound in a human body according to the embodiment of the present invention includes:
collecting sound signal samples of the abdominal organs;
extracting features of the abdominal organ sound signals from the collected samples;
performing deep learning on the features of the abdominal organ sound signals to establish a standard acoustic model for each abdominal organ;
collecting the abdominal organ sound signals of the subject;
extracting features of the subject's abdominal organ sound signals;
establishing an individual acoustic model for each abdominal organ from the features of the subject's abdominal organ sound signals;
and comparing, organ by organ, the individual acoustic model of each abdominal organ with the corresponding standard acoustic model, then obtaining and outputting the comparison result for each organ.
In this embodiment, the sound signals of the abdominal organs can be collected by sensors or other devices, so that the user can obtain a digitized view of the internal organs (such as the heart, the lungs, the abdomen and the blood flow), including heart-beat and breathing counts and the acoustic activity of the digestive system after meals; the results can be presented graphically and numerically, much like the step-count information shown by a mobile phone.
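For illustration only, the following minimal sketch (in Python) shows how a beat count and an approximate beats-per-minute figure could be derived from a heart-sound recording for the graphical and numerical display mentioned above. The sampling rate, the synthetic test signal, the envelope smoothing and the peak-spacing threshold are assumptions, not values fixed by this embodiment.

```python
# Minimal sketch: count heart beats from a heart-sound recording (assumed parameters).
import numpy as np
from scipy.signal import find_peaks

fs = 1000                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 s of signal
heart = np.abs(np.sin(2 * np.pi * 0.6 * t)) ** 20   # synthetic pulses at ~72 bpm (placeholder)

# Smooth the rectified signal into an envelope, then count well-separated peaks
envelope = np.convolve(heart, np.ones(50) / 50, mode="same")
peaks, _ = find_peaks(envelope, height=0.2, distance=int(0.4 * fs))  # beats >= 0.4 s apart

bpm = len(peaks) * 60 / (len(t) / fs)
print(f"detected {len(peaks)} beats, approx. {bpm:.0f} bpm")
```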
In this embodiment, time-frequency analysis is performed on the collected abdominal organ sound signal samples to obtain the sound spectrum of each organ, and the features of each abdominal organ's sound signals are extracted from its spectrum to form a population organ sound signal training set. Sound recognition in this embodiment is based on the time-frequency spectrum obtained from this analysis, which has structural characteristics. To improve the recognition rate, the various forms of diversity in the audio signal, including the diversity of organs and of environments, must be overcome.
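As a hedged illustration of the time-frequency analysis step, the sketch below computes a log-power spectrogram with a short-time Fourier transform. The sampling rate, window length and the placeholder waveform are assumptions, since the embodiment does not prescribe specific analysis parameters.

```python
# Minimal sketch: time-frequency analysis of an organ-sound sample (assumed parameters).
import numpy as np
from scipy.signal import spectrogram

fs = 4000                         # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)      # 10 s of signal; replace with the recorded samples
x = np.sin(2 * np.pi * 150 * t)   # placeholder waveform standing in for a bowel-sound sample

# Short-time Fourier transform: 256-sample Hann windows with 50 % overlap
f, frames, Sxx = spectrogram(x, fs=fs, window="hann", nperseg=256, noverlap=128)

# Log-power time-frequency representation used as the input "image" for later training
log_spec = 10 * np.log10(Sxx + 1e-10)
print(log_spec.shape)             # (frequency bins, time frames)
```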
In this embodiment, the population organ sound signal training set is trained with a convolutional neural network to construct the standard acoustic model of each abdominal organ; the subject's abdominal organ sound signals are input into the standard acoustic models, the signals are recognized, and the recognition result is output; an individual acoustic model for each abdominal organ is then established from the recognition result. The convolutional neural network provides translation-invariant convolution in time and space, and in acoustic modelling for organ sound recognition this invariance can be exploited to overcome the diversity of the sound signals. The time-frequency spectrum obtained from the whole sound signal can be treated as an image and recognized with a deep convolutional network of the kind widely used for images.
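The following sketch illustrates the spectrogram-as-image idea with a small two-dimensional convolutional network. The architecture, input size and number of organ classes are illustrative assumptions; the embodiment does not specify a particular network.

```python
# Minimal sketch: a small 2-D CNN over log-spectrogram "images" (assumed architecture).
import torch
import torch.nn as nn

NUM_ORGANS = 4  # e.g. stomach, small intestine, large intestine, other (assumed)

class OrganSoundCNN(nn.Module):
    def __init__(self, num_classes=NUM_ORGANS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 25, num_classes)  # for 128 x 100 input spectrograms

    def forward(self, x):           # x: (batch, 1, freq_bins, time_frames)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = OrganSoundCNN()
dummy = torch.randn(8, 1, 128, 100)                 # a batch of log-spectrogram "images"
logits = model(dummy)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, NUM_ORGANS, (8,)))
loss.backward()                                     # one illustrative training step (optimizer omitted)
```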
In this embodiment, the wearable device acquires the subject's abdominal organ sound signals in real time, and when an abnormality is detected in those signals the wearable device raises an alarm. The wearable device can take a variety of specific sensing forms. For example, the microphone may be analog or digital. A digital microphone can use a micro-electro-mechanical system (MEMS) structure, also called a silicon microphone, and an array of such silicon microphones can effectively localize a sound source so as to pick up the sounds emitted by different organs. Alternatively, skin-patch acoustic sensors, PVDF micro-pressure sensors, bone conduction and the like can be used to pick up the sounds emitted by the body.
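A minimal sketch of the real-time monitoring and alarm loop is given below. The frame acquisition function, the deviation score and the alarm threshold are placeholders; on an actual wearable device the frames would come from the MEMS microphone array and the score from the trained acoustic models.

```python
# Minimal sketch: real-time abnormality alarm on the wearable (placeholder functions).
import numpy as np

THRESHOLD = 0.35   # assumed maximum allowed deviation from the standard model

def read_frame():
    """Placeholder for one second of audio from the wearable's acquisition module."""
    return np.random.randn(4000)

def deviation_from_standard(frame):
    """Placeholder score in [0, 1]: distance between the frame's features and the
    organ's standard acoustic model (e.g. 1 minus classifier confidence)."""
    return float(np.clip(np.abs(frame).mean() / 3.0, 0.0, 1.0))

def trigger_alarm(score):
    print(f"abnormal abdominal sound detected (deviation={score:.2f})")

for _ in range(10):                 # a real device would loop continuously
    frame = read_frame()
    score = deviation_from_standard(frame)
    if score > THRESHOLD:
        trigger_alarm(score)
```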
As shown in fig. 2, another embodiment correspondingly provides a system for collecting and processing sound in a human body, which includes: a wearable detection device, a terminal device and a server,
the server includes:
the first acquisition module is used for collecting sound signal samples of the abdominal organs;
the signal processing module is used for extracting features of the abdominal organ sound signals from the collected samples;
the model training module is used for performing deep learning on the features of the abdominal organ sound signals and establishing a standard acoustic model for each abdominal organ;
the wearable detection device includes:
the second acquisition module is used for collecting the abdominal organ sound signals of the subject;
the terminal device includes: a communication module connected with the second acquisition module and used for transmitting the subject's abdominal organ sound signals to the server;
the features of the subject's abdominal organ sound signals are extracted by the signal processing module;
individual acoustic models of the abdominal organs are established from the features of the subject's abdominal organ sound signals;
and the individual acoustic model of each abdominal organ is compared, organ by organ, with the corresponding standard acoustic model; the comparison results for the abdominal organs are obtained and output to the wearable detection device, and the individual acoustic models and the comparison results are displayed by the display module.
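For illustration, the per-organ comparison between individual and standard acoustic models could be realised as a similarity measure over feature vectors, as in the following sketch. The organ names, feature vectors and similarity threshold are assumed values, since the embodiment does not fix a comparison metric.

```python
# Minimal sketch: organ-by-organ comparison of individual vs. standard models (assumed data).
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

standard_models = {            # averaged feature vectors of the population models (assumed)
    "stomach":   np.array([0.9, 0.2, 0.1]),
    "intestine": np.array([0.3, 0.8, 0.4]),
}
individual_models = {          # feature vectors extracted from the wearer's recordings
    "stomach":   np.array([0.85, 0.25, 0.12]),
    "intestine": np.array([0.10, 0.30, 0.95]),
}

for organ, standard in standard_models.items():
    sim = cosine_similarity(individual_models[organ], standard)
    result = "normal" if sim > 0.9 else "deviates from standard model"
    print(f"{organ}: similarity={sim:.2f} -> {result}")
```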
In this embodiment, the model training module is a convolutional neural network, which trains the population organ sound signal training set to construct the standard acoustic model of each abdominal organ; the subject's abdominal organ sound signals are input into the standard acoustic models, the signals are recognized, and the recognition result is output; an individual acoustic model for each abdominal organ is established from the recognition result.
Preferably, as shown in fig. 3, the wearable detection device in this embodiment mainly comprises a belt 1; the second acquisition module comprises a plurality of directional micro-electromechanical structures (MEMS) 3, which form an acquisition array. A micro-electromechanical structure here refers to a device a few millimetres in size or smaller, whose internal features are typically at the micrometre or even nanometre scale; it is an independent intelligent system mainly comprising a sensor, an actuator and a micro power source. An array of such fixed digital microphones, also called silicon microphones, can effectively localize a sound source so as to pick up the sounds emitted by different organs. The directional micro-electromechanical structures 3 are evenly arranged on the inner side of the belt 1 to form the acquisition array. More preferably, recesses matching the micro-electromechanical structures 3 are provided on the inner side of the belt 1 to hold them; the recesses fix the micro-electromechanical structures 3 on the one hand and prevent them from being scraped off during use on the other. The belt 1 is further provided with a power supply unit (not shown) for powering the communication unit and the acquisition unit; preferably, the power supply unit may comprise a solar panel and a storage battery, with solar energy converted into electric energy by the solar panel, stored in the battery, and supplied to the electrical modules arranged on the belt 1.
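The sound-source localization capability of the acquisition array can be illustrated with a simple delay-and-sum beamformer, as sketched below; the microphone spacing, sampling rate, propagation speed and steering angle are assumptions rather than values specified by this embodiment.

```python
# Minimal sketch: delay-and-sum beamforming over the belt's microphone array (assumed geometry).
import numpy as np

fs = 8000            # assumed sampling rate (Hz)
c = 1500.0           # assumed sound speed in soft tissue (m/s); ~343 m/s in air
spacing = 0.02       # assumed 2 cm spacing between adjacent microphones on the belt
n_mics = 8

def delay_and_sum(signals, angle_deg):
    """signals: (n_mics, n_samples) array; steer the array toward angle_deg from broadside."""
    angle = np.deg2rad(angle_deg)
    out = np.zeros(signals.shape[1])
    for m in range(signals.shape[0]):
        delay = m * spacing * np.sin(angle) / c   # arrival delay at microphone m (seconds)
        shift = int(round(delay * fs))            # delay expressed in samples
        out += np.roll(signals[m], -shift)        # advance to align, then sum
    return out / signals.shape[0]

mics = np.random.randn(n_mics, fs)                # one second of placeholder array data
focused = delay_and_sum(mics, angle_deg=20.0)     # output emphasising sound from ~20 degrees
```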
In this embodiment, the wearable detection device further comprises an alarm module which raises an alarm when an abnormality is detected in the subject's abdominal organ sound signals. After the wearable device has collected the sounds emitted by the body, it can transmit them wirelessly to the terminal device using short-range technologies such as Bluetooth, Wi-Fi or ZigBee, or long-range technologies such as GPRS, CDMA or LTE. The terminal device may be a mobile phone or any of various dedicated portable devices; taking a mobile phone as an example, the data can then reach the server through a mobile phone app, where background operations such as analysis and the construction of dedicated models are carried out.
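As a hedged example of the terminal-to-server path, the sketch below uploads a recorded clip over HTTP from the mobile phone app to the server. The endpoint URL, field names and the absence of authentication are assumptions; the embodiment only states that the data are relayed wirelessly to a server for analysis.

```python
# Minimal sketch: uploading a recorded abdominal-sound clip to the server (hypothetical endpoint).
import requests

SERVER_URL = "https://example.com/api/organ-sound"  # hypothetical endpoint, not from the patent

def upload_clip(wav_path, user_id):
    with open(wav_path, "rb") as f:
        response = requests.post(
            SERVER_URL,
            files={"audio": f},            # the recorded clip
            data={"user_id": user_id},     # identifies the monitored subject
            timeout=30,
        )
    response.raise_for_status()
    return response.json()                 # e.g. per-organ comparison results from the server
```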
Note that in the figures of the embodiments, signals are represented by lines. Some lines are thicker, to indicate paths carrying more constituent signals, and some lines have arrows at one or both ends, to indicate the primary direction of information flow. These designations are not intended to be limiting; rather, such lines are used in connection with one or more example embodiments to make circuits or logic units easier to follow, and any represented signal, as determined by design requirements or preferences, may actually comprise one or more signals that may travel in either direction and may be implemented with any suitable type of signal scheme.
Unless otherwise specified the use of the ordinal adjectives "first", "second", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. If the specification states a component, feature, structure, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, or characteristic is not necessarily included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the element. If the specification or claim refers to "a further" element, that does not preclude there being more than one of the further element.
While the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. The embodiments of the invention are intended to embrace all such alternatives, modifications and variations as fall within the broad scope of the appended claims.
The present embodiment also provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements any of the methods in the present embodiments.
The present embodiment further provides an electronic terminal, including: a processor and a memory;
the memory is used for storing computer programs, and the processor is used for executing the computer programs stored by the memory so as to enable the terminal to execute the method in the embodiment.
The computer-readable storage medium in the present embodiment can be understood by those skilled in the art as follows: all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The electronic terminal provided by this embodiment comprises a processor, a memory, a transceiver and a communication interface. The memory and the communication interface are connected with the processor and the transceiver for mutual communication; the memory stores a computer program, the communication interface carries out communication, and the processor and the transceiver run the computer program so that the electronic terminal can execute the steps of the above method.
In this embodiment, the memory may include random-access memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical idea of the present invention shall be covered by the claims of the present invention.

Claims (9)

1. A method for collecting and processing sound in a human body is characterized by comprising the following steps:
collecting sound signal samples of the abdominal organs;
extracting features of the abdominal organ sound signals from the collected samples;
performing deep learning on the features of the abdominal organ sound signals to establish a standard acoustic model for each abdominal organ;
collecting the abdominal organ sound signals of the subject;
extracting features of the subject's abdominal organ sound signals;
establishing an individual acoustic model for each abdominal organ from the features of the subject's abdominal organ sound signals; performing time-frequency analysis on the collected abdominal organ sound signal samples to obtain the sound spectrum of each organ; extracting the features of each abdominal organ's sound signals from its sound spectrum to form a population organ sound signal training set; training the population organ sound signal training set with a convolutional neural network to construct the standard acoustic model of each abdominal organ; the convolutional neural network provides translation-invariant convolution in time and space, and this invariance is used to overcome the diversity of the sound signals;
and comparing, organ by organ, the individual acoustic model of each abdominal organ with the corresponding standard acoustic model, then obtaining and outputting the comparison result for each organ.
2. The method for collecting and processing sound in a human body according to claim 1, wherein
the subject's abdominal organ sound signals are input into the standard acoustic models, the subject's abdominal organ sound signals are recognized, and the recognition result is output;
and an individual acoustic model for each abdominal organ is established from the recognition result.
3. The method for collecting and processing sound in a human body according to claim 1, wherein a wearable device acquires the subject's abdominal organ sound signals in real time, and when the wearable device detects an abnormality in those signals it raises an alarm.
4. A system for collecting and processing sounds in a human body, comprising: a wearable detection device, a terminal device and a server,
the server includes:
the first acquisition module is used for collecting sound signal samples of the abdominal organs;
the signal processing module is used for extracting features of the abdominal organ sound signals from the collected samples;
the model training module is used for performing deep learning on the features of the abdominal organ sound signals and establishing a standard acoustic model for each abdominal organ; performing time-frequency analysis on the collected abdominal organ sound signal samples to obtain the sound spectrum of each organ; extracting the features of each abdominal organ's sound signals from its sound spectrum to form a population organ sound signal training set; and training the population organ sound signal training set with a convolutional neural network to construct the standard acoustic model of each abdominal organ; the convolutional neural network provides translation-invariant convolution in time and space, and this invariance is used to overcome the diversity of the sound signals;
the wearable detection device includes:
the second acquisition module is used for collecting the abdominal organ sound signals of the subject;
the terminal device includes: a communication module connected with the second acquisition module and used for transmitting the subject's abdominal organ sound signals to the server;
the features of the subject's abdominal organ sound signals are extracted by the signal processing module;
individual acoustic models of the abdominal organs are established from the features of the subject's abdominal organ sound signals;
and the individual acoustic model of each abdominal organ is compared, organ by organ, with the corresponding standard acoustic model; the comparison results for the abdominal organs are obtained and output to the wearable detection device, and the individual acoustic models and the comparison results are displayed by the display module.
5. The system according to claim 4, wherein the model training module is a convolutional neural network, and the population organ sound signal training set is trained with the convolutional neural network to construct the standard acoustic model of each abdominal organ;
the subject's abdominal organ sound signals are input into the standard acoustic models, the signals are recognized, and the recognition result is output;
and an individual acoustic model for each abdominal organ is established from the recognition result.
6. The system of claim 4, wherein the second collection module comprises a plurality of directional micro-electromechanical structures, the plurality of directional micro-electromechanical structures forming a collection array.
7. The system for collecting and processing sound in a human body according to claim 4, wherein the wearable detection device further comprises an alarm module which raises an alarm when an abnormality is detected in the subject's abdominal organ sound signals.
8. A computer-readable storage medium having stored thereon a computer program, characterized in that: the program when executed by a processor implements the method of any one of claims 1 to 3.
9. An electronic terminal, comprising: a processor and a memory;
the memory is for storing a computer program and the processor is for executing the computer program stored by the memory to cause the terminal to perform the method of any of claims 1 to 3.
CN201910187047.6A 2019-03-13 2019-03-13 Method and system for collecting and processing sound in human body Active CN109984770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910187047.6A CN109984770B (en) 2019-03-13 2019-03-13 Method and system for collecting and processing sound in human body

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910187047.6A CN109984770B (en) 2019-03-13 2019-03-13 Method and system for collecting and processing sound in human body

Publications (2)

Publication Number Publication Date
CN109984770A (en) 2019-07-09
CN109984770B (en) 2022-05-17

Family

ID=67130545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910187047.6A Active CN109984770B (en) 2019-03-13 2019-03-13 Method and system for collecting and processing sound in human body

Country Status (1)

Country Link
CN (1) CN109984770B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103479385A (en) * 2013-08-29 2014-01-01 无锡慧思顿科技有限公司 Wearable heart, lung and intestine comprehensive detection equipment and method
CN107292286A (en) * 2017-07-14 2017-10-24 中国科学院苏州生物医学工程技术研究所 Breath sound discrimination method and system based on machine learning


Also Published As

Publication number Publication date
CN109984770A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
US20180108440A1 (en) Systems and methods for medical diagnosis and biomarker identification using physiological sensors and machine learning
CN105393252B (en) Physiological data collection and analysis
US8475396B2 (en) Method and system of an acoustic scene analyzer for body sounds
US9504420B2 (en) Methods and arrangements for identifying dermatological diagnoses with clinically negligible probabilities
EP4146071A1 (en) Sensor systems and methods for characterizing health conditions
CN107693044A (en) Surveillance of Coronary Heart diagnostic device
US20210030390A1 (en) Electronic stethoscope
JP2021502222A (en) Multi-microphone sound collector, system and method for sound positioning
US20120209131A1 (en) Method and System of a Cardio-acoustic Classification system for Screening, Diagnosis and Monitoring of Cardiovascular Conditions
US20210202094A1 (en) User interface for navigating through physiological data
US20200383582A1 (en) Remote medical examination system and method
WO2022040353A2 (en) Sensor systems and methods for characterizing health conditions
WO2020051582A1 (en) Screening device, method, and system for structural heart disease
Zhao et al. An IoT-based wearable system using accelerometers and machine learning for fetal movement monitoring
CN109394183A (en) A kind of medical condition method for early warning, system and storage medium
CN105943080A (en) Intelligent stethophone
KR20150001009A (en) Mobile terminal diagnosis system using portable wireless digital electronic stethoscope
CN110610754A (en) Immersive wearable diagnosis and treatment device
CN108334200B (en) Electronic equipment control method and related product
KR20140146782A (en) Animal wiress stethoscope diagnosis system
CN109984770B (en) Method and system for collecting and processing sound in human body
CN109036552A (en) Tcm diagnosis terminal and its storage medium
CN112489796A (en) Intelligent auscultation auxiliary diagnosis system and diagnosis method
Massey et al. Experimental analysis of a mobile health system for mood disorders
CN105631224B (en) Health monitoring method, mobile terminal and health monitoring system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant