CN117038051A - Remote auscultation method, device and remote auscultation system based on intelligent stethoscope

Info

Publication number: CN117038051A
Application number: CN202310927542.2A
Authority: CN (China)
Prior art keywords: auscultation, target, model, body sound, sound information
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 张帅军 (Zhang Shuaijun)
Original and current assignee: Zhuhai Haorui Technology Co., Ltd.

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 7/00 Instruments for auscultation
    • A61B 7/02 Stethoscopes
    • A61B 7/04 Electric stethoscopes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H 40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 80/00 ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Abstract

The invention provides a remote auscultation method and device based on an intelligent stethoscope, and a remote auscultation system. The method comprises the following steps: the auscultation terminal generates an auscultation reference model from a captured patient image and displays the model on the display, each to-be-auscultated region of the model containing at least one to-be-auscultated point; the auscultation terminal tracks the position of the stethoscope head through the camera and displays the real-time projection position of the head on the auscultation reference model; when the real-time projection position overlaps a target auscultation point, the auscultation terminal generates target body sound information from the real-time audio collected by the pickup device and sends it to the doctor terminal for playback. With the technical solution provided by the embodiments of the invention, the auscultation reference model can be displayed to guide the target patient in operating the stethoscope head, and the position of the head is determined through image recognition and image tracking, so no spatial sensor needs to be arranged inside the stethoscope head. This reduces hardware cost, reduces vibration interference with the aluminum diaphragm inside the head, and improves the accuracy of the target body sound information.

Description

Remote auscultation method, device and remote auscultation system based on intelligent stethoscope
Technical Field
The invention relates to the technical field of telemedicine, and in particular to a remote auscultation method and device based on an intelligent stethoscope, and a remote auscultation system.
Background
Several remote auscultation systems are currently on the market. Such a system is usually equipped with an intelligent stethoscope: the stethoscope returns the collected body sound information to the remote auscultation system, the system sends it over a network to a doctor terminal, and the doctor completes the remote auscultation from the body sound played at the terminal, which alleviates to some extent the uneven regional distribution of medical resources.
Remote auscultation usually requires the patient to operate the intelligent stethoscope, and because patients rarely have extensive medical knowledge, auscultation sites are difficult to locate accurately, which degrades the auscultation result. To locate the auscultation site, the related art arranges a spatial sensor in the intelligent stethoscope: while the patient operates the device, the sensor returns the collected spatial position information to the auscultation terminal, which determines the body part the stethoscope currently faces and thereby guides the patient to aim the stethoscope at the correct auscultation site.
If the spatial sensor were arranged outside the stethoscope head, it would easily be damaged by collisions during use, so the related art places it in the cavity inside the stethoscope head. The body sound information collected by the stethoscope, however, depends on vibration transmitted by the aluminum diaphragm inside the head; the spatial sensor occupies part of the cavity and interferes to some extent with the diaphragm's reception of vibration, making the body sound information inaccurate and affecting the doctor's diagnosis.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems in the prior art. To this end, the invention provides a remote auscultation method and device based on an intelligent stethoscope, and a remote auscultation system, which can locate the stethoscope head without a built-in spatial sensor, reducing hardware cost and improving the accuracy of the collected body sound information.
In a first aspect, an embodiment of the present invention provides a remote auscultation method based on an intelligent stethoscope, applied to a remote auscultation system comprising an intelligent stethoscope, an auscultation terminal and a doctor terminal, wherein the auscultation terminal is provided with a camera and a display, the intelligent stethoscope is provided with a stethoscope head and a pickup device, and the pickup device is communicatively connected with the auscultation terminal. The remote auscultation method based on the intelligent stethoscope comprises:
the auscultation terminal acquires a patient image of a target patient through the camera, generates an auscultation reference model from the patient image, and displays the model on the display, wherein the target patient is the current user of the intelligent stethoscope, the auscultation reference model indicates the human body contour in the patient image and at least one to-be-auscultated region, and each to-be-auscultated region contains at least one to-be-auscultated point;
the auscultation terminal tracks the position of the stethoscope head and displays the real-time projection position of the head on the auscultation reference model;
the display highlights a target auscultation region and a target auscultation point, wherein the target auscultation region is the to-be-auscultated region in which the real-time projection position lies, and the target auscultation point is a to-be-auscultated point of the target auscultation region;
when the real-time projection position overlaps the target auscultation point, the pickup device is started, and the auscultation terminal generates target body sound information from the real-time audio collected by the pickup device and sends it to the doctor terminal for playback.
According to some embodiments of the invention, the generating an auscultation reference model from the patient image comprises:
identifying a torso contour from the patient image;
obtaining a preset torso model, and stretching or scaling the torso model so that it matches the torso contour;
the torso model that is stretched or scaled is determined as the auscultation reference model.
According to some embodiments of the invention, the torso model is pre-divided into a plurality of auscultatable regions that are stretched or scaled synchronously with the torso model, and after the stretched or scaled torso model is determined as the auscultation reference model, the method further comprises:
acquiring auscultation demand information, wherein the auscultation demand information indicates which body parts of the target patient require auscultation and is either pre-configured on the auscultation terminal or configured by the doctor terminal and sent to the auscultation terminal;
determining at least one to-be-auscultated region from the plurality of auscultatable regions according to the auscultation demand information;
displaying the auscultation reference model on the display, with the to-be-auscultated regions and the auscultatable regions shown in different region display styles.
According to some embodiments of the present invention, each auscultatable region is preset with a corresponding part identifier indicating a body part, and the step in which the auscultation terminal generates target body sound information from the real-time audio collected by the pickup device comprises:
the auscultation terminal determines a target identifier, which is the part identifier corresponding to the target auscultation region;
the auscultation terminal determines the real-time audio collected by the pickup device as original body sound information;
the auscultation terminal generates auscultation description information from the target identifier and the target auscultation point, and combines the auscultation description information and the original body sound information into the target body sound information.
According to some embodiments of the present invention, the auscultation terminal is further provided with a plurality of selectable audio models and a plurality of selectable model parameters, each selectable audio model is pre-associated with at least one part identifier, the part identifiers associated with the selectable model parameters differ from one another, and the combining of the auscultation description information and the original body sound information into the target body sound information comprises:
when no selectable audio model matches the target identifier, combining the auscultation description information and the original body sound information into the target body sound information;
or, when a target audio model is matched from the selectable audio models according to the target identifier, obtaining target model parameters from the selectable model parameters according to the target identifier, configuring the target model parameters into the target audio model, inputting the original body sound information into the target audio model for audio processing to obtain a model output result, and combining the auscultation description information, the original body sound information and the model output result into the target body sound information.
According to some embodiments of the invention, the target audio model comprises a target filter and a body sound diagnostic model, the target model parameters comprise a target filtering parameter, a target training set and a target test set, and inputting the original body sound information into the target audio model for audio processing to obtain a model output result comprises:
configuring the target filter according to the target filtering parameter, and inputting the original body sound information into the target filter for filtering to obtain filtered body sound information;
inputting the filtered body sound information, the target training set and the target test set into the body sound diagnostic model;
extracting audio features from the filtered body sound information through the body sound diagnostic model, and generating a model output result from the audio features after training with the target training set and the target test set, the model output result indicating the body sound diagnostic information corresponding to the filtered body sound information.
According to some embodiments of the invention, after the display highlights the target auscultation region and the target auscultation point, the method further comprises:
establishing a plane coordinate system from the auscultation reference model, and determining the coordinates of each target auscultation point;
tracking the position of the stethoscope head, and determining the projected coordinates of the image center point of the head in the plane coordinate system as the stethoscope head coordinates;
when the stethoscope head coordinates overlap the coordinates of a target auscultation point, the dwell time of the head exceeds a preset auscultation duration, and the head coordinates then change and leave the coordinates of that point, determining the target auscultation point at which the original body sound information was collected as an auscultated point;
displaying the target auscultation region on the display, with the auscultated points and the remaining target auscultation points shown in different auscultation point display styles within the target auscultation region.
In a second aspect, an embodiment of the present invention provides a remote auscultation device based on an intelligent stethoscope, comprising at least one control processor and a memory communicatively connected with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the remote auscultation method based on an intelligent stethoscope of the first aspect.
In a third aspect, an embodiment of the present invention provides a remote auscultation system, including a remote auscultation device based on an intelligent stethoscope according to the second aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing computer executable instructions for performing the intelligent stethoscope-based remote auscultation method according to the first aspect.
The remote auscultation method based on the intelligent stethoscope provided by the embodiments of the invention has at least the following beneficial effects: the auscultation terminal acquires a patient image of a target patient through the camera, generates an auscultation reference model from the patient image, and displays the model on the display, wherein the target patient is the current user of the intelligent stethoscope, the auscultation reference model indicates the human body contour in the patient image and at least one to-be-auscultated region, and each region contains at least one to-be-auscultated point; the auscultation terminal tracks the position of the stethoscope head and displays its real-time projection position on the auscultation reference model; the display highlights a target auscultation region, i.e. the to-be-auscultated region in which the real-time projection position lies, and a target auscultation point of that region; when the real-time projection position overlaps the target auscultation point, the pickup device is started, and the auscultation terminal generates target body sound information from the real-time audio collected by the pickup device and sends it to the doctor terminal for playback. With this technical solution, the auscultation reference model and the to-be-auscultated points can be displayed to guide the target patient in aiming the stethoscope head at the target auscultation point, and the position of the head is determined through image recognition and image tracking, so no spatial sensor needs to be arranged inside the stethoscope head; this reduces hardware cost, reduces vibration interference with the aluminum diaphragm inside the head, and improves the accuracy of the target body sound information collected by the intelligent stethoscope.
Drawings
FIG. 1 is a schematic diagram of a remote auscultation system provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a remote auscultation method based on a smart stethoscope according to one embodiment of the present invention;
FIG. 3 is a flow chart for generating an auscultation reference model provided by another embodiment of the invention;
FIG. 4 is a flow chart showing an auscultation reference model provided by another embodiment of the present invention;
FIG. 5 is a flow chart of generating target body sound information according to another embodiment of the present invention;
FIG. 6 is a flowchart of generating target body sound information based on location identification according to another embodiment of the present invention;
FIG. 7 is a flow chart of identification by a body sound diagnostic model provided in another embodiment of the present invention;
FIG. 8 is a flow chart showing auscultation points provided by another embodiment of the present invention;
fig. 9 is a block diagram of a remote auscultation device based on an intelligent stethoscope according to another embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein identical or similar reference numerals denote identical or similar elements or elements with identical or similar functions throughout. The embodiments described below with reference to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that orientation descriptions such as upper, lower, front, rear, left and right are based on the orientation or positional relationship shown in the drawings, serve only to describe the present invention conveniently and simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; greater than, less than, exceeding, etc. are understood to exclude the stated number, while above, below, within, etc. are understood to include it. Descriptions using "first" and "second" serve only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
In the description of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation and connection should be construed broadly, and their specific meaning in the present invention can reasonably be determined by a person skilled in the art in combination with the specific content of the technical solution.
The embodiments of the invention provide a remote auscultation method and device based on an intelligent stethoscope, and a remote auscultation system. In the remote auscultation method, the auscultation terminal acquires a patient image of a target patient through the camera, generates an auscultation reference model from the patient image, and displays the model on the display, wherein the target patient is the current user of the intelligent stethoscope, the auscultation reference model indicates the human body contour in the patient image and at least one to-be-auscultated region, and each region contains at least one to-be-auscultated point; the auscultation terminal tracks the position of the stethoscope head and displays its real-time projection position on the auscultation reference model; the display highlights the target auscultation region, i.e. the to-be-auscultated region in which the real-time projection position lies, and a target auscultation point of that region; when the real-time projection position overlaps the target auscultation point, the pickup device is started, and the auscultation terminal generates target body sound information from the real-time audio collected by the pickup device and sends it to the doctor terminal for playback. With this technical solution, the auscultation reference model and the to-be-auscultated points are displayed to guide the target patient in aiming the stethoscope head at the target auscultation point, the position of the head is determined through image recognition and image tracking, and no spatial sensor needs to be arranged inside the stethoscope head, which reduces hardware cost, reduces vibration interference with the aluminum diaphragm inside the head, and improves the accuracy of the target body sound information collected by the intelligent stethoscope.
First, the structure of the remote auscultation system of the present invention is described by way of example; the example does not limit the structure of the remote auscultation system but is one embodiment in which the technical solution of the invention can be implemented. Referring to fig. 1, fig. 1 is a schematic diagram of the remote auscultation system provided by the present invention. The remote auscultation system includes an intelligent stethoscope 12, an auscultation terminal 11 and a doctor terminal 13; the auscultation terminal 11 is provided with a camera 111 and a display 112, the intelligent stethoscope 12 is provided with a stethoscope head 121 and a pickup device 122, and the pickup device 122 is communicatively connected with the auscultation terminal 11.
It should be noted that the auscultation terminal 11 may be an integrated remote auscultation terminal provided with a display 112, and, to facilitate remote auscultation, it may further be provided with input devices such as a mouse and keyboard so that the user can conveniently enter personal information and perform other operations.
It should be noted that the intelligent stethoscope 12 and the auscultation terminal 11 may be connected through a conduit 123, and the pickup device 122 may be a microphone embedded in the conduit 123, one end of which connects to the auscultation terminal and the other end to the stethoscope head 121; the pickup device 122 converts the vibration sound collected by the stethoscope head 121 into audio information and transmits it to the auscultation terminal 11 through the conduit 123. Of course, the intelligent stethoscope 12 may also be a wireless device detachably connected with the auscultation terminal 11: the patient takes the intelligent stethoscope 12 out of the auscultation terminal 11 to operate it, and since auscultation usually takes place close to the terminal, the pickup device 122 can be communicatively connected with the auscultation terminal 11 via Bluetooth. The stethoscope head 121 can be an ordinary stethoscope head with an aluminum diaphragm inside; because no spatial sensor is installed inside it, the internal cavity structure is unaffected, which ensures the accuracy of the body sound information collected by the stethoscope head 121.
It should be noted that the doctor terminal 13 may be an ordinary computer or intelligent terminal communicatively connected with the auscultation terminal 11 through a network to meet the requirements of remote auscultation. The doctor terminal 13 may also be provided with a sound playing device to play the body sounds. The doctor terminal 13 and the auscultation terminal 11 can run the same software platform, through which the doctor terminal 13 interacts with the auscultation terminal 11 during auscultation, for example adjusting the auscultation position according to the patient's actual condition, or sending voice messages to the auscultation terminal 11 to guide the patient to auscultate several auscultation points in a specific order.
The control method according to the embodiment of the present invention will be further described based on the remote auscultation system shown in fig. 1.
Referring to fig. 2, fig. 2 is a flowchart of a remote auscultation method based on an intelligent stethoscope according to an embodiment of the present invention, which includes, but is not limited to, the following steps:
S21, the auscultation terminal acquires a patient image of a target patient through the camera, generates an auscultation reference model from the patient image, and displays the model on the display, wherein the target patient is the current user of the intelligent stethoscope, the auscultation reference model indicates the human body contour in the patient image and at least one to-be-auscultated region, and each to-be-auscultated region contains at least one to-be-auscultated point.
It should be noted that the camera 111 of the auscultation terminal 11 faces the user side, and the auscultation terminal can start auscultation by detecting whether the intelligent stethoscope has been taken out. For example, as shown in fig. 1, the intelligent stethoscope 12 is connected with the auscultation terminal 11 through the conduit 123, and the stethoscope head 121 can hang on a hook fitted with a weight sensor; when the head is taken off, the data detected by the weight sensor changes noticeably, so the auscultation terminal 11 judges that the stethoscope head 121 has been taken out and starts the camera 111 to capture pictures. Because the operator is usually the patient, the person captured in the current picture can be determined as the target patient; of course, a seat can also be placed in front of the auscultation terminal 11 and the person on the seat determined as the target patient. The auscultation terminal 11 can decide according to the actual situation.
It should be noted that the auscultation reference model may be a human body model generated from the current patient image of the target patient. As shown in fig. 1, a human body contour is displayed on the display 112 as the auscultation reference model, several to-be-auscultated regions (the square, circular or oval areas in the figure) are displayed in the model, and at least one to-be-auscultated point (the solid dots in the figure) is displayed in each region. This guides the auscultation operation of the target patient, who only needs to aim the stethoscope head at a to-be-auscultated point; the doctor therefore does not need to guide the auscultation position online, the target patient needs no professional medical knowledge, and the convenience of the remote auscultation system is improved.
It should be noted that auscultation is mainly aimed at internal organs, and the object of a single auscultation is a particular position of a single organ. Because the positions of human organs are fixed, with only their displayed plane positions differing with body type, a to-be-auscultated region is the region corresponding to a human organ and a to-be-auscultated point is the position of the organ to be listened to. The specific regions and points can be configured in advance on the auscultation terminal, and the corresponding configuration information is read during use to generate them.
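As a concrete illustration of such pre-configured regions and points, the following minimal Python sketch shows one way the configuration information could be represented and loaded at the auscultation terminal; the JSON layout, field names and normalized model coordinates are illustrative assumptions, not part of the patent:
import json
from dataclasses import dataclass

@dataclass
class AuscultationPoint:
    point_id: str   # e.g. "auscultation point 1"
    x: float        # horizontal position in normalized torso-model coordinates (0..1)
    y: float        # vertical position in normalized torso-model coordinates (0..1)

@dataclass
class AuscultationRegion:
    part_id: str    # part identifier, e.g. an organ name or a simple number
    shape: str      # "rect", "circle" or "ellipse", as drawn in fig. 1
    bounds: tuple   # shape parameters in normalized model coordinates
    points: list    # the to-be-auscultated points of this region

def load_region_config(path: str) -> list:
    # Read the pre-configured auscultatable regions from a JSON file.
    with open(path, encoding="utf-8") as f:
        raw = json.load(f)
    return [
        AuscultationRegion(
            part_id=r["part_id"],
            shape=r["shape"],
            bounds=tuple(r["bounds"]),
            points=[AuscultationPoint(**p) for p in r["points"]],
        )
        for r in raw["regions"]
    ]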
It should be noted that, because auscultation is a dynamic process in which several target auscultation points must be auscultated in turn, the target patient's body inevitably moves while the auscultation point is being changed; if only a static image served as the auscultation reference model, the target patient could no longer aim at the points after moving. Therefore, after the remote auscultation system starts, the auscultation terminal captures a video stream of the current view through the camera and determines a patient image of the target patient from each frame, thereby generating a dynamically displayed auscultation reference model.
S22, the auscultation terminal tracks the position of the stethoscope head and displays the real-time projection position of the head on the auscultation reference model.
It should be noted that, because no spatial sensor is arranged in the stethoscope head in this embodiment, the spatial coordinates of the head cannot be obtained directly. During auscultation the position of the target patient is relatively fixed, and since the patient usually faces the camera and the stethoscope head is held close to the body, the head and the body can be regarded as approximately coincident in the Z-axis direction. Based on this, the planar position of the stethoscope head is tracked and determined with image tracking technology, and its projection along the Z axis is determined as its position on the body. The stethoscope head is thus located without a spatial sensor, avoiding the sensor's interference with the vibration of the aluminum diaphragm while still positioning the head accurately.
It should be noted that the shape of the stethoscope head is fixed and universal, so when the head appears in the patient image it can be identified with image recognition technology and then tracked across the video stream with image tracking technology, for example by tracking the head in the foreground image after foreground extraction with a background difference method. As shown in fig. 1, the projected position of the stethoscope head is displayed in real time as the dotted circle. Image tracking is a technology well known to those skilled in the art and is not repeated here.
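To make the foreground-extraction approach mentioned above concrete, here is a minimal sketch using OpenCV's MOG2 background subtractor followed by a largest-contour track of the circular head; the camera index, thresholds and largest-contour heuristic are assumptions for the sketch, not the patent's prescribed pipeline:
import cv2

cap = cv2.VideoCapture(0)  # camera 111 of the auscultation terminal (index assumed)
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # background difference -> foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        head = max(contours, key=cv2.contourArea)  # assume the head dominates the foreground
        (cx, cy), radius = cv2.minEnclosingCircle(head)
        cv2.circle(frame, (int(cx), int(cy)), int(radius), (0, 255, 0), 2)
    cv2.imshow("real-time projection position", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()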
S23, highlighting a target auscultation region and a target auscultation point on the display, wherein the target auscultation region is the to-be-auscultated region in which the real-time projection position lies, and the target auscultation point is a to-be-auscultated point of the target auscultation region.
It should be noted that, as shown in fig. 1, the auscultation reference model may include several to-be-auscultated regions. Because the points must be auscultated one by one, this embodiment takes the region the stethoscope head has entered as the target auscultation region and highlights it in the auscultation reference model, so that the target patient knows the current auscultation position more clearly, which raises the reference value of the model.
It should be noted that, since the to-be-auscultated regions are divided in advance, when the real-time projection position is detected to enter one of them, that region is determined as the target auscultation region. How to judge whether a projected point falls inside a region of a planar image is well known to those skilled in the art and is not repeated here.
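For completeness, a sketch of the region-entry test, assuming the illustrative region shapes from the configuration sketch above:
def point_in_region(px: float, py: float, region) -> bool:
    # Test whether the projected point lies inside a region; shapes follow
    # the illustrative AuscultationRegion above (rect and circle shown).
    if region.shape == "rect":
        x, y, w, h = region.bounds
        return x <= px <= x + w and y <= py <= y + h
    if region.shape == "circle":
        cx, cy, r = region.bounds
        return (px - cx) ** 2 + (py - cy) ** 2 <= r ** 2
    raise ValueError(f"unsupported shape: {region.shape}")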
S24, when the real-time projection position overlaps the target auscultation point, the pickup device is started, and the auscultation terminal generates target body sound information from the real-time audio collected by the pickup device and sends it to the doctor terminal for playback.
It should be noted that when the real-time projection position overlaps the target auscultation point, the stethoscope head can be judged to be on, or approaching, the target auscultation point; only then is the pickup device started to collect body sound, which avoids collecting useless body sound while the head is not yet aligned with the point and affecting the subsequent diagnosis.
It should be noted that after the target body sound information is generated in this embodiment, it can be played at the doctor terminal in real time if real-time remote auscultation is required. Of course, for auscultation scenarios without a real-time requirement, for example self-service physical examination, the examinee collects the target body sound information by operating the auscultation terminal, the terminal stores the information locally or at the doctor terminal, and the doctor plays and diagnoses it some time after the collection is complete. The remote auscultation system can thus satisfy different auscultation scenarios, widening its range of application.
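The real-time and store-for-later paths could be dispatched with something as small as the following sketch; the length-prefixed record format and the send_fn callback are assumptions, since the patent does not specify the transport:
def dispatch_body_sound(message: bytes, realtime: bool,
                        store_path: str, send_fn) -> None:
    # Real-time remote auscultation: forward to the doctor terminal at once.
    # Otherwise: persist for later playback, e.g. self-service physical examination.
    if realtime:
        send_fn(message)  # whatever network primitive the deployment uses
    else:
        with open(store_path, "ab") as f:
            f.write(len(message).to_bytes(4, "big"))  # simple length-prefixed record
            f.write(message)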
With the technical solution of this embodiment, no spatial sensor needs to be arranged in the stethoscope head, which avoids the sensor's interference with the vibration of the aluminum diaphragm and improves the accuracy of the body sound collected by the head. Meanwhile, an auscultation reference model generated at the auscultation terminal guides the target patient, and image recognition and image tracking technology locate the stethoscope head, ensuring the accuracy of the auscultation position.
In addition, referring to fig. 3, in an embodiment, step S21 of the embodiment shown in fig. 2 further includes, but is not limited to, the following steps:
S31, identifying a torso contour from the patient image;
S32, acquiring a preset torso model, and stretching or scaling the torso model so that it matches the torso contour;
S33, determining the stretched or scaled torso model as the auscultation reference model.
It should be noted that remote auscultation mainly involves the torso, so in this embodiment the torso contour is identified from the patient image after it is acquired, avoiding interference from body parts that need no auscultation on the subsequent model generation.
It should be noted that the torso model may be a preset human torso model, for example a two-dimensional model of the upper body; because body types differ between patients, the model's proportions may be preset standard proportions. After the torso contour of the target patient is obtained, the torso model is stretched or scaled according to the contour: for example, when the contour is wider than the model, the model is stretched proportionally, and when the contour is narrower, the model is scaled down proportionally, so that the model matches the contour. The displayed auscultation reference model thus resembles the patient's body type, raising its reference value.
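A minimal sketch of this stretch-or-scale step, matching only the axis-aligned bounding box of the identified torso contour (a fuller implementation might match width and height separately or use landmarks); OpenCV and NumPy are assumed:
import cv2
import numpy as np

def fit_torso_model(model_img: np.ndarray, torso_contour: np.ndarray) -> np.ndarray:
    # Resize the standard-proportion torso model so its size matches the
    # bounding box of the torso contour identified in the patient image.
    x, y, w, h = cv2.boundingRect(torso_contour)  # contour from e.g. cv2.findContours
    return cv2.resize(model_img, (w, h), interpolation=cv2.INTER_LINEAR)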
In addition, in an embodiment, the torso model is pre-divided into a plurality of auscultatable regions, and the auscultatable regions are stretched or scaled synchronously with the torso model. Referring to fig. 4, after step S33 of the embodiment shown in fig. 3 is performed, the method further includes, but is not limited to, the following steps:
S41, acquiring auscultation demand information, wherein the auscultation demand information indicates which body parts of the target patient require auscultation and is either pre-configured at the auscultation terminal or configured by the doctor terminal and sent to the auscultation terminal;
S42, determining at least one to-be-auscultated region from the plurality of auscultatable regions according to the auscultation demand information;
S43, displaying the auscultation reference model on the display, with the to-be-auscultated regions and the auscultatable regions shown in different region display styles.
It should be noted that the auscultatable regions divided in the torso model are all regions of the human body where auscultation is possible; since the organ distribution of the human body is the same for everyone, the auscultatable regions can be preset in the torso model according to organ distribution, each region corresponding to the position of one or more organs.
It should be noted that, as described for the embodiment shown in fig. 3, the torso model is stretched or scaled according to the torso contour, and in this process the auscultatable regions are stretched or scaled synchronously, ensuring the accuracy of the auscultation reference model.
It should be noted that, because every patient's physical condition differs, the auscultation demand information of this embodiment may be pre-configured or configured in real time by the doctor terminal. For example, to realize real-time remote auscultation, the doctor terminal can interact with the auscultation terminal in real time: the doctor sets the to-be-auscultated regions at the doctor terminal and sends the configuration to the auscultation terminal as auscultation demand information, which the auscultation terminal parses to determine the regions. For scenarios such as group physical examinations, the required regions are determined before auscultation, i.e. the demand information is pre-configured in the auscultation terminal and is read to determine the regions once the auscultation flow is triggered. The to-be-auscultated regions can thus be configured flexibly.
It should be noted that, since the auscultation reference model has already been divided into several auscultatable regions, after the to-be-auscultated regions are determined they can be displayed in a different region display style from the auscultatable regions, for example by differing colors or brightness. On this basis, once the target auscultation region is determined by image tracking of the stethoscope head, it is further displayed in highlighted form, i.e. the to-be-auscultated regions are shown at a lower brightness than the target auscultation region. How to set region display styles is well known to those skilled in the art and is not repeated here.
In addition, in an embodiment, each auscultatable region is preset with a corresponding part identifier indicating a body part. Referring to fig. 5, step S24 of the embodiment shown in fig. 2 further includes, but is not limited to, the following steps:
S51, the auscultation terminal determines a target identifier, which is the part identifier corresponding to the target auscultation region;
S52, the auscultation terminal determines the real-time audio collected by the pickup device as original body sound information;
S53, the auscultation terminal generates auscultation description information from the target identifier and the target auscultation point, and combines the auscultation description information and the original body sound information into target body sound information.
In the above embodiment, to distinguish the auscultatable regions, a part identifier is preconfigured for each region in the torso model; it may be the name of the corresponding organ or a simple number. This embodiment does not limit the specific form of the part identifier, as long as different auscultatable regions can be distinguished.
It should be noted that the real-time audio collected by the pickup device comes from the vibration of the aluminum diaphragm inside the stethoscope head, so it can serve as the original body sound information reflecting the actual body sound of the target patient.
It should be noted that part identifiers and auscultation points are preset in this embodiment, so, to let the doctor distinguish the collection positions of different pieces of original body sound information, auscultation description information is generated from the target identifier and the target auscultation point. For example, if the target identifier is 'organ A' and the target auscultation point is 'auscultation point 1', the auscultation description information may be 'auscultation point 1 of organ A'. This ensures that the doctor can learn the collection position of the original body sound information from the description, providing a data basis for improving diagnostic efficiency.
It should be noted that in this embodiment the auscultation description information and the original body sound information are combined into the target body sound information and sent to the doctor terminal, which may display the description graphically and play the original body sound as audio, realizing remote auscultation. Of course, since the auscultation reference model is known, the doctor terminal may also generate the auscultation reference model synchronously and highlight the target auscultation point in its copy of the model according to the description: for example, the doctor terminal parses the description 'auscultation point 1 of organ A', determines that the target identifier of the target auscultation region is 'organ A', and displays the target auscultation point 'auscultation point 1' in a blinking style. The doctor terminal's specific way of displaying the auscultation description information can be adjusted to actual needs, as long as the collection position of the original body sound information can be indicated.
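One possible way to combine the auscultation description information and the original body sound into a single target body sound message is sketched below with a JSON-plus-base64 container; the patent does not specify a transport format, so every field name here is an assumption:
import base64
import json
from typing import Optional

def build_target_body_sound(target_id: str, point_id: str, raw_audio: bytes,
                            model_output: Optional[dict] = None) -> bytes:
    # Combine the auscultation description information with the original
    # body sound (and a model output result, when one exists) into one
    # message for the doctor terminal.
    message = {
        "description": f"{point_id} of {target_id}",  # e.g. "auscultation point 1 of organ A"
        "raw_audio_b64": base64.b64encode(raw_audio).decode("ascii"),
    }
    if model_output is not None:
        message["model_output"] = model_output
    return json.dumps(message).encode("utf-8")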
In addition, in an embodiment, the auscultation terminal is further provided with a plurality of selectable audio models and a plurality of selectable model parameters, each selectable audio model is pre-associated with at least one part identifier, and the part identifiers associated with the selectable model parameters differ from one another. Referring to fig. 6, step S53 of the embodiment shown in fig. 5 further includes, but is not limited to, the following steps:
S61, when no selectable audio model matches the target identifier, combining the auscultation description information and the original body sound information into the target body sound information;
S62, when a target audio model is matched from the selectable audio models according to the target identifier, obtaining target model parameters from the selectable model parameters according to the target identifier, configuring the target model parameters into the target audio model, then inputting the original body sound information into the target audio model for audio processing to obtain a model output result, and combining the auscultation description information, the original body sound information and the model output result into the target body sound information.
It should be noted that the auscultation mode of each auscultatable region is known in advance, so the processing mode of its body sound information is also known in advance. For example, the body sounds of some organs need no processing because the doctor must listen to the unprocessed original sound, while organs with louder noise are better auscultated after filtering or other audio processing, for which a selectable audio model can be preset in a targeted manner. The related art offers many models for processing body sound information, and this embodiment involves no specific model improvement, so they are not described here.
It should be noted that, because not all body sounds collected at auscultation points need processing, this embodiment matches a selectable audio model according to the target identifier, and when the matching fails the target body sound information can be obtained directly from the original body sound information.
It should be noted that the selectable audio model may be a filter plus a deep learning network based on feature extraction. The body sound processing of different auscultation regions may use the same model structure, but because the training data differ, a training set, a test set and target model parameters must be configured for each region; the training and test sets are pre-labelled samples, so that the model can recognize the audio accurately, and the target model parameters may be parameters such as the convolution kernels and strides of each layer of the deep learning network, adjusted to the specific structure of the model. How to label sample data is well known to those skilled in the art and is not detailed here.
It should be noted that when a target audio model is successfully matched according to the target identifier and the corresponding target model parameters are obtained, the parameters are configured into the target audio model, and the original body sound information, the target training set and the target test set are input into the model for feature extraction and recognition to obtain a model output result. The model output result may be a preliminary diagnosis derived from recognizing the original body sound information, so it can assist the doctor's diagnosis.
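The match-or-pass-through logic of steps S61 and S62 might look like the following sketch; the registry dictionaries and the configure/run methods are assumptions standing in for whatever model objects a deployment provides:
from typing import Optional

SELECTABLE_AUDIO_MODELS = {}  # part identifier -> model object (a model may appear under several identifiers)
SELECTABLE_MODEL_PARAMS = {}  # part identifier -> its own parameters (filter settings, training set, test set)

def process_body_sound(target_id: str, raw_audio: bytes) -> Optional[dict]:
    # Return a model output result, or None when no selectable audio model
    # matches the target identifier (step S61: send description + raw audio only).
    model = SELECTABLE_AUDIO_MODELS.get(target_id)
    if model is None:
        return None
    params = SELECTABLE_MODEL_PARAMS[target_id]
    model.configure(params)      # S62: configure the target model parameters
    return model.run(raw_audio)  # audio processing -> model output result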
It should be noted that the application scenario of this embodiment is remote auscultation. In non-real-time remote auscultation the target patient may leave the remote auscultation system after completing body sound collection, and when the model output result is in doubt it is difficult to have the target patient auscultated again in time. To ensure auscultation accuracy, this embodiment therefore sends the original body sound information, the auscultation description information and the model output result to the doctor terminal together; after reviewing the model output result, the doctor can replay the original body sound information for a second judgment, ensuring the accuracy of remote auscultation.
In addition, in an embodiment, the target audio model includes a target filter and a body sound diagnostic model, and the target model parameters include a target filtering parameter, a target training set and a target test set. Referring to fig. 7, step S62 of the embodiment shown in fig. 6 further includes, but is not limited to, the following steps:
S71, configuring the target filter according to the target filtering parameter, and inputting the original body sound information into the target filter for filtering to obtain filtered body sound information;
S72, inputting the filtered body sound information, the target training set and the target test set into the body sound diagnostic model;
S73, extracting audio features from the filtered body sound information through the body sound diagnostic model, and generating a model output result from the audio features after training with the target training set and the target test set, the model output result indicating the body sound diagnostic information corresponding to the filtered body sound information.
It should be noted that the filter of the target audio model may be a common band-pass filter or a mel-spectrum filter. A band-pass filter passes audio in a specific frequency band so as to reduce the influence of noise; a mel-spectrum filter optimizes the audio using the frequency characteristics of the audio signal and can enhance and amplify the body sounds of certain organs, improving auscultation accuracy.
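For the band-pass variant, a sketch with SciPy; the 20 to 400 Hz default is a band commonly used for heart sounds and merely stands in for the target filtering parameter, which the patent configures per region:
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_body_sound(audio: np.ndarray, sr: int,
                        low_hz: float = 20.0, high_hz: float = 400.0) -> np.ndarray:
    # Pass only the configured frequency band of the raw body sound to
    # reduce the influence of out-of-band noise.
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfiltfilt(sos, audio)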
It should be noted that the body sound diagnostic model may be a common deep learning network that classifies body sound features: by extracting the audio features of the body sound information and combining them with the labelled training set and test set, it determines the class corresponding to the body sound information, realizing automatic diagnosis and producing a model output result with good auxiliary value.
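As a minimal stand-in for that extract-features-then-classify step, the sketch below uses MFCC features and a k-nearest-neighbours classifier; the patent envisages a deep learning network trained and evaluated with the target training and test sets, so both the feature choice and the classifier here are assumptions (the test-set evaluation is omitted):
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def diagnose_body_sound(filtered: np.ndarray, sr: int,
                        train_x: np.ndarray, train_y: np.ndarray) -> str:
    # Extract audio features from the filtered body sound and classify them
    # against the labelled training set; train_x holds one 20-dim MFCC mean
    # vector per labelled recording.
    mfcc = librosa.feature.mfcc(y=filtered.astype(np.float32), sr=sr, n_mfcc=20)
    feature = mfcc.mean(axis=1).reshape(1, -1)  # one fixed-length vector per recording
    clf = KNeighborsClassifier(n_neighbors=3).fit(train_x, train_y)
    return clf.predict(feature)[0]  # body sound diagnostic label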
In addition, in an embodiment, referring to fig. 8, after step S23 of the embodiment shown in fig. 2 is performed, the method further includes, but is not limited to, the following steps:
S81, establishing a plane coordinate system from the auscultation reference model, and determining the coordinates of each target auscultation point;
S82, tracking the position of the stethoscope head, and determining the projected coordinates of the image center point of the head in the plane coordinate system as the stethoscope head coordinates;
S83, when the stethoscope head coordinates overlap the coordinates of a target auscultation point, the dwell time of the head exceeds a preset auscultation duration, and the head coordinates then change and leave the coordinates of that point, determining the target auscultation point at which the original body sound information was collected this time as an auscultated point;
S84, displaying the target auscultation region on the display, with the auscultated points and the remaining target auscultation points shown in different auscultation point display styles within the target auscultation region.
It should be noted that, as described in the above embodiments, the auscultation reference model and the to-be-auscultated regions may be stretched or scaled; to achieve accurate positioning, this embodiment therefore establishes a plane coordinate system from the auscultation reference model. The coordinate system may take any position as its origin of coordinates, which this embodiment does not limit.
It should be noted that after the plane coordinate system is obtained, the positions of the auscultation points are fixed and known, so the coordinates of each target auscultation point can be determined first and the stethoscope head tracked afterwards. Because the head is a circular structure, and the head and the patient image are captured by the same camera, the projection of the head and the auscultation reference model lie in the same plane coordinate system; the image center point of the head can therefore be projected into the coordinate system and the projected coordinates determined as the stethoscope head coordinates. Alignment of the head with the target auscultation point is then judged by simple comparison of coordinate values, ensuring the auscultation result.
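Steps S82 and S83 reduce to a tolerance test on plane coordinates plus a dwell timer, as in the following sketch; the pixel tolerance and the 3 s value for the preset auscultation duration are illustrative assumptions:
import time

def head_overlaps_point(head_xy: tuple, point_xy: tuple, tol: float = 5.0) -> bool:
    # Treat the projected image center point of the stethoscope head as its
    # coordinates and test overlap with a target auscultation point within
    # a small pixel tolerance.
    dx = head_xy[0] - point_xy[0]
    dy = head_xy[1] - point_xy[1]
    return dx * dx + dy * dy <= tol * tol

def dwell_long_enough(arrived_at: float, min_seconds: float = 3.0) -> bool:
    # Only count the point as auscultated once the head has stayed on it
    # longer than the preset auscultation duration.
    return time.time() - arrived_at > min_seconds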
It should be noted that, because the position of the auscultation head is fixed during auscultation, after the coordinates overlap, whether auscultation is actually performed is judged by whether the dwell time exceeds the preset auscultation duration. This avoids starting the recording of body sounds the moment the coordinates overlap during alignment, which would capture the wrong body sound audio.
It should be noted that, when a change in the coordinates of the auscultation head is detected, the present embodiment may determine that auscultation has finished, determine the target auscultation point of this auscultation as an auscultated point, and display the auscultated point and the other points not yet auscultated in different auscultation point display styles; for example, the auscultated point may be set to gray while the points not yet auscultated remain highlighted, improving the reference value of the auscultation reference model.
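The dwell-and-leave rule of steps S81-S84 can be summarized in a short sketch; the 10-pixel overlap radius, 3-second minimum dwell, and all names are illustrative assumptions:

```python
# Sketch of the dwell-time logic: a target auscultation point becomes an
# auscultated point only after the head has stayed on it longer than the
# preset duration and then moved away. Thresholds are assumed values.
OVERLAP_RADIUS_PX = 10.0   # how close counts as "coordinates overlap"
MIN_DWELL_S = 3.0          # preset auscultation duration

def overlapping(head_xy, point_xy, radius=OVERLAP_RADIUS_PX):
    dx, dy = head_xy[0] - point_xy[0], head_xy[1] - point_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

def track_auscultated_points(samples, target_points):
    """samples: iterable of (timestamp_s, (x, y)) head positions;
    target_points: {name: (x, y)} in the plane coordinate system."""
    auscultated, current, dwell_start = set(), None, 0.0
    for ts, head_xy in samples:
        hit = next((n for n, p in target_points.items()
                    if n not in auscultated and overlapping(head_xy, p)), None)
        if hit == current:
            continue  # still dwelling on (or still away from) the same point
        # Head just left `current`: record it only if the dwell was long enough.
        if current is not None and ts - dwell_start >= MIN_DWELL_S:
            auscultated.add(current)
        current, dwell_start = hit, ts
    return auscultated
```

A display layer would then render the points in `auscultated` in gray and keep the remaining target auscultation points highlighted, per step S84.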
As shown in fig. 9, fig. 9 is a block diagram of an intelligent stethoscope-based remote auscultation device according to an embodiment of the present application. The application also provides a remote auscultation device based on the intelligent stethoscope, which comprises:
the processor 901, which may be implemented by a general-purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, and is configured to execute related programs so as to implement the technical solutions provided by the embodiments of the present application;
the memory 902, which may be implemented in the form of a read-only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM). The memory 902 may store an operating system and other application programs. When the technical solutions provided in the embodiments of the present specification are implemented by software or firmware, the relevant program codes are stored in the memory 902 and invoked by the processor 901 to execute the intelligent stethoscope-based remote auscultation method of the embodiments of the present application;
an input/output interface 903 for inputting and outputting information;
the communication interface 904, configured to implement communication interaction between the device and other devices, where the communication may be implemented in a wired manner (e.g., USB, network cable) or in a wireless manner (e.g., mobile network, Wi-Fi, Bluetooth);
a bus 905 that transfers information between the various components of the device (e.g., the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 are communicatively coupled to each other within the device via a bus 905.
The embodiment of the application also provides a remote auscultation system which comprises the remote auscultation device based on the intelligent stethoscope.
The embodiment of the application also provides a storage medium, which is a computer readable storage medium, and the storage medium stores a computer program, and the computer program realizes the remote auscultation method based on the intelligent stethoscope when being executed by a processor.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, and the remote memory may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The apparatus embodiments described above are merely illustrative: the elements illustrated as separate components may or may not be physically separate, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically include computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit and scope of the present invention, and these equivalent modifications or substitutions are included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A remote auscultation method based on an intelligent stethoscope, characterized in that it is applied to a remote auscultation system, the remote auscultation system comprising an intelligent stethoscope, an auscultation terminal and a doctor terminal, the auscultation terminal being provided with a camera and a display, the intelligent stethoscope being provided with an auscultation head and a sound pickup device, and the sound pickup device being in communication connection with the auscultation terminal, the remote auscultation method based on the intelligent stethoscope comprising:
the auscultation terminal acquires a patient image of a target patient through the camera, generates an auscultation reference model according to the patient image, and displays the auscultation reference model on the display, wherein the target patient is the current user of the intelligent stethoscope, the auscultation reference model is used for indicating a human body contour and at least one to-be-auscultated region in the patient image, and each to-be-auscultated region comprises at least one to-be-auscultated point;
The auscultation terminal performs position tracking on the auscultation head, and displays the real-time projection position of the auscultation head on the auscultation reference model;
highlighting the target auscultation area and the target auscultation point on the display, wherein the target auscultation area is the to-be-auscultated area where the real-time projection position is located, and the target auscultation point is the to-be-auscultated point of the target auscultation area;
when the real-time projection position is overlapped with the target auscultation point, the sound pickup device is started, the auscultation terminal generates target body sound information according to real-time audio collected by the sound pickup device, and the target body sound information is sent to the doctor terminal to be played.
2. The intelligent stethoscope-based remote auscultation method as in claim 1, wherein the generating an auscultation reference model according to the patient image comprises:
identifying a torso contour from the patient image;
obtaining a preset torso model, and stretching or scaling the torso model so that the torso model matches the torso contour;
determining the stretched or scaled torso model as the auscultation reference model.
3. The intelligent stethoscope-based remote auscultation method as in claim 2, wherein the torso model is pre-divided into a plurality of auscultatable areas that are stretched or scaled synchronously with the torso model, and wherein after the determining the stretched or scaled torso model as the auscultation reference model, the method further comprises:
acquiring auscultation demand information, wherein the auscultation demand information is used for indicating the required auscultation parts of the target patient, and the auscultation demand information is pre-configured on the auscultation terminal or is configured by the doctor terminal and sent to the auscultation terminal;
determining at least one to-be-auscultated region from the plurality of auscultatable areas according to the auscultation demand information;
displaying the auscultation reference model on the display, and displaying the to-be-auscultated region and the auscultatable areas in different region display styles.
4. The intelligent stethoscope-based remote auscultation method according to claim 3, wherein each auscultatable area is preset with a corresponding part identifier, the part identifier is used for indicating a body part, and the generating, by the auscultation terminal, of the target body sound information according to the real-time audio collected by the sound pickup device comprises:
the auscultation terminal determines a target identifier, wherein the target identifier is the part identifier corresponding to the target auscultation area;
the auscultation terminal determines real-time audio collected by the pickup device as original body sound information;
and the auscultation terminal generates auscultation description information according to the target identifier and the target auscultation point, and combines the auscultation description information and the original body sound information into the target body sound information.
5. The intelligent stethoscope-based remote auscultation method as in claim 4, wherein the auscultation terminal is further provided with a plurality of selectable audio models and a plurality of selectable model parameters, each selectable audio model is pre-associated with at least one part identifier, the part identifiers associated with the respective selectable model parameters are different from each other, and the combining the auscultation description information and the original body sound information into the target body sound information comprises:
when no corresponding selectable audio model is matched according to the target identifier, combining the auscultation description information and the original body sound information into the target body sound information;
or, when a target audio model is matched from the selectable audio models according to the target identifier, acquiring target model parameters from the selectable model parameters according to the target identifier, configuring the target model parameters for the target audio model, inputting the original body sound information into the target audio model for audio processing to obtain a model output result, and combining the auscultation description information, the original body sound information and the model output result into the target body sound information.
6. The intelligent stethoscope-based remote auscultation method as in claim 5, wherein the target audio model includes a target filter and a body sound diagnosis model, the target model parameters include target filter parameters, a target training set and a target test set, and the inputting the original body sound information into the target audio model for audio processing to obtain a model output result comprises:
configuring the target filter according to the target filter parameters, and inputting the original body sound information into the target filter for filtering processing to obtain filtered body sound information;
inputting the filtered body sound information, the target training set and the target test set into the body sound diagnosis model;
and extracting audio features from the filtered body sound information through the body sound diagnosis model, and outputting a model output result according to the audio features and the training of the body sound diagnosis model on the target training set and the target test set, wherein the model output result indicates the body sound diagnosis information corresponding to the filtered body sound information.
7. The intelligent stethoscope-based remote auscultation method as in claim 4, wherein after the highlighting of the target auscultation area and the target auscultation point on the display, the method further comprises:
establishing a plane coordinate system according to the auscultation reference model, and determining the coordinates of each target auscultation point;
performing position tracking on the auscultation head, and determining the projection coordinates of the image center point of the auscultation head in the plane coordinate system as the coordinates of the auscultation head;
when the coordinates of the auscultation head overlap with the coordinates of a target auscultation point and the dwell time of the auscultation head exceeds the preset auscultation duration, and the coordinates of the auscultation head then change and leave the coordinates of that target auscultation point, determining the target auscultation point at which the original body sound information was collected this time as an auscultated point;
displaying the target auscultation area on the display, and displaying the auscultated points and the remaining target auscultation points in different auscultation point display styles within the target auscultation area.
8. A remote auscultation device based on an intelligent stethoscope, comprising at least one control processor and a memory communicatively connected with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the intelligent stethoscope-based remote auscultation method of any one of claims 1 to 7.
9. A remote auscultation system comprising the intelligent stethoscope-based remote auscultation apparatus of claim 8.
10. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the intelligent stethoscope-based remote auscultation method of any one of claims 1 to 7.
CN202310927542.2A 2023-07-26 2023-07-26 Remote auscultation method, device and remote auscultation system based on intelligent stethoscope Pending CN117038051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310927542.2A CN117038051A (en) 2023-07-26 2023-07-26 Remote auscultation method, device and remote auscultation system based on intelligent stethoscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310927542.2A CN117038051A (en) 2023-07-26 2023-07-26 Remote auscultation method, device and remote auscultation system based on intelligent stethoscope

Publications (1)

Publication Number Publication Date
CN117038051A true CN117038051A (en) 2023-11-10

Family

ID=88638218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310927542.2A Pending CN117038051A (en) 2023-07-26 2023-07-26 Remote auscultation method, device and remote auscultation system based on intelligent stethoscope

Country Status (1)

Country Link
CN (1) CN117038051A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118315042A (en) * 2024-06-07 2024-07-09 大连玖柒医疗科技有限公司 Medical stethoscope data line processing method and system

Similar Documents

Publication Publication Date Title
JP6878628B2 (en) Systems, methods, and computer program products for physiological monitoring
CN107967946B (en) Gastroscope operation real-time auxiliary system and method based on deep learning
CN107708571B (en) Ultrasonic imaging system and method
WO2013089072A1 (en) Information management device, information management method, information management system, stethoscope, information management program, measurement system, control program and recording medium
WO2016001868A1 (en) A method for acquiring and processing images of an ocular fundus by means of a portable electronic device
JP2017501005A (en) Wide-field retinal image acquisition system and method
CN107077531B (en) Stethoscope data processing method and device, electronic equipment and cloud server
CN112734799A (en) Body-building posture guidance system
CN117038051A (en) Remote auscultation method, device and remote auscultation system based on intelligent stethoscope
US20220005284A1 (en) Method for automatically capturing data from non-networked production equipment
CN113349897A (en) Ultrasonic puncture guiding method, device and equipment
JP6888620B2 (en) Control device, control method, program and sound output system
EP3595533B1 (en) Determining a guidance signal and a system for providing a guidance for an ultrasonic handheld transducer
CN114693593A (en) Image processing method, device and computer device
JP2008104551A (en) Ultrasonic diagnostic equipment
CN114359953A (en) Method and device for indicating auscultation position
US20220172840A1 (en) Information processing device, information processing method, and information processing system
CN107661085A (en) A kind of dynamic method with head position and stability data of real-time collecting eye
US20200013209A1 (en) Image and sound pickup device, sound pickup control system, method of controlling image and sound pickup device, and method of controlling sound pickup control system
CN110772210A (en) Diagnosis interaction system and method
CN111035404A (en) Self-service CT detecting system
JP7313895B2 (en) Acoustic diagnostic equipment
CN210990247U (en) Blood pressure detection system
JP2014226515A (en) Diagnosis support system
JP2014008096A (en) Electronic stethoscope, electronic auscultation system, information processing method, control program, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination