CN113762228A - Oral instrument disinfection and sterilization operation monitoring method and device - Google Patents

Oral instrument disinfection and sterilization operation monitoring method and device

Info

Publication number
CN113762228A
CN113762228A (application CN202111322545.0A)
Authority
CN
China
Prior art keywords
audio
picture
position information
human body
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111322545.0A
Other languages
Chinese (zh)
Other versions
CN113762228B (en)
Inventor
武文 (Wu Wen)
孟庆超 (Meng Qingchao)
王俊杰 (Wang Junjie)
武杨 (Wu Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Xuanjia Network Technology Co., Ltd.
Original Assignee
Nanjing Huiji Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Huiji Information Technology Co., Ltd.
Priority to CN202111322545.0A
Publication of CN113762228A
Application granted
Publication of CN113762228B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/57 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for processing of video signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Geophysics And Detection Of Objects (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a method and a device for monitoring the disinfection and sterilization of oral instruments. A video file shot in the cleaning room of a dental clinic is fed into an algorithm for processing: the whole video is traversed and analyzed from two angles, sound and picture; effective information is extracted from the video and combined in a logical judgment. In this way the two event types that are most important, and most recognizable, in the final-rinse process can be identified: placing dental instruments to be cleaned into the ultrasonic cleaner or taking them out, and final-rinsing the ultrasonically cleaned instruments under running water in a sink. From these events it is judged whether a standard final-rinse event occurs anywhere in the video.

Description

Oral instrument disinfection and sterilization operation monitoring method and device
Technical Field
The invention relates to the field of computer technology, and in particular to a method and a device for monitoring the disinfection and sterilization of oral instruments.
Background
After the outbreak of the novel coronavirus epidemic in 2020, the national health commission strengthened the monitoring of dental clinic cleaning rooms. The existing monitoring method is to install a camera in the cleaning room of each dental clinic, record the room while the clinic is in operation, and then submit the footage to a professional for review. Its advantage is that the whole cleaning process is checked on video, and the reviewer can observe whether cleaning personnel follow the process required by the Technical Operating Specification for Oral Instrument Disinfection and Sterilization; because the judgment is made manually, misjudgment is unlikely. However, dental clinics are numerous and cleaning times within a day are not fixed, so it is difficult to locate every cleaning operation performed each day; relying entirely on manual supervision consumes a great deal of time and manpower, and oversights during inspection are hard to avoid. To solve this, a behavior-detection algorithm from the field of artificial intelligence can analyze the recorded footage and identify the parts of the video in which cleaning is performed. These parts can either be judged directly by the algorithm for compliance with the specification, or the large number of useless segments can be discarded so that only the segments the algorithm considers likely to contain cleaning are handed to a reviewer. Either way, the manual workload is greatly reduced while accuracy is preserved.
Although human behavior detection has been studied in the artificial intelligence field, existing work targets generic behaviors such as running, jumping, and walking; a specific behavior such as cleaning instruments in a cleaning room has not been studied, so it is difficult to monitor whether cleaning-room operations are standard using existing behavior-detection algorithms alone. In addition, most existing human-pose detection algorithms cannot perform temporal detection, and those that can are so computationally expensive that they are difficult to apply in practice.
Disclosure of Invention
The invention provides a method and a device for monitoring the disinfection and sterilization of oral instruments, aiming to solve the problems that existing human-pose detection algorithms do not cover the specific behavior of cleaning instruments in a cleaning room and therefore cannot readily supervise whether cleaning-room operations are standard.
In a first aspect, the present invention provides a method for monitoring the disinfection and sterilization of oral instruments, comprising:
reading a first json file and a video file shot in the cleaning room, wherein the first json file records the sink position information, the absolute path of the video file, and the absolute path of the output file;
parsing the video file to obtain its audio track and picture frames;
cutting the audio into multiple segments, so that each segment contains a single type of content;
feeding each audio segment into a water-sound detection model to detect whether running-water sound occurs in it, and storing the detection results in a list;
converting the picture to grayscale and applying Gaussian blur;
feeding the picture into a human-body detection model to detect any person appearing in it, the weights of the hands and elbows being increased during detection; the detection results are then aggregated and the person's position is represented by a single point, which is stored in a list;
retrieving the sink position information and the human position information;
judging, from the sink position information and the human position information, whether the sink and the person are within a predetermined distance of each other;
if the sink and the person are within the predetermined distance, checking from the water-sound detection results whether water sound occurs in the audio at the same time;
if the sink and the person are within the predetermined distance and water sound occurs in the audio at the same time, determining that the operator performed a rinsing action;
if the sink and the person are within the predetermined distance but no water sound occurs in the audio at the same time, determining that the operator did not perform a rinsing action;
and outputting the judgment result to the first json file, the result comprising whether the operator performed a rinsing action and, if so, the start time and end time of the action.
With reference to the first aspect, in a first implementable manner of the first aspect, the sink position information includes the size of the sink in the picture, and the sink and the person are considered to be within the predetermined distance when the ratio of the distance from the sink's center point to its circumscribed circle to the distance from the person to the sink's center point is greater than 1/2.
With reference to the first aspect, in a second implementable manner of the first aspect, the first json file further records the number of frames the algorithm reads per second; before the graying and Gaussian blur processing, the method further comprises:
judging whether the per-second frame-reading rate recorded in the first json file is greater than the video file's own frame rate;
if it is greater, skipping frame extraction and applying graying and Gaussian blur to the picture directly;
and if it is less than or equal to the video file's own frame rate, extracting frames from the video file at the recorded rate and then applying graying and Gaussian blur to each extracted frame.
With reference to the first aspect, in a third implementable manner of the first aspect, the method further includes:
reading a second json file and a video file shot in the cleaning room, wherein the second json file records the ultrasonic cleaner position information, the absolute path of the video file, and the absolute path of the output file;
parsing the video file to obtain its audio track and picture frames;
cutting the audio into multiple segments, so that each segment contains a single type of content;
feeding each audio segment into a metal-collision-sound detection model to detect whether metal collision sound occurs in it, and storing the detection results in a list;
converting the picture to grayscale and applying Gaussian blur;
feeding the picture into a human-body detection model to detect any person appearing in it, the weights of the hands and elbows being increased during detection; the detection results are then aggregated and the person's position is represented by a single point, which is stored in a list;
retrieving the ultrasonic cleaner position information and the human position information;
separately cropping out the region where the ultrasonic cleaner is located and performing motion detection on it to obtain picture change information;
judging, from the ultrasonic cleaner position information and the human position information, whether the ultrasonic cleaner and the person are within a predetermined distance of each other;
if the ultrasonic cleaner and the person are within the predetermined distance, checking from the metal-collision-sound detection results whether metal collision sound occurs in the audio at the same time and whether the picture change information shows an obvious picture change;
if the ultrasonic cleaner and the person are within the predetermined distance, metal collision sound occurs in the audio, and the picture change information shows an obvious change at the same time, determining that the operator performed an instrument placing/removing action;
if the ultrasonic cleaner and the person are within the predetermined distance but the two conditions of metal collision sound in the audio and an obvious picture change are not both met at the same time, determining that the operator did not perform an instrument placing/removing action;
and outputting the judgment result to the second json file, the result comprising whether the operator performed an instrument placing/removing action and, if so, the start time and end time of the action.
With reference to the third implementable manner of the first aspect, in a fourth implementable manner of the first aspect, separately cropping out the region where the ultrasonic cleaner is located and performing motion detection to obtain the picture change information comprises:
for each frame, taking the ultrasonic cleaner region of the previous frame as the background, modeling the ultrasonic cleaner region of the current frame against it, and performing a difference operation to obtain a difference image;
performing connectivity analysis on the difference image and judging whether the changed area is larger than a preset threshold;
and if the changed area is larger than the preset threshold, recording the current time as picture change information, that moment being regarded as an obvious picture change.
With reference to the fourth implementable manner of the first aspect, in a fifth implementable manner of the first aspect, the water-sound detection model and the metal-collision-sound detection model each employ a hidden Markov model.
With reference to the fourth implementable manner of the first aspect, in a sixth implementable manner of the first aspect, the sink position information and the ultrasonic cleaner position information are each marked by the top-left vertex coordinates and bottom-right vertex coordinates of the rectangle enclosing the sink or the ultrasonic cleaner in the video file.
With reference to the fourth implementable manner of the first aspect, in a seventh implementable manner of the first aspect, the ultrasonic cleaner position information includes the size of the ultrasonic cleaner, and the ultrasonic cleaner and the person are considered to be within the predetermined distance when the ratio of the distance from the ultrasonic cleaner's center point to its circumscribed circle to the distance from the person to the ultrasonic cleaner's center point is greater than 1/2.
With reference to the seventh implementable manner of the first aspect, in an eighth implementable manner of the first aspect, the preset threshold is 0.3 times the area occupied by the ultrasonic cleaner in the picture.
In a second aspect, the present invention provides an oral instrument disinfection and sterilization operation monitoring device, comprising:
a reading unit, configured to read a first json file and a video file shot in the cleaning room, wherein the first json file records the sink position information, the absolute path of the video file, and the absolute path of the output file;
a parsing unit, configured to parse the video file to obtain its audio track and picture frames;
a cutting unit, configured to cut the audio into multiple segments so that each segment contains a single type of content;
a first feeding unit, configured to feed each audio segment into the water-sound detection model to detect whether running-water sound occurs in it, and to store the detection results in a list;
a processing unit, configured to convert the picture to grayscale and apply Gaussian blur;
a second feeding unit, configured to feed the picture into a human-body detection model to detect any person appearing in it, the weights of the hands and elbows being increased during detection; the detection results are then aggregated and the person's position is represented by a single point, which is stored in a list;
a retrieving unit, configured to retrieve the sink position information and the human position information;
a first judging unit, configured to judge, from the sink position information and the human position information, whether the sink and the person are within a predetermined distance of each other;
an identifying unit, configured to check, when the sink and the person are within the predetermined distance, whether water sound occurs in the audio at the same time according to the water-sound detection results;
a second judging unit, configured to determine that the operator performed a rinsing action when the sink and the person are within the predetermined distance and water sound occurs in the audio at the same time, and to determine that the operator did not perform a rinsing action when the sink and the person are within the predetermined distance but no water sound occurs in the audio at the same time;
and an output unit, configured to output the judgment result to the first json file, the result comprising whether the operator performed a rinsing action and, if so, the start time and end time of the action.
The invention has the following beneficial effects. In the method and device for monitoring the disinfection and sterilization of oral instruments, the video file shot in the cleaning room is fed into an algorithm for processing; the whole video is traversed and analyzed from the two angles of sound and picture, effective information is extracted, and a logical judgment combines that information so that the two event types that are most important, and most recognizable, in the final-rinse process can be identified: placing dental instruments to be cleaned into the ultrasonic cleaner or taking them out, and final-rinsing the ultrasonically cleaned instruments under running water in a sink. From these events it is judged whether a standard final-rinse event occurs in the video. This solves the problems that existing behavior-detection algorithms alone cannot supervise whether cleaning-room operations are standard, that most existing human-pose detection algorithms cannot perform temporal detection, and that the algorithms which can are too computationally expensive for practical use.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain further drawings from them without inventive effort.
Fig. 1 is a flowchart of the identification of the final-rinse operation in the method for monitoring the disinfection and sterilization of oral instruments according to an embodiment of the present invention.
Fig. 2 is a flowchart of an optional implementation of the method for monitoring the disinfection and sterilization of oral instruments according to an embodiment of the present invention.
Fig. 3 is a flowchart of the identification of the ultrasonic-cleaning and final-rinse operations in the method for monitoring the disinfection and sterilization of oral instruments according to an embodiment of the present invention.
Fig. 4 is a flowchart of another optional implementation of the method for monitoring the disinfection and sterilization of oral instruments according to an embodiment of the present invention.
Fig. 5 is a schematic view of the oral instrument disinfection and sterilization operation monitoring device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The technical solutions provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The technical solution of the invention monitors whether cleaning-room operators work to standard while ensuring that the privacy of cleaning personnel is not revealed. Based on opencv image processing, the invention detects and analyzes the behavior of dental clinic operators and finally concludes whether the cleaning operation meets the requirements. The technical solution is as follows: the video file shot in the cleaning room is fed into an algorithm for processing; the whole video is traversed and analyzed from the two angles of sound and picture; effective information is extracted from the video and combined in a logical judgment, identifying the two event types that are most important, and most recognizable, in the final-rinse process: placing dental instruments to be cleaned into the ultrasonic cleaner or taking them out; and final-rinsing the ultrasonically cleaned instruments under running water in a sink, so as to judge whether a standard final-rinse event occurs in the video.
Referring to Fig. 1, the present invention provides a method for monitoring the disinfection and sterilization of oral instruments; the method is executed by a processor and comprises the following steps.
step S101, reading a first json file and a video file shot in an oral cavity cleaning room, wherein the first json file records water tank position information, an absolute path of the position of the video file and an absolute path of an output file.
Specifically, json files are prepared in advance, a camera is installed in the oral cavity cleaning room, and video files in the oral cavity cleaning room are collected.
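As an illustration only, such a configuration file might be read as follows; the field names (sink_box, video_path, output_path) are assumptions of this sketch, since the patent specifies what the file records but not its schema:

```python
import json

# Minimal sketch of reading the first json file (field names are assumed).
with open("config_first.json", "r", encoding="utf-8") as f:
    cfg = json.load(f)

sink_box = cfg["sink_box"]        # [x1, y1, x2, y2]: top-left and bottom-right corners
video_path = cfg["video_path"]    # absolute path of the video file
output_path = cfg["output_path"]  # absolute path of the output file
```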
Step S102: parse the video file to obtain its audio track and picture frames.
Step S103: cut the audio into multiple segments, so that each segment contains a single type of content.
Specifically, the audio can be cut into short segments, for example two seconds each, to ensure that a single segment contains only one type of sound.
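A minimal sketch of this segmentation, assuming the audio track has already been extracted to a wav file (pydub with an available ffmpeg decoder is an assumption of the sketch, not a requirement of the patent):

```python
from pydub import AudioSegment  # assumes ffmpeg/libav is installed for decoding

# Cut the audio track into fixed two-second pieces, as suggested above.
audio = AudioSegment.from_file("clean_room_audio.wav")
SEG_MS = 2000  # two seconds per segment; the exact length is a tunable choice
segments = [audio[t:t + SEG_MS] for t in range(0, len(audio), SEG_MS)]
for i, seg in enumerate(segments):
    seg.export(f"segment_{i:04d}.wav", format="wav")
```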
Step S104: feed each audio segment into the water-sound detection model to detect whether running-water sound occurs in it, and store the detection results in a list.
In particular, the water-sound detection model may employ a hidden Markov model.
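One plausible realization, sketched under the assumption that MFCC features and hmmlearn's GaussianHMM are used (the patent fixes only the model class, not the features, library, or state count): train one HMM on water-sound clips and one on background clips, then label each segment by the higher log-likelihood.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_features(path):
    # Frame-level MFCCs; the feature choice is an assumption of this sketch.
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # shape: (frames, 13)

water_feats = [mfcc_features(p) for p in ("water1.wav", "water2.wav")]
water_model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
water_model.fit(np.vstack(water_feats), lengths=[len(f) for f in water_feats])

bg_feats = [mfcc_features(p) for p in ("bg1.wav", "bg2.wav")]
bg_model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
bg_model.fit(np.vstack(bg_feats), lengths=[len(f) for f in bg_feats])

def has_water_sound(segment_path):
    # Higher log-likelihood under the water model means "water sound present".
    feats = mfcc_features(segment_path)
    return water_model.score(feats) > bg_model.score(feats)
```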
Step S105: convert the picture to grayscale and apply Gaussian blur.
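The description states that the invention is based on opencv image processing; in opencv terms this step is a color-space conversion followed by a blur. The 5x5 kernel below is an illustrative choice, not a value fixed by the patent:

```python
import cv2

cap = cv2.VideoCapture("clean_room.mp4")
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # graying
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # Gaussian blur
cap.release()
```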
Step S106: feed the picture into the human-body detection model to detect any person appearing in it, the weights of the hands and elbows being increased during detection; the detection results are then aggregated and the person's position is represented by a single point, which is stored in a list.
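A sketch of collapsing the detection results to one point, assuming a keypoint-style detector and example weights (the patent states only that hand and elbow weights are increased; the detector, joint names, and weight values here are assumptions):

```python
import numpy as np

KEYPOINT_WEIGHTS = {"left_hand": 3.0, "right_hand": 3.0,
                    "left_elbow": 2.0, "right_elbow": 2.0}  # others default to 1.0

def person_point(keypoints):
    """keypoints: iterable of (name, x, y) from any pose/keypoint detector.
    Returns one weighted-centroid point standing in for the person."""
    pts = np.array([(x, y) for _, x, y in keypoints], dtype=float)
    w = np.array([KEYPOINT_WEIGHTS.get(name, 1.0) for name, _, _ in keypoints])
    cx, cy = (pts * w[:, None]).sum(axis=0) / w.sum()
    return float(cx), float(cy)
```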
Step S107: retrieve the sink position information and the human position information.
Step S108: judge, from the sink position information and the human position information, whether the sink and the person are within a predetermined distance of each other.
Specifically, the sink position information includes the size of the sink in the picture, and the sink and the person are considered to be within the predetermined distance when the ratio of the distance from the sink's center point to its circumscribed circle to the distance from the person to the sink's center point is greater than 1/2.
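The stated criterion (ratio greater than 1/2, i.e., the person is closer to the sink's center than two circumscribed-circle radii) can be checked directly from the marked rectangle; this is a sketch of that geometry only:

```python
import math

def within_predetermined_distance(box, person_pt):
    """box: (x1, y1, x2, y2) top-left/bottom-right corners of the sink rectangle;
    person_pt: the single point representing the person."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    radius = math.hypot(x2 - x1, y2 - y1) / 2.0  # circumscribed circle of the rectangle
    dist = math.hypot(person_pt[0] - cx, person_pt[1] - cy)
    return dist == 0 or radius / dist > 0.5      # ratio R/d > 1/2
```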
Step S109: if the sink and the person are within the predetermined distance, check from the water-sound detection results whether water sound occurs in the audio at the same time.
When the person is close enough to the sink, there is a high probability that the operator is rinsing. The algorithm then brings in the sound detection result: if water sound occurs at the same time, the algorithm considers that the operator performed a rinsing action.
Step S110: if the sink and the person are within the predetermined distance and water sound occurs in the audio at the same time, determine that the operator performed a rinsing action.
Step S111: if the sink and the person are within the predetermined distance but no water sound occurs in the audio at the same time, determine that the operator did not perform a rinsing action.
Step S112: output the judgment result, comprising whether the operator performed a rinsing action and, if so, the start time and end time of the action, to the first json file.
Specifically, the intersection of the time intervals in which water sound occurs in the audio with the time intervals in which the sink and the person are within the predetermined distance gives the start time and end time of the rinsing action.
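A minimal sketch of that interval intersection, assuming both cue streams have been converted to sorted (start, end) time pairs:

```python
def intersect_intervals(a, b):
    """a, b: sorted lists of (start, end) times, e.g. water-sound intervals and
    person-near-sink intervals; each overlap is one rinsing action with its
    start and end time."""
    events, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start, end = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if start < end:
            events.append((start, end))
        if a[i][1] < b[j][1]:  # advance whichever interval ends first
            i += 1
        else:
            j += 1
    return events
```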
To speed up the algorithm, whether to extract frames from the picture stream can be decided from the frame rate given in the json file. Referring to Fig. 2, as an optional implementation, the first json file also records the number of frames the algorithm reads per second; before the graying and Gaussian blur processing, the method further comprises the following steps.
Step S201: judge whether the per-second frame-reading rate recorded in the first json file is greater than the video file's own frame rate.
Step S202: if it is greater, skip frame extraction and apply graying and Gaussian blur to the picture directly.
Step S203: if it is less than or equal to the video file's own frame rate, extract frames from the video file at the recorded rate and then apply graying and Gaussian blur to each extracted frame.
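A sketch of this sampling logic with opencv (the fallback native frame rate of 25 fps is an assumption for streams that do not report one):

```python
import cv2

def iter_sampled_frames(video_path, read_fps):
    """Yield (timestamp_seconds, frame), reading every frame when read_fps
    exceeds the video's own frame rate and subsampling otherwise,
    mirroring steps S201-S203."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # assumed fallback
    step = 1 if read_fps > native_fps else max(1, round(native_fps / read_fps))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield idx / native_fps, frame
        idx += 1
    cap.release()
```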
Referring to Fig. 3, the method further comprises the following steps.
step S301, reading a second json file and a video file shot in the oral cavity cleaning room, wherein the second json file records the position information of the ultrasonic instrument, the absolute path of the position of the video file and the absolute path of the output file.
Step S302, analyzing the video file to obtain audio and pictures.
Step S303, the audio is cut into a plurality of sections of audio, so as to ensure the unicity of each section of audio content.
Step S304, each section of cut audio is brought into a metal collision sound detection model to detect whether metal collision sound occurs in each section of audio, and the detection result of the metal collision sound is stored in a list.
Step S305, graying and gaussian blurring the screen.
And S306, bringing the picture into a human body detection model to detect the human body appearing in the picture, wherein the weights of the hand and the elbow are increased during detection, and after the detection results are summarized, a point is used for replacing the position information of the human body, and the position information of the human body is stored in a list.
And step S307, calling the position information of the ultrasonic instrument and the position information of the human body.
And step S308, independently dividing the area where the position information of the ultrasonic instrument is located, and carrying out movement detection to obtain the picture change information.
Specifically, the metal collision sound detection model may employ a hidden markov model.
And step S309, judging whether the positions of the ultrasonic instrument and the person are within a preset distance according to the position information of the ultrasonic instrument, the position information of the human body and the picture change information.
Specifically, the ultrasonic instrument position information includes size information of the ultrasonic instrument, and when the ratio of the distance from the center point of the ultrasonic instrument to the circumscribed circle of the ultrasonic instrument to the distance from the person to the center point of the ultrasonic instrument is greater than 1/2, the positions of the ultrasonic instrument and the person are considered to be within a predetermined distance.
And S310, if the positions of the ultrasonic instrument and the person are within a preset distance, identifying whether the metal collision sound appears in the simultaneous audio and whether the picture changes obviously in the picture change information according to the metal collision sound detection result.
When the person is close enough to the position of the ultrasonic instrument, the operator has a high probability of taking out and putting in the instrument. At the moment, the algorithm brings in a sound detection result, and if metal collision sound occurs at the same time and the picture is obviously changed, the algorithm considers that the operator takes out the ultrasonic instrument and puts in the instrument.
Step S311: if the ultrasonic cleaner and the person are within the predetermined distance, metal collision sound occurs in the audio, and the picture change information shows an obvious change at the same time, determine that the operator performed an instrument placing/removing action.
Step S312: if the ultrasonic cleaner and the person are within the predetermined distance but the two conditions of metal collision sound in the audio and an obvious picture change are not both met at the same time, determine that the operator did not perform an instrument placing/removing action.
Step S313: output the judgment result to the second json file, the result comprising whether the operator performed an instrument placing/removing action and, if so, the start time and end time of the action.
Referring to Fig. 4, separately cropping out the region where the ultrasonic cleaner is located and performing motion detection to obtain the picture change information comprises the following steps.
Step S401: for each frame, take the ultrasonic cleaner region of the previous frame as the background, model the ultrasonic cleaner region of the current frame against it, and perform a difference operation to obtain a difference image.
Step S402: perform connectivity analysis on the difference image and judge whether the changed area is larger than a preset threshold.
Specifically, the preset threshold is 0.3 times the area occupied by the ultrasonic cleaner in the picture.
Step S403: if the changed area is larger than the preset threshold, record the current time as picture change information; that moment is regarded as an obvious picture change.
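These three steps can be sketched with opencv as follows; the binarization threshold of 25 is an illustrative assumption, while the 0.3 area ratio comes from the description. Both inputs are the grayscale, blurred ultrasonic cleaner regions of consecutive (sampled) frames.

```python
import cv2

def region_changed_significantly(prev_region, cur_region):
    """prev_region, cur_region: grayscale crops of the ultrasonic cleaner area
    from two consecutive frames. Implements steps S401-S403: difference image,
    connectivity (contour) analysis, area threshold."""
    diff = cv2.absdiff(prev_region, cur_region)                # difference image
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # binarize (assumed 25)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    changed_area = sum(cv2.contourArea(c) for c in contours)   # connected changed area
    region_area = prev_region.shape[0] * prev_region.shape[1]
    return changed_area > 0.3 * region_area                    # 0.3 of the region's area
```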
Furthermore, the sink position information and the ultrasonic cleaner position information are each marked by the top-left vertex coordinates and bottom-right vertex coordinates of the rectangle enclosing the sink or the ultrasonic cleaner in the video file.
The invention solves the problem that existing human-pose detection algorithms either cannot perform temporal detection or, when they can, require so many computation units that they cannot be applied in real production and daily life. Compared with the prior art, the invention makes full use of a highly reliable dedicated camera installed in the cleaning room, which can collect effective information around the clock, efficiently and with little labor; apart from initially marking the positions of the sink and the ultrasonic cleaner in the cleaning room, no further human resources are needed to supervise whether workers clean to standard. Moreover, because machine learning is combined with health supervision, video collected during subsequent operation can serve as additional training data, so models of ever higher accuracy can be trained to replace the old ones, and the accuracy of the whole algorithm keeps improving. The method quickly judges the behavior pattern of the operator in the dental clinic cleaning room from the available information; it is a lightweight judgment method with low hardware requirements. The invention can also be applied to the identification of disinfection and sterilization operations on other medical instruments and special equipment.
Referring to Fig. 5, the present invention further provides an oral instrument disinfection and sterilization operation monitoring device, comprising:
a reading unit 51, configured to read a first json file and a video file shot in the cleaning room, wherein the first json file records the sink position information, the absolute path of the video file, and the absolute path of the output file;
a parsing unit 52, configured to parse the video file to obtain its audio track and picture frames;
a cutting unit 53, configured to cut the audio into multiple segments so that each segment contains a single type of content;
a first feeding unit 54, configured to feed each audio segment into the water-sound detection model to detect whether running-water sound occurs in it, and to store the detection results in a list;
a processing unit 55, configured to convert the picture to grayscale and apply Gaussian blur;
a second feeding unit 56, configured to feed the picture into a human-body detection model to detect any person appearing in it, the weights of the hands and elbows being increased during detection; the detection results are then aggregated and the person's position is represented by a single point, which is stored in a list;
a retrieving unit 57, configured to retrieve the sink position information and the human position information;
a first judging unit 58, configured to judge, from the sink position information and the human position information, whether the sink and the person are within a predetermined distance of each other;
an identifying unit 59, configured to check, when the sink and the person are within the predetermined distance, whether water sound occurs in the audio at the same time according to the water-sound detection results;
a second judging unit 60, configured to determine that the operator performed a rinsing action when the sink and the person are within the predetermined distance and water sound occurs in the audio at the same time, and to determine that the operator did not perform a rinsing action when the sink and the person are within the predetermined distance but no water sound occurs in the audio at the same time;
and an output unit 61, configured to output the judgment result to the first json file, the result comprising whether the operator performed a rinsing action and, if so, the start time and end time of the action.
An embodiment of the invention further provides a storage medium storing a computer program which, when executed by a processor, implements some or all of the steps of the oral instrument disinfection and sterilization operation monitoring method. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, as for the embodiment of the monitoring device for the disinfection and sterilization operation of the oral instrument, the description is simple because the embodiment is basically similar to the embodiment of the method, and the relevant points can be referred to the description in the embodiment of the method.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (10)

1. A method for monitoring the disinfection and sterilization of oral instruments, comprising:
reading a first json file and a video file shot in the cleaning room, wherein the first json file records the sink position information, the absolute path of the video file, and the absolute path of the output file;
parsing the video file to obtain its audio track and picture frames;
cutting the audio into multiple segments, so that each segment contains a single type of content;
feeding each audio segment into a water-sound detection model to detect whether running-water sound occurs in it, and storing the detection results in a list;
converting the picture to grayscale and applying Gaussian blur;
feeding the picture into a human-body detection model to detect any person appearing in it, the weights of the hands and elbows being increased during detection; the detection results are then aggregated and the person's position is represented by a single point, which is stored in a list;
retrieving the sink position information and the human position information;
judging, from the sink position information and the human position information, whether the sink and the person are within a predetermined distance of each other;
if the sink and the person are within the predetermined distance, checking from the water-sound detection results whether water sound occurs in the audio at the same time;
if the sink and the person are within the predetermined distance and water sound occurs in the audio at the same time, determining that the operator performed a rinsing action;
if the sink and the person are within the predetermined distance but no water sound occurs in the audio at the same time, determining that the operator did not perform a rinsing action;
and outputting the judgment result to the first json file, the result comprising whether the operator performed a rinsing action and, if so, the start time and end time of the action.
2. The method of claim 1, wherein the sink position information comprises the size of the sink in the picture, and the sink and the person are considered to be within the predetermined distance when the ratio of the distance from the sink's center point to its circumscribed circle to the distance from the person to the sink's center point is greater than 1/2.
3. The method of claim 1, wherein the first json file further records the number of frames the algorithm reads per second; before the graying and Gaussian blur processing, the method further comprises:
judging whether the per-second frame-reading rate recorded in the first json file is greater than the video file's own frame rate;
if it is greater, skipping frame extraction and applying graying and Gaussian blur to the picture directly;
and if it is less than or equal to the video file's own frame rate, extracting frames from the video file at the recorded rate and then applying graying and Gaussian blur to each extracted frame.
4. The method of claim 1, further comprising:
reading a second json file and a video file shot in the cleaning room, wherein the second json file records the ultrasonic cleaner position information, the absolute path of the video file, and the absolute path of the output file;
parsing the video file to obtain its audio track and picture frames;
cutting the audio into multiple segments, so that each segment contains a single type of content;
feeding each audio segment into a metal-collision-sound detection model to detect whether metal collision sound occurs in it, and storing the detection results in a list;
converting the picture to grayscale and applying Gaussian blur;
feeding the picture into a human-body detection model to detect any person appearing in it, the weights of the hands and elbows being increased during detection; the detection results are then aggregated and the person's position is represented by a single point, which is stored in a list;
retrieving the ultrasonic cleaner position information and the human position information;
separately cropping out the region where the ultrasonic cleaner is located and performing motion detection on it to obtain picture change information;
judging, from the ultrasonic cleaner position information, the human position information, and the picture change information, whether the ultrasonic cleaner and the person are within a predetermined distance of each other;
if the ultrasonic cleaner and the person are within the predetermined distance, checking from the metal-collision-sound detection results whether metal collision sound occurs in the audio at the same time and whether the picture change information shows an obvious picture change;
if the ultrasonic cleaner and the person are within the predetermined distance, metal collision sound occurs in the audio, and the picture change information shows an obvious change at the same time, determining that the operator performed an instrument placing/removing action;
if the ultrasonic cleaner and the person are within the predetermined distance but the two conditions of metal collision sound in the audio and an obvious picture change are not both met at the same time, determining that the operator did not perform an instrument placing/removing action;
and outputting the judgment result to the second json file, the result comprising whether the operator performed an instrument placing/removing action and, if so, the start time and end time of the action.
5. The method of claim 4, wherein separately cropping out the region where the ultrasonic cleaner is located and performing motion detection to obtain the picture change information comprises:
for each frame, taking the ultrasonic cleaner region of the previous frame as the background, modeling the ultrasonic cleaner region of the current frame against it, and performing a difference operation to obtain a difference image;
performing connectivity analysis on the difference image and judging whether the changed area is larger than a preset threshold;
and if the changed area is larger than the preset threshold, recording the current time as picture change information, that moment being regarded as an obvious picture change.
6. The method of claim 4, wherein the water-sound detection model and the metal-collision-sound detection model each employ a hidden Markov model.
7. The method of claim 4, wherein the sink position information and the ultrasonic cleaner position information are each marked by the top-left vertex coordinates and bottom-right vertex coordinates of the rectangle enclosing the sink or the ultrasonic cleaner in the video file.
8. The method of claim 4, wherein the ultrasonic cleaner position information comprises the size of the ultrasonic cleaner, and the ultrasonic cleaner and the person are considered to be within the predetermined distance when the ratio of the distance from the ultrasonic cleaner's center point to its circumscribed circle to the distance from the person to the ultrasonic cleaner's center point is greater than 1/2.
9. The method of claim 8, wherein the preset threshold is 0.3 times the area occupied by the ultrasonic cleaner in the picture.
10. An oral instrument disinfection and sterilization operation monitoring device, comprising:
a reading unit, configured to read a first json file and a video file shot in the cleaning room, wherein the first json file records the sink position information, the absolute path of the video file, and the absolute path of the output file;
a parsing unit, configured to parse the video file to obtain its audio track and picture frames;
a cutting unit, configured to cut the audio into multiple segments so that each segment contains a single type of content;
a first feeding unit, configured to feed each audio segment into the water-sound detection model to detect whether running-water sound occurs in it, and to store the detection results in a list;
a processing unit, configured to convert the picture to grayscale and apply Gaussian blur;
a second feeding unit, configured to feed the picture into a human-body detection model to detect any person appearing in it, the weights of the hands and elbows being increased during detection; the detection results are then aggregated and the person's position is represented by a single point, which is stored in a list;
a retrieving unit, configured to retrieve the sink position information and the human position information;
a first judging unit, configured to judge, from the sink position information and the human position information, whether the sink and the person are within a predetermined distance of each other;
an identifying unit, configured to check, when the sink and the person are within the predetermined distance, whether water sound occurs in the audio at the same time according to the water-sound detection results;
a second judging unit, configured to determine that the operator performed a rinsing action when the sink and the person are within the predetermined distance and water sound occurs in the audio at the same time, and to determine that the operator did not perform a rinsing action when the sink and the person are within the predetermined distance but no water sound occurs in the audio at the same time;
and an output unit, configured to output the judgment result to the first json file, the result comprising whether the operator performed a rinsing action and, if so, the start time and end time of the action.
CN202111322545.0A 2021-11-09 2021-11-09 Oral instrument disinfection and sterilization operation monitoring method and device Active CN113762228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111322545.0A CN113762228B (en) 2021-11-09 2021-11-09 Oral instrument disinfection and sterilization operation monitoring method and device

Publications (2)

Publication Number / Publication Date
CN113762228A: 2021-12-07
CN113762228B: 2022-02-22

Family

ID=78784859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111322545.0A Active CN113762228B (en) 2021-11-09 2021-11-09 Oral instrument disinfection and sterilization operation monitoring method and device

Country Status (1)

Country Link
CN (1) CN113762228B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100131442A1 (en) * 2008-11-21 2010-05-27 Ming-Hwa Sheu Intelligent monitoring system
CN103034841A (en) * 2012-12-03 2013-04-10 Tcl集团股份有限公司 Face tracking method and face tracking system
CN108875712A (en) * 2018-08-01 2018-11-23 四川电科维云信息技术有限公司 A kind of act of violence detection system and method based on ViF descriptor
US20200177805A1 (en) * 2017-09-18 2020-06-04 Shenzhen University Pipeline detection method and apparatus, and storage medium

Also Published As

Publication number Publication date
CN113762228B (en) 2022-02-22

Similar Documents

Publication Publication Date Title
CN111476774B (en) Intelligent sign recognition device based on novel coronavirus pneumonia CT detection
CN110069707A (en) A kind of artificial intelligence self-adaption interactive tutoring system
WO2021068781A1 (en) Fatigue state identification method, apparatus and device
CN108542404 Attention assessment method and device, VR equipment, and readable storage medium
CN115601811A (en) Facial acne detection method and device
CN115546899A (en) Examination room abnormal behavior analysis method, system and terminal based on deep learning
CN115345858A (en) Method, system and terminal device for evaluating quality of echocardiogram
KR20210109830A (en) Method for providing digestive endoscopic image information analysis service based on visual integrated learning and detection model
CN111353439A (en) Method, device, system and equipment for analyzing teaching behaviors
CN113485555B (en) Medical image film reading method, electronic equipment and storage medium
CN113762228B (en) Oral instrument disinfection and sterilization operation monitoring method and device
CN107256375A (en) Human body sitting posture monitoring method before a kind of computer
CN113139444A (en) Space-time attention mask wearing real-time detection method based on MobileNet V2
KR20210110435A (en) Computer Program for Providing Endoscopic Image Information Analysis Service Based on Image Classification and Segmentation Integrated Learning Model
CN115359412A (en) Hydrochloric acid neutralization experiment scoring method, device, equipment and readable storage medium
CN113990432A (en) Image report pushing method and device based on RPA and AI and computing equipment
CN115116087A (en) Action assessment method, system, storage medium and electronic equipment
CN112036328B (en) Bank customer satisfaction calculating method and device
TW201610903A (en) Online learning style automated diagnostic system, online learning style automated diagnostic method and computer readable recording medium
CN111773651A (en) Badminton training monitoring and evaluating system and method based on big data
KR20210109847A (en) Method and apparatus for providing digestive endoscopic image information analysis service based on visual integrated learning and detection model
KR20210109819A (en) A program recording medium for providing analyzing services of endoscope video information based on learning algorithms
CN116309724B (en) Face blurring method and face blurring device for laboratory assessment
CN114452459B (en) Monitoring and early warning system for thrombus aspiration catheter
CN113553886A (en) Device, method and storage medium for monitoring a hand washing process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230413

Address after: F18, Block B, Building 1, Chuangzhi Building, No. 17 Xinghuo Road, Jiangbei New District, Nanjing, Jiangsu Province, 210000

Patentee after: NANJING XUANJIA NETWORK TECHNOLOGY Co.,Ltd.

Address before: 210003 No. 213 Guangzhou Road, Gulou District, Nanjing City, Jiangsu Province

Patentee before: Nanjing Huiji Information Technology Co.,Ltd.
