CN113749692B - Pharynx swab image identification and positioning method


Info

Publication number
CN113749692B
CN113749692B (application CN202111032894.9A)
Authority
CN
China
Prior art keywords: image, pixel coordinate, pixel, uvula, center
Legal status: Active
Application number: CN202111032894.9A
Other languages: Chinese (zh)
Other versions: CN113749692A
Inventors: 原敏乔, 郭同健, 曹策, 乔丹, 李冬宁, 程恩瑶
Current Assignee: Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee: Changchun Institute of Optics Fine Mechanics and Physics of CAS
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN202111032894.9A
Publication of CN113749692A
Application granted
Publication of CN113749692B

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 10/00: Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A61B 10/0045: Devices for taking samples of body liquids
    • A61B 10/0051: Devices for taking samples of body liquids for taking saliva or sputum samples
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2065: Tracking using image or pattern recognition

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Robotics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Pulmonology (AREA)
  • Hematology (AREA)
  • Pathology (AREA)
  • Endoscopes (AREA)

Abstract

The invention provides a pharyngeal swab image identification and positioning method comprising the following steps: S1, photographing from the far-end acquisition position to obtain a far-end acquired image, identifying the circular contour of the mouthpiece in that image, and acquiring the pixel coordinate of the mouthpiece center; S2, correcting the position of the pharyngeal swab collection mechanism so that the pixel coordinate of the mouthpiece center coincides with the pre-calibrated standard pixel coordinate of the mouthpiece center, achieving coarse positioning; S3, photographing from the near-end acquisition position to obtain a near-end acquired image and identifying the center pixel coordinate of the uvula in that image; and S4, calculating a substitute-point pixel coordinate of the collection area from the center pixel coordinate of the uvula, and correcting the position of the collection mechanism so that the pre-calibrated light-spot center pixel coordinate coincides with the substitute-point pixel coordinate, achieving fine positioning. The two-step positioning improves the identification and positioning accuracy of the collection area and can precisely guide the pharyngeal swab collection mechanism to the position of the collection area.

Description

Pharynx swab image identification and positioning method
Technical Field
The invention relates to the technical field of pharyngeal swab sampling, and in particular to a pharyngeal swab image identification and positioning method.
Background
The substance detected in nucleic acid testing is the virus's nucleic acid. Nucleic acid testing determines whether a respiratory specimen, blood, or stool from a patient contains the nucleic acid of an invading virus, and thus whether the patient is infected with the novel coronavirus. A "positive" nucleic acid result therefore demonstrates the presence of the virus in the patient.
The most common nucleic acid sampling method at present is manual pharyngeal swab collection, in which a medical worker holds a swab, extends it into the oral cavity of the person being tested, and wipes sites such as the pharynx, tonsils, and faucial pillars to collect secretions. This exposes medical staff to the risk of infection, so pharyngeal swab collection mechanisms have been proposed to replace manual work with automatic sampling.
Automatic sampling requires the collection mechanism to identify and locate the collection area in the oral cavity of the person being tested and to be guided to that area for sampling. However, the identification and positioning methods in the prior art are insufficiently accurate and cannot guide the pharyngeal swab collection mechanism precisely to the position of the collection area.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a pharyngeal swab image identification and positioning method that first identifies the outer contour of the mouthpiece to locate the oral cavity, then identifies the uvula features inside the oral cavity to locate the collection area, improving the identification and positioning accuracy of the collection area through two-step positioning.
In order to achieve the purpose, the invention adopts the following specific technical scheme:
the invention provides a pharynx swab image identification and positioning method, which comprises the following steps:
s1, synchronously moving the laser ranging module, the image acquisition module and the pharynx swab acquisition mechanism to a first preset position away from the acquired person, and photographing the oral cavity part of the acquired person through the image acquisition module to obtain a far-end acquired image;
s2, recognizing the circular contour of the mouthpiece in the remote acquisition image through the image processing module, acquiring the pixel coordinate of the center of the mouthpiece, and correcting the position of the throat swab acquisition mechanism to ensure that the pixel coordinate of the center of the mouthpiece coincides with the standard pixel coordinate of the center of the mouthpiece calibrated in advance;
s3, synchronously moving the laser ranging module, the image acquisition module and the pharynx swab acquisition mechanism to a second preset position away from the acquired person, and taking a picture of the inside of the oral cavity of the acquired person through the image acquisition module to obtain a near-end acquired image; the second preset position is closer to the person to be collected than the first preset position;
s4, recognizing the central pixel coordinate of the uvula in the near-end collected image through the image processing module, calculating the pixel coordinate of a substitute point of the collection area according to the central pixel coordinate of the uvula, and correcting the position of the pharynx swab collection mechanism to enable the pre-calibrated light spot circle center pixel coordinate to coincide with the pixel coordinate of the substitute point; and the pixel coordinate of the center of the light spot is the pixel coordinate corresponding to the center of the laser beam emitted by the laser ranging module in the near-end collected image.
Preferably, step S2 specifically includes the following steps:
s21, carrying out image mask processing on the color of the mouthpiece in the far-end collected image to obtain a far-end binary image containing circular contour information of the mouthpiece;
s22, recognizing the circular contour of the mouthpiece in the far-end binary image by adopting a Hough circle detection method, and acquiring the pixel coordinates of the circle center of the mouthpiece based on the circular contour of the mouthpiece;
s23, calculating the pixel distance S between the pixel coordinate of the center of circle of the mouthpiece in the far-end binary image and the pre-calibrated standard pixel coordinate of the center of circle of the mouthpiece, and calculating the actual offset distance D of the pharynx swab acquisition mechanism according to the following formulaf
Figure GDA0003556729020000021
Where K is the actual distance represented by a unit pixel of the remotely acquired image in the vertical plane of the mouthpiece, L0Is a known length in the vertical plane of the mouthpiece, and PL is L in the distally acquired image0A corresponding pixel length;
s24, actual offset distance D according to pharynx swab collecting mechanismfAnd correcting the position of the throat swab acquisition mechanism to ensure that the pixel coordinate of the center of a circle of the mouthpiece coincides with the pixel standard coordinate of the center of a circle of the mouthpiece which is calibrated in advance.
Preferably, in step S4, the process of identifying the coordinates of the central pixel of the uvula in the proximally captured image by the image processing module is as follows:
s41, photographing the inside of the oral cavity of the person to be acquired at least twice through an image acquisition module to obtain at least two near-end acquired images, and respectively constructing regions of interest at different positions in each near-end acquired image;
s42, creating detection sliding windows with the same number as the interested areas, and identifying the characteristics of the uvula in the current detection area of each detection sliding window through an image classifier;
and S43, when the uvula feature is identified in the current detection area of a certain detection sliding window, calculating the pixel coordinate of the center point of the current detection area of the detection sliding window as the pixel coordinate of the uvula center.
Preferably, the training process of the image classifier is as follows:
s421, extracting a local image showing the characteristics of the uvula from the near-end collected image by adopting an image characteristic extraction method to serve as a characteristic image, and taking the rest local images in the near-end collected image as non-characteristic images;
s422, classifying and labeling the characteristic images and the non-characteristic images;
s423, calculating characteristic vectors corresponding to the characteristic images and the non-characteristic images respectively based on the gradient histograms according to the characteristic images and the labels thereof and the non-characteristic images and the labels thereof;
and S424, inputting the feature vector and the label of the feature image and the feature vector and the label of the non-feature image into a support vector machine model for training to obtain the image classifier based on the characteristics of the uvula.
Preferably, step S421 specifically includes the following steps:
s4211, converting the near-end collected image into a gray level image;
and S4212, setting a local detection area, traversing the grayscale image, and taking an image of which the local detection area completely contains the uvula and the uvula is located in the center as a characteristic image.
Preferably, after step S4211, the following steps are further included:
s42110, performing morphological open operation processing on the gray level image to eliminate a bright area in the gray level image;
s42120, performing binarization processing on the gray level image of the bright light area to obtain a near-end binary image; and the number of the first and second groups,
in step S4212, an image in which the local detection region completely includes the uvula and the uvula is located at the center is taken as a feature image by traversing the near-end binary image.
Preferably, in step S4, the process of acquiring the substitute point pixel coordinates of the acquisition area from the uvula center pixel coordinates, and correcting the position of the throat swab acquisition mechanism so that the pre-calibrated light spot center pixel coordinates coincide with the substitute point pixel coordinates is as follows:
s44, obtaining a pixel distance h between the central pixel coordinate of the uvula in the near-end collected image and the pixel coordinate of the substitute point of the collection area;
s45, calculating the pixel coordinates of the replacement points of the acquisition area based on the pixel distance h and the central pixel coordinates of the uvula;
s46, acquiring the pixel distance S' between the pixel coordinate of the center of the light spot and the pixel coordinate of the substitute point of the acquisition area, and calculating the actual offset distance D of the pharynx swab acquisition mechanism according to the following formulan
Figure GDA0003556729020000041
Wherein, K' is the actual distance represented by the unit pixel of the near-end collected image on the vertical plane of the uvula, and L0'is a known length on the vertical plane of the uvula, and PL' is L in the distally acquired image0' corresponding pixel length;
s47, actual offset distance D according to pharynx swab collection mechanismnAnd correcting the position of the throat swab acquisition mechanism to ensure that the pixel coordinate of the center of the light spot coincides with the pixel coordinate of the substitute point.
Preferably, the pre-calibration process of the coordinates of the spot center pixels is as follows:
s410, starting and adjusting the laser ranging module at a second preset position to enable the laser beam emitted by the laser ranging module to coincide with the front projection position of the throat swab clamped by the throat swab collecting mechanism;
s420, photographing the inside of the oral cavity of the person to be acquired through an image acquisition module to obtain a pre-calibration image with laser spots, and performing image mask processing on colors of the laser spots in the pre-calibration image to obtain a pre-calibration binary image;
s430, identifying the circular outline of the laser spot in the pre-calibrated binary image by adopting a Hough circle detection method, and acquiring the circle center pixel coordinate of the laser spot based on the circular outline of the laser spot to be used as the spot circle center pixel coordinate.
Preferably, after step S4, the method further includes the following steps:
and S5, driving the pharyngeal swab collecting mechanism to move from the second preset position to the collected person, simultaneously starting the laser ranging module, and when the distance detected by the laser ranging module is smaller than or equal to the set laser threshold, closing the laser ranging module and simultaneously controlling the pharyngeal swab collecting mechanism to decelerate.
Preferably, after step S5, the method further includes the following steps:
and S6, when the pharynx swab clamped by the pharynx swab collecting mechanism is contacted with the oral cavity of the person to be collected, detecting the pressure applied to the oral cavity of the person to be collected by the pharynx swab through a pressure sensing module arranged in the pharynx swab collecting mechanism, and controlling the pharynx swab collecting mechanism to stop moving when the pressure is greater than or equal to a set pressure threshold value.
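The approach logic of steps S5-S6 is a small state machine: advance at full speed, decelerate once the laser range falls to the threshold, stop once contact pressure reaches the pressure threshold. The sketch below simulates it with sensor readings supplied as sequences, so it can run without hardware; speeds and return format are invented for the illustration.

```python
def approach(distances, pressures, laser_threshold, pressure_threshold):
    """S5-S6 sketch: one iteration per control tick, consuming paired
    laser-range and pressure readings. Returns the motion log, ending
    with a 'stop' entry once the pressure threshold is reached."""
    speed, log = 1.0, []                 # full speed until the threshold
    for d, f in zip(distances, pressures):
        if d <= laser_threshold and speed == 1.0:
            speed = 0.2                  # laser threshold reached: decelerate
        if f >= pressure_threshold:
            log.append(('stop', speed))  # pressure threshold: stop advancing
            return log
        log.append(('move', speed))
    return log
```

In the real mechanism the laser module is switched off at the marking signal and the pressure sensor takes over, exactly as steps S5 and S6 describe.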
Compared with prior-art methods for identifying and locating the collection area, the present method achieves image visual positioning of the mouth by identifying the circular contour of the mouthpiece, and image visual positioning of the collection area by identifying the uvula features inside the oral cavity. The two-step positioning improves the visual positioning accuracy of the collection area and can precisely guide the pharyngeal swab collection mechanism to the position of the collection area for swab sampling.
Drawings
Fig. 1 is a schematic diagram of a hardware structure used in a pharyngeal swab image identification and location method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a pharyngeal swab collection mechanism provided in accordance with an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a pharyngeal swab image recognition and location method provided according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a far-end acquired image provided according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a far-end binary image provided according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the detection result of the circular contour of the mouthpiece in the far-end acquired image according to an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a calibration principle of a distal visual positioning location provided in accordance with an embodiment of the present invention;
FIG. 8 is a schematic diagram of a distal visual positioning provided in accordance with an embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating results of proximally acquired images provided in accordance with an embodiment of the present invention;
FIG. 10 is a schematic illustration of a region of interest created in a proximally-captured image provided in accordance with an embodiment of the present invention;
FIG. 11 is a schematic diagram of a multi-thread, multi-sliding window moving identification positioning scheme according to an embodiment of the present invention;
fig. 12 is a schematic diagram illustrating a detection result of a uvula in a proximally captured image according to an embodiment of the present invention;
FIG. 13 is a schematic diagram illustrating a calibration principle of a proximal visual positioning location provided in accordance with an embodiment of the present invention;
FIG. 14 is a schematic representation of a laser beam in relation to a pharyngeal swab projection provided in accordance with an embodiment of the present invention;
FIG. 15 is a schematic diagram of the proximal visual orientation provided in accordance with an embodiment of the present invention.
Wherein the reference numerals include: pharyngeal swab collection mechanism 1, image acquisition module 2, laser ranging module 3, light source illumination module 4, pharyngeal swab feeding mechanism 5, pharyngeal swab 6, guide rail 7, mouthpiece 8, first limit switch 9, second limit switch 10, third limit switch 11.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the following description, the same reference numerals are used for the same blocks. In the case of the same reference numerals, their names and functions are also the same. Therefore, detailed description thereof will not be repeated.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention.
Fig. 1 illustrates the principle of a pharyngeal swab image recognition and location method provided according to an embodiment of the invention.
As shown in fig. 1, the pharyngeal swab collection mechanism 1 is slidably connected to the guide rail 7 and can move linearly along it. The pharyngeal swab feeding mechanism 5 is arranged on one side of the guide rail 7 and inserts the pharyngeal swab 6 into the collection mechanism 1. The mouthpiece 8 is arranged at one end of the guide rail 7; the person being sampled bites the mouthpiece 8 to keep the mouth open while the tongue plate on the mouthpiece 8 depresses the tongue, exposing the uvula so that it can be captured in images.
A first limit switch 9, a second limit switch 10, and a third limit switch 11 are mounted on the guide rail 7. The first limit switch 9 is located at the initial position of the collection mechanism 1, the second limit switch 10 at its far-end visual positioning position, and the third limit switch 11 at its near-end visual positioning position. When the collection mechanism 1 triggers one of these switches, it is controlled to stop at the corresponding position (initial, far-end visual positioning, or near-end visual positioning).
Fig. 2 illustrates the structure of a pharyngeal swab collection mechanism provided in accordance with an embodiment of the present invention.
As shown in fig. 2, the image acquisition module 2, the laser ranging module 3, and the light source illumination module 4 are mounted on the throat swab acquisition mechanism 1, and the pressure sensing module is mounted inside the throat swab acquisition mechanism 1.
The image acquisition module 2 uses a high-definition fixed-focus camera with negligible distortion; the default camera resolution is width × height pixels. It photographs the oral cavity of the person being sampled: at the near-end visual positioning position it obtains the near-end acquired image, and at the far-end visual positioning position it obtains the far-end acquired image.
The image acquisition module 2 sends both images to the image processing module for processing. The image processing module uses a computer with strong graphics processing capability (or a relatively high-performance single-board computer such as a Raspberry Pi), with a Python (or C/C++) software platform and the corresponding version of the OpenCV library installed.
In practical application, the image processing module processes and analyzes the far-end collected image and the near-end collected image transmitted by the image collecting module based on an intelligent comprehensive vision positioning method, acquires correction information of the pharynx swab collecting mechanism 1 in the horizontal direction and the vertical direction at the far-end vision positioning position and the near-end vision positioning position of the guide rail 7 respectively, sends the correction information to the driving controller of the pharynx swab collecting mechanism 1 through the serial port, and adjusts the pharynx swab collecting mechanism 1 to a correct position.
The laser ranging module 3 uses a TOF (time of flight) laser ranging sensor with a range of Q meters (Q ≥ 0.5 m); a sensor with a suitable range can be selected according to the motion stroke of the specific collection mechanism. The ranging accuracy of the sensor is within 1 mm.
The light source illumination module 4 uses an LED surface light source to illuminate the oral cavity of the person being sampled; the illumination is direct and co-directional with the camera, and its brightness is adjustable.
The pressure sensing module is used for detecting the pressure applied by the throat swab 6 to the oral cavity after the throat swab 6 is contacted with the oral cavity.
When the pharyngeal swab collection mechanism 1 moves to the far-end visual positioning position, the image acquisition module 2 photographs the mouth of the person being sampled to obtain a far-end acquired image showing the circular contour of the mouthpiece 8. Image visual positioning on this circular contour determines the distance the collection mechanism 1 must move; the position of the mechanism is then adjusted, achieving coarse positioning on the mouth.
After the position of the collection mechanism 1 has been adjusted, it is driven from the far-end visual positioning position to the near-end visual positioning position, and the image acquisition module 2 photographs the inside of the mouth to obtain a near-end acquired image showing the uvula. Image visual positioning on the uvula determines the distance the collection mechanism 1 must move; the position of the mechanism is then adjusted, achieving fine positioning on the sampling area in the oral cavity.
Before the near-end acquired image is photographed, the laser ranging module 3 is first started so that the laser beam it emits coincides with the forward projection position of the pharyngeal swab 6 held by the collection mechanism 1. The coordinates of the swab tip of the pharyngeal swab 6 are thereby approximately replaced by the center coordinates of the laser beam, and the distance between the beam center coordinates and the uvula center coordinates gives the distance the collection mechanism 1 must move.
After the position of the collection mechanism 1 has been adjusted, it is driven from the near-end visual positioning position toward the person being sampled, with the laser ranging module 3 started and measuring in real time the distance D from the collection area in the oral cavity to the collection mechanism 1. When D is less than or equal to the set laser threshold distance D', the laser ranging module 3 sends a marking signal to the collection mechanism 1 and is then switched off; on receiving the marking signal, the collection mechanism 1 begins to decelerate.
When the pharyngeal swab 6 held by the collection mechanism 1 contacts the oral cavity of the person being sampled, the pressure sensing module detects the pressure F that the swab applies. If the mechanism continues to advance after the swab tip makes contact, F increases continuously; when F is greater than or equal to the set pressure threshold F', the collection mechanism 1 is controlled to stop advancing, after which the pharyngeal swab sample is collected.
The image visual positioning method mainly comprises two parts: first, when the collection mechanism 1 moves to the far-end visual positioning position, far-end visual positioning is performed based on the circular contour of the mouthpiece; second, when it moves to the near-end visual positioning position, near-end visual positioning is performed based on the uvula features inside the oral cavity.
Combining the image effects at the near-end and far-end visual positioning positions, a suitable illumination brightness can be selected (the brightness at the two positions may be the same or different). The invention sets a suitable, stable illumination brightness so that the circular contour of the mouthpiece is distinct in the far-end acquired image and the internal structure of the oral cavity is clear in the near-end acquired image.
According to the invention, the light source illumination module 4 is turned on only when the image acquisition module 2 takes a picture and is turned off immediately afterwards, so the actual illumination time is short and the glare discomfort that prolonged strong illumination would cause the person being sampled is avoided.
The laser ranging module 3 is activated at only two moments. The first is before the near-end acquired image is obtained, when the pixel coordinate position of the laser beam is calibrated to yield the light-spot center pixel coordinate. The second is during the motion from the near-end visual positioning position toward the person being sampled, when the distance to the collection area in the oral cavity is detected in real time.
Fig. 3 shows a flow of a pharyngeal swab image recognition and location method provided according to an embodiment of the invention.
As shown in fig. 3, the pharyngeal swab image identification and location method provided by the embodiment of the invention includes the following steps:
S1, synchronously moving the laser ranging module, the image acquisition module and the pharynx swab acquisition mechanism to a first preset position away from the person to be collected, and photographing the oral cavity part of the person to be collected through the image acquisition module to obtain a far-end acquired image.
The first preset position is a far-end visual positioning position, and the oral cavity part of the person to be acquired is photographed at the far-end visual positioning position to obtain a far-end acquisition image.
As shown in fig. 4, the circular contour of the mouthpiece is shown in the distal captured image, and since the position of the mouthpiece is directly opposite to the uvula, the initial visual positioning of the image is achieved by recognizing the circular contour of the mouthpiece, and the pharyngeal swab capturing mechanism is prevented from deviating out of the circular area of the mouthpiece.
S2, recognizing the circular contour of the mouthpiece in the remote acquisition image through the image processing module, acquiring the pixel coordinate of the center of the mouthpiece, and correcting the position of the throat swab acquisition mechanism to enable the pixel coordinate of the center of the mouthpiece to coincide with the pixel standard coordinate of the center of the mouthpiece calibrated in advance.
Step S2 specifically includes the following steps:
S21, performing image masking processing on the color of the mouthpiece in the far-end collected image to obtain a far-end binary image containing the circular contour information of the mouthpiece.
In fig. 4, the mouthpiece is white, so the white contour region of the mouthpiece is recognized and image masking is performed on it. Of course, the mouthpiece may also adopt other colors; mask processing of the corresponding color is performed on mouthpieces of different colors to obtain the corresponding far-end binary image. The far-end binary image is shown in fig. 5.
S22, recognizing the circular contour of the mouthpiece in the far-end binary image by adopting a Hough circle detection method, and acquiring the pixel coordinates of the circle center of the mouthpiece based on the circular contour of the mouthpiece.
The hough circle detection method is a prior art, and therefore is not described herein again.
The results of the detection of the circular outline of the mouthpiece in the distally captured image are shown in figure 6.
S23, calculating the pixel distance s between the pixel coordinate of the center of circle of the mouthpiece in the far-end binary image and the pre-calibrated standard pixel coordinate of the center of circle of the mouthpiece, and calculating the actual offset distance Df of the pharynx swab acquisition mechanism according to formula (1):

Df = K × s (1)

where K is the actual distance represented by a unit pixel of the remotely acquired image in the vertical plane of the mouthpiece, L0 is a known length in the vertical plane of the mouthpiece, and PL is the pixel length corresponding to L0 in the distally acquired image.
The calculation principle of K is shown in fig. 7:
For far-end visual positioning, the distance from the vertical plane of the mouthpiece to the image acquisition module is w. The image acquisition module, facing the oral cavity of the person to be acquired, photographs a segment of known length L0 on the vertical plane of the mouthpiece; the pixel length PL corresponding to this segment in the image is obtained through screenshot software, and the actual distance K represented by a unit pixel of the far-end acquired image on the vertical plane of the mouthpiece at the far-end visual positioning position is then calculated based on formula (2):

K = L0 / PL (2)
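A worked numeric example of the scale calibration and offset conversion described above; all values are assumed for illustration.

```python
# Per-pixel scale K is calibrated once from a known length L0 and its
# pixel length PL; a measured pixel offset s then converts to a physical
# offset D_f for the swab mechanism. Numbers are illustrative, not from
# the patent.
L0 = 40.0    # known length on the mouthpiece plane, mm (assumed)
PL = 200.0   # pixel length of L0 in the distal image (assumed)
K = L0 / PL  # mm per pixel

s = 35.0     # pixel distance between detected and standard centre
D_f = K * s  # actual offset the swab mechanism must move, mm
print(K, D_f)
```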
the calculation process of the pixel standard coordinate of the center of a circle of the mouth piece calibrated in advance is as follows:
Before the far-end visual positioning detection, the center pixel coordinate of the circular outline of the mouthpiece is calibrated once: the pharynx swab acquisition mechanism is directly controlled and adjusted in the horizontal and vertical directions to the standard orientation at the far-end visual positioning position, and the center pixel coordinate of the circular outline of the mouthpiece obtained there is taken as the standard center pixel coordinate, i.e., the pre-calibrated pixel standard coordinate of the center of the mouthpiece.
FIG. 8 illustrates the principles of distal visual positioning provided in accordance with an embodiment of the present invention.
As shown in fig. 8, the pixel point at the top left corner of the far-end acquired image is the origin o of the image pixel coordinate system, the horizontal right direction is the positive direction of the x axis, and the vertical downward direction is the positive direction of the y axis. The point m is the pre-calibrated pixel standard coordinate of the center of the mouthpiece (the standard circle center in fig. 8), the point m' is the detected pixel coordinate of the center of the mouthpiece (the detection center in fig. 8), and the line segment mm' is the pixel distance s by which the center of the mouthpiece is offset. The actual offset distance Df through which the pharynx swab acquisition mechanism must move from the point m' to the point m at the far-end positioning position is calculated from the coordinate information in the far-end acquired image using formula (1).
S24, according to the actual offset distance Df of the pharynx swab acquisition mechanism, correcting the position of the pharynx swab acquisition mechanism so that the pixel coordinate of the center of the mouthpiece coincides with the pre-calibrated pixel standard coordinate of the center of the mouthpiece.

After the image processing module calculates the actual offset distance Df, it sends Df to the acquisition system, which converts Df into the angle and step length through which the motors in the pharynx swab acquisition mechanism need to rotate, and controls the mechanism to move in the horizontal and vertical directions, thereby guiding the pharynx swab acquisition mechanism.
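The text does not give numbers for the conversion from Df to motor commands; the sketch below assumes a lead-screw axis with an illustrative pitch and step count, only to show the shape of that calculation.

```python
# Hedged sketch: converting the physical offset D_f into stepper-motor
# steps for one axis. Lead-screw pitch and steps per revolution are
# assumed values, not taken from the patent.
steps_per_rev = 200      # full steps per motor revolution (assumed)
lead_mm = 2.0            # carriage travel per revolution, mm (assumed)
D_f = 7.0                # offset along this axis, mm

steps = round(D_f / lead_mm * steps_per_rev)   # commanded step count
print(steps)
```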
S3, synchronously moving the laser ranging module, the image acquisition module and the pharynx swab acquisition mechanism to a second preset position away from the acquired person, and taking a picture of the inside of the oral cavity of the acquired person through the image acquisition module to obtain a near-end acquired image; wherein, the second preset position is closer to the person to be collected than the first preset position.
The second preset position is the near-end visual positioning position, the inside of the oral cavity of the person to be collected is photographed at the near-end visual positioning position, and a near-end collected image with a clear internal structure of the oral cavity can be obtained.
As shown in fig. 9, the uvula and the capture regions on both sides of the uvula are clearly shown in the proximal captured image.
S4, recognizing the central pixel coordinate of the uvula in the near-end collected image through the image processing module, calculating the pixel coordinate of a substitute point of the collection area according to the central pixel coordinate of the uvula, and correcting the position of the pharynx swab collection mechanism to enable the pre-calibrated light spot circle center pixel coordinate to coincide with the pixel coordinate of the substitute point; and the pixel coordinate of the center of the light spot is the pixel coordinate corresponding to the center of the laser beam emitted by the laser ranging module in the near-end collected image.
In step S4, the process of identifying the coordinates of the central pixel of the uvula in the proximally captured image by the image processing module is as follows:
S41, photographing the inside of the oral cavity of the person to be acquired at least twice through the image acquisition module to obtain at least two near-end acquired images, and constructing regions of interest at different positions in each near-end acquired image.
As shown in fig. 10, the region of interest is constructed in the near-end captured image in order to reduce the detection region for visually recognizing and positioning the characteristics of the uvula, thereby improving the efficiency of visually recognizing and positioning.
The near-end acquired images are images of the same scene; regions of interest are constructed in different areas of each near-end acquired image so that together the regions of interest cover the whole near-end acquired image. The region of interest of each near-end acquired image is identified, and when the characteristics of the uvula are identified in a certain region of interest, identification of all the near-end acquired images is stopped. Identifying multiple near-end acquired images simultaneously improves identification efficiency.
S42, creating detection sliding windows equal in number to the regions of interest, and identifying the characteristics of the uvula in the current detection area of each detection sliding window through an image classifier.

During actual identification and positioning, each detection sliding window is moved in the horizontal and vertical directions by a set pixel step length within its constructed region of interest, and the image within the detection sliding window area is intercepted to identify the characteristics of the uvula.
Because there are multiple detection sliding windows, multithreaded parallel processing can be realized. Its principle is as follows:

Multiple detection sliding windows are created, and each detection sliding window is assigned a thread. As shown in fig. 11, during actual identification and positioning, multiple images are continuously collected in batches according to the number th of created threads. Each image is processed through its detection sliding window (i.e., its thread) and the image classifier to identify image regions within its Region Of Interest (ROI); within each image, the detection sliding window moves in the horizontal direction with a pixel step of step1. For images identified in the same batch, the detection sliding window moving areas of the images are vertically spaced by step2.
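The batched multi-window search can be sketched as follows. Here looks_like_uvula is only a placeholder for the trained classifier described later, and the image content, window size, and step values are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

# Each worker thread scans its own horizontal band (a region of
# interest) with horizontal pixel step `step1`; the bands of one batch
# are offset vertically by `step2`.
H, W, win, step1, step2, th = 120, 160, 32, 8, 24, 4
image = np.zeros((H, W), dtype=np.uint8)
image[40:72, 96:128] = 255          # fake "uvula" blob

def looks_like_uvula(patch):
    return patch.mean() > 200       # placeholder decision rule

def scan_roi(roi_top):
    for y in range(roi_top, min(roi_top + step2, H - win) + 1, step1):
        for x in range(0, W - win + 1, step1):
            if looks_like_uvula(image[y:y + win, x:x + win]):
                return (x + win // 2, y + win // 2)   # window centre
    return None

with ThreadPoolExecutor(max_workers=th) as pool:
    hits = [h for h in pool.map(scan_roi, range(0, H - win + 1, step2)) if h]
print(hits)
```

The first non-empty hit plays the role of the uvula center pixel coordinate of step S43.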
The training process of the image classifier is as follows:
and S421, extracting a local image showing the characteristics of the uvula from the near-end collected image by adopting an image characteristic extraction method to serve as a characteristic image, and taking the rest local images in the near-end collected image as non-characteristic images.
The image feature extraction method in the embodiment of the invention comprises two methods, namely a feature extraction method based on a binary image and a feature extraction method based on a gray level image.
(i) Feature extraction method based on binary image
Firstly, carrying out gray level conversion on a near-end collected image to obtain a gray level image.
And calling a cv2.cvtColor function in an OpenCV library to convert the near-end acquired image represented by the RGB three-color channel mode into a gray image represented by single-channel gray.
Then, the gray image is subjected to a morphological opening operation to eliminate the bright areas in the gray image.

The cv2.morphologyEx function in the OpenCV library is called to eliminate the small high-brightness areas caused by reflection in the gray image; these small reflective areas are the bright areas.

Then, Otsu binarization is performed on the gray image with the bright areas eliminated, to obtain a binary image.

The cv2.threshold function in the OpenCV library is called to perform Otsu binarization on the gray image, obtaining a near-end binary image with an obvious uvula feature region.
And finally, setting a local detection region, traversing the near-end binary image, and taking an image of which the local detection region completely contains the uvula and the uvula is located in the center as a characteristic image.
A local detection region of pixel size w × h (w < width, h < height) is set. Taking the local detection region as the basic unit area and step as the pixel step length in the horizontal and vertical directions, the near-end binary image is traversed to acquire local binary image data. Among the acquired images, those in which the local detection region completely contains the uvula feature and the uvula feature is located in the center of the image are taken as feature images, and the rest are taken as non-feature images.
(ii) Characteristic extraction method based on gray level image
Firstly, carrying out gray level conversion on a near-end collected image, and reserving more gray level gradient information in the image to obtain a global gray level image.
Then, a local detection region of pixel size w × h (w < width, h < height) is likewise set. Taking the local detection region as the basic unit area and step as the pixel step length in the horizontal and vertical directions, the global gray image is traversed to acquire local gray image data. Among the acquired images, those in which the local detection region completely contains the uvula feature and the uvula feature is located in the center of the image are taken as feature images, and the rest are taken as non-feature images.
S422, classifying and labeling the feature images and the non-feature images.

First, the obtained feature images (binary images or gray images, depending on the chosen method) are classified into a first class and stored in a folder named "ok", so that the labels of all images in that folder are ok.

Then, the obtained non-feature images are classified into a second class and stored in a folder named "ng", so that the labels of all images in that folder are ng.
S423, according to the feature images and their labels and the non-feature images and their labels, calculating the feature vectors corresponding to the feature images and the non-feature images based on the gradient histogram.

The local detection images (feature images and non-feature images) and their labels are read from the two folders respectively, and the feature vector corresponding to each local detection image is calculated based on the gradient histogram.
S424, inputting the feature vectors and labels of the feature images and the feature vectors and labels of the non-feature images into a support vector machine model for training, obtaining the image classifier based on the characteristics of the uvula.

The feature vector and label of each local detection image are input into the support vector machine model in the OpenCV library for training, and an image classifier based on the characteristics of the uvula is obtained.
S43, when the uvula feature is identified in the current detection area of a certain detection sliding window, calculating the pixel coordinate of the center point of that detection area as the pixel coordinate of the uvula center.
The images in each detection sliding window area are identified using the trained image classifier based on the characteristics of the uvula. When identifying a batch of images, if the image in the sliding window of a certain thread is identified as a first-class feature image, the pixel coordinate of the center point of that sliding window area is calculated and taken as the identified position of the uvula. As shown in fig. 12, the result of the identification and detection of the uvula in the oral cavity of the human body is shown. If no first-class feature image is identified in a batch, the next batch of th images is read for identification and positioning.
In step S4, the process of acquiring the substitute point pixel coordinates of the acquisition area based on the uvula center pixel coordinates and correcting the position of the throat swab acquisition mechanism so that the pre-calibrated spot center pixel coordinates coincide with the substitute point pixel coordinates is as follows:
S44, acquiring the pixel distance h between the central pixel coordinate of the uvula and the pixel coordinate of the substitute point of the acquisition area in the near-end acquired image.

Because the positions of the uvula and the actual pharynx swab acquisition area in the oral cavity are relatively fixed, the pixel distance from the central pixel coordinate of the uvula to the pixel coordinate of the substitute point of the acquisition area in the near-end acquired image is set as h. The value of h can be obtained by measuring the near-end acquired image with screenshot software.
S45, calculating the pixel coordinate of the substitute point of the acquisition area based on the pixel distance h and the central pixel coordinate of the uvula.
S46, acquiring the pixel distance s' between the pre-calibrated spot circle center pixel coordinate and the pixel coordinate of the substitute point of the acquisition area, and calculating the actual offset distance Dn of the pharynx swab acquisition mechanism according to formula (3):

Dn = K' × s' (3)

where K' is the actual distance represented by a unit pixel of the near-end collected image on the vertical plane of the uvula, L0' is a known length on the vertical plane of the uvula, and PL' is the pixel length corresponding to L0' in the near-end acquired image.
The calculation principle of K' is shown in fig. 13:
Similarly to the far-end visual positioning, for near-end visual positioning the distance from the vertical plane of the uvula to the image acquisition module is w'. The image acquisition module, facing the inside of the oral cavity of the person to be acquired, photographs a segment of known length L0' on the vertical plane of the uvula; the pixel length PL' corresponding to this segment in the image is obtained through screenshot software, and the actual distance K' represented by a unit pixel of the near-end acquired image on the vertical plane of the uvula at the near-end visual positioning position is then calculated based on formula (4):

K' = L0' / PL' (4)
the pre-calibration process of the coordinates of the circle center pixels of the light spots comprises the following steps:
i. At the second preset position, the laser ranging module is started and adjusted so that the laser beam emitted by the laser ranging module coincides with the forward projection position of the pharynx swab clamped by the pharynx swab acquisition mechanism.
Before the near-end visual positioning, the laser ranging module is started, so that the laser beam emitted by the laser ranging module coincides with the front projection position of the throat swab clamped by the throat swab acquisition mechanism. As shown in fig. 14, the projected position of the pharyngeal swab on the imaging plane was approximately replaced by the position of the center of the laser spot on the proximally acquired image.
ii. Photographing the inside of the oral cavity of the person to be acquired through the image acquisition module to obtain a pre-calibration image displaying the laser spot, and performing image mask processing on the color of the laser spot in the pre-calibration image to obtain a pre-calibration binary image.

The coordinate position of the laser spot in the pre-calibration image is the same as its coordinate position in the near-end acquired image. The position of the laser spot is therefore calibrated before near-end visual positioning; when the near-end acquired image is photographed, the pre-calibrated spot position is used directly in the near-end acquired image without starting the laser ranging module.
iii. Recognizing the circular outline of the laser spot in the pre-calibrated binary image by the Hough circle detection method, and acquiring the circle center pixel coordinate of the laser spot based on the circular outline of the laser spot, to be used as the spot circle center pixel coordinate.
Because the laser ranging module is relatively fixed relative to the position of the pharynx swab, the calculated spot circle center pixel coordinates are approximately unchanged in the near-end collected image and serve as the circle center pixel coordinates of the pharynx swab cotton swab in the near-end collected image.
FIG. 15 illustrates the principle of proximal visual positioning provided in accordance with an embodiment of the present invention.
As shown in fig. 15, the pixel point at the upper left corner of the near-end acquired image is the origin o of the image pixel coordinate system, the horizontal right direction is the positive direction of the x axis, and the vertical downward direction is the positive direction of the y axis. The point p is the pre-calibrated spot circle center pixel coordinate of the laser spot (i.e., the pixel coordinate of the center of the pharynx swab cotton swab; the laser calibration circle center in fig. 15); the point q is the pixel coordinate of the approximate substitute point of the pharynx swab acquisition area (the detection region approximation point in fig. 15); the point t is the central pixel coordinate of the uvula obtained by visual identification and positioning (the uvula center point in fig. 15); the line segment qt is the pixel distance h from the point q to the point t; and the line segment pq is the pixel distance s' from the point p to the point q.
The coordinate of the point q is calculated from the coordinate of the point t and the pixel distance h, and the actual distance Dn through which the pharynx swab cotton swab must move from the point p to the point q is calculated by formula (3).
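A worked example of this proximal geometry; all coordinates and the scale K' are assumed values for illustration.

```python
# The substitute point q sits a fixed pixel distance h below the uvula
# centre t, and the spot centre p must travel D_n = K' * |pq|.
t = (320, 180)           # uvula centre pixel (detected)
h = 60                   # calibrated pixel offset from t to q (assumed)
q = (t[0], t[1] + h)     # acquisition-area substitute point

p = (300, 225)           # pre-calibrated laser spot centre (assumed)
s_prime = ((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2) ** 0.5  # pixel distance
K_prime = 0.15           # mm per pixel at the uvula plane (assumed)
D_n = K_prime * s_prime  # physical offset for the swab mechanism, mm
print(q, s_prime, D_n)
```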
S47, according to the actual offset distance Dn of the pharynx swab acquisition mechanism, correcting the position of the pharynx swab acquisition mechanism so that the spot circle center pixel coordinate coincides with the pixel coordinate of the substitute point.

After the image processing module calculates the actual offset distance Dn, it sends Dn to the acquisition system, which converts Dn into the angle and step length through which the motors in the pharynx swab acquisition mechanism need to rotate, and controls the mechanism to move in the horizontal and vertical directions, thereby guiding the pharynx swab acquisition mechanism.
After step S4, the method for identifying and locating a pharyngeal swab provided by the embodiment of the present invention further includes the following steps:
S5, driving the pharynx swab acquisition mechanism to move from the second preset position toward the person being collected while starting the laser ranging module; when the distance detected by the laser ranging module is less than or equal to the set laser threshold value, turning off the laser ranging module and simultaneously controlling the pharynx swab acquisition mechanism to decelerate.

S6, when the pharynx swab clamped by the pharynx swab acquisition mechanism contacts the oral cavity of the person being collected, detecting the pressure applied to the oral cavity by the pharynx swab through the pressure sensing module arranged in the pharynx swab acquisition mechanism; when the pressure is greater than or equal to the set pressure threshold value, controlling the pharynx swab acquisition mechanism to stop moving, and then carrying out the collection of the pharynx swab sample.
Compared with prior-art methods for identifying and positioning the acquisition area, the invention realizes image visual positioning of the mouth by recognizing the circular contour of the mouthpiece, preventing the pharynx swab acquisition mechanism from deviating to the area outside the contour of the mouthpiece, and realizes image visual positioning of the acquisition area by recognizing the characteristics of the uvula in the mouth, guiding the pharynx swab acquisition mechanism to move to the acquisition area. The two-step positioning improves the image visual positioning precision of the acquisition area, so that the pharynx swab acquisition mechanism can be accurately guided to the position of the acquisition area for pharynx swab sampling.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be taken as limiting the invention. Variations, modifications, substitutions and alterations of the above-described embodiments may be made by those of ordinary skill in the art without departing from the scope of the present invention.
The above embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.

Claims (8)

1. A pharynx swab image identification and positioning method is characterized by comprising the following steps:
s1, synchronously moving the laser ranging module, the image acquisition module and the pharynx swab acquisition mechanism to a first preset position away from the acquired person, and photographing the oral cavity part of the acquired person through the image acquisition module to obtain a far-end acquired image;
s2, recognizing the circular contour of the mouthpiece in the remote-end collected image through an image processing module, acquiring the pixel coordinate of the center of a circle of the mouthpiece, and correcting the position of the throat swab collecting mechanism to enable the pixel coordinate of the center of a circle of the mouthpiece to coincide with the standard pixel coordinate of the center of a circle of the mouthpiece calibrated in advance;
s3, synchronously moving the laser ranging module, the image acquisition module and the pharynx swab acquisition mechanism to a second preset position away from the acquired person, and taking a picture of the inside of the oral cavity of the acquired person through the image acquisition module to obtain a near-end acquired image; wherein the second preset position is closer to the subject than the first preset position;
s4, recognizing the central pixel coordinate of the uvula in the near-end collected image through the image processing module, calculating the pixel coordinate of a substitute point of a collection area according to the central pixel coordinate of the uvula, and correcting the position of the pharynx swab collection mechanism to enable the pre-calibrated circle center pixel coordinate of the light spot to coincide with the pixel coordinate of the substitute point; and the pixel coordinate of the center of the light spot is the pixel coordinate corresponding to the center of the laser beam emitted by the laser ranging module in the near-end collected image.
2. The pharyngeal swab image recognition and localization method according to claim 1, wherein the step S2 specifically includes the steps of:
s21, carrying out image masking processing on the color of the mouth piece in the far-end collected image to obtain a far-end binary image containing circular contour information of the mouth piece;
s22, recognizing the circular contour of the mouth piece in the far-end binary image by adopting a Hough circle detection method, and acquiring the pixel coordinate of the circle center of the mouth piece based on the circular contour of the mouth piece;
s23, calculating the pixel distance s between the pixel coordinate of the center of circle of the mouthpiece and the pixel standard coordinate of the center of circle of the mouthpiece in the far-end binary image, and calculating the actual offset distance Df of the pharynx swab acquisition mechanism according to the following formula:

Df = K × s

wherein K is the actual distance represented by a unit pixel of the distally captured image in the vertical plane of the mouthpiece, and is given by K = L0/PL, where L0 is a known length in the vertical plane of the mouthpiece and PL is the pixel length corresponding to L0 in the distally acquired image;
s24, according to the actual offset distance Df of the pharynx swab collecting mechanism, correcting the position of the throat swab collecting mechanism to ensure that the pixel coordinate of the center of the mouthpiece coincides with the pixel standard coordinate of the center of the mouthpiece calibrated in advance.
3. The pharyngeal swab image recognition and localization method of claim 1, wherein in step S4, the process of recognizing the coordinates of the central pixel of the uvula in the proximally captured image by the image processing module is as follows:
s41, taking pictures of the inside of the oral cavity of the person to be acquired at least twice through the image acquisition module to obtain at least two near-end acquired images, and respectively constructing interested areas at different positions in each near-end acquired image;
s42, creating detection sliding windows with the same number as the interested areas, and identifying the characteristics of the uvula in the current detection area of each detection sliding window through an image classifier;
and S43, when the uvula feature is identified in the current detection area of a certain detection sliding window, calculating the pixel coordinate of the center point of the current detection area of the detection sliding window as the pixel coordinate of the uvula center.
4. A pharyngeal swab image recognition and localization method according to claim 3, wherein the training process of the image classifier is as follows:
s421, extracting a local image for displaying the characteristics of the uvula from the near-end collected image by adopting an image characteristic extraction method to serve as a characteristic image, and taking the rest local images in the near-end collected image as non-characteristic images;
s422, classifying and labeling the characteristic images and the non-characteristic images;
s423, calculating the feature vectors corresponding to the feature images and the non-feature images respectively based on a gradient histogram according to the feature images and the labels thereof and the non-feature images and the labels thereof;
and S424, inputting the feature vector and the label of the feature image and the feature vector and the label of the non-feature image into a support vector machine model for training, and obtaining the image classifier based on the characteristics of the uvula.
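Steps S421–S424 describe a classic gradient-histogram-plus-SVM pipeline (a real implementation would more likely use `skimage.feature.hog` and `sklearn.svm.SVC`, which are assumptions here, not named by the patent). A dependency-free sketch of the gradient-histogram feature of step S423:

```python
import math

def gradient_histogram(gray, bins=9):
    """Histogram of gradient orientations (0-180 degrees), magnitude-weighted
    and L1-normalised: a simplified stand-in for the HOG descriptor used to
    build feature vectors in step S423."""
    h, w = len(gray), len(gray[0])
    hist = [0.0] * bins
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = gray[r][c + 1] - gray[r][c - 1]   # central differences
            gy = gray[r + 1][c] - gray[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(ang / (180.0 / bins)), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]

# A vertical edge concentrates all gradient energy in the 0-degree bin.
patch = [[0, 0, 255, 255]] * 4
vec = gradient_histogram(patch)
```

Vectors like `vec`, paired with their feature/non-feature labels, are what step S424 feeds to the support vector machine.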
5. The pharyngeal swab image recognition and localization method according to claim 4, wherein the step S421 specifically includes the steps of:
s4211, converting the near-end collected image into a gray image;
S4212, setting a local detection area, traversing the grayscale image, and taking as the feature image an image in which the local detection area completely contains the uvula and the uvula is located in the center.
6. The pharyngeal swab image recognition and localization method according to claim 5, further comprising, after the step S4211, the steps of:
S42110, performing a morphological opening operation on the grayscale image to eliminate bright (glare) areas in the grayscale image;
S42120, performing binarization processing on the grayscale image with the bright areas eliminated, to obtain a near-end binary image; and,
in step S4212, an image in which the local detection region completely includes the uvula and the uvula is located in the center is taken as the feature image by traversing the near-end binary image.
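Steps S42110–S42120 are a grayscale morphological opening (erosion, then dilation) followed by thresholding; in OpenCV this is typically `cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)` plus `cv2.threshold`. A pure-Python sketch with a 3x3 structuring element, on a toy image, just to show why opening removes small bright specks such as specular glare while preserving larger regions:

```python
def _apply(gray, op):
    """Apply min (erosion) or max (dilation) over each 3x3 neighbourhood,
    clipping the window at the image borders."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            out[r][c] = op(gray[rr][cc]
                           for rr in range(max(0, r - 1), min(h, r + 2))
                           for cc in range(max(0, c - 1), min(w, c + 2)))
    return out

def open_then_threshold(gray, thresh):
    opened = _apply(_apply(gray, min), max)   # erosion, then dilation
    return [[1 if v > thresh else 0 for v in row] for row in opened]

# One isolated bright pixel (glare) disappears; a 3x3 bright block survives.
img = [[0] * 8 for _ in range(8)]
img[1][1] = 255                      # glare speck
for r in range(4, 7):
    for c in range(4, 7):
        img[r][c] = 255              # solid bright region
binary = open_then_threshold(img, 128)
```

The resulting near-end binary image is what step S4212 traverses when selecting feature images.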
7. The pharyngeal swab image recognition and localization method according to any one of claims 3 to 6, wherein in step S4, the pixel coordinates of the substitute point of the acquisition area are obtained from the uvula center pixel coordinates, and the position of the pharyngeal swab acquisition mechanism is corrected so that the pre-calibrated spot center pixel coordinates coincide with the substitute point pixel coordinates, by the following process:
S44, obtaining the pixel distance h between the uvula center pixel coordinate and the pixel coordinate of the substitute point of the acquisition area in the near-end acquired image;
s45, calculating a pixel coordinate of a substitute point of the acquisition region based on the pixel distance h and the central pixel coordinate of the uvula;
S46, acquiring the pixel distance S' between the pixel coordinate of the circle center of the light spot and the pixel coordinate of the substitute point of the acquisition area, and calculating the actual offset distance D_n of the throat swab acquisition mechanism according to the following formula:

D_n = K' · S', where K' = L0' / PL'

wherein K' is the actual distance represented by a unit pixel of the proximally acquired image in the vertical plane of the uvula, L0' is a known length in the vertical plane of the uvula, and PL' is the pixel length corresponding to L0' in the proximally acquired image;
S47, correcting the position of the throat swab acquisition mechanism according to the actual offset distance D_n, so that the pixel coordinate of the circle center of the light spot coincides with the pixel coordinate of the substitute point.
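Steps S44–S47 can be sketched as: place the substitute point a fixed pixel distance h below the detected uvula center, then convert the spot-to-point pixel distance to a physical correction with the proximal scale K' = L0'/PL'. All coordinates and the 0.15 mm/px scale below are hypothetical:

```python
import math

def substitute_point(uvula_centre, h):
    """Acquisition substitute point: h pixels below the uvula center
    (image rows grow downward)."""
    r, c = uvula_centre
    return (r + h, c)

def correction_distance(spot, point, k_prime):
    """D_n = K' * S', with S' the Euclidean pixel distance between the
    laser spot center and the substitute point."""
    s_prime = math.dist(spot, point)
    return k_prime * s_prime

target = substitute_point((120, 200), 40)             # -> (160, 200)
d_n = correction_distance((160, 230), target, 0.15)   # 30 px at 0.15 mm/px
```

The mechanism is then moved by `d_n` so that the laser spot lands on the substitute point.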
8. The pharyngeal swab image-identifying and locating method according to claim 7, wherein the pre-calibration process of the spot center pixel coordinates is as follows:
s410, starting and adjusting the laser ranging module at the second preset position to enable the laser beam emitted by the laser ranging module to coincide with the front projection position of the throat swab clamped by the throat swab collecting mechanism;
s420, photographing the inside of the oral cavity of the person to be acquired through the image acquisition module to obtain a pre-calibration image with laser spots displayed, and performing image mask processing on colors of the laser spots in the pre-calibration image to obtain a pre-calibration binary image;
s430, identifying the circular contour of the laser spot in the pre-calibrated binary image by adopting a Hough circle detection method, and acquiring the circle center pixel coordinate of the laser spot based on the circular contour of the laser spot to be used as the spot circle center pixel coordinate.
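Step S430 names Hough circle detection (in OpenCV, `cv2.HoughCircles` on the pre-calibration binary image). As a dependency-free approximation, the centroid of the masked pixels recovers the same center for a filled, roughly circular laser spot; this sketch uses a synthetic disc, not real spot data:

```python
def spot_centre(binary):
    """Centroid (row, col) of the foreground pixels of a binary image;
    for a filled circular spot this coincides with the circle center."""
    pts = [(r, c) for r, row in enumerate(binary)
           for c, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

# Synthetic spot: a filled disc of radius 2 centred at (5, 6) in a 12x12 mask.
mask = [[1 if (r - 5) ** 2 + (c - 6) ** 2 <= 4 else 0 for c in range(12)]
        for r in range(12)]
centre = spot_centre(mask)
```

Hough detection is more robust than a centroid when the mask contains stray foreground pixels, which is presumably why the patent specifies it.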
CN202111032894.9A 2021-09-03 2021-09-03 Pharynx swab image identification and positioning method Active CN113749692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111032894.9A CN113749692B (en) 2021-09-03 2021-09-03 Pharynx swab image identification and positioning method

Publications (2)

Publication Number Publication Date
CN113749692A CN113749692A (en) 2021-12-07
CN113749692B true CN113749692B (en) 2022-05-13

Family

ID=78792950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111032894.9A Active CN113749692B (en) 2021-09-03 2021-09-03 Pharynx swab image identification and positioning method

Country Status (1)

Country Link
CN (1) CN113749692B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114298934B (en) * 2021-12-24 2022-12-09 北京朗视仪器股份有限公司 Cheek clamp developing weakening method and device based on pixel adjustment
CN114711827B (en) * 2022-03-11 2024-07-05 河北科技大学 Throat swab sampling system cooperatively guided by two cameras
CN114429826A (en) * 2022-03-31 2022-05-03 北京乐普诊断科技股份有限公司 Antigen detection result processing method, system, electronic device and storage medium
CN114898071A (en) * 2022-06-08 2022-08-12 杭州湖西云百生科技有限公司 Intraoperative robot swab calibration method and system based on three-dimensional laser radar
CN115115711B (en) * 2022-06-15 2023-06-23 深圳市罗湖医院集团 Visual calibration method and related equipment in nasopharynx swab sampling
CN114916964B (en) * 2022-07-14 2022-11-04 河南科技学院 Pharynx swab sampling effectiveness detection method and self-service pharynx swab sampling method
CN114926772B (en) * 2022-07-14 2022-10-21 河南科技学院 Method for tracking and predicting trajectory of throat swab head
CN115337044B (en) * 2022-07-18 2023-06-09 深圳市安保数字感控科技有限公司 Nucleic acid sampling monitoring method, device, system and computer readable storage medium
CN114998841B (en) * 2022-08-01 2022-11-01 易凯医疗建筑设计(深圳)有限公司 Sample identification method, device and equipment for nucleic acid sampling and storage medium
CN116188748B (en) * 2023-04-27 2023-08-15 深圳市辉诚实业有限公司 Image recognition system based on intelligent throat swab sampling equipment
CN117243642B (en) * 2023-11-16 2024-01-26 山东皇圣堂药业有限公司 Intelligent throat swab sampling equipment control system based on machine vision

Citations (2)

Publication number Priority date Publication date Assignee Title
CN111759361A (en) * 2020-07-02 2020-10-13 南方医科大学深圳医院 Self-service sampling machine and method for throat test sample
CN112932546A (en) * 2021-01-26 2021-06-11 溥熵微创医疗科技(辽宁)有限公司 Throat swab sampling method based on visual analysis

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
ITVA20020060A1 (en) * 2002-11-22 2004-05-23 St Microelectronics Srl METHOD OF ANALYSIS OF IMAGES DETECTED FROM A MICRO-ARRAY
EP2123223B1 (en) * 2008-05-19 2017-07-05 Cefla S.C. Method and Apparatus for Simplified Patient Positioning in Dental Tomographic X-Ray Imaging
CN111631754B (en) * 2020-05-26 2021-07-09 清华大学 Automatic sampling system of pharynx swab



Similar Documents

Publication Publication Date Title
CN113749692B (en) Pharynx swab image identification and positioning method
US7572009B2 (en) Method and apparatus for processing an eye fundus image
JP4583527B2 (en) How to determine eye position
US20090278922A1 (en) Image sensor with integrated region of interest calculation for iris capture, autofocus, and gain control
US20050084179A1 (en) Method and apparatus for performing iris recognition from an image
JP2008113902A (en) Apparatus and method for detecting eye opening degree
CN107992866B (en) Biopsy method based on video flowing eye reflective spot
WO2022037397A1 (en) Endoscope system and method for detecting when tail end of endoscope comes into contact with tissue
CN108182682A (en) A kind of detection method based on Computer Image Processing and pattern-recognition
CN109002045B (en) A kind of the inspection localization method and inspection positioning system of intelligent inspection robot
TW200921087A (en) Apparatus for determining defect position of panel
CN110507338B (en) Positioning method, device and equipment and digital X-ray photography system
WO2019194106A1 (en) Image processing device, image processing method, and image processing program
CN106934349A (en) Dual camera is imaged and iris capturing identification integration apparatus
CN112790738B (en) Device and method for analyzing traditional Chinese medicine tongue fur dryness moistening information based on polarized light
CN116188748B (en) Image recognition system based on intelligent throat swab sampling equipment
CN115471552B (en) Shooting positioning method and system for portable mydriasis-free fundus camera
JP2003250789A (en) Radiation imaging apparatus, program, and computer- readable storage medium
CN106546196A (en) A kind of optical axis real-time calibration method and system
WO2012060393A1 (en) Target position tracking device and target luminescence detection device
CN209358681U (en) One kind being applied to the indoor naked eye 3D eye tracking device that performs the operation
CN108392180B (en) Time intensity curve measuring device
CN115097936B (en) Display screen control method based on gesture action deep learning
JP2000146787A (en) Measuring method for restriction in tensile test
CN110643565A (en) High-reliability automatic cell separation method and device based on visual identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant