WO2022170350A1 - Monitoring sample collection from an orifice - Google Patents

Monitoring sample collection from an orifice

Info

Publication number
WO2022170350A1
Authority
WO
WIPO (PCT)
Prior art keywords
orifice
collection instrument
indicator
video
person
Application number
PCT/US2022/070535
Other languages
French (fr)
Inventor
Siddarth Satish
Mayank Kumar
Kevin J. Miller
Steven Scherf
Vadim Levin
Alexey PERMINOV
Grigory SEREBRYAKOV
Alexander SMORKALOV
Original Assignee
Exa Health, Inc.
Application filed by Exa Health, Inc. filed Critical Exa Health, Inc.
Publication of WO2022170350A1 publication Critical patent/WO2022170350A1/en


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/06: Devices, other than using radiation, for detecting or locating foreign bodies; determining position of probes within or on the body of the patient
    • A61B 5/061: Determining position of a probe within the body employing means separate from the probe, e.g. sensing internal probe position employing impedance electrodes on the surface of the body
    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/0082: Measuring for diagnostic purposes using light, adapted for particular medical purposes
    • A61B 10/00: Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A61B 10/0045: Devices for taking samples of body liquids
    • A61B 10/0051: Devices for taking samples of body liquids for taking saliva or sputum samples
    • A61B 2010/0054: Ear liquid

Definitions

  • the subject matter disclosed herein generally relates to the technical field of special-purpose machines that facilitate healthcare testing, including software-configured computerized variants of such special-purpose machines and improvements to such variants, and to the technologies by which such special-purpose machines become improved compared to other special-purpose machines that facilitate healthcare testing.
  • the present disclosure addresses systems and methods to facilitate monitoring of sample collection from an orifice.
  • collection of a biological sample to be tested is often performed by a front-line healthcare worker on a patient to be tested, for example by manipulating a swab or other collection instrument to obtain the biological sample from the patient.
  • the patient and the front-line healthcare worker are effectively in contact with each other or otherwise in close enough proximity to risk transmission of pathogens from one to the other.
  • self-service collection of biological samples by patients may provide a degree of protection to front-line healthcare workers and patients, as well as reduce the number of front-line healthcare workers involved.
  • a device may be configured (e.g., by suitable software, such as an app) to capture a video (e.g., as a sequential series of images, which may be called “frames,” or data that is representative thereof) using a camera of the device.
  • the device may thereafter process the video itself, communicate the captured video to another device or other machine via a network, or both.
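  • As a concrete illustration of this capture-then-process-or-upload flow, the following Python sketch records a short clip and posts it to a server. OpenCV, the requests library, and the upload URL are illustrative assumptions; the disclosure does not prescribe any particular library, codec, or protocol.

    # Hedged sketch: record a short clip from the default (e.g., front-facing)
    # camera, then communicate it to another machine via a network.
    import cv2
    import requests

    def capture_video(path="collection.mp4", seconds=10.0):
        cap = cv2.VideoCapture(0)                      # default camera of the device
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if FPS is unreported
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
        for _ in range(int(fps * seconds)):
            ok, frame = cap.read()                     # one "frame" in the sequential series
            if not ok:
                break
            writer.write(frame)
        cap.release()
        writer.release()
        return path

    def upload_video(path, url="https://example.invalid/upload"):  # assumed endpoint
        with open(path, "rb") as f:
            requests.post(url, files={"video": f}, timeout=30)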
  • FIG. 1 is a network diagram illustrating a network environment suitable for monitoring collection of a sample from an orifice of a person (e.g., a patient), according to some example embodiments.
  • FIG. 2 is a block diagram illustrating components of a machine (e.g., a device, such as a smartphone) suitable for monitoring collection of a sample from an orifice of a person, according to some example embodiments.
  • FIG. 3 is a diagram illustrating a person performing a monitored self-service collection of a biological sample from an orifice of the same person, according to some example embodiments.
  • FIG. 4 is a diagram illustrating a graphical user interface (GUI) presented by a machine as part of monitoring collection of a sample from an orifice of a person, according to some example embodiments.
  • FIGS. 5 and 6 are flowcharts illustrating operations of a device in performing a method of monitoring sample collection from an orifice, according to some example embodiments.
  • FIG. 7 is a block diagram illustrating components of a machine, according to some example embodiments, able to read instructions from a machine-readable medium and perform any one or more of the methodologies discussed herein.
  • Example methods facilitate monitoring sample collection (e.g., collecting a biological sample, such as an amount of mucus, saliva, blood, or earwax) from an orifice (e.g., of a person, such as a patient), and example systems (e.g., special-purpose machines configured by special-purpose software) are configured to facilitate monitoring sample collection from an orifice.
  • examples merely typify possible variations. Unless explicitly stated otherwise, structures (e.g., structural components, such as modules) are optional and may be combined or subdivided, and operations (e.g., in a procedure, algorithm, or other function) may vary in sequence or be combined or subdivided.
  • a machine (e.g., a patient’s device, such as a smartphone, or a healthcare provider’s device, such as a tablet or kiosk) may be specially configured (e.g., with suitable software) to monitor a person (e.g., a patient or other user of the machine) in performing (e.g., on themselves) a collection of a sample (e.g., a biological sample to be tested for healthcare purposes, such as diagnosis of disease).
  • the discussion herein describes a machine that monitors (e.g., with or without provision of interactive guidance) the person in performing the sample collection from an orifice (e.g., a nostril, a mouth, an ear, a puncture, or a rectum) of the same patient.
  • the machine guides one person (e.g., a healthcare worker) in performing sample collection on another person (e.g., a patient).
  • the machine accesses (e.g., by capturing with a camera, receiving from a source, or reading from a memory) a video that depicts an orifice of a person (e.g., a patient) from whom a biological sample (e.g., a sample of mucus, saliva, blood, or earwax) is to be collected by a portion of a collection instrument (e.g., a cotton-coated tip region of a swab, or an open container for receiving discharged fluid).
  • the machine then detects (e.g., via object recognition performed by an artificial intelligence (AI) machine-vision engine) that the video depicts the portion of the collection instrument arriving at (e.g., entering into) the orifice of the person and remaining in or at the orifice of the person for a detected duration (e.g., for at least the detected duration).
  • the machine then performs a comparison of the detected duration to a threshold duration to determine whether the detected duration transgresses (e.g., exceeds) the threshold duration. Based on this comparison, the machine then generates an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video.
  • the indicator may form all or part of a GUI configured to present sample collection information (e.g., status, progress, or both) to the person.
  • the machine causes a presentation of the generated indicator (e.g., to the person, via a display screen of the machine).
  • the generated indicator may be or include a visual indicator, an audio indicator, a haptic indicator, or any suitable combination thereof, and the caused presentation of the generated indicator may be or include display of the visual indicator, play of the audio indicator, initiation of the haptic indicator, or any suitable combination thereof. Additional details and options are described below.
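  • The detect-compare-indicate flow just summarized can be sketched in a few lines of Python. Here detect_instrument_at_orifice is a hypothetical stand-in for the AI machine-vision engine, which the disclosure leaves unspecified, and the textual message stands in for whatever visual, audio, or haptic indicator an implementation generates.

    def monitor(frames, fps, threshold_s, detect_instrument_at_orifice):
        """Return (detected_duration, indicator) for one pass over a video."""
        detected_duration = 0.0
        for frame in frames:
            # Per frame, the machine-vision engine decides whether the portion
            # of the collection instrument is in or at the orifice.
            if detect_instrument_at_orifice(frame):
                detected_duration += 1.0 / fps
        # Comparison of the detected duration to the threshold duration.
        if detected_duration >= threshold_s:       # duration transgresses (exceeds) it
            indicator = "Sample collection complete."
        else:
            remaining = threshold_s - detected_duration
            indicator = f"Keep the swab in place about {remaining:.0f} more seconds."
        return detected_duration, indicator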
  • FIG. 1 is a network diagram illustrating a network environment 100 suitable for monitoring collection of a sample from an orifice of a patient, according to some example embodiments.
  • the network environment 100 includes a machine 110 (e.g., a server machine), a database 115, and devices 130 and 150, all communicatively coupled to each other via a network 190.
  • the machine 110 with or without the database 115, may form all or part of a cloud 118 (e.g., a geographically distributed set of multiple machines configured to function as a single server), which may form all or part of a network-based system 105 (e.g., a cloud-based server system configured to provide one or more network-based services to the devices 130 and 150).
  • the machine 110 and the devices 130 and 150 may each be implemented in a special-purpose (e.g., specialized) computer system, in whole or in part, as described below with respect to FIG. 7.
  • also shown in FIG. 1 are users 132 and 152.
  • One or both of the users 132 and 152 may be a human user (e.g., a human being, also called a “person” herein), a machine user (e.g., a computer configured by a software program to interact with the device 130 or 150), or any suitable combination thereof (e.g., a human assisted by a machine or a machine supervised by a human).
  • the user 132 is associated with the device 130 and may be a user of the device 130.
  • the device 130 may be a desktop computer, a vehicle computer, a home media system (e.g., a home theater system or other home entertainment system), a tablet computer, a navigational device, a portable media device, a smart phone, or a wearable device (e.g., a smart watch, smart glasses, smart clothing, or smart jewelry) belonging to the user 132.
  • the user 152 is associated with the device 150 and may be a user of the device 150.
  • the device 150 may be a desktop computer, a vehicle computer, a home media system (e.g., a home theater system or other home entertainment system), a tablet computer, a navigational device, a portable media device, a smart phone, or a wearable device (e.g., a smart watch, smart glasses, smart clothing, or smart jewelry) belonging to the user 152.
  • any of the systems or machines (e.g., databases and devices) shown in FIG. 1 may be, include, or otherwise be implemented in a special-purpose (e.g., specialized or otherwise non-conventional and non-generic) computer that has been modified to perform one or more of the functions described herein for that system or machine (e.g., configured or programmed by special-purpose software, such as one or more software modules of a special-purpose application, operating system, firmware, middleware, or other software program).
  • a special-purpose computer system able to implement any one or more of the methodologies described herein is discussed below with respect to FIG. 7, and such a special-purpose computer may accordingly be a means for performing any one or more of the methodologies discussed herein.
  • a special-purpose computer that has been specially modified (e.g., configured by special-purpose software) by the structures discussed herein to perform the functions discussed herein is technically improved compared to other special-purpose computers that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein. Accordingly, a special-purpose machine configured according to the systems and methods discussed herein provides an improvement to the technology of similar special-purpose machines.
  • a “database” is a data storage resource and may store data structured in any of various ways, for example, as a text file, a table, a spreadsheet, a relational database (e.g., an object-relational database), a triple store, a hierarchical data store, a document database, a graph database, key-value pairs, or any suitable combination thereof.
  • the network 190 may be any network that enables communication between or among systems, machines, databases, and devices (e.g., between the machine 110 and the device 130). Accordingly, the network 190 may be a wired network, a wireless network (e.g., a mobile or cellular network), or any suitable combination thereof. The network 190 may include one or more portions that constitute a private network, a public network (e.g., the Internet), or any suitable combination thereof.
  • the network 190 may include one or more portions that incorporate a local area network (LAN), a wide area network (WAN), the Internet, a mobile telephone network (e.g., a cellular network), a wired telephone network (e.g., a plain old telephone service (POTS) network), a wireless data network (e.g., a WiFi network or WiMax network), or any suitable combination thereof. Any one or more portions of the network 190 may communicate information via a transmission medium.
  • the term “transmission medium” refers to any intangible (e.g., transitory) medium that is capable of communicating (e.g., transmitting) instructions for execution by a machine (e.g., by one or more processors of such a machine), and includes digital or analog communication signals or other intangible media to facilitate communication of such software.
  • FIG. 2 is a block diagram illustrating components of the device 130 (e.g., a smartphone or a tablet) suitable for monitoring sample collection from an orifice of a person (e.g., a patient), according to some example embodiments.
  • the device 130 is shown as including a video accessor 210 (e.g., an access module or suitable code for accessing data, such as a video or data that encodes or otherwise represents a video), an object recognizer 220 (e.g., a recognition module or suitable code for recognizing objects, such as orifices, collection instruments, or faces of people), an indicator generator 230 (e.g., a generation module or suitable code for generating one or more indicators), a user interface 240 (e.g., a GUI, presentation module, or suitable code for presenting one or more indicators), and a camera 250 (e.g., a high-definition video camera), all configured to communicate with each other (e.g., via a bus, shared memory, or a switch).
  • the video accessor 210 may be or include any suitable hardware, software, or combination thereof, and is configured to access video (e.g., from the camera 250 or via the network 190).
  • the object recognizer 220 may be or include any suitable hardware, software, or combination thereof, and is configured to perform object recognition (e.g., shape recognition, face recognition, or both). Accordingly, the object recognizer 220 may be or include one or more suitably trained AI modules (e.g., a learning machine trained to implement one or more machine-vision algorithms).
  • the indicator generator 230 may be or include any suitable hardware, software, or combination thereof, and is configured to generate an indicator of information, as described herein.
  • the indicator generator 230 may be configured to generate all or part of a GUI that presents the indicated information.
  • the user interface 240 may be or include any suitable hardware, software, or combination thereof, and is configured to present or cause presentation of the indicator generated by the indicator generator 230.
  • the indicator generated by the indicator generator 230 may be or include visual data, audio data, haptic data, or any suitable combination thereof, and thus the user interface 240 may be configured to cause such visual data, audio data, haptic data, or any suitable combination thereof, to be presented by the device 130 (e.g., to the user 132).
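  • One minimal way to organize these components in software is sketched below in Python; the class and method names simply mirror the reference numerals and responsibilities described above and are not prescribed by the disclosure.

    class VideoAccessor:                 # video accessor 210
        def access(self, source):
            """Access a video, e.g., from the camera 250 or via the network 190."""

    class ObjectRecognizer:              # object recognizer 220
        def detect(self, frame):
            """Recognize orifices, collection instruments, or faces in a frame."""

    class IndicatorGenerator:            # indicator generator 230
        def generate(self, detections):
            """Generate visual, audio, or haptic indicators (e.g., for a GUI)."""

    class UserInterface:                 # user interface 240
        def present(self, indicator):
            """Cause presentation of a generated indicator to the user."""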
  • the camera 250 may be or include any suitable camera capable of generating a video with the characteristics described herein.
  • the camera 250 may be or include a visible spectrum camera (e.g., with a charge coupled device (CCD)), a near-infrared (NIR) camera, a depth camera (e.g., laser-based), or any suitable combination thereof.
  • one or more of the video accessor 210, the object recognizer 220, the indicator generator 230, and the user interface 240 may form all or part of an app 200 (e.g., a mobile app) that is stored (e.g., installed) on the device 130 (e.g., responsive to or otherwise as a result of data being received from the machine 110 or the database 115 via the network 190).
  • furthermore, one or more processors 299 (e.g., hardware processors, digital processors, or any suitable combination thereof) may be included (e.g., temporarily or permanently) in any one or more of the components described herein.
  • any one or more of the components (e.g., modules) described herein may be implemented using hardware alone (e.g., one or more of the processors 299) or a combination of hardware and software.
  • any component described herein may physically include an arrangement of one or more of the processors 299 (e.g., a subset of or among the processors 299) configured to perform the operations described herein for that component.
  • any component described herein may include software, hardware, or both, that configure an arrangement of one or more of the processors 299 to perform the operations described herein for that component.
  • different components described herein may include and configure different arrangements of the processors 299 at different points in time or a single arrangement of the processors 299 at different points in time.
  • Each component (e.g., module) described herein is an example of a means for performing the operations described herein for that component.
  • any two or more components described herein may be combined into a single component, and the functions described herein for a single component may be subdivided among multiple components.
  • components described herein as being implemented within a single system or machine (e.g., a single device) may be distributed across multiple systems or machines (e.g., multiple devices).
  • FIG. 3 is a diagram illustrating the user 132 collecting a sample (e.g., a biological sample) from an orifice of the user 132, while the device 130 monitors the collection of the sample (e.g., with provision of guidance to the user 132 regarding extent of progress in obtaining an adequate sample), according to some example embodiments.
  • the device 130 of the user 132 may be configured to monitor a video of the user 132 in performing the collection of the sample. Feedback in the example form of one or more indicators may be provided by the device 130 to inform the user 132 regarding status of the sample collection, extent of progress in the sample collection, prompts, warnings, guidance, other information helpful toward collection of an adequate sample, or any suitable combination thereof.
  • the user 132 is holding the device 130 while proceeding to perform, or at least attempt, a collection of a sample (e.g., a nasal mucus sample) from an orifice 310 (e.g., her left nostril), using a collection instrument 300 (e.g., a swab).
  • the device 130 includes the camera 250 (e.g., a front-facing video camera) and is configured to monitor the user 132 in her attempt to collect the sample adequately (e.g., collect a minimum amount of mucus).
  • FIG. 4 is a diagram illustrating a GUI 400 presented by the device 130 as part of monitoring collection of the sample (e.g., the mucus sample) from the orifice (e.g., the left nostril) of the user 132, according to some example embodiments.
  • the GUI 400 may include one or more indicators (e.g., generated and presented by the device 130), and the GUI 400 in FIG. 4 is shown as including various data.
  • the GUI 400 may include a progress indicator 410 that indicates a degree of progress toward the sample being fully collected (e.g., surpassing a minimum duration or a minimum count of periodic movements, such as swipes, either or both of which may be inferred as indicating collection of a minimum amount of the sample); a graphical representation 420 (e.g., a first graphical representation) of a portion (e.g., a first portion, such as a cotton-coated tip region) of the collection instrument 300 (e.g., a swab); a graphical representation 430 (e.g., a second graphical representation) of the orifice 310; a graphical representation 440 (e.g., a fourth graphical representation) of a depth of insertion by the portion (e.g., the tip) of the collection instrument 300 (e.g., the swab) into the orifice 310 (e.g., the nostril); or any suitable combination thereof.
  • the degree of progress, the depth of insertion, or both, may be depicted by a video of the sample collection and detected or inferred (e.g., extrapolated) from the video (e.g., by the device 130, in executing the app 200).
  • the GUI 400 may also include a count 450 of periodic movements (e.g., rotations) made by the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) or made by another portion (e.g., a flexible shaft) of the collection instrument 300 (e.g., the swab).
  • Such movements may be depicted by a video of the sample collection and detected from the video (e.g., by the device 130, in executing the app 200).
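  • A plausible (non-prescribed) way to compute the progress indicator 410 from the two signals just described, namely elapsed duration and the count 450 of periodic movements, is to take the larger of the two completion fractions; the minimum-duration and minimum-swipe values below are illustrative assumptions.

    def progress_fraction(elapsed_s, swipe_count,
                          min_seconds=15.0, min_swipes=4):   # assumed minimums
        by_time = elapsed_s / min_seconds
        by_swipes = swipe_count / min_swipes
        # 1.0 means the sample is inferred to be fully collected.
        return min(1.0, max(by_time, by_swipes))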
  • FIGS. 5 and 6 are flowcharts illustrating operations of the device 130 (e.g., as configured by execution of the app 200) in performing a method 500 of monitoring sample collection from an orifice (e.g., orifice 310) of a person (e.g., the user 132), according to some example embodiments.
  • Operations in the method 500 may be performed by the device 130, using components (e.g., modules) described above with respect to FIG. 2, using one or more processors (e.g., microprocessors or other hardware processors), or using any suitable combination thereof.
  • the method 500 includes operations 510, 520, 530, 540, and 550.
  • in operation 510, the video accessor 210 accesses a video (e.g., live or recorded video data that encodes or otherwise represents such a video) that depicts the orifice 310 from which a sample (e.g., a biological sample) is to be collected by a portion (e.g., a tip region) of the collection instrument 300 (e.g., a swab).
  • the accessing of the video may be or include capturing the video (e.g., using the camera 250 of the device 130), receiving the video (e.g., from the camera 250 or via the network 190), reading the video (e.g., from a memory of the device 130 or from the database 115), or any suitable combination thereof.
  • the video is self-shot in real time (e.g., with latency under 50 milliseconds) by the user 132 (e.g., by orienting the device 130 such that its camera 250 is aimed at the orifice 310).
  • the video accessor 210 guides the person (e.g., the user 132) through creating a suitable video, which is then accessed by the video accessor 210 as described above.
  • the video accessor 210 may prompt the person to position their face a certain way for video capture by the camera 250 of the device 130, prompt the person to adjust lighting conditions, prompt the person to begin an attempt to perform sample collection from the orifice 310, notify the person to restart the sample collection, or any suitable combination thereof.
  • in operation 520, the object recognizer 220 detects that the video accessed in operation 510 depicts the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) arriving at (e.g., entering into or making contact with edges of) the orifice 310 and remaining in or at the orifice for at least a detected duration (e.g., and later exiting the orifice of the person after the detected duration).
  • the object recognizer 220 also detects that the video depicts the portion of the collection instrument departing from (e.g., exiting from or breaking contact with edges of) the orifice 310 after the detected duration.
  • the detecting that the video depicts the portion of the collection instrument 300 may include identifying the portion of the collection instrument 300, recognizing the portion of the collection instrument 300, or both.
  • the object recognizer 220 may implement or otherwise use one or more of various image processing techniques (e.g., segmentation, edge detection, or both), computer vision techniques (e.g., using a trained Al module), or any suitable combination thereof.
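  • The timing logic implied by operation 520 can be sketched as follows, with tip_in_orifice standing in for the per-frame segmentation, edge-detection, or trained-AI decision that the disclosure leaves open.

    def detected_duration_s(frames, fps, tip_in_orifice):
        arrive = depart = None
        for i, frame in enumerate(frames):
            if tip_in_orifice(frame):                 # per-frame detection result
                if arrive is None:
                    arrive = i                        # arriving at the orifice
                depart = i                            # last frame still in/at it
        if arrive is None:
            return 0.0                                # instrument never arrived
        return (depart - arrive + 1) / fps            # time remaining in/at orifice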
  • in operation 530, the indicator generator 230 performs a comparison of the detected duration to a threshold duration.
  • in some example embodiments, the threshold duration is a minimum duration, while in alternative example embodiments, the threshold duration is a maximum duration.
  • in operation 540, the indicator generator 230 generates one or more indicators (e.g., as described above with respect to FIG. 4).
  • the indicator generator 230 may generate the progress indicator 410, which may indicate the extent to which the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) collected the sample (e.g., probably or actually) from the orifice 310 depicted by the video.
  • the performance of operation 540 may be based on the comparison of the detected duration to the threshold duration in operation 530, and thus, the generation of any one or more indicators by the indicator generator 230 may be based on that comparison of the detected duration to the threshold duration.
  • in operation 550, the user interface 240 causes a presentation of the indicator (e.g., the progress indicator 410) generated in operation 540 (e.g., along with one or more other indicators, which may also be generated in operation 540).
  • the user interface 240 may present or otherwise cause presentation of the progress indicator 410, which may indicate the extent to which the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) has thus far collected the sample (e.g., probably or actually) from the orifice 310 (e.g., of the user 132).
  • the caused presentation of the indicator (e.g., the progress indicator 410) consequently may be exhibited (e.g., displayed or otherwise presented) by the device 130.
  • the method 500 may include one or more of operations 610, 612, 614, 620, 622, 630, 640, 650, 660, 670, and 672.
  • in operation 610, the object recognizer 220 detects that the video depicts a movement of a second portion (e.g., the shaft) of the collection instrument 300 (e.g., the swab) while the first portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310.
  • the detected movement of the second portion may be or include repetitions of one or more periodic movements of the second portion.
  • the generating of the indicator in operation 540 may be based on the depicted movement of the second portion (e.g., the shaft) of the collection instrument 300 while the first portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310.
  • One or more of operations 612 and 614 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 610. In alternative example embodiments, one or more of operations 612 and 614 are performed as separate operations (e.g., between performance of operation 510 and performance of operation 530) with or without performance of operation 610.
  • in operation 612, the object recognizer 220 detects a movement duration during which the depicted movement (e.g., as detected in operation 610) of the second portion (e.g., the shaft) of the collection instrument 300 (e.g., the swab) occurs.
  • the generating of the indicator in operation 540 may be based on the movement duration during which the depicted movement of the second portion (e.g., the shaft) of the collection instrument 300 occurs.
  • in operation 614, the object recognizer 220 counts a number of periodic movements (e.g., rotations, oscillations, swipes, flexes, spitting motions, or other repeated strokes) within the depicted movement of the second portion (e.g., the shaft) of the collection instrument 300 (e.g., the swab). For example, each instance of a periodic movement may be detected and counted based on a distance travelled (e.g., a change in location) by the second portion (e.g., the shaft), a change in an orientation of the second portion, or both.
  • the generating of the indicator in operation 540 may be based on the counted number of periodic movements within the depicted movement of the second portion of the collection instrument 300.
  • the generating of the indicator in operation 540 may be based on a comparison (e.g., performed by the object recognizer 220) of the counted number of periodic movements to a threshold number (e.g., a minimum number or a maximum number) of periodic movements.
  • the number of periodic movements is used instead of the duration that the first portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) is in the orifice 310.
  • performance of operation 520 may substitute a counting of the number of periodic movements (e.g., as described for operation 614) in place of the detecting of the detected duration (e.g., as described for operation 520), and performance of operation 530 may accordingly compare the counted number of periodic movements to a threshold number of periodic movements (e.g., as described for operation 614), instead of comparing the detected duration to a threshold duration (e.g., as described for operation 520).
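  • One simple, non-prescribed way to count such periodic movements is to track a per-frame orientation angle of the shaft (assuming the object recognizer yields one) and count sign changes about its mean, treating every two crossings as one oscillation:

    from statistics import mean

    def count_periodic_movements(angles_deg):
        """Count oscillations in a per-frame shaft-orientation signal."""
        if not angles_deg:
            return 0
        center = mean(angles_deg)
        signs = [a >= center for a in angles_deg]
        crossings = sum(1 for p, c in zip(signs, signs[1:]) if p != c)
        return crossings // 2            # two crossings ~ one full oscillation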
  • in operation 620, the object recognizer 220 performs shape recognition on at least a portion of the video accessed in operation 510. Accordingly, the shape recognition performed by the object recognizer 220 may recognize a shape of the orifice 310. For example, if the orifice 310 is the opening (e.g., aperture) of a left nostril, the object recognizer 220 may recognize the shape of the opening of the left nostril. As another example, if the orifice 310 is the mouth of the person (e.g., with pursed lips for discharging a saliva sample), the object recognizer 220 may recognize the shape (e.g., pursed) of the mouth. In example embodiments that include operation 620, the generating of the indicator in operation 540 may be based on the recognized shape of the orifice 310.
  • Operation 622 may be performed as part of operation 620. In alternative example embodiments, operation 622 is performed as a separate operation (e.g., between performance of operation 510 and performance of operation 530) with or without performance of operation 620.
  • in operation 622, the object recognizer 220 detects that the video accessed in operation 510 depicts a deformation of the orifice 310 (e.g., deformed compared to the shape recognized in operation 620) while the portion (e.g., the first portion, such as the tip region) of the collection instrument 300 (e.g., the swab) is in the orifice 310.
  • the generating of the indicator in operation 540 may be based on the detected deformation of the orifice 310 while the portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310.
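  • As a sketch of operations 620 and 622, assuming the orifice has already been segmented into a binary mask, its shape can be summarized by a fitted ellipse whose aspect ratio shifts when the orifice deforms; the 25% tolerance below is an illustrative assumption.

    import cv2

    def orifice_aspect_ratio(mask):
        """Summarize a segmented (binary uint8) orifice mask by its fitted-ellipse aspect ratio."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        if len(largest) < 5:             # cv2.fitEllipse needs at least 5 points
            return None
        _, axes, _ = cv2.fitEllipse(largest)
        minor, major = sorted(axes)
        return major / minor if minor else None

    def is_deformed(baseline_ratio, current_ratio, tolerance=0.25):  # assumed 25%
        # Deformation: the shape departs from the shape recognized in
        # operation 620, before the instrument entered the orifice.
        return abs(current_ratio - baseline_ratio) > tolerance * baseline_ratio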
  • in operation 630, the object recognizer 220 detects that the video accessed in operation 510 depicts a deformation of a surface region of the person (e.g., the user 132) while the portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310 of the person. For example, if a tip region of a swab is inserted into a left nostril of the person to collect the sample, the outer surface of the left nostril may exhibit detectable deformation from pressure applied by the swab.
  • as another example, if the tip region of the swab is inserted into the mouth of the person to collect the sample (e.g., by swabbing an inside of a cheek), the outer surface of the cheek may exhibit detectable deformation from pressure applied by the swab.
  • the generating of the indicator in operation 540 may be based on the depicted deformation of the surface region while the portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310.
  • in operation 640, the object recognizer 220 detects that the video accessed in operation 510 depicts a depth of insertion by the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) into the orifice 310.
  • the depth of insertion may be detected based on shape recognition of the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab), including a detected speed of the portion, a detected direction of motion by the portion, or both.
  • the depth of insertion may be detected based on shape recognition of another portion (e.g., the shaft) of the collection instrument 300, including a detected speed of the other portion, a detected direction of motion by the other portion, or both. Furthermore, the depth of insertion may be detected based on shape recognition of a fiducial mark (e.g., a logo or a target symbol) on the collection instrument 300 (e.g., the swab).
  • a fiducial mark e.g., a logo or a target symbol
  • the depth of insertion may be detected based on shape recognition of all or part of a hand (e.g., one or more fingers) of the person (e.g., the user 132), including a detected speed of all or part of the hand, a detected direction of motion by all or part of the hand, or both.
  • the depth of insertion may be detected based on deformation of the orifice 310 (e.g., as described above with respect to operation 622), deformation of a surface region of the person (e.g., as described above with respect to operation 630), or both.
  • the generating of the indicator in operation 540 may be based on the depicted depth of insertion by the portion (e.g., the tip region) of the collection instrument 300 into the orifice 310.
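  • A simple geometric sketch of operation 640, under the assumptions that the shaft has a known length and that a pixel-to-millimeter scale has been calibrated (e.g., from a fiducial mark of known size on the instrument): as the tip goes deeper, less of the shaft remains visible.

    def insertion_depth_mm(visible_shaft_px, mm_per_pixel, full_shaft_mm=80.0):
        """Depth inferred from how much of the (assumed 80 mm) shaft is hidden."""
        visible_mm = visible_shaft_px * mm_per_pixel   # calibrated pixel scale
        return max(0.0, full_shaft_mm - visible_mm)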
  • in operation 650, the object recognizer 220 detects that the video depicts a movement of at least a portion of a hand (e.g., one or more fingers) of the person (e.g., the user 132) from whom the biological sample is to be collected.
  • the object recognizer 220 may detect such movement by detecting a speed of all or part of the hand, a direction of motion by all or part of the hand, or both.
  • the object recognizer 220 may detect such movement by detecting changes in a shape (e.g., pose) of all or part of the hand.
  • the detected movement of all or part of the hand may be or include repetitions of one or more periodic movements of all or part of the hand.
  • the generating of the indicator in operation 540 may be based on the depicted movement of at least the portion of the hand of the person.
  • in operation 660, the object recognizer 220 performs facial recognition on at least a portion of the video that depicts the orifice 310 of the person (e.g., the user 132). Accordingly, the facial recognition performed by the object recognizer 220 may recognize the face of the person and thus validate an identity of the person. For example, the recognized face of the person (e.g., the user 132) may be compared (e.g., by the indicator generator 230) to a reference image of the person (e.g., a driver’s license photo of the user 132), and a validation of the person (e.g., by the indicator generator 230) may be performed based on such a comparison.
  • the method 500 may include an operation or sub-operation in which, based on the facial recognition, the indicator generator 230, the user interface 240, or both, cause a presentation (e.g., a further presentation) of a notification that the video does indeed depict the proper person (e.g., the user 132) from whom the sample is to be collected.
  • such a sub-operation may be performed (e.g., by the indicator generator 230) as part of operation 540.
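  • A hedged sketch of the validation step in operation 660: compare an embedding of the face recognized in the video against an embedding of the reference image. The embed function and the 0.7 similarity threshold are assumptions; the disclosure names no particular face-recognition model.

    import numpy as np

    def validate_identity(video_face, reference_face, embed, threshold=0.7):
        """Cosine-compare face embeddings; embed() is a hypothetical model."""
        a, b = embed(video_face), embed(reference_face)
        cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        return cosine >= threshold       # True: video depicts the proper person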
  • the collection instrument 300 is or includes a container configured to receive fluid (e.g., an effluent) that is discharged from the orifice 310 of the person (e.g., the user 132).
  • the collection instrument 300 may be or include a vial for collecting an amount of saliva as the sample, and the orifice 310 may be the mouth of the person.
  • the collection instrument 300 may be or include a blood capillary action tube (e.g., a blood pipette) for collecting an amount of blood as the sample, and the orifice 310 may be a puncture in the skin of the person.
  • operations 670 and 672 may be performed as part of the method 500.
  • Operation 672 may be performed as part of operation 670.
  • operation 672 is performed as a separate operation (e.g., between performance of operation 510 and performance of operation 530) with or without performance of operation 670.
  • in operation 670, the object recognizer 220 detects that the video accessed in operation 510 depicts an amount of fluid (e.g., saliva or blood) discharged from the orifice 310 of the person and received by the container (e.g., a vial or a pipette) of the collection instrument 300.
  • the amount of the fluid may be detected by performing shape recognition on a meniscus of the fluid in the container and calculating or estimating the amount of the fluid based on the location of the meniscus, the orientation of the meniscus, or both, relative to the container.
  • the generating of the indicator in operation 540 may be based on the depicted amount of the fluid received by the container (e.g., the vial or the pipette) of the collection instrument 300.
  • in operation 672, the object recognizer 220 detects a reception duration during which the depicted amount of the fluid (e.g., saliva or blood) is received by the container (e.g., the vial or the pipette).
  • the generating of the indicator in operation 540 may be based on the reception duration during which the depicted amount of the fluid (e.g., saliva or blood) is received by the container (e.g., the vial or the pipette).
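  • A sketch of operations 670 and 672 under stated assumptions: the fluid inside the container has been segmented into a binary mask, the meniscus is taken as the topmost fluid row, and the container has a uniform cross-section so volume scales linearly with fill height (real containers may need per-shape calibration).

    import numpy as np

    def fluid_volume_ml(fluid_mask, top_row, bottom_row, container_ml):
        """Estimate received fluid from the meniscus row in a segmented mask."""
        rows = np.where(fluid_mask.any(axis=1))[0]
        if rows.size == 0:
            return 0.0                                 # no fluid detected yet
        meniscus_row = int(rows.min())                 # topmost fluid pixel row
        fill = (bottom_row - meniscus_row) / (bottom_row - top_row)
        return container_ml * max(0.0, min(1.0, fill))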
  • one or more of the methodologies described herein may facilitate monitoring of sample collection from an orifice of a person. Moreover, one or more of the methodologies described herein may facilitate guiding (e.g., notifying, instructing, reminding, warning, or any suitable combination thereof) an attempt at self-service sample collection, performed by the person. Hence, one or more of the methodologies described herein may facilitate increased accuracy or precision in sample collection, which may result in increased accuracy or precision in healthcare tests based on the collected samples, as well as reductions in spoilage or other waste of test kits (e.g., self-service sample collection kits) due to improper performance of the sample collection, compared to capabilities of pre-existing systems and methods.
  • test kits e.g., self-service sample collection kits
  • one or more of the methodologies described herein may obviate a need for certain efforts or resources that otherwise would be involved in the monitoring of sample collection from various orifices of various persons. Efforts expended by a user in administering collections of samples from other persons or in guiding other persons in performing self-service collections of samples may be reduced by use of (e.g., reliance upon) a special-purpose machine that implements one or more of the methodologies described herein. Computing resources used by one or more systems or machines (e.g., within the network environment 100) may similarly be reduced (e.g., compared to systems or machines that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein). Examples of such computing resources include processor cycles, network traffic, computational capacity, main memory usage, graphics rendering capacity, graphics memory usage, data storage capacity, power consumption, and cooling capacity.
  • FIG. 7 is a block diagram illustrating components of a machine 700, according to some example embodiments, able to read instructions 724 from a machine-readable medium 722 (e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and perform any one or more of the methodologies discussed herein, in whole or in part.
  • FIG. 7 shows the machine 700 in the example form of a computer system (e.g., a computer) within which the instructions 724 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 700 to perform any one or more of the methodologies discussed herein may be executed, in whole or in part.
  • the machine 700 operates as a standalone device or may be communicatively coupled (e.g., networked) to other machines.
  • the machine 700 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment.
  • the machine 700 may be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a cellular telephone, a smart phone, a set-top box (STB), a personal digital assistant (PDA), a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 724, sequentially or otherwise, that specify actions to be taken by that machine.
  • the machine 700 includes a processor 702 (e.g., one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any suitable combination thereof), a main memory 704, and a static memory 706, which are configured to communicate with each other via a bus 708.
  • the processor 702 contains solid-state digital microcircuits (e.g., electronic, optical, or both) that are configurable, temporarily or permanently, by some or all of the instructions 724 such that the processor 702 is configurable to perform any one or more of the methodologies described herein, in whole or in part.
  • a set of one or more microcircuits of the processor 702 may be configurable to execute one or more modules (e.g., software modules) described herein.
  • the processor 702 is a multicore CPU (e.g., a dual-core CPU, a quad-core CPU, an 8-core CPU, or a 128-core CPU) within which each of multiple cores behaves as a separate processor that is able to perform any one or more of the methodologies discussed herein, in whole or in part.
  • although the beneficial effects described herein may be provided by the machine 700 with at least the processor 702, these same beneficial effects may be provided by a different kind of machine that contains no processors (e.g., a purely mechanical system, a purely hydraulic system, or a hybrid mechanical-hydraulic system), if such a processor-less machine is configured to perform one or more of the methodologies described herein.
  • the machine 700 may further include a graphics display 710 (e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, a cathode ray tube (CRT), or any other display capable of displaying graphics or video).
  • the machine 700 may also include an alphanumeric input device 712 (e.g., a keyboard or keypad), a pointer input device 714 (e.g., a mouse, a touchpad, a touchscreen, a trackball, a joystick, a stylus, a motion sensor, an eye tracking device, a data glove, or other pointing instrument), a data storage 716, an audio generation device 718 (e.g., a sound card, an amplifier, a speaker, a headphone jack, or any suitable combination thereof), and a network interface device 720.
  • the data storage 716 (e.g., a data storage device) includes the machine-readable medium 722 (e.g., a tangible and non-transitory machine-readable storage medium) on which are stored the instructions 724 embodying any one or more of the methodologies or functions described herein.
  • the instructions 724 may also reside, completely or at least partially, within the main memory 704, within the static memory 706, within the processor 702 (e.g., within the processor’s cache memory), or any suitable combination thereof, before or during execution thereof by the machine 700. Accordingly, the main memory 704, the static memory 706, and the processor 702 may be considered machine-readable media (e.g., tangible and non-transitory machine-readable media).
  • the instructions 724 may be transmitted or received over the network 190 via the network interface device 720.
  • the network interface device 720 may communicate the instructions 724 using any one or more transfer protocols (e.g., hypertext transfer protocol (HTTP)).
  • the machine 700 may be a portable computing device (e.g., a smart phone, a tablet computer, or a wearable device) and may have one or more additional input components 730 (e.g., sensors or gauges).
  • Examples of such input components 730 include an image input component (e.g., one or more cameras), an audio input component (e.g., one or more microphones), a direction input component (e.g., a compass), a location input component (e.g., a global positioning system (GPS) receiver), an orientation component (e.g., a gyroscope), a motion detection component (e.g., one or more accelerometers), an altitude detection component (e.g., an altimeter), a temperature input component (e.g., a thermometer), and a gas detection component (e.g., a gas sensor).
  • Input data gathered by any one or more of these input components 730 may be accessible and available for use by any of the modules described herein (e.g., with suitable privacy notifications and protections, such as opt-in consent or opt-out consent, implemented in accordance with user preference, applicable regulations, or any suitable combination thereof).
  • the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory.
  • although the machine-readable medium 722 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions.
  • the term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of carrying (e.g., storing or communicating) the instructions 724 for execution by the machine 700, such that the instructions 724, when executed by one or more processors of the machine 700 (e.g., processor 702), cause the machine 700 to perform any one or more of the methodologies described herein, in whole or in part.
  • a “machine-readable medium” refers to a single storage apparatus or device, as well as cloud-based storage systems or storage networks that include multiple storage apparatus or devices.
  • the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more tangible and non-transitory data repositories (e.g., data volumes) in the example form of a solid-state memory chip, an optical disc, a magnetic disc, or any suitable combination thereof.
  • a “non-transitory” machine-readable medium specifically excludes propagating signals per se.
  • the instructions 724 for execution by the machine 700 can be communicated via a carrier medium (e.g., a machine-readable carrier medium).
  • examples of a carrier medium include a non-transient carrier medium (e.g., a non-transitory machine-readable storage medium, such as a solid-state memory that is physically movable from one place to another place) and a transient carrier medium (e.g., a carrier wave or other propagating signal that communicates the instructions 724).
  • Modules may constitute software modules (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium), hardware modules, or any suitable combination thereof.
  • a “hardware module” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner.
  • one or more computer systems or one or more hardware modules thereof may be configured by software (e.g., an application or portion thereof) as a hardware module that operates to perform operations described herein for that module.
  • a hardware module may be implemented mechanically, electronically, hydraulically, or any suitable combination thereof.
  • a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations.
  • a hardware module may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC.
  • FPGA field programmable gate array
  • a hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations.
  • a hardware module may include software encompassed within a CPU or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, hydraulically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
  • the phrase “hardware module” should be understood to encompass a tangible entity that may be physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein.
  • the phrase “hardware-implemented module” refers to a hardware module. Considering example embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module includes a CPU configured by software to become a special-purpose processor, the CPU may be configured as respectively different special-purpose processors (e.g., each included in a different hardware module) at different times.
  • software (e.g., a software module) may accordingly configure one or more processors, for example, to become or otherwise constitute a particular hardware module at one instance of time and to become or otherwise constitute a different hardware module at a different instance of time.
  • Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory (e.g., a memory device) to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information from a computing resource).
  • processors may be temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein.
  • the phrase “processor-implemented module” refers to a hardware module in which the hardware includes one or more processors. Accordingly, the operations described herein may be at least partially processor-implemented, hardware-implemented, or both, since a processor is an example of hardware, and at least some operations within any one or more of the methods discussed herein may be performed by one or more processor-implemented modules, hardware-implemented modules, or any suitable combination thereof.
  • processors may perform operations in a “cloud computing” environment or as a service (e.g., within a “software as a service” (SaaS) implementation). For example, at least some operations within any one or more of the methods discussed herein may be performed by a group of computers (e.g., as examples of machines that include processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)). The performance of certain operations may be distributed among the one or more processors, whether residing only within a single machine or deployed across a number of machines.
  • the one or more processors or hardware modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or hardware modules may be distributed across a number of geographic locations.
  • a first example provides a method comprising: accessing, by one or more processors of a machine, a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting, by one or more processors of the machine, that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a detected duration (e.g., and later departing from the orifice of the person after the detected duration); performing, by one or more processors of the machine, a comparison of the detected duration to a threshold duration; generating, by one or more processors of the machine and based on the comparison of the detected duration to the threshold duration, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing, by one or more processors of the machine, a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
  • a second example provides a method according to the first example, wherein: the portion of the collection instrument is a first portion of the collection instrument; the method further comprises: detecting that the video depicts a movement of a second portion of the collection instrument while the first portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted movement of the second portion of the collection instrument while the first portion of the collection instrument is in the orifice.
  • a third example provides a method according to the second example, further comprising: detecting a movement duration during which the depicted movement of the second portion of the collection instrument occurs; and wherein: the generating of the indicator is based on the movement duration during which the depicted movement of the second portion of the collection instrument occurs.
  • a fourth example provides a method according to the second example or the third example, further comprising: counting a number of periodic movements within the depicted movement of the second portion of the collection instrument; and wherein: the generating of the indicator is based on the counted number of periodic movements within the depicted movement of the second portion of the collection instrument.
  • a fifth example provides a method according to any of the first through fourth examples, further comprising: performing shape recognition on at least a portion of the video that depicts the orifice of the person, the shape recognition recognizing a shape of the orifice; and wherein: the generating of the indicator is based on the recognized shape of the orifice.
  • a sixth example provides a method according to the fifth example, further comprising: detecting that the video depicts a deformation of the orifice from the recognized shape while the portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted deformation of the orifice while the portion of the collection instrument is in the orifice.
  • a seventh example provides a method according to any of the first through sixth examples, further comprising: detecting that the video depicts a deformation of a surface region of the person while the portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted deformation of the surface region of the person while the portion of the collection instrument is in the orifice.
  • An eighth example provides a method according to any of the first through seventh examples, further comprising: detecting that the video depicts a depth of insertion by the portion of the collection instrument into the orifice; and wherein: the generating of the indicator is based on the depicted depth of insertion by the portion of the collection instrument into the orifice.
  • a ninth example provides a method according to any of the first through eighth examples, further comprising: detecting that the video depicts a movement of at least a portion of a hand of the person from whom the biological sample is to be collected; and wherein: the generating of the indicator is based on the depicted movement of at least the portion of the hand of the person.
  • a tenth example provides a method according to any of the first through ninth examples, further comprising: performing facial recognition on at least a portion of the video that depicts the orifice of the person; and based on the facial recognition, causing a further presentation of a notification that the video depicts the person from whom the biological sample is to be collected.
  • An eleventh example provides a method according to any of the first through tenth examples, wherein: the collection instrument includes a container to receive fluid discharged from the orifice of the person; and the method further comprises: detecting that the video depicts an amount of the fluid discharged from the orifice of the person and received by the container of the collection instrument; and wherein: the generating of the indicator is based on the depicted amount of the fluid received by the container of the collection instrument.
  • a twelfth example provides a method according to the eleventh example, further comprising: detecting a reception duration during which the depicted amount of the fluid is received by the container; and wherein: the generating of the indicator is based on the reception duration during which the depicted amount of the fluid is received by the container.
  • a thirteenth example provides a method according to any of the first through twelfth examples, wherein: the generated indicator includes at least one of: a progress indicator that indicates a degree of progress toward the biological sample being fully collected, a first graphical representation of the portion of the collection instrument, a second graphical representation of the orifice, or a third graphical representation of a depth of insertion by the portion of the collection instrument into the orifice.
  • a fourteenth example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a detected duration (e.g., and later departing from the orifice of the person after the detected duration); performing a comparison of the detected duration to a threshold duration; generating, based on the comparison of the detected duration to the threshold duration, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
  • a fifteenth example provides a machine-readable medium according to the fourteenth example, wherein: the portion of the collection instrument is a first portion of the collection instrument; the operations further comprise: detecting that the video depicts a movement of a second portion of the collection instrument while the first portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted movement of the second portion of the collection instrument while the first portion of the collection instrument is in the orifice.
  • a sixteenth example provides a machine-readable medium according to the fourteenth example or the fifteenth example, wherein the operations further comprise: detecting that the video depicts a deformation of the orifice while the portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted deformation of the orifice while the portion of the collection instrument is in the orifice.
  • a seventeenth example provides a machine-readable medium according to the fourteenth example, wherein the operations further comprise: detecting that the video depicts a depth of insertion by the portion of the collection instrument into the orifice; and wherein: the generating of the indicator is based on the depicted depth of insertion by the portion of the collection instrument into the orifice.
  • An eighteenth example provides a system (e.g., a computer system or other system of one or more machines) comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a detected duration (e.g., and later departing from the orifice of the person after the detected duration); performing a comparison of the detected duration to a threshold duration; generating, based on the comparison of the detected duration to the threshold duration, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
  • a nineteenth example provides a system according to the eighteenth example, wherein the operations further comprise: performing facial recognition on at least a portion of the video that depicts the orifice of the person; and based on the facial recognition, causing a further presentation of a notification that the video depicts the person from whom the biological sample is to be collected.
  • a twentieth example provides a system according to the eighteenth example or the nineteenth example, wherein: the collection instrument includes a container to receive fluid discharged from the orifice of the person; and the operations further comprise: detecting that the video depicts an amount of the fluid discharged from the orifice of the person and received by the container of the collection instrument; and wherein: the generating of the indicator is based on the depicted amount of the fluid received by the container of the collection instrument.
  • a twenty-first example provides a system according to any of the eighteenth through twentieth examples, wherein: the portion of the collection instrument is a first portion of the collection instrument; the operations further comprise: detecting that the video depicts a movement of a second portion of the collection instrument while the first portion of the collection instrument is in the orifice; and counting a number of rotations within the depicted movement of the second portion of the collection instrument; and wherein: the generating of the indicator is based on the counted number of rotations within the depicted movement of the second portion of the collection instrument while the first portion of the collection instrument is in the orifice.
  • a twenty-second example provides a method comprising: accessing, by one or more processors of a machine, a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting, by one or more processors of the machine, that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a counted number of periodic movements (e.g., and later departing from the orifice of the person after the counted number of periodic movements); performing, by one or more processors of the machine, a comparison of the counted number of periodic movements to a threshold number of periodic movements; generating, by one or more processors of the machine and based on the comparison of the counted number of periodic movements to the threshold number of periodic movements, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing, by one or more processors of the machine, a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
  • a twenty-third example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a counted number of periodic movements (e.g., and later departing from the orifice of the person after the counted number of periodic movements); performing a comparison of the counted number of periodic movements to a threshold number of periodic movements; generating, based on the comparison of the counted number of periodic movements to the threshold number of periodic movements, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
  • a twenty-fourth example provides a system (e.g., a computer system or other system of one or more machines) comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a counted number of periodic movements (e.g., and later departing from the orifice of the person after the counted number of periodic movements); performing a comparison of the counted number of periodic movements to a threshold number of periodic movements; generating, based on the comparison of the counted number of periodic movements to the threshold number of periodic movements, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
  • a twenty-fifth example provides a carrier medium carrying machine-readable instructions for controlling a machine to carry out the operations (e.g., method operations) performed in any one of the previously described examples.

Abstract

A machine monitors collection of a sample from an orifice of a person from whom the sample is to be collected using a collection instrument, such as a swab. The machine accesses a video that depicts the orifice and then detects that the video depicts a portion of the collection instrument arriving at the orifice, for example, and remaining in or at the orifice for a detected duration. Continuing the example, the machine performs a comparison of the detected duration to a threshold duration to determine whether the detected duration transgresses the threshold duration. Based on this comparison, the machine generates an indicator of an extent to which the collection instrument collected the sample from the orifice. The machine may then cause a presentation of the generated indicator.

Description

MONITORING SAMPLE COLLECTION FROM AN ORIFICE
RELATED APPLICATION
[0000] This application claims the priority benefit of U.S. Provisional Patent Application No. 63/146,821, filed February 8, 2021 and titled “MONITORING SAMPLE COLLECTION FROM AN ORIFICE.”
TECHNICAL FIELD
[0001] The subject matter disclosed herein generally relates to the technical field of special-purpose machines that facilitate healthcare testing, including software-configured computerized variants of such special-purpose machines and improvements to such variants, and to the technologies by which such special-purpose machines become improved compared to other special-purpose machines that facilitate healthcare testing. Specifically, the present disclosure addresses systems and methods to facilitate monitoring of sample collection from an orifice.
BACKGROUND
[0002] For purposes of healthcare testing, collection of a biological sample to be tested is often performed by a front-line healthcare worker on a patient to be tested, for example by manipulating a swab or other collection instrument to obtain the biological sample from the patient. Generally, the patient and the front-line healthcare worker are effectively in contact with each other or otherwise in close enough proximity to risk transmission of pathogens from one to the other. Accordingly, self-service collection of biological samples by patients may provide a degree of protection to front-line healthcare workers and patients, as well as reduce the number of front-line healthcare workers involved. However, it may be helpful for such self-service collections by patients to be guided, to better collect sufficient amounts of their biological samples.
[0003] A device may be configured (e.g., by suitable software, such as an app) to capture a video (e.g., as a sequential series of images, which may be called “frames,” or data that is representative thereof) using a camera of the device. The device may thereafter process the video itself, communicate the captured video to another device or other machine via a network, or both.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
[0005] FIG. 1 is a network diagram illustrating a network environment suitable for monitoring collection of a sample from an orifice of a person (e.g., a patient), according to some example embodiments.
[0006] FIG. 2 is a block diagram illustrating components of a machine (e.g., a device, such as a smartphone) suitable for monitoring collection of a sample from an orifice of a person, according to some example embodiments.
[0007] FIG. 3 is a diagram illustrating a person performing a monitored self-service collection of a biological sample from an orifice of the same person, according to some example embodiments.
[0008] FIG. 4 is a diagram illustrating a graphical user interface (GUI) presented by a machine as part of monitoring collection of a sample from an orifice of a person, according to some example embodiments.
[0009] FIGS. 5 and 6 are flowcharts illustrating operations of a device in performing a method of monitoring sample collection from an orifice, according to some example embodiments.
[0010] FIG. 7 is a block diagram illustrating components of a machine, according to some example embodiments, able to read instructions from a machine-readable medium and perform any one or more of the methodologies discussed herein.
DETAILED DESCRIPTION
[0011] Example methods (e.g., algorithms) facilitate monitoring sample collection (e.g., collecting a biological sample, such as an amount of mucus, saliva, blood, or earwax) from an orifice (e.g., of a person, such as a patient), and example systems (e.g., special-purpose machines configured by special-purpose software) are configured to facilitate monitoring sample collection from an orifice. Examples merely typify possible variations. Unless explicitly stated otherwise, structures (e.g., structural components, such as modules) are optional and may be combined or subdivided, and operations (e.g., in a procedure, algorithm, or other function) may vary in sequence or be combined or subdivided. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of various example embodiments. It will be evident to one skilled in the art, however, that the present subject matter may be practiced without these specific details.
[0012] A machine (e.g., a patient’s device, such as a smartphone, or a healthcare provider’s device, such as a tablet or kiosk) may be specially configured (e.g., with suitable software) to monitor a person (e.g., a patient or other user of the machine) in performing (e.g., on themselves) a collection of a sample (e.g., a biological sample to be tested for healthcare purposes, such as diagnosis of disease). For clarity in description, most of the discussion herein describes a machine that monitors (e.g., with or without provision of interactive guidance) the person in performing the sample collection from an orifice (e.g., a nostril, a mouth, an ear, a puncture, or a rectum) of the same patient. However, in some example embodiments, the machine guides one person (e.g., a healthcare worker) in performing sample collection on another person (e.g., a patient).
[0013] According to the methods and systems discussed herein, the machine accesses (e.g., by capturing with a camera, receiving from a source, or reading from a memory) a video that depicts an orifice of a person (e.g., a patient) from whom a biological sample (e.g., a sample of mucus, saliva, blood, or earwax) is to be collected by a portion of a collection instrument (e.g., a cotton-coated tip region of a swab, or an open container for receiving discharged fluid). The machine then detects (e.g., via object recognition performed by an artificial intelligence (AI) machine-vision engine) that the video depicts the portion of the collection instrument arriving at (e.g., entering into) the orifice of the person and remaining in or at the orifice of the person for a detected duration (e.g., for at least the detected duration).
[0014] The machine then performs a comparison of the detected duration to a threshold duration to determine whether the detected duration transgresses (e.g., exceeds) the threshold duration. Based on this comparison, the machine then generates an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video. The indicator may form all or part of a GUI configured to present sample collection information (e.g., status, progress, or both) to the person. After the indicator is generated, the machine causes a presentation of the generated indicator (e.g., to the person, via a display screen of the machine). The generated indicator may be or include a visual indicator, an audio indicator, a haptic indicator, or any suitable combination thereof, and the caused presentation of the generated indicator may be or include display of the visual indicator, play of the audio indicator, initiation of the haptic indicator, or any suitable combination thereof. Additional details and options are described below.
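For illustration, a minimal sketch of this flow in Python follows, assuming a fixed frame rate; the per-frame detector is a hypothetical stand-in for the machine-vision engine, which this description does not specify.

```python
# Minimal sketch of the monitoring flow described above. tip_at_orifice()
# is a hypothetical stand-in for a trained machine-vision detector.
def tip_at_orifice(frame) -> bool:
    """Hypothetical: True if this frame depicts the swab tip in or at
    the orifice. A real system would call a trained model here."""
    raise NotImplementedError

def monitor(frames, fps: float, threshold_s: float, detect=tip_at_orifice):
    present_frames = sum(1 for frame in frames if detect(frame))
    detected_s = present_frames / fps              # detected duration
    progress = min(detected_s / threshold_s, 1.0)
    # Indicator of the extent to which the sample was collected.
    return {"progress": progress, "adequate": detected_s >= threshold_s}
```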
[0015] FIG. 1 is a network diagram illustrating a network environment 100 suitable for monitoring collection of a sample from an orifice of a patient, according to some example embodiments. The network environment 100 includes a machine 110 (e.g., a server machine), a database 115, and devices 130 and 150, all communicatively coupled to each other via a network 190. The machine 110, with or without the database 115, may form all or part of a cloud 118 (e.g., a geographically distributed set of multiple machines configured to function as a single server), which may form all or part of a network-based system 105 (e.g., a cloud-based server system configured to provide one or more network-based services to the devices 130 and 150). The machine 110 and the devices 130 and 150 may each be implemented in a special-purpose (e.g., specialized) computer system, in whole or in part, as described below with respect to FIG. 7.
[0016] Also shown in FIG. 1 are users 132 and 152. One or both of the users 132 and 152 may be a human user (e.g., a human being, also called a “person” herein), a machine user (e.g., a computer configured by a software program to interact with the device 130 or 150), or any suitable combination thereof (e.g., a human assisted by a machine or a machine supervised by a human). The user 132 is associated with the device 130 and may be a user of the device 130. For example, the device 130 may be a desktop computer, a vehicle computer, a home media system (e.g., a home theater system or other home entertainment system), a tablet computer, a navigational device, a portable media device, a smart phone, or a wearable device (e.g., a smart watch, smart glasses, smart clothing, or smart jewelry) belonging to the user 132. Likewise, the user 152 is associated with the device 150 and may be a user of the device 150. As an example, the device 150 may be a desktop computer, a vehicle computer, a home media system (e.g., a home theater system or other home entertainment system), a tablet computer, a navigational device, a portable media device, a smart phone, or a wearable device (e.g., a smart watch, smart glasses, smart clothing, or smart jewelry) belonging to the user 152.
[0017] Any of the systems or machines (e.g., databases and devices) shown in FIG. 1 may be, include, or otherwise be implemented in a special-purpose (e.g., specialized or otherwise non-conventional and non-generic) computer that has been modified to perform one or more of the functions described herein for that system or machine (e.g., configured or programmed by special-purpose software, such as one or more software modules of a special-purpose application, operating system, firmware, middleware, or other software program). For example, a special-purpose computer system able to implement any one or more of the methodologies described herein is discussed below with respect to FIG. 7, and such a special-purpose computer may accordingly be a means for performing any one or more of the methodologies discussed herein. Within the technical field of such special-purpose computers, a special-purpose computer that has been specially modified (e.g., configured by special-purpose software) by the structures discussed herein to perform the functions discussed herein is technically improved compared to other special-purpose computers that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein. Accordingly, a special-purpose machine configured according to the systems and methods discussed herein provides an improvement to the technology of similar special-purpose machines.
[0018] As used herein, a “database” is a data storage resource and may store data structured in any of various ways, for example, as a text file, a table, a spreadsheet, a relational database (e.g., an object-relational database), a triple store, a hierarchical data store, a document database, a graph database, key-value pairs, or any suitable combination thereof. Moreover, any two or more of the systems or machines illustrated in FIG. 1 may be combined into a single system or machine, and the functions described herein for any single system or machine may be subdivided among multiple systems or machines.
[0019] The network 190 may be any network that enables communication between or among systems, machines, databases, and devices (e.g., between the machine 110 and the device 130). Accordingly, the network 190 may be a wired network, a wireless network (e.g., a mobile or cellular network), or any suitable combination thereof. The network 190 may include one or more portions that constitute a private network, a public network (e.g., the Internet), or any suitable combination thereof. Accordingly, the network 190 may include one or more portions that incorporate a local area network (LAN), a wide area network (WAN), the Internet, a mobile telephone network (e.g., a cellular network), a wired telephone network (e.g., a plain old telephone service (POTS) network), a wireless data network (e.g., a WiFi network or WiMax network), or any suitable combination thereof. Any one or more portions of the network 190 may communicate information via a transmission medium. As used herein, “transmission medium” refers to any intangible (e.g., transitory) medium that is capable of communicating (e.g., transmitting) instructions for execution by a machine (e.g., by one or more processors of such a machine), and includes digital or analog communication signals or other intangible media to facilitate communication of such software.
[0020] FIG. 2 is a block diagram illustrating components of the device 130 (e.g., a smartphone or a tablet) suitable for monitoring sample collection from an orifice of a person (e.g., a patient), according to some example embodiments. The device 130 is shown as including a video accessor 210 (e.g., an access module or suitable code for accessing data, such as a video or data that encodes or otherwise represents a video), an object recognizer 220 (e.g., a recognition module or suitable code for recognizing objects, such as orifices, collection instruments, or faces of people), an indicator generator 230 (e.g., a generation module or suitable code for generating one or more indicators), a user interface 240 (e.g., a GUI, presentation module, or suitable code for presenting one or more indicators), and a camera 250 (e.g., a high-definition video camera), all configured to communicate with each other (e.g., via a bus, shared memory, or a switch). The video accessor 210 may be or include any suitable hardware, software, or combination thereof, and is configured to access video (e.g., from the camera 250 or via the network 190).
[0021] The object recognizer 220 may be or include any suitable hardware, software, or combination thereof, and is configured to perform object recognition (e.g., shape recognition, face recognition, or both). Accordingly, the object recognizer 220 may be or include one or more suitably trained AI modules (e.g., a learning machine trained to implement one or more machine-vision algorithms).
[0022] The indicator generator 230 may be or include any suitable hardware, software, or combination thereof, and is configured to generate an indicator of information, as described herein. For example, the indicator generator 230 may be configured to generate all or part of a GUI that presents the indicated information.
[0023] The user interface 240 may be or include any suitable hardware, software, or combination thereof, and is configured to present or cause presentation of the indicator generated by the indicator generator 230. For example, the indicator generated by the indicator generator 230 may be or include visual data, audio data, haptic data, or any suitable combination thereof, and thus the user interface 240 may be configured to cause such visual data, audio data, haptic data, or any suitable combination thereof, to be presented by the device 130 (e.g., to the user 132).
[0024] The camera 250 may be or include any suitable camera capable of generating a video with the characteristics described herein. As examples, the camera 250 may be or include a visible spectrum camera (e.g., with a charge coupled device (CCD)), a near-infrared (NIR) camera, a depth camera (e.g., laser-based), or any suitable combination thereof.
[0025] As shown in FIG. 2, one or more of the video accessor 210, the object recognizer 220, the indicator generator 230, and the user interface 240 may form all or part of an app 200 (e.g., a mobile app) that is stored (e.g., installed) on the device 130 (e.g., responsive to or otherwise as a result of data being received from the machine 110 or the database 115 via the network 190). Furthermore, one or more processors 299 (e.g., hardware processors, digital processors, or any suitable combination thereof) may be included (e.g., temporarily or permanently) in the app 200, the video accessor 210, the object recognizer 220, the indicator generator 230, the user interface 240, or any suitable combination thereof.
[0026] Any one or more of the components (e.g., modules) described herein may be implemented using hardware alone (e.g., one or more of the processors 299) or a combination of hardware and software. For example, any component described herein may physically include an arrangement of one or more of the processors 299 (e.g., a subset of or among the processors 299) configured to perform the operations described herein for that component. As another example, any component described herein may include software, hardware, or both, that configure an arrangement of one or more of the processors 299 to perform the operations described herein for that component. Accordingly, different components described herein may include and configure different arrangements of the processors 299 at different points in time or a single arrangement of the processors 299 at different points in time. Each component (e.g., module) described herein is an example of a means for performing the operations described herein for that component. Moreover, any two or more components described herein may be combined into a single component, and the functions described herein for a single component may be subdivided among multiple components. Furthermore, according to various example embodiments, components described herein as being implemented within a single system or machine (e.g., a single device) may be distributed across multiple systems or machines (e.g., multiple devices).
[0027] FIG. 3 is a diagram illustrating the user 132 collecting a sample (e.g., a biological sample) from an orifice of the user 132, while the device 130 monitors the collection of the sample (e.g., with provision of guidance to the user 132 regarding extent of progress in obtaining an adequate sample), according to some example embodiments. In the context of self-service sample collection, as shown in FIG. 3, one or more deficiencies in the process of sample collection (e.g., insufficient duration, insufficient insertion depth, or insufficient movements) may result in collection of an inadequate sample for purposes of accurate testing. In accordance with the systems and methods discussed herein, the device 130 of the user 132 may be configured to monitor a video of the user 132 in performing the collection of the sample. Feedback in the example form of one or more indicators may be provided by the device 130 to inform the user 132 regarding status of the sample collection, extent of progress in the sample collection, prompts, warnings, guidance, other information helpful toward collection of an adequate sample, or any suitable combination thereof.
[0028] As shown in FIG. 3, the user 132 is holding the device 130 while proceeding to perform, or at least attempt, a collection of a sample (e.g., a nasal mucus sample) from an orifice 310 (e.g., her left nostril), using a collection instrument 300 (e.g., a swab). The device 130 includes the camera 250 (e.g., a front-facing video camera) and is configured to monitor the user 132 in her attempt to collect the sample adequately (e.g., collect a minimum amount of mucus).
[0029] FIG. 4 is a diagram illustrating a GUI 400 presented by the device 130 as part of monitoring collection of the sample (e.g., the mucus sample) from the orifice (e.g., the left nostril) of the user 132, according to some example embodiments. The GUI 400 may include one or more indicators (e.g., generated and presented by the device 130), and the GUI 400 in FIG. 4 is shown as including various data. For example, the GUI 400 may include a progress indicator 410 that indicates a degree of progress toward the sample being fully collected (e.g., surpassing a minimum duration or a minimum count of periodic movements, such as swipes, either or both of which may be inferred as indicating collection of a minimum amount of the sample); a graphical representation 420 (e.g., a first graphical representation) of a portion (e.g., a first portion, such as a cotton-coated tip region) of the collection instrument 300 (e.g., a swab); a graphical representation 430 (e.g., a second graphical representation) of the orifice 310; a graphical representation 440 (e.g., a third graphical representation) of a depth of insertion by the portion (e.g., the tip) of the collection instrument 300 (e.g., the swab) into the orifice 310 (e.g., the nostril); or any suitable combination thereof. The degree of progress, the depth of insertion, or both, may be depicted by a video of the sample collection and detected or inferred (e.g., extrapolated) from the video (e.g., by the device 130, in executing the app 200).
[0030] As shown in FIG. 4, the GUI 400 may also include a count 450 of periodic movements (e.g., rotations) made by the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) or made by another portion (e.g., a flexible shaft) of the collection instrument 300 (e.g., the swab). Such movements may be depicted by a video of the sample collection and detected from the video (e.g., by the device 130, in executing the app 200).
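As an illustration, the GUI 400 payload could be structured in code as below; the field names are invented here to mirror the reference numerals above and are not defined by this description.

```python
# Illustrative container for the GUI 400 data; field names are invented
# to mirror the reference numerals above.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CollectionGui:
    progress: float                          # progress indicator 410 (0.0..1.0)
    tip_xy: Tuple[int, int]                  # graphical representation 420 (pixels)
    orifice_contour: List[Tuple[int, int]]   # graphical representation 430
    insertion_depth_mm: float                # graphical representation 440
    movement_count: int                      # count 450 of periodic movements
```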
[0031] FIGS. 5 and 6 are flowcharts illustrating operations of the device 130 (e.g., as configured by execution of the app 200) in performing a method 500 of monitoring sample collection from an orifice (e.g., orifice 310) of a person (e.g., the user 132), according to some example embodiments. Operations in the method 500 may be performed by the device 130, using components (e.g., modules) described above with respect to FIG. 2, using one or more processors (e.g., microprocessors or other hardware processors), or using any suitable combination thereof. As shown in FIG. 5, the method 500 includes operations 510, 520, 530, 540, and 550.
[0032] In operation 510, the video accessor 210 accesses a video (e.g., live or recorded video data that encodes or otherwise represents such a video) that depicts the orifice 310 from which a sample (e.g., a biological sample) is to be collected by a portion (e.g., a tip region) of the collection instrument 300 (e.g., a swab). The accessing of the video may be or include capturing the video (e.g., using the camera 250 of the device 130), receiving the video (e.g., from the camera 250 or via the network 190), reading the video (e.g., from a memory of the device 130 or from the database 115), or any suitable combination thereof. In some example embodiments, the video is self-shot in real time (e.g., with latency under 50 milliseconds) by the user 132 (e.g., by orienting the device 130 such that its camera 250 is aimed at the orifice 310).
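One possible realization of the video accessor, assuming OpenCV; the description leaves the capture/receive/read mechanism open, so this is an illustration only.

```python
# One possible video accessor, assuming OpenCV is available.
import cv2

def access_video(source=0):
    """Yield frames from a camera index (e.g., 0 for a front-facing
    camera), a file path, or a stream URL."""
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:                 # end of stream or capture failure
                break
            yield frame
    finally:
        cap.release()

# Usage: for frame in access_video("collection.mp4"): process(frame)
```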
[0033] In certain example embodiments, if no suitable video is currently available (e.g., live or recorded), the video accessor 210 guides the person (e.g., the user 132) through creating a suitable video, which is then accessed by the video accessor 210 as described above. For example, the video accessor 210 may prompt the person to position their face a certain way for video capture by the camera 250 of the device 130, prompt the person to adjust lighting conditions, prompt the person to begin an attempt to perform sample collection from the orifice 310, notify the person to restart the sample collection, or any suitable combination thereof.
[0034] In operation 520, the object recognizer 220 detects that the video accessed in operation 510 depicts the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) arriving at (e.g., entering into or making contact with edges of) the orifice 310 and remaining in or at the orifice for at least a detected duration (e.g., and later exiting the orifice of the person after the detected duration). In some example embodiments, the object recognizer 220 also detects that the video depicts the portion of the collection instrument departing from (e.g., exiting from or breaking contact with edges of) the orifice 310 after the detected duration. The detecting that the video depicts the portion of the collection instrument 300 may include identifying the portion of the collection instrument 300, recognizing the portion of the collection instrument 300, or both. In performing operation 520, the object recognizer 220 may implement or otherwise use one or more of various image processing techniques (e.g., segmentation, edge detection, or both), computer vision techniques (e.g., using a trained Al module), or any suitable combination thereof.
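A sketch of turning per-frame detections into a detected duration follows, with simple debouncing so that a few missed detections are not treated as a departure; the per-frame boolean detector is assumed to exist upstream, as above.

```python
# Sketch: derive the detected duration from per-frame detections, with
# debouncing so brief detector dropouts are not treated as a departure.
def detected_duration_s(per_frame_present, fps: float,
                        max_gap_frames: int = 3) -> float:
    """per_frame_present: iterable of booleans from an assumed detector.
    Returns the longest arrival-to-departure visit, in seconds."""
    longest = run = gap = 0
    for present in per_frame_present:
        if present:
            run += gap + 1            # bridge a short dropout
            gap = 0
        elif run:
            gap += 1
            if gap > max_gap_frames:  # genuine departure
                longest = max(longest, run)
                run = gap = 0
    return max(longest, run) / fps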
[0035] In operation 530, the indicator generator 230 performs a comparison of the detected duration to a threshold duration. In some example embodiments, the threshold duration is a minimum duration, while in alternative example embodiments, the threshold duration is a maximum duration.
[0036] In operation 540, the indicator generator 230 generates one or more indicators (e.g., as described above with respect to FIG. 4). For example, the indicator generator 230 may generate the progress indicator 410, which may indicate the extent to which the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) collected the sample (e.g., probably or actually) from the orifice 310 depicted by the video. The performance of operation 540 may be based on the comparison of the detected duration to the threshold duration in operation 530, and thus, the generation of any one or more indicators by the indicator generator 230 may be based on that comparison of the detected duration to the threshold duration.
[0037] In operation 550, the user interface 240 causes a presentation of the indicator (e.g., the progress indicator 410) generated in operation 540 (e.g., along with one or more other indicators, which may also be generated in operation 540). For example, the user interface 240 may present or otherwise cause presentation of the progress indicator 410, which may indicate the extent to which the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) has thus far collected the sample (e.g., probably or actually) from the orifice 310 (e.g., of the user 132). The caused presentation of the indicator (e.g., the progress indicator 410) consequently may be exhibited (e.g., displayed or otherwise presented) by the device 130.
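A sketch of the comparison and presentation steps follows, covering both threshold senses noted in operation 530 (a minimum to reach or a maximum not to exceed); the print call is a stand-in for whatever visual, audio, or haptic output the user interface 240 provides.

```python
# Sketch of operations 530-550; print() stands in for the visual, audio,
# or haptic presentation that the user interface 240 would provide.
def generate_and_present(detected_s: float, threshold_s: float,
                         threshold_is_minimum: bool = True) -> dict:
    if threshold_is_minimum:
        fraction = min(detected_s / threshold_s, 1.0)
        message = ("Sample adequately collected" if fraction >= 1.0
                   else f"Keep going: {fraction:.0%} of target duration")
    else:
        fraction = detected_s / threshold_s
        message = ("Within the allowed duration" if fraction <= 1.0
                   else "Stop: maximum duration exceeded")
    indicator = {"fraction": fraction, "message": message}
    print(indicator["message"])
    return indicator
```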
[0038] As shown in FIG. 6, in addition to any one or more of the operations previously described, the method 500 may include one or more of operations 610, 612, 614, 620, 622, 630, 640, 650, 660, 670, and 672. In operation 610, the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) is a first portion of the collection instrument 300, and the object recognizer 220 detects that the video depicts a movement of a second portion (e.g., the shaft) of the collection instrument 300 (e.g., the swab) while the first portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310. In some example embodiments, the detected movement of the second portion (e.g., the shaft) may be or include repetitions of one or more periodic movements of the second portion. In example embodiments that include operation 610, the generating of the indicator in operation 540 may be based on the depicted movement of the second portion (e.g., the shaft) of the collection instrument 300 while the first portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310.
[0039] One or more of operations 612 and 614 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 610. In alternative example embodiments, one or more of operations 612 and 614 are performed as separate operations (e.g., between performance of operation 510 and performance of operation 530) with or without performance of operation 610.
[0040] In operation 612, the object recognizer 220 detects a movement duration during which the depicted movement (e.g., as detected in operation 610) of the second portion (e.g., the shaft) of the collection instrument 300 (e.g., the swab) occurs. In example embodiments that include operation 612, the generating of the indicator in operation 540 may be based on the movement duration during which the depicted movement of the second portion (e.g., the shaft) of the collection instrument 300 occurs.
[0041] In operation 614, the object recognizer 220 counts a number of periodic movements (e.g., rotations, oscillations, swipes, flexes, spitting motions, or other repeated strokes) within the depicted movement of the second portion (e.g., the shaft) of the collection instrument 300 (e.g., the swab). For example, each instance of a periodic movement may be detected and counted based on a distance travelled (e.g., a change in location) by the second portion (e.g., the shaft), a change in an orientation of the second portion, or both. In example embodiments that include operation 614, the generating of the indicator in operation 540 may be based on the counted number of periodic movements within the depicted movement of the second portion of the collection instrument 300. For example, the generating of the indicator in operation 540 may be based on a comparison (e.g., performed by the object recognizer 220) of the counted number of periodic movements to a threshold number (e.g., a minimum number or a maximum number) of periodic movements.
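As an illustration of the counting in operation 614, the following sketch counts periodic movements from a per-frame orientation signal (e.g., the shaft angle, assumed to come from upstream tracking); two direction reversals are treated as roughly one back-and-forth stroke, and the swing threshold suppresses tracking noise.

```python
# Sketch of operation 614: count periodic movements from a list of
# per-frame shaft angles (degrees), assumed from upstream tracking.
def count_periodic_movements(angles, min_swing_deg: float = 15.0) -> int:
    if not angles:
        return 0
    reversals, direction, extreme = 0, 0, angles[0]
    for a in angles[1:]:
        step = a - extreme
        if direction >= 0 and step <= -min_swing_deg:
            reversals += 1                 # swing reversed downward
            direction, extreme = -1, a
        elif direction <= 0 and step >= min_swing_deg:
            reversals += 1                 # swing reversed upward
            direction, extreme = 1, a
        elif (direction >= 0 and a > extreme) or (direction <= 0 and a < extreme):
            extreme = a                    # track the current extreme
    return reversals // 2                  # two reversals ~ one stroke
```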
[0042] According to some example embodiments of the method 500, the number of periodic movements is used instead of the duration that the first portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) is in the orifice 310. In such example embodiments, performance of operation 520 may substitute a counting of the number of periodic movements (e.g., as described for operation 614) in place of the detecting of the detected duration (e.g., as described for operation 520), and performance of operation 530 may accordingly compare the counted number of periodic movements to a threshold number of periodic movements (e.g., as described for operation 614), instead of comparing the detected duration to a threshold duration (e.g., as described for operation 520).
[0043] In operation 620, the object recognizer 220 performs shape recognition on at least a portion of the video accessed in operation 510. Accordingly, the shape recognition performed by the object recognizer 220 may recognize a shape of the orifice 310. For example, if the orifice 310 is the opening (e.g., aperture) of a left nostril, the object recognizer 220 may recognize the shape of the opening of the left nostril. As another example, if the orifice 310 is the mouth of the person (e.g., with pursed lips for discharging a saliva sample), the object recognizer 220 may recognize the shape (e.g., pursed) of the mouth. In example embodiments that include operation 620, the generating of the indicator in operation 540 may be based on the recognized shape of the orifice 310.
[0044] Operation 622 may be performed as part of operation 620. In alternative example embodiments, operation 622 is performed as a separate operation (e.g., between performance of operation 510 and performance of operation 530) with or without performance of operation 620.
[0045] In operation 622, the object recognizer 220 detects that the video accessed in operation 510 depicts a deformation of the orifice 310 (e.g., deformed compared to the shape recognized in operation 620) while the portion (e.g., the first portion, such as the tip region) of the collection instrument 300 (e.g., the swab) is in the orifice 310. In example embodiments that include operation 622, the generating of the indicator in operation 540 may be based on the detected deformation of the orifice 310 while the portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310.
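One way to realize operations 620 and 622 is OpenCV's Hu-moment shape matching, sketched below; contour extraction (segmenting the nostril or mouth) is assumed to be done upstream, and since Hu moments are scale- and rotation-invariant, this scores shape change rather than size or pose change.

```python
# Sketch of operations 620/622 using OpenCV Hu-moment shape matching.
# Contour extraction (segmenting the orifice) is assumed upstream.
import cv2

def deformation_score(reference_contour, current_contour) -> float:
    """0.0 for identical shapes; larger means more deformation."""
    return cv2.matchShapes(reference_contour, current_contour,
                           cv2.CONTOURS_MATCH_I1, 0.0)

def is_deformed(reference_contour, current_contour, tol: float = 0.1) -> bool:
    return deformation_score(reference_contour, current_contour) > tol
```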
[0046] In operation 630, the object recognizer 220 detects that the video accessed in operation 510 depicts a deformation of a surface region of the person (e.g., the user 132) while the portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310 of the person. For example, if a tip region of a swab is inserted into a left nostril of the person to collect the sample, the outer surface of the left nostril may exhibit detectable deformation from pressure applied by the swab. As another example, if a tip region of a swab is inserted into the mouth of the person to collect the sample from the inner surface of the person’s cheek, the outer surface of the cheek may exhibit detectable deformation from pressure applied by the swab. In example embodiments that include operation 630, the generating of the indicator in operation 540 may be based on the depicted deformation of the surface region while the portion (e.g., the tip region) of the collection instrument 300 is in the orifice 310.
[0047] In operation 640, the object recognizer 220 detects that the video accessed in operation 510 depicts a depth of insertion by the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab) into the orifice 310. The depth of insertion may be detected based on shape recognition of the portion (e.g., the tip region) of the collection instrument 300 (e.g., the swab), including a detected speed of the portion, a detected direction of motion by the portion, or both. The depth of insertion may be detected based on shape recognition of another portion (e.g., the shaft) of the collection instrument 300, including a detected speed of the other portion, a detected direction of motion by the other portion, or both. Furthermore, the depth of insertion may be detected based on shape recognition of a fiducial mark (e.g., a logo or a target symbol) on the collection instrument 300 (e.g., the swab).
[0048] The depth of insertion may be detected based on shape recognition of all or part of a hand (e.g., one or more fingers) of the person (e.g., the user 132), including a detected speed of all or part of the hand, a detected direction of motion by all or part of the hand, or both. In some example embodiments, a tip of a finger (e.g., an index fingertip) may be recognized and treated as a fiducial mark. Furthermore, the depth of insertion may be detected based on deformation of the orifice 310 (e.g., as described above with respect to operation 622), deformation of a surface region of the person (e.g., as described above with respect to operation 630), or both. In example embodiments that include operation 640, the generating of the indicator in operation 540 may be based on the depicted depth of insertion by the portion (e.g., the tip region) of the collection instrument 300 into the orifice 310.
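A sketch of one fiducial-based depth estimate follows: with a known tip-to-fiducial distance on the swab, the inserted length is that distance minus the visible fiducial-to-orifice length. The two image points and the pixel-to-millimeter scale are assumed to come from upstream detection and calibration.

```python
# Sketch of a fiducial-based insertion-depth estimate (operation 640).
import math

def insertion_depth_mm(fiducial_xy, orifice_xy, mm_per_px: float,
                       tip_to_fiducial_mm: float) -> float:
    # Visible length along the shaft, from the fiducial mark to the orifice.
    visible_mm = math.dist(fiducial_xy, orifice_xy) * mm_per_px
    # Whatever is not visible between tip and fiducial is inserted.
    return max(tip_to_fiducial_mm - visible_mm, 0.0)
```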
[0049] In operation 650, the object recognizer 220 detects that the video depicts a movement of at least a portion of a hand (e.g., one or more fingers) of the person (e.g., the user 132) from whom the biological sample is to be collected. As noted above, the object recognizer 220 may detect such movement by detecting a speed of all or part of the hand, a direction of motion by all or part of the hand, or both. In addition, or alternatively, the object recognizer 220 may detect such movement by detecting changes in a shape (e.g., pose) of all or part of the hand. In some example embodiments, the detected movement of all or part of the hand may be or include repetitions of one or more periodic movements of all or part of the hand. In example embodiments that include operation 650, the generating of the indicator in operation 540 may be based on the depicted movement of at least the portion of the hand of the person.
[0050] In operation 660, the object recognizer 220 performs facial recognition on at least a portion of the video that depicts the orifice 310 of the person (e.g., the user 132). Accordingly, the facial recognition performed by the object recognizer 220 may recognize the face of the person and thus validate an identity of the person. For example, the recognized face of the person (e.g., the user 132) may be compared (e.g., by the indicator generator 230) to a reference image of the person (e.g., a driver’s license photo of the user 132), and a validation of the person (e.g., by the indicator generator 230) may be performed based on such a comparison. In example embodiments that include operation 660, the method 500 may include an operation or sub-operation in which, based on the facial recognition, the indicator generator 230, the user interface 240, or both, cause a presentation (e.g., a further presentation) of a notification that the video does indeed depict the proper person (e.g., the user 132) from whom the sample is to be collected. For example, such a sub-operation may be performed (e.g., by the indicator generator 230) as part of operation 540.
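A sketch of the identity check in operation 660 follows. Face embeddings are assumed to come from any off-the-shelf face-recognition model, which this description does not specify; only the comparison against a reference image and the resulting notification are shown, and the similarity threshold is illustrative.

```python
# Sketch of operation 660: compare a face embedding extracted from the
# video against one from a reference image (e.g., a license photo).
import numpy as np

def same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(a @ b) >= threshold          # cosine similarity

def identity_notification(video_emb, reference_emb) -> str:
    return ("Video depicts the expected person"
            if same_person(video_emb, reference_emb)
            else "Warning: face does not match the reference image")
```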
[0051] In some example embodiments, the collection instrument 300 is or includes a container configured to receive fluid (e.g., an effluent) that is discharged from the orifice 310 of the person (e.g., the user 132). For example, the collection instrument 300 may be or include a vial for collecting an amount of saliva as the sample, and the orifice 310 may be the mouth of the person. As another example, the collection instrument 300 may be or include a blood capillary action tube (e.g., a blood pipette) for collecting an amount of blood as the sample, and the orifice 310 may be a puncture in the skin of the person. In example embodiments where the collection instrument 300 is or includes such a container, one or both of operations 670 and 672 may be performed as part of the method 500. Operation 672 may be performed as part of operation 670. In alternative example embodiments, operation 672 is performed as a separate operation (e.g., between performance of operation 510 and performance of operation 530) with or without performance of operation 670.
[0052] In operation 670, the object recognizer 220 detects that the video accessed in operation 510 depicts an amount of fluid (e.g., saliva or blood) discharged from the orifice 310 of the person and received by the container (e.g., a vial or a pipette) of the collection instrument 300. For example, the amount of the fluid may be detected by performing shape recognition on a meniscus of the fluid in the container and calculating or estimating the amount of the fluid based on the location of the meniscus, the orientation of the meniscus, or both, relative to the container. In example embodiments that include operation 670, the generating of the indicator in operation 540 may be based on the depicted amount of the fluid received by the container (e.g., the vial or the pipette) of the collection instrument 300.
[0053] In operation 672, the object recognizer 220 detects a reception duration during which the depicted amount of the fluid (e.g., saliva or blood) is received by the container (e.g., the vial or the pipette). In example embodiments that include operation 672, the generating of the indicator in operation 540 may be based on the reception duration during which the depicted amount of the fluid (e.g., saliva or blood) is received by the container (e.g., the vial or the pipette).
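By way of illustration only, a reception duration might be derived from per-frame fill levels such as those produced by the meniscus-based estimate above; the frame rate and the minimum-rise threshold below are assumptions for this sketch.

```python
# Illustrative sketch only: deriving a reception duration from per-frame fill
# fractions. The minimum per-frame rise (min_delta) filters out detection noise.
def reception_duration_s(fill_by_frame: list[float], fps: float,
                         min_delta: float = 0.005) -> float:
    """Seconds between the first and last frame in which the fill level rose."""
    rising = [i for i in range(1, len(fill_by_frame))
              if fill_by_frame[i] - fill_by_frame[i - 1] > min_delta]
    if not rising:
        return 0.0
    return (rising[-1] - rising[0]) / fps

# Example: the fill level rises steadily during the middle second of a
# 3-second clip sampled at 10 fps -> a reception duration of ~0.9 s.
levels = [0.0] * 10 + [i / 100 for i in range(1, 11)] + [0.1] * 10
print(reception_duration_s(levels, fps=10.0))
```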
[0054] According to various example embodiments, one or more of the methodologies described herein may facilitate monitoring of sample collection from an orifice of a person. Moreover, one or more of the methodologies described herein may facilitate guiding (e.g., notifying, instructing, reminding, warning, or any suitable combination thereof) an attempt at self-service sample collection performed by the person. Hence, compared to the capabilities of pre-existing systems and methods, one or more of the methodologies described herein may facilitate increased accuracy or precision in sample collection, which may result in increased accuracy or precision in healthcare tests based on the collected samples, as well as reductions in spoilage or other waste of test kits (e.g., self-service sample collection kits) caused by improper performance of the sample collection.
[0055] When these effects are considered in aggregate, one or more of the methodologies described herein may obviate a need for certain efforts or resources that otherwise would be involved in the monitoring of sample collection from various orifices of various persons. Efforts expended by a user in administering collections of samples from other persons or in guiding other persons in performing self-service collections of samples may be reduced by use of (e.g., reliance upon) a special-purpose machine that implements one or more of the methodologies described herein. Computing resources used by one or more systems or machines (e.g., within the network environment 100) may similarly be reduced (e.g., compared to systems or machines that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein). Examples of such computing resources include processor cycles, network traffic, computational capacity, main memory usage, graphics rendering capacity, graphics memory usage, data storage capacity, power consumption, and cooling capacity.
[0056] FIG. 7 is a block diagram illustrating components of a machine 700, according to some example embodiments, able to read instructions 724 from a machine-readable medium 722 (e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and perform any one or more of the methodologies discussed herein, in whole or in part. Specifically, FIG. 7 shows the machine 700 in the example form of a computer system (e.g., a computer) within which the instructions 724 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 700 to perform any one or more of the methodologies discussed herein may be executed, in whole or in part.
[0057] In alternative embodiments, the machine 700 operates as a standalone device or may be communicatively coupled (e.g., networked) to other machines. In a networked deployment, the machine 700 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment. The machine 700 may be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a cellular telephone, a smart phone, a set-top box (STB), a personal digital assistant (PDA), a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 724, sequentially or otherwise, that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute the instructions 724 to perform all or part of any one or more of the methodologies discussed herein.
[0058] The machine 700 includes a processor 702 (e.g., one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any suitable combination thereof), a main memory 704, and a static memory 706, which are configured to communicate with each other via a bus 708. The processor 702 contains solid-state digital microcircuits (e.g., electronic, optical, or both) that are configurable, temporarily or permanently, by some or all of the instructions 724 such that the processor 702 is configurable to perform any one or more of the methodologies described herein, in whole or in part. For example, a set of one or more microcircuits of the processor 702 may be configurable to execute one or more modules (e.g., software modules) described herein. In some example embodiments, the processor 702 is a multicore CPU (e.g., a dual-core CPU, a quad-core CPU, an 8-core CPU, or a 128-core CPU) within which each of multiple cores behaves as a separate processor that is able to perform any one or more of the methodologies discussed herein, in whole or in part. Although the beneficial effects described herein may be provided by the machine 700 with at least the processor 702, these same beneficial effects may be provided by a different kind of machine that contains no processors (e.g., a purely mechanical system, a purely hydraulic system, or a hybrid mechanical-hydraulic system), if such a processor-less machine is configured to perform one or more of the methodologies described herein.
[0059] The machine 700 may further include a graphics display 710 (e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, a cathode ray tube (CRT), or any other display capable of displaying graphics or video). The machine 700 may also include an alphanumeric input device 712 (e.g., a keyboard or keypad), a pointer input device 714 (e.g., a mouse, a touchpad, a touchscreen, a trackball, a joystick, a stylus, a motion sensor, an eye tracking device, a data glove, or other pointing instrument), a data storage 716, an audio generation device 718 (e.g., a sound card, an amplifier, a speaker, a headphone jack, or any suitable combination thereof), and a network interface device 720.
[0060] The data storage 716 (e.g., a data storage device) includes the machine-readable medium 722 (e.g., a tangible and non-transitory machine-readable storage medium) on which are stored the instructions 724 embodying any one or more of the methodologies or functions described herein. The instructions 724 may also reside, completely or at least partially, within the main memory 704, within the static memory 706, within the processor 702 (e.g., within the processor’s cache memory), or any suitable combination thereof, before or during execution thereof by the machine 700. Accordingly, the main memory 704, the static memory 706, and the processor 702 may be considered machine-readable media (e.g., tangible and non-transitory machine-readable media). The instructions 724 may be transmitted or received over the network 190 via the network interface device 720. For example, the network interface device 720 may communicate the instructions 724 using any one or more transfer protocols (e.g., hypertext transfer protocol (HTTP)).
[0061] In some example embodiments, the machine 700 may be a portable computing device (e.g., a smart phone, a tablet computer, or a wearable device) and may have one or more additional input components 730 (e.g., sensors or gauges). Examples of such input components 730 include an image input component (e.g., one or more cameras), an audio input component (e.g., one or more microphones), a direction input component (e.g., a compass), a location input component (e.g., a global positioning system (GPS) receiver), an orientation component (e.g., a gyroscope), a motion detection component (e.g., one or more accelerometers), an altitude detection component (e.g., an altimeter), a temperature input component (e.g., a thermometer), and a gas detection component (e.g., a gas sensor). Input data gathered by any one or more of these input components 730 may be accessible and available for use by any of the modules described herein (e.g., with suitable privacy notifications and protections, such as opt-in consent or opt-out consent, implemented in accordance with user preference, applicable regulations, or any suitable combination thereof).
[0062] As used herein, the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 722 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of carrying (e.g., storing or communicating) the instructions 724 for execution by the machine 700, such that the instructions 724, when executed by one or more processors of the machine 700 (e.g., processor 702), cause the machine 700 to perform any one or more of the methodologies described herein, in whole or in part. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as cloud-based storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more tangible and non-transitory data repositories (e.g., data volumes) in the example form of a solid-state memory chip, an optical disc, a magnetic disc, or any suitable combination thereof.
[0063] A “non-transitory” machine-readable medium, as used herein, specifically excludes propagating signals per se. According to various example embodiments, the instructions 724 for execution by the machine 700 can be communicated via a carrier medium (e.g., a machine-readable carrier medium). Examples of such a carrier medium include a non-transient carrier medium (e.g., a non-transitory machine-readable storage medium, such as a solid-state memory that is physically movable from one place to another place) and a transient carrier medium (e.g., a carrier wave or other propagating signal that communicates the instructions 724).
[0064] Certain example embodiments are described herein as including modules. Modules may constitute software modules (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium), hardware modules, or any suitable combination thereof. A “hardware module” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems or one or more hardware modules thereof may be configured by software (e.g., an application or portion thereof) as a hardware module that operates to perform operations described herein for that module.
[0065] In some example embodiments, a hardware module may be implemented mechanically, electronically, hydraulically, or any suitable combination thereof. For example, a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware module may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC. A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. As an example, a hardware module may include software encompassed within a CPU or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, hydraulically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
[0066] Accordingly, the phrase “hardware module” should be understood to encompass a tangible entity that may be physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Furthermore, as used herein, the phrase “hardware-implemented module” refers to a hardware module. Considering example embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module includes a CPU configured by software to become a special-purpose processor, the CPU may be configured as respectively different special-purpose processors (e.g., each included in a different hardware module) at different times. Software (e.g., a software module) may accordingly configure one or more processors, for example, to become or otherwise constitute a particular hardware module at one instance of time and to become or otherwise constitute a different hardware module at a different instance of time.
[0067] Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory (e.g., a memory device) to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information from a computing resource).
[0068] The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module in which the hardware includes one or more processors. Accordingly, the operations described herein may be at least partially processor-implemented, hardware-implemented, or both, since a processor is an example of hardware, and at least some operations within any one or more of the methods discussed herein may be performed by one or more processor-implemented modules, hardware-implemented modules, or any suitable combination thereof.
[0069] Moreover, such one or more processors may perform operations in a “cloud computing” environment or as a service (e.g., within a “software as a service” (SaaS) implementation). For example, at least some operations within any one or more of the methods discussed herein may be performed by a group of computers (e.g., as examples of machines that include processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)). The performance of certain operations may be distributed among the one or more processors, whether residing only within a single machine or deployed across a number of machines. In some example embodiments, the one or more processors or hardware modules (e.g., processor-implemented modules) may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or hardware modules may be distributed across a number of geographic locations.
[0070] Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and their functionality presented as separate components and functions in example configurations may be implemented as a combined structure or component with combined functions. Similarly, structures and functionality presented as a single component may be implemented as separate components and functions. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
[0071] Some portions of the subject matter discussed herein may be presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a memory (e.g., a computer memory or other machine memory). Such algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities.
[0072] Unless specifically stated otherwise, discussions herein using words such as “accessing,” “processing,” “detecting,” “computing,” “calculating,” “determining,” “generating,” “presenting,” “displaying,” or the like refer to actions or processes performable by a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or any suitable combination thereof), registers, or other machine components that receive, store, transmit, or display information. Furthermore, unless specifically stated otherwise, the terms “a” or “an” are herein used, as is common in patent documents, to include one or more than one instance. Finally, as used herein, the conjunction “or” refers to a nonexclusive “or,” unless specifically stated otherwise.
[0073] The following enumerated descriptions describe various examples of methods, machine-readable media, and systems (e.g., machines, devices, or other apparatus) discussed herein.
[0074] A first example provides a method comprising: accessing, by one or more processors of a machine, a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting, by one or more processors of the machine, that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a detected duration (e.g., and later departing from the orifice of the person after the detected duration); performing, by one or more processors of the machine, a comparison of the detected duration to a threshold duration; generating, by one or more processors of the machine and based on the comparison of the detected duration to the threshold duration, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing, by one or more processors of the machine, a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
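By way of illustration only, the sketch below shows how a detected dwell duration might be converted into the progress-style indicator described in the first example; the per-frame detector is assumed (and not shown), and all names are hypothetical.

```python
# Illustrative sketch only: turning a detected dwell duration into a progress
# value by comparing it against a threshold duration. The per-frame detector
# (instrument_at_orifice) is assumed and is not part of this sketch.
from typing import Callable, Iterable

def dwell_progress(frames: Iterable, fps: float, threshold_s: float,
                   instrument_at_orifice: Callable[[object], bool]) -> float:
    """Return progress in [0, 1]: detected dwell time / threshold duration."""
    dwell_frames = sum(1 for frame in frames if instrument_at_orifice(frame))
    detected_s = dwell_frames / fps
    return min(detected_s / threshold_s, 1.0)

# Example with a stand-in detector: 90 positive frames at 30 fps (3 s of dwell)
# against a 10 s target yields 30% progress.
progress = dwell_progress(range(300), fps=30.0, threshold_s=10.0,
                          instrument_at_orifice=lambda f: f < 90)
print(f"{progress:.0%} of required swab time")
```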
[0075] A second example provides a method according to the first example, wherein: the portion of the collection instrument is a first portion of the collection instrument; the method further comprises: detecting that the video depicts a movement of a second portion of the collection instrument while the first portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted movement of the second portion of the collection instrument while the first portion of the collection instrument is in the orifice.
[0076] A third example provides a method according to the second example, further comprising: detecting a movement duration during which the depicted movement of the second portion of the collection instrument occurs; and wherein: the generating of the indicator is based on the movement duration during which the depicted movement of the second portion of the collection instrument occurs.
[0077] A fourth example provides a method according to the second example or the third example, further comprising: counting a number of periodic movements within the depicted movement of the second portion of the collection instrument; and wherein: the generating of the indicator is based on the counted number of periodic movements within the depicted movement of the second portion of the collection instrument.
[0078] A fifth example provides a method according to any of the first through fourth examples, further comprising: performing shape recognition on at least a portion of the video that depicts the orifice of the person, the shape recognition recognizing a shape of the orifice; and wherein: the generating of the indicator is based on the recognized shape of the orifice.
[0079] A sixth example provides a method according to the fifth example, further comprising: detecting that the video depicts a deformation of the orifice from the recognized shape while the portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted deformation of the orifice while the portion of the collection instrument is in the orifice.
[0080] A seventh example provides a method according to any of the first through sixth examples, further comprising: detecting that the video depicts a deformation of a surface region of the person while the portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted deformation of the surface region of the person while the portion of the collection instrument is in the orifice.
[0081] An eighth example provides a method according to any of the first through seventh examples, further comprising: detecting that the video depicts a depth of insertion by the portion of the collection instrument into the orifice; and wherein: the generating of the indicator is based on the depicted depth of insertion by the portion of the collection instrument into the orifice.
[0082] A ninth example provides a method according to any of the first through eighth examples, further comprising: detecting that the video depicts a movement of at least a portion of a hand of the person from whom the biological sample is to be collected; and wherein: the generating of the indicator is based on the depicted movement of at least the portion of the hand of the person.
[0083] A tenth example provides a method according to any of the first through ninth examples, further comprising: performing facial recognition on at least a portion of the video that depicts the orifice of the person; and based on the facial recognition, causing a further presentation of a notification that the video depicts the person from whom the biological sample is to be collected.
[0084] An eleventh example provides a method according to any of the first through tenth examples, wherein: the collection instrument includes a container to receive fluid discharged from the orifice of the person; and the method further comprises: detecting that the video depicts an amount of the fluid discharged from the orifice of the person and received by the container of the collection instrument; and wherein: the generating of the indicator is based on the depicted amount of the fluid received by the container of the collection instrument.
[0085] A twelfth example provides a method according to the eleventh example, further comprising: detecting a reception duration during which the depicted amount of the fluid is received by the container; and wherein: the generating of the indicator is based on the reception duration during which the depicted amount of the fluid is received by the container.
[0086] A thirteenth example provides a method according to any of the first through twelfth examples, wherein: the generated indicator includes at least one of: a progress indicator that indicates a degree of progress toward the biological sample being fully collected, a first graphical representation of the portion of the collection instrument, a second graphical representation of the orifice, or a third graphical representation of a depth of insertion by the portion of the collection instrument into the orifice.
[0087] A fourteenth example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a detected duration (e.g., and later departing from the orifice of the person after the detected duration); performing a comparison of the detected duration to a threshold duration; generating, based on the comparison of the detected duration to the threshold duration, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
[0088] A fifteenth example provides a machine-readable medium according to the fourteenth example, wherein: the portion of the collection instrument is a first portion of the collection instrument; the operations further comprise: detecting that the video depicts a movement of a second portion of the collection instrument while the first portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted movement of the second portion of the collection instrument while the first portion of the collection instrument is in the orifice.
[0089] A sixteenth example provides a machine-readable medium according to the fourteenth example or the fifteenth example, wherein the operations further comprise: detecting that the video depicts a deformation of the orifice while the portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted deformation of the orifice while the portion of the collection instrument is in the orifice.
[0090] A seventeenth example provides a machine-readable medium according to the fourteenth example, wherein the operations further comprise: detecting that the video depicts a depth of insertion by the portion of the collection instrument into the orifice; and wherein: the generating of the indicator is based on the depicted depth of insertion by the portion of the collection instrument into the orifice.
[0091] An eighteenth example provides a system (e.g., a computer system or other system of one or more machines) comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a detected duration (e.g., and later departing from the orifice of the person after the detected duration); performing a comparison of the detected duration to a threshold duration; generating, based on the comparison of the detected duration to the threshold duration, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
[0092] A nineteenth example provides a system according to the eighteenth example, wherein the operations further comprise: performing facial recognition on at least a portion of the video that depicts the orifice of the person; and based on the facial recognition, causing a further presentation of a notification that the video depicts the person from whom the biological sample is to be collected.
[0093] A twentieth example provides a system according to the eighteenth example or the nineteenth example, wherein: the collection instrument includes a container to receive fluid discharged from the orifice of the person; and the operations further comprise: detecting that the video depicts an amount of the fluid discharged from the orifice of the person and received by the container of the collection instrument; and wherein: the generating of the indicator is based on the depicted amount of the fluid received by the container of the collection instrument.
[0094] A twenty-first example provides a system according to any of the eighteenth through twentieth examples, wherein: the portion of the collection instrument is a first portion of the collection instrument; the operations further comprise: detecting that the video depicts a movement of a second portion of the collection instrument while the first portion of the collection instrument is in the orifice; and counting a number of rotations within the depicted movement of the second portion of the collection instrument; and wherein: the generating of the indicator is based on the counted number of rotations within the depicted movement of the second portion of the collection instrument while the first portion of the collection instrument is in the orifice.
[0095] A twenty-second example provides a method comprising: accessing, by one or more processors of a machine, a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting, by one or more processors of the machine, that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a counted number of periodic movements (e.g., and later departing from the orifice of the person after the counted number of periodic movements); performing, by one or more processors of the machine, a comparison of the counted number of periodic movements to a threshold number of periodic movements; generating, by one or more processors of the machine and based on the comparison of the counted number of periodic movements to the threshold number of periodic movements, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing, by one or more processors of the machine, a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
[0096] A twenty-third example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a counted number of periodic movements (e.g., and later departing from the orifice of the person after the counted number of periodic movements); performing a comparison of the counted number of periodic movements to a threshold number of periodic movements; generating, based on the comparison of the counted number of periodic movements to the threshold number of periodic movements, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
[0097] A twenty-fourth example provides a system (e.g., a computer system or other system of one or more machines) comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a counted number of periodic movements (e.g., and later departing from the orifice of the person after the counted number of periodic movements); performing a comparison of the counted number of periodic movements to a threshold number of periodic movements; generating, based on the comparison of the counted number of periodic movements to the threshold number of periodic movements, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
[0098] A twenty-fifth example provides a carrier medium carrying machine-readable instructions for controlling a machine to carry out the operations (e.g., method operations) performed in any one of the previously described examples.

Claims

What is claimed is:
1. A method comprising: accessing, by one or more processors of a machine, a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting, by one or more processors of the machine, that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a detected duration; performing, by one or more processors of the machine, a comparison of the detected duration to a threshold duration; generating, by one or more processors of the machine and based on the comparison of the detected duration to the threshold duration, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing, by one or more processors of the machine, a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
2. The method of claim 1, wherein: the portion of the collection instrument is a first portion of the collection instrument; the method further comprises: detecting that the video depicts a movement of a second portion of the collection instrument while the first portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted movement of the second portion of the collection instrument while the first portion of the collection instrument is in the orifice.
3. The method of claim 2, further comprising: detecting a movement duration during which the depicted movement of the second portion of the collection instrument occurs; and wherein: the generating of the indicator is based on the movement duration during which the depicted movement of the second portion of the collection instrument occurs.
4. The method of claim 2, further comprising: counting a number of periodic movements within the depicted movement of the second portion of the collection instrument; and wherein: the generating of the indicator is based on the counted number of periodic movements within the depicted movement of the second portion of the collection instrument.
5. The method of claim 1, further comprising: performing shape recognition on at least a portion of the video that depicts the orifice of the person, the shape recognition recognizing a shape of the orifice; and wherein: the generating of the indicator is based on the recognized shape of the orifice.
6. The method of claim 5, further comprising: detecting that the video depicts a deformation of the orifice from the recognized shape while the portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted deformation of the orifice while the portion of the collection instrument is in the orifice.
7. The method of claim 1, further comprising: detecting that the video depicts a deformation of a surface region of the person while the portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted deformation of the surface region of the person while the portion of the collection instrument is in the orifice.
8. The method of claim 1, further comprising: detecting that the video depicts a depth of insertion by the portion of the collection instrument into the orifice; and wherein: the generating of the indicator is based on the depicted depth of insertion by the portion of the collection instrument into the orifice.
9. The method of claim 1, further comprising: detecting that the video depicts a movement of at least a portion of a hand of the person from whom the biological sample is to be collected; and wherein: the generating of the indicator is based on the depicted movement of at least the portion of the hand of the person.
10. The method of claim 1, further comprising: performing facial recognition on at least a portion of the video that depicts the orifice of the person; and based on the facial recognition, causing a further presentation of a notification that the video depicts the person from whom the biological sample is to be collected.
11. The method of claim 1, wherein: the collection instrument includes a container to receive fluid discharged from the orifice of the person; and the method further comprises: detecting that the video depicts an amount of the fluid discharged from the orifice of the person and received by the container of the collection instrument; and wherein: the generating of the indicator is based on the depicted amount of the fluid received by the container of the collection instrument.
12. The method of claim 11, further comprising: detecting a reception duration during which the depicted amount of the fluid is received by the container; and wherein: the generating of the indicator is based on the reception duration during which the depicted amount of the fluid is received by the container.
13. The method of claim 1, wherein: the generated indicator includes at least one of: a progress indicator that indicates a degree of progress toward the biological sample being fully collected, a first graphical representation of the portion of the collection instrument, a second graphical representation of the orifice, or a third graphical representation of a depth of insertion by the portion of the collection instrument into the orifice.
14. A machine-readable medium comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a detected duration; performing a comparison of the detected duration to a threshold duration; generating, based on the comparison of the detected duration to the threshold duration, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
15. The machine-readable medium of claim 14, wherein: the portion of the collection instrument is a first portion of the collection instrument; the operations further comprise: detecting that the video depicts a movement of a second portion of the collection instrument while the first portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted movement of the second portion of the collection instrument while the first portion of the collection instrument is in the orifice.
16. The machine-readable medium of claim 14, wherein the operations further comprise: detecting that the video depicts a deformation of the orifice while the portion of the collection instrument is in the orifice; and wherein: the generating of the indicator is based on the depicted deformation of the orifice while the portion of the collection instrument is in the orifice.
17. The machine-readable medium of claim 14, wherein the operations further comprise: detecting that the video depicts a depth of insertion by the portion of the collection instrument into the orifice; and wherein: the generating of the indicator is based on the depicted depth of insertion by the portion of the collection instrument into the orifice.
18. A system comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing a video that depicts an orifice of a person from whom a biological sample is to be collected by a portion of a collection instrument; detecting that the video depicts the portion of the collection instrument arriving at the orifice of the person and remaining in or at the orifice for a detected duration; performing a comparison of the detected duration to a threshold duration; generating, based on the comparison of the detected duration to the threshold duration, an indicator of an extent to which the portion of the collection instrument collected the biological sample from the orifice depicted by the video; and causing a presentation of the generated indicator of the extent to which the portion of the collection instrument collected the biological sample.
19. The system of claim 18, wherein the operations further comprise: performing facial recognition on at least a portion of the video that depicts the orifice of the person; and based on the facial recognition, causing a further presentation of a notification that the video depicts the person from whom the biological sample is to be collected.
20. The system of claim 18, wherein: the collection instrument includes a container to receive fluid discharged from the orifice of the person; and the operations further comprise: detecting that the video depicts an amount of the fluid discharged from the orifice of the person and received by the container of the collection instrument; and wherein: the generating of the indicator is based on the depicted amount of the fluid received by the container of the collection instrument.
21. The system of claim 18, wherein: the portion of the collection instrument is a first portion of the collection instrument; the operations further comprise: detecting that the video depicts a movement of a second portion of the collection instrument while the first portion of the collection instrument is in the orifice; and counting a number of rotations within the depicted movement of the second portion of the collection instrument; and wherein: the generating of the indicator is based on the counted number of rotations within the depicted movement of the second portion of the collection instrument while the first portion of the collection instrument is in the orifice.
PCT/US2022/070535 2021-02-08 2022-02-04 Monitoring sample collection from an orifice WO2022170350A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163146821P 2021-02-08 2021-02-08
US63/146,821 2021-02-08

Publications (1)

Publication Number Publication Date
WO2022170350A1 2022-08-11

Family

ID=82741687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/070535 WO2022170350A1 (en) 2021-02-08 2022-02-04 Monitoring sample collection from an orifice

Country Status (1)

Country Link
WO (1) WO2022170350A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150211055A1 (en) * 2014-01-25 2015-07-30 uBiome, Inc. Method and system for microbiome analysis
US20160025709A1 (en) * 2014-02-27 2016-01-28 The Regents Of The University Of California High throughput dna damage quantification of human tissue with home-based collection device
US20170236281A1 (en) * 2014-07-24 2017-08-17 University Health Network Collection and analysis of data for diagnostic purposes
US20190083975A1 (en) * 2016-03-14 2019-03-21 Diassess Inc. Systems and Methods for Performing Biological Assays
US20190350934A1 (en) * 2005-04-29 2019-11-21 Cyrano Therapeutics, Inc. Compositions and methods for treating chemosensory dysfunction
US20200023353A1 (en) * 2013-09-06 2020-01-23 Theranos Ip Company, Llc Devices, systems, methods, and kits for receiving a swab
WO2021224907A1 (en) * 2020-05-06 2021-11-11 Tyto Care Ltd. A remote medical examination system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AQUILA ISABELLA, SACCO MATTEO ANTONIO, ABENAVOLI LUDOVICO, MALARA NATALIA, ARENA VINCENZO, GRASSI SIMONE, AUSANIA FRANCESCO, BOCCU: "Severe Acute Respiratory Syndrome Coronavirus 2 Pandemic", ARCH PATHOL LAB MED, vol. 144, September 2020 (2020-09-01), pages 1048 - 1056, XP055959861, Retrieved from the Internet <URL:https://meridian.allenpress.com/aplm/article/144/9/1048/442313/Severe-Acute-Respiratory-Syndrome-Coronavirus-2> [retrieved on 20220329] *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22750638; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22750638; Country of ref document: EP; Kind code of ref document: A1)