CN115229793A - Sampling method and device, equipment and storage medium

Sampling method and device, equipment and storage medium

Info

Publication number
CN115229793A
Authority
CN
China
Prior art keywords
sampling
target object
sampled
position information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210895165.4A
Other languages
Chinese (zh)
Inventor
韦燕华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wingtech Communication Co Ltd
Original Assignee
Wingtech Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wingtech Communication Co Ltd filed Critical Wingtech Communication Co Ltd
Priority to CN202210895165.4A priority Critical patent/CN115229793A/en
Publication of CN115229793A publication Critical patent/CN115229793A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J 9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 10/00 Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A61B 10/0045 Devices for taking samples of body liquids
    • A61B 10/0051 Devices for taking samples of body liquids for taking saliva or sputum samples
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1628 Programme controls characterised by the control loop
    • B25J 9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1679 Programme controls characterised by the tasks executed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Pathology (AREA)
  • Hematology (AREA)
  • Pulmonology (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the present application discloses a sampling method, a sampling apparatus, sampling equipment, and a storage medium. The method comprises the following steps: capturing a target image containing the to-be-sampled part of a target object located in a preset area; determining first relative position information relative to the to-be-sampled part of the target object based on position information in the target image; moving to within a target range of the to-be-sampled part according to the first relative position information; acquiring a sampling region located inside the to-be-sampled part; determining second relative position information relative to the sampling region based on position information of the sampling region; and moving to the sampling region based on the second relative position information, and sampling the target object within the sampling region. In this way, automatic sampling can be realized and labor cost reduced.

Description

Sampling method and device, equipment and storage medium
Technical Field
The embodiments of the present application relate to image processing technology, and in particular, but not exclusively, to a sampling method, a sampling apparatus, sampling equipment, and a storage medium.
Background
With the normalization of nucleic acid sampling, sampling must be repeated at fixed intervals, and in high-risk areas it may be required once a day, so the demand for nucleic acid sampling is large; however, most existing nucleic acid sampling relies on manual sampling. In addition, for some highly infectious diseases, sampling is also mostly performed manually by medical workers.
However, manual sampling is time-consuming, and people to be tested often need to queue; sampling force and sampling standards are difficult to control manually; the number of medical workers is limited, and large-scale manual sampling imposes a heavy workload on them. Moreover, manual sampling cannot avoid the risk of cross-infection caused by extensive contact among sampled persons and between sampled persons and medical workers. Under these circumstances, automated sampling needs to be developed.
Disclosure of Invention
In view of this, the sampling method, apparatus, equipment, and storage medium provided in the embodiments of the present application can save labor and time, improve sampling efficiency, make sampling more standardized and accurate, facilitate large-scale sampling, prevent the risk of cross-infection, and improve the comfort and satisfaction of examinees. They are implemented as follows:
the sampling method provided by the embodiment of the application comprises the following steps:
capturing a target image containing the to-be-sampled part of a target object located in a preset area;
determining first relative position information relative to the to-be-sampled part of the target object based on position information in the target image;
moving to within a target range of the to-be-sampled part according to the first relative position information;
acquiring a sampling region located inside the to-be-sampled part;
determining second relative position information relative to the sampling region based on position information of the sampling region;
and moving to the sampling region based on the second relative position information, and sampling the target object within the sampling region.
In some embodiments, acquiring a sampling region located inside a site to be sampled comprises:
scanning the interior of a part to be sampled to obtain three-dimensional point cloud data;
and identifying a sampling area positioned in the part to be sampled according to the three-dimensional point cloud data.
In some embodiments, sampling the target object within the sampling region includes:
and sampling the target object in the sampling region based on the preset sampling force.
In some embodiments, the relative position information includes distance information and/or azimuth information.
In some embodiments, before capturing a target image including a to-be-sampled portion of a target object located in a preset area, the method further includes:
acquiring at least one facial image of a target object;
performing living body detection on the target object based on the at least one face image, and determining whether the target object is a real object;
and if the target object is determined to be a real object, prompting the target object to move to a preset area.
In some embodiments, if the target object is determined to be a non-real object, re-capturing at least one facial image of the target object; and performing living body detection on the target object based on the at least one acquired face image, and determining whether the target object is a real object.
In some embodiments, if the target object is determined to be a non-real object, scanning the face of the target object to obtain at least one frame of face point cloud data; and performing living body detection on the target object based on at least one frame of the face point cloud data, and determining whether the target object is a real object.
The sampling apparatus provided by the embodiment of the present application includes:
the acquisition module is used for shooting to obtain a target image containing a to-be-sampled part of a target object in a preset area;
the determination module is used for determining first relative position information relative to the to-be-sampled part of the target object based on position information in the target image;
the control module is used for moving to within a target range of the to-be-sampled part according to the first relative position information;
the acquisition module is also used for acquiring a sampling area positioned in the part to be sampled;
the determining module is further used for determining second relative position information relative to the sampling region based on the position information of the sampling region;
and the sampling module is used for moving to the sampling area based on the second relative position information and sampling the target object in the sampling area.
The electronic device provided by the embodiment of the application comprises a memory and a processor, wherein the memory stores a computer program which can run on the processor, and the processor executes the program to realize the method provided by the embodiment of the application.
The computer readable storage medium provided by the embodiment of the present application has a computer program stored thereon, and the computer program is used for implementing the method provided by the embodiment of the present application when being executed by a processor.
The sampling method, the sampling apparatus, the computer device, and the computer-readable storage medium provided above are simple in structure and convenient to operate; they can save labor and time, improve sampling efficiency, make sampling more standardized and precise, facilitate large-scale sampling, prevent the risk of cross-infection, and improve the comfort and satisfaction of examinees.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic structural diagram of a sampling apparatus according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart illustrating an implementation of a sampling method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a manipulator provided in the embodiment of the present application moving to a target range of a to-be-sampled portion;
fig. 4 is a schematic structural diagram of a manipulator moving to a sampling area according to an embodiment of the present disclosure;
fig. 5 is a schematic flow chart illustrating an implementation of a sampling method according to an embodiment of the present application;
fig. 6 is a schematic flow chart illustrating an implementation of a sampling method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a sampling apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are intended to illustrate the present application, but are not intended to limit the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
It should be noted that the terms "first/second/third" herein merely distinguish similar or different objects and do not denote a particular order or importance; where permissible, "first/second/third" may be interchanged so that the embodiments of the present application described herein can be practiced in orders other than those illustrated or described.
The manual sampling methods commonly used in the related art can be roughly divided into oropharyngeal swab sampling and nasal swab sampling.
Oropharyngeal swab sampling is a technique in which a medical worker holds a sampling article and rubs the mucosal surface of the examinee's lateral pharyngeal walls to collect a mucus specimen; nasal swab sampling is a technique in which a medical worker holds a sampling article and rubs the mucosal surface inside the examinee's nasal cavity to collect a mucus specimen. The sampling article may be a sampling swab, sampling test paper, or the like.
In the above sampling, the medical worker must hold the sampling article and wipe the mucosal surface of the examinee's pharyngeal wall to obtain a mucus specimen, which is then sent to a clinical laboratory for viral nucleic acid detection and analysis to determine whether virus is present in the specimen. Although this procedure is simple and intuitive, it imposes a large sampling workload and heavy burden on medical workers, easily causes fatigue, and makes poor use of medical resources.
Throughout the sampling process, medical workers remain seated or standing for long periods, and keeping the arm raised for a long time easily causes arm fatigue; moreover, sampling quality cannot be standardized and sampling efficiency is low.
In view of the above, embodiments of the present application provide a sampling method applied to a sampling device; in implementation, the sampling device may be any of various types of devices having movement and information-processing functions. The functions implemented by the method may be realized by a processor in the sampling device calling program code, and the program code may be stored in a computer storage medium.
As shown in fig. 1, in one embodiment, the sampling device 101 may include a manipulator motion module 102, a camera module 103, a pressure sensor module 104, and/or a radar scanner module 105. The camera module 103, the pressure sensor module 104, and the radar scanner module 105 are all mounted at the front end of the manipulator motion module 102, and each module is in signal connection with the processor. The specific mounting orientations of the camera module 103, the pressure sensor module 104, and the radar scanner module 105 are not limited.
The radar scanner module in the embodiment of the present application may be any of various types of lidar. A lidar is a radar system that emits a laser beam to detect characteristic quantities such as the position and speed of a target. Its working principle is to transmit a detection signal (laser beam) to a target, compare the received signal (target echo) reflected from the target with the transmitted signal, and, after suitable processing, obtain information about the target such as distance, direction, height, speed, attitude, and even shape, thereby detecting, tracking, and identifying the target.
According to the number of scan lines, lidars can be divided into single-line lidars, multi-line lidars, and so on; according to the ranging method, they can be divided into triangulation lidars, time-of-flight lidars, and so on. In addition, different types of lidar support different sampling distances: some support short-range sampling and some support long-range sampling.
In some embodiments, the sampling device may further include a display module, a voice prompt module, a body temperature detection module, a barcode printing module, and/or a sampling article providing module, besides the above modules.
Fig. 2 is a schematic flow chart of an implementation of a sampling method provided in an embodiment of the present application, and as shown in fig. 2, the method may include the following steps 201 to 206:
step 201, shooting to obtain a target image containing a to-be-sampled part of a target object located in a preset area.
In some embodiments, after detecting that a target object enters a preset area within a certain range from the sampling device, the voice prompt module or the display module can prompt the target object to stand or sit at a specified position of the preset area; and then, the camera module 103 in the sampling device is used for acquiring images of the target object, so that a certain position range is preliminarily provided when the specific position of the target object is determined subsequently.
In the embodiment of the present application, the to-be-sampled part is not limited and depends directly on the sampling manner. For example, if the sampling manner is oropharyngeal swab sampling, the to-be-sampled part is the mouth of the target object; if it is nasal swab sampling, the to-be-sampled part is the nose of the target object.
It can be understood that a captured target image is usually affected by noise, lighting, image contrast, and the performance of the capture device, which vary with the capture environment. Therefore, to reduce the influence of such external factors on identification of the to-be-sampled part as much as possible and to restore its real information to the greatest extent, the target image may be preprocessed before the to-be-sampled part of the target object is identified in it.
In some embodiments, the preprocessing may include, but is not limited to, image denoising, image segmentation, and image enhancement of the target image. Image segmentation may binarize the image to separate the foreground and background regions, so that the processed target image retains only the target object and background interference is discarded, improving the speed and accuracy of subsequently identifying the target object in the target image.
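By way of illustration, a minimal preprocessing sketch along these lines might look as follows. It assumes OpenCV; the function name, filter parameters, and the use of Otsu binarization for the foreground/background split are illustrative choices, not specifics from the patent.

```python
import cv2
import numpy as np

def preprocess_target_image(image_bgr: np.ndarray) -> np.ndarray:
    """Denoise, enhance, and foreground-segment a captured target image."""
    # Image denoising: suppress sensor/ambient noise while keeping edges.
    denoised = cv2.fastNlMeansDenoisingColored(image_bgr, None, 10, 10, 7, 21)

    # Image enhancement: equalize luminance to reduce lighting/contrast variation.
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    enhanced = cv2.equalizeHist(gray)

    # Image segmentation: Otsu binarization separates foreground from background,
    # so later recognition operates on the target object only.
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_and(denoised, denoised, mask=mask)
```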
Here, the number of the captured target images is not limited, and one target image may be obtained for one capturing, or a plurality of target images may be obtained for a plurality of capturing.
It should be noted that after the target object in the target image is identified, the identity information of the target object needs to be recognized and matched. The embodiment of the present application does not limit how this is done. In some embodiments, target identity information and/or contact information matching the current target object may be retrieved from a database; in other embodiments, the target object may be prompted to present a sampling code at a particular location, and the camera module 103 then scans the displayed sampling code to acquire the target object's identity information and contact information.
In some embodiments, after the information matching of the target object is completed, the target identity information and/or the contact information may be displayed on a display module of the sampling device, and a confirmation button is displayed on the display module to allow the target object to confirm whether the information is correct.
In some embodiments, after it is determined that the identity information and/or contact information of the target object is correctly matched, the sampling device 101 may control the printing module to print the target object's identity barcode and control the manipulator motion module 102 to attach it to the reagent tube in which the sampling article is placed, so as to avoid mismatched tests. To facilitate pick-up, the manipulator motion module 102 and the reagent tube may be arranged on a horizontal line.
Step 202, determining first relative position information relative to the to-be-sampled part of the target object based on position information in the target image.
It can be understood that the manipulator motion module 102 in the sampling device is housed inside the sampling device 101 when sampling is not being performed, so its position at that time is known. Based on this, after the target image is captured, the position information (for example, coordinate information) of the to-be-sampled part of the target object in the target image can be determined; then, from the position information of the to-be-sampled part and the position information of the manipulator motion module 102, the first relative position information between the sampling device 101 and the to-be-sampled part of the target object can be calculated.
It should be noted that when determining the first relative position information, if the accuracy requirement on the position information is not high, it can be determined from images captured by the camera module; if the accuracy requirement is high, it can instead be determined by scanning with the radar scanner module.
In some embodiments, the relative position information includes distance information and/or azimuth information. Correspondingly, the calculated first relative position information between the manipulator motion module and the to-be-sampled part of the target object is the distance between them and/or the azimuth angle of the to-be-sampled part relative to the manipulator motion module.
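For illustration, once the to-be-sampled part and the manipulator motion module are expressed in a common coordinate frame, the distance and azimuth could be derived as in the sketch below; the coordinate convention and the function name are assumptions made for the example.

```python
import numpy as np

def relative_position(manipulator_xyz: np.ndarray, site_xyz: np.ndarray):
    """Return (distance, azimuth) of the to-be-sampled part relative to the manipulator.

    Azimuth is taken as the horizontal bearing in the manipulator's X-Y plane;
    this convention is an assumption for illustration.
    """
    offset = site_xyz - manipulator_xyz
    distance = float(np.linalg.norm(offset))           # Euclidean distance
    azimuth = float(np.arctan2(offset[1], offset[0]))  # bearing in radians
    return distance, azimuth
```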
Step 203, moving to within a target range of the to-be-sampled part according to the first relative position information.
It can be understood that if the sampling region inside the to-be-sampled part of the target object were determined solely from the target image captured by the camera module, insufficient precision could make the determination inaccurate. Therefore, in the embodiment of the present application, the sampling region is not determined directly from the camera image, nor is the manipulator motion module moved to the sampling region in one step. Instead, as shown in fig. 3, the position of the target object is determined preliminarily from the captured target image, the manipulator motion module 302 is moved only to within a target range of the to-be-sampled part of the target object 304, and the sampling region is then re-determined, which improves accuracy.
Specifically, if the to-be-sampled part is the mouth, the manipulator motion module is controlled to move to a certain distance from the mouth and then stop; if the to-be-sampled part is the nose, it is controlled to move to a certain distance from the nose and then stop.
In some embodiments, the sampling device 301 also controls the manipulator motion module 302 to take the sampling article 303 out of the reagent tube, then controls the manipulator motion module 302 holding the sampling article 303 to move toward the target range of the to-be-sampled part of the target object 304, and prompts the target object to open its mouth or raise its head for the subsequent rescanning that determines the sampling region inside the to-be-sampled part.
And step 204, acquiring a sampling area positioned in the part to be sampled.
In some embodiments, step 204 may be implemented by performing steps 2041 through 2042 as follows:
step 2041, scanning the interior of the part to be sampled to obtain three-dimensional point cloud data.
It should be noted that, in order to accurately identify the sampling area inside the to-be-sampled portion of the target object, in the embodiment of the present application, the radar scanner module 105 with higher precision may be used to scan the to-be-sampled portion of the target object, so as to obtain the three-dimensional point cloud image of the to-be-sampled portion.
In some embodiments, assuming that the radar scanner module 105 in the embodiment of the present application is a lidar supporting short-distance sampling, and the ranging mode adopted by the lidar is time-of-flight ranging, a specific process of acquiring three-dimensional point cloud data through the lidar is as follows:
the laser radar emits light pulses to the inside of the part to be sampled of the target object, and then receives light pulse signals reflected back from the inside of the part to be sampled, so that a receiver in the laser radar can accurately measure the propagation time of the light pulses from emission to reflection. Given that the speed of light is known, the travel time can be converted into a measure of the distance to each point inside the site to be sampled; and then, by combining the height of the laser radar and the laser scanning angle, the three-dimensional coordinates X, Y and Z corresponding to each point in the part to be sampled can be accurately calculated, so that three-dimensional point cloud data in the part to be sampled are formed.
Step 2042, identifying a sampling area located inside the part to be sampled according to the three-dimensional point cloud data.
After the three-dimensional point cloud image of the to-be-sampled part of the target object is obtained by scanning, it can be decomposed into the different regions of the to-be-sampled part, and the sampling region is selected from among them.
Based on this, in some embodiments, identifying the sampling region inside the to-be-sampled part from the three-dimensional point cloud data may be achieved by the following steps: constructing a three-dimensional model of the interior of the to-be-sampled part from the point cloud data; recognizing candidate sampling regions in the three-dimensional model; and selecting the sampling region from the candidates based on the position information and depth information of each candidate, as in the sketch after the next paragraph.
Specifically, if the to-be-sampled part is the oral cavity, the scanned three-dimensional point cloud data is processed with point cloud software to construct a three-dimensional model including the target object's entire oral cavity, throat, tonsils, and tongue regions. In oropharyngeal swab sampling, the best sampling locations are usually the posterior pharyngeal wall and the tonsils, so those regions can be identified as the sampling region. If the to-be-sampled part is the nose, a three-dimensional model of the target object's entire nasal cavity, lateral nasal wall, and other regions is constructed from the scanned point cloud data; in nasal swab sampling, for the comfort of the target object, the insertion depth of the sampling article is determined from the point cloud data, and the region of the nasal cavity at that specific depth is determined as the sampling region.
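A sketch of the selection step under these criteria follows; the region names, data structure, and depth threshold are hypothetical, since the patent names the target regions and a depth criterion but no concrete representation.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class CandidateRegion:
    name: str             # e.g. "posterior_pharyngeal_wall", "tonsil" (assumed labels)
    centroid_xyz: Tuple[float, float, float]  # region centre in the scanner frame
    depth_mm: float       # depth of the region inside the cavity

def select_sampling_region(candidates: List[CandidateRegion],
                           preferred_names: Tuple[str, ...],
                           max_depth_mm: float) -> Optional[CandidateRegion]:
    """Pick the preferred anatomical region that lies within a safe depth."""
    safe = [c for c in candidates if c.depth_mm <= max_depth_mm]
    for name in preferred_names:   # preference order, e.g. wall before tonsil
        for c in safe:
            if c.name == name:
                return c
    return safe[0] if safe else None
```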
In the embodiment of the present application, the interior of the to-be-sampled part is scanned by the radar scanner module 105 and the sampling region is determined from the resulting three-dimensional point cloud image, so the specific position of the sampling region can be identified more accurately, providing a precise position basis for the subsequent sampling.
Step 205, determining second relative position information relative to the sampling region based on the position information of the sampling region.
In some embodiments, the relative position information includes distance information and/or azimuth information. Correspondingly, the calculated second relative position information between the manipulator motion module and the sampling area is the distance between the manipulator motion module and the sampling area and/or the azimuth angle of the sampling area relative to the manipulator motion module.
And step 206, moving to the sampling area based on the second relative position information, and sampling the target object in the sampling area.
As shown in fig. 4, the sampling device 401 controls the manipulator motion module 402 holding the sampling article 403 to move into the sampling region based on the second relative position information of the module relative to the sampling region.
In some embodiments, a depth camera may also be mounted at the front end of the manipulator motion module 402 and used to determine whether the module has reached the sampling region; once it has, the target object can be sampled.
In some embodiments, when sampling the target object, the force applied during sampling can also be measured by a pressure sensor mounted at the front end of the manipulator motion module 402, so that the target object is sampled in the sampling region with a preset sampling force. On the one hand, this ensures the precision of the operation; on the other hand, gentle sampling at a preset force prevents the damage to the target object that an excessive sampling force during manipulator operation could cause.
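As an illustration of such a pressure-feedback loop, the sketch below assumes hypothetical `arm` and `pressure_sensor` driver objects, and the preset force, tolerance, and step sizes are invented for the example; none of these values come from the patent.

```python
import time

PRESET_FORCE_N = 0.5  # assumed preset sampling force (the patent gives no value)
TOLERANCE_N = 0.05

def sample_with_force_control(arm, pressure_sensor, duration_s: float = 3.0):
    """Wipe the sampling region while holding contact force near the preset value."""
    t_end = time.time() + duration_s
    while time.time() < t_end:
        force = pressure_sensor.read()        # current contact force, in newtons
        if force < PRESET_FORCE_N - TOLERANCE_N:
            arm.advance(0.2)                  # press slightly harder (mm step)
        elif force > PRESET_FORCE_N + TOLERANCE_N:
            arm.retract(0.2)                  # back off to protect the tissue
        arm.wipe_step()                       # continue the swabbing motion
        time.sleep(0.01)                      # ~100 Hz control loop
```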
In some embodiments, after the target object has been sampled in the sampling region, the sampling device can also control the manipulator motion module holding the used sampling article to return along its original path; it then coordinates the two manipulator motion modules, according to the position of the reagent tube, the position of the module currently holding the sampling article, and the position of the other module, to place the used sampling article into the corresponding reagent tube (if the sampling article is a swab, the other manipulator motion module may also cut off the excess swab), thereby completing the sampling of the current target object; both modules are then reset for the next sampling.
In some embodiments, after sampling, the sampling device can also disinfect the manipulator motion module, camera module, pressure sensor module, radar scanner module, and other components, to ensure sampling safety and avoid cross-infection between examinees.
In the embodiment of the present application, an automatic sampling device that is simple in structure and convenient to operate is provided. Automatic sampling based on this device can save labor and time, improve sampling efficiency, make sampling more standardized and accurate, facilitate large-scale sampling, prevent the risk of cross-infection, and improve the comfort and satisfaction of examinees.
Fig. 5 is a schematic implementation flow diagram of a sampling method provided in an embodiment of the present application, and as shown in fig. 5, the method may include the following steps 501 to 509:
at step 501, at least one facial image of a target subject is acquired.
In the embodiment of the present application, the number of collected facial images of the target object is not limited; it may be one or more. The manner of capturing the facial images is also not limited. For example, in some embodiments, the facial images may be captured silently, that is, acquired automatically without any prompt (such as a sound, text, an image prompt, or an indicator light) to the target object. For instance, an image capture device for capturing facial images may capture, in real time, the face of any target object that comes within its capture range.
Step 502, performing living body detection on a target object based on at least one face image, and determining whether the target object is a real object; if the target object is determined to be a real object, go to step 503; otherwise, the method returns to step 501, and at least one face image of the target object is re-acquired.
In the embodiment of the present application, the way of performing live body detection on a target object differs depending on the number of face images acquired.
In some embodiments, if multiple consecutive frames of facial images are acquired, whether the target object is a real object may be determined based on whether the target object performs certain facial movements, which may include but are not limited to: blinking, closing the left/right/both eyes, moving the eyeballs left/right, opening the mouth, turning the head left/right/up/down, smiling, making faces, and so on. Whether the target object's facial movements are continuous and as expected is judged from the multiple consecutive frames; if they are, the target object is determined to be a real object, as in the sketch below.
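One common way to check that a blink, for example, is continuous and as expected across frames is the eye-aspect-ratio (EAR) heuristic; the patent does not name a specific algorithm, so the following is only a plausible sketch with assumed thresholds.

```python
import numpy as np

def eye_aspect_ratio(eye_landmarks: np.ndarray) -> float:
    """EAR over six eye landmarks; the value drops sharply when the eye closes."""
    a = np.linalg.norm(eye_landmarks[1] - eye_landmarks[5])
    b = np.linalg.norm(eye_landmarks[2] - eye_landmarks[4])
    c = np.linalg.norm(eye_landmarks[0] - eye_landmarks[3])
    return (a + b) / (2.0 * c)

def blink_detected(ear_sequence, closed_thresh=0.2, min_closed_frames=2) -> bool:
    """A plausible blink: EAR stays low for a few consecutive frames, then recovers."""
    below = 0
    for ear in ear_sequence:
        if ear < closed_thresh:
            below += 1
        else:
            if below >= min_closed_frames:
                return True  # eye closed briefly and reopened: continuous motion
            below = 0
    return False
```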
In some embodiments, if multiple consecutive frames of facial images are collected, it may also be determined whether the target object is a real object based on whether the change in facial lighting is as expected. Specifically, the target object is prompted to move from a dark area to a bright area, and whether the change in facial light meets the expected setting is detected; if so, the target object is determined to be a real object.
In some embodiments, if only one facial image of the target object is collected, whether the target object is a real object can be determined from the brightness parameter of the light-spot region in the facial image, exploiting the difference between the specular and diffuse reflection of light. Specifically, it is judged whether the brightness parameter of the light-spot region in the acquired facial image matches a reference brightness parameter; when it does, the target object is determined to be a real object.
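A minimal sketch of this single-image check is shown below; how the light-spot region is located and how the reference brightness parameter is obtained are not specified in the patent, so both are illustrative assumptions here.

```python
import cv2
import numpy as np

def is_live_by_specular_spot(face_bgr: np.ndarray,
                             ref_brightness: float,
                             tolerance: float = 25.0) -> bool:
    """Judge liveness from the brightness parameter of the light-spot region."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    # Assumption: treat the brightest 1% of pixels as the light-spot region.
    cutoff = np.percentile(gray, 99)
    spot = gray[gray >= cutoff]
    spot_brightness = float(spot.mean()) if spot.size else 0.0
    # Liveness passes when the spot brightness matches the reference parameter.
    return abs(spot_brightness - ref_brightness) <= tolerance
```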
In some embodiments, if it is determined that the target object is a non-real object, the following steps may be further performed:
scanning the face of a target object to obtain at least one frame of face point cloud data; and performing living body detection on the target object based on the at least one frame of face point cloud data, and determining whether the target object is a real object.
Here, if multiple facial images were first collected with the camera module and the target object was determined to be a non-real object, at least one frame of facial point cloud data of the target object may be collected with the radar scanner, and whether the target object is a real object is determined again from the facial point cloud data; this improves detection accuracy and reduces detection errors.
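For instance, one simple liveness cue available from a facial point cloud is depth relief: a real face is strongly non-planar, while a photo or screen held up to the scanner is nearly flat. The sketch below is an assumed illustration of that idea, not the patent's algorithm.

```python
import numpy as np

def is_live_by_depth_relief(face_points: np.ndarray,
                            min_relief_m: float = 0.01) -> bool:
    """Accept as live if the scanned face shows centimetre-scale depth variation.

    `face_points` is an (N, 3) array of X, Y, Z points from one frame of facial
    point cloud data; the 1 cm threshold is an assumption for illustration.
    """
    depth = face_points[:, 2]  # Z coordinate of each scanned point
    relief = float(np.percentile(depth, 95) - np.percentile(depth, 5))
    return relief >= min_relief_m
```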
In some embodiments, if the target object is still determined to be a non-real object based on the re-acquired facial image or the facial point cloud data, the detection result may be reported to the sampling point for manual confirmation.
Step 503, prompting the target object to move to a preset area.
Step 504, capturing a target image containing the to-be-sampled part of the target object located in the preset area.
Step 505, determining first relative position information relative to the to-be-sampled part of the target object based on position information in the target image.
Step 506, moving to within a target range of the to-be-sampled part according to the first relative position information.
Step 507, acquiring the sampling region located inside the to-be-sampled part.
Step 508, determining second relative position information relative to the sampling region based on the position information of the sampling region.
Step 509, moving to the sampling region based on the second relative position information, and sampling the target object within the sampling region.
As shown in fig. 6, based on the above implementation, an exemplary flow of the method of the embodiment of the present application in a specific use scenario is further provided, comprising steps 601 to 608:
step 601, judging whether the examinee is a real examinee; if yes, go to step 603; otherwise, step 602 is performed.
Step 602, re-acquiring and recognizing the facial image of the current examinee; if the recognition result is still a non-real examinee, reporting the result to the nucleic acid detection point.
Step 603, performing face recognition on the examinee.
Step 604, matching and displaying the examinee's identity information, and associating the identity information with a reagent tube.
Step 605, after receiving the examinee's confirmation that the identity information is correct, acquiring the image of the examinee captured by the camera, determining from it the distance from the manipulator to the examinee's mouth, and driving the manipulator to move to the preset distance from the examinee's mouth.
Step 606, acquiring three-dimensional point cloud data of the examinee by lidar, constructing a three-dimensional point cloud map, determining the distance from the manipulator to the examinee's sampling region, and driving the manipulator to move to the sampling region.
Step 607, performing sampling with the force controlled through the pressure sensor.
Step 608, through coordinated motion control of the two manipulators, placing the sampling test paper into the corresponding reagent tube to complete sampling of the current examinee.
In the embodiment of the present application, an automatic sampling device that is simple in structure and convenient to operate is provided. Automatic sampling based on this device can save labor and time, improve sampling efficiency, make sampling more standardized and accurate, facilitate large-scale sampling, prevent the risk of cross-infection, and improve the comfort and satisfaction of examinees.
It should be understood that although the steps in the flowcharts of figs. 2, 5, and 6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict order restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 2, 5, and 6 may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the foregoing embodiments, the embodiment of the present application provides a sampling apparatus; the modules of the apparatus, and the units included in the modules, can be implemented by a processor, and of course can also be implemented by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or the like.
Fig. 7 is a schematic structural diagram of a sampling apparatus according to an embodiment of the present application, and as shown in fig. 7, the apparatus 700 includes an obtaining module 701, a determining module 702, a control module 703 and a sampling module 704, where:
the acquisition module 701 is used for shooting and obtaining a target image containing a to-be-sampled part of a target object located in a preset area; a determining module 702, configured to determine first relative position information with a to-be-sampled portion of the target object based on the position information of the target image; the control module 703 is configured to move to a target range away from the to-be-sampled portion according to the first relative position information; the acquiring module 701 is further configured to acquire a sampling region located inside a to-be-sampled portion; a determining module 702, configured to determine second relative position information with respect to the sampling region based on the position information of the sampling region; and the sampling module 704 is used for moving to the sampling region based on the second relative position information and sampling the target object in the sampling region.
In some embodiments, the apparatus further includes an identification module, an acquisition module 702, and is further configured to scan the inside of the to-be-sampled portion to obtain three-dimensional point cloud data; and the identification module is used for identifying a sampling area positioned in the part to be sampled according to the three-dimensional point cloud data.
In some embodiments, the sampling module 704 is further configured to sample the target object within the sampling region based on a preset sampling strength.
In some embodiments, the relative position information includes distance information and/or azimuth information.
In some embodiments, the apparatus further comprises a detection module and a prompt module; the acquisition module is also used for acquiring at least one facial image of the target object; the detection module is used for carrying out living body detection on the target object based on at least one face image and determining whether the target object is a real object; the prompting module is used for prompting the target object to move to a preset area if the target object is determined to be a real object.
In some embodiments, if it is determined that the target object is a non-real object, the acquiring module is further configured to re-acquire at least one facial image of the target object; the detection module is further used for performing living body detection on the target object based on the at least one acquired face image and determining whether the target object is a real object.
In some embodiments, if it is determined that the target object is a non-real object, the obtaining module is further configured to scan a face of the target object to obtain at least one frame of face point cloud data; the detection module is further used for performing living body detection on the target object based on the at least one frame of face point cloud data and determining whether the target object is a real object.
In the embodiment of the present application, an automatic sampling apparatus that is simple in structure and convenient to operate is provided. Automatic sampling based on this apparatus can save labor and time, improve sampling efficiency, make sampling more standardized and accurate, facilitate large-scale sampling, prevent the risk of cross-infection, and improve the comfort and satisfaction of examinees.
The above description of the apparatus embodiments is similar to the above description of the method embodiments and has similar beneficial effects. For technical details not disclosed in the apparatus embodiments of the present application, refer to the description of the method embodiments of the present application.
It should be noted that, in the embodiment of the present application, the division of the sampling apparatus of fig. 7 into modules is schematic and is only one kind of logical functional division; other divisions may be used in actual implementations. In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware, in the form of a software functional unit, or in a combination of software and hardware.
It should be noted that, in the embodiment of the present application, if the method described above is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
An embodiment of the present application provides a computer device, which may be a server; its internal structure may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program, and a database, and the internal memory provides an environment for running them. The database of the computer device is used for storing data. The network interface of the computer device is used to communicate with external terminals through a network connection. The computer program, when executed by the processor, implements the sampling method.
Embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps in the methods provided in the above embodiments.
Embodiments of the present application provide a computer program product containing instructions, which when executed on a computer, cause the computer to perform the steps of the method provided by the above method embodiments.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the sampling apparatus provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 8. The memory of the computer device may store various program modules that make up the sampling apparatus, such as the acquisition module, determination module, control module, and sampling module shown in FIG. 7. The computer program constituted by the respective program modules causes the processor to execute the steps in the sampling method of the respective embodiments of the present application described in the present specification.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor that implements the following steps when executing the computer program: capturing a target image containing the to-be-sampled part of a target object located in a preset area; determining first relative position information relative to the to-be-sampled part of the target object based on position information in the target image; moving to within a target range of the to-be-sampled part according to the first relative position information; acquiring the sampling region located inside the to-be-sampled part; determining second relative position information relative to the sampling region based on the position information of the sampling region; and moving to the sampling region based on the second relative position information, and sampling the target object within the sampling region.
In one embodiment, the processor when executing the computer program further performs the steps of: scanning the interior of a part to be sampled to obtain three-dimensional point cloud data; and identifying a sampling area positioned in the part to be sampled according to the three-dimensional point cloud data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and sampling the target object in the sampling area based on the preset sampling strength.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring at least one facial image of a target object; performing living body detection on the target object based on the at least one face image, and determining whether the target object is a real object; and if the target object is determined to be the real object, prompting the target object to move to a preset area.
In one embodiment, the processor when executing the computer program further performs the steps of: if the target object is determined to be a non-real object, re-collecting at least one face image of the target object; and performing living body detection on the target object based on the at least one facial image acquired again, and determining whether the target object is a real object.
In one embodiment, the processor, when executing the computer program, further performs the steps of: if the target object is determined to be a non-real object, scanning the face of the target object to obtain at least one frame of face point cloud data; and performing living body detection on the target object based on at least one frame of the face point cloud data, and determining whether the target object is a real object.
In the embodiment of the present application, an automatic sampling device that is simple in structure and convenient to operate is provided. Automatic sampling based on this device can save labor and time, improve sampling efficiency, make sampling more standardized and accurate, facilitate large-scale sampling, prevent the risk of cross-infection, and improve the comfort and satisfaction of examinees.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, performs the following steps: capturing a target image containing the to-be-sampled part of a target object located in a preset area; determining first relative position information relative to the to-be-sampled part of the target object based on position information in the target image; moving to within a target range of the to-be-sampled part according to the first relative position information; acquiring the sampling region located inside the to-be-sampled part; determining second relative position information relative to the sampling region based on the position information of the sampling region; and moving to the sampling region based on the second relative position information, and sampling the target object within the sampling region.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: scanning the interior of the part to be sampled to obtain three-dimensional point cloud data; and identifying, from the three-dimensional point cloud data, a sampling region located inside the part to be sampled.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: sampling the target object within the sampling region based on a preset sampling strength.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: acquiring at least one facial image of the target object; performing liveness detection on the target object based on the at least one facial image, and determining whether the target object is a real object; and, if the target object is determined to be a real object, prompting the target object to move to the preset area.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: if the target object is determined to be a non-real object, re-acquiring at least one facial image of the target object; and performing liveness detection on the target object based on the at least one re-acquired facial image, and determining whether the target object is a real object.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: if the target object is determined to be a non-real object, scanning the face of the target object to obtain at least one frame of facial point cloud data; and performing liveness detection on the target object based on the at least one frame of facial point cloud data, and determining whether the target object is a real object.
In the embodiments of the present application, an automatic sampling device that is simple in structure and convenient to operate is provided. Automatic sampling based on this sampling device saves labor and time, improves sampling efficiency, makes sampling more standardized and accurate, facilitates large-scale sampling, prevents the risk of cross-infection, and improves the comfort and satisfaction of examinees.
Here, it should be noted that the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, and they have similar advantageous effects. For technical details not disclosed in the storage medium and device embodiments of the present application, reference is made to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" or "some embodiments" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "in some embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply any order of execution, and the order of execution of the processes should be determined by their functions and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments. The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "object A and/or object B" may mean that object A exists alone, that object A and object B both exist, or that object B exists alone.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice, such as: multiple modules or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be electrical, mechanical or in other forms.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may be separately regarded as one unit, or two or more modules may be integrated into one unit; the integrated module can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by program instructions and related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated module described above may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to arrive at new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description covers only embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A sampling method applied to a sampling device, characterized in that the method comprises the following steps:
capturing a target image containing a part to be sampled of a target object located in a preset area;
determining first relative position information with respect to the part to be sampled of the target object based on the position information of the part to be sampled in the target image;
moving to within a target range of the part to be sampled according to the first relative position information;
acquiring a sampling region located inside the part to be sampled;
determining second relative position information with respect to the sampling region based on the position information of the sampling region;
and moving into the sampling region based on the second relative position information, and sampling the target object within the sampling region.
2. The method of claim 1, wherein the acquiring a sampling region located inside the part to be sampled comprises:
scanning the interior of the part to be sampled to obtain three-dimensional point cloud data;
and identifying, from the three-dimensional point cloud data, a sampling region located inside the part to be sampled.
3. The method of claim 1, wherein the sampling the target object within the sampling region comprises:
sampling the target object within the sampling region based on a preset sampling strength.
4. The method of claim 1, wherein the relative position information comprises distance information and/or azimuth information.
5. The method of claim 1, wherein before the capturing of a target image containing a part to be sampled of a target object located in a preset area, the method further comprises:
acquiring at least one facial image of the target object;
performing liveness detection on the target object based on the at least one facial image, and determining whether the target object is a real object;
and, if the target object is determined to be a real object, prompting the target object to move to the preset area.
6. The method of claim 5, further comprising:
if the target object is determined to be a non-real object, re-acquiring at least one facial image of the target object;
and performing liveness detection on the target object based on the at least one re-acquired facial image, and determining whether the target object is a real object.
7. The method of claim 5, further comprising:
if the target object is determined to be a non-real object, scanning the face of the target object to obtain at least one frame of facial point cloud data;
and performing liveness detection on the target object based on the at least one frame of facial point cloud data, and determining whether the target object is a real object.
8. A sampling device, characterized by comprising:
an acquisition module, configured to capture a target image containing a part to be sampled of a target object located in a preset area;
a determining module, configured to determine first relative position information with respect to the part to be sampled of the target object based on the position information of the part to be sampled in the target image;
a control module, configured to move to within a target range of the part to be sampled according to the first relative position information;
the acquisition module being further configured to acquire a sampling region located inside the part to be sampled;
the determining module being further configured to determine second relative position information with respect to the sampling region based on the position information of the sampling region;
and a sampling module, configured to move into the sampling region based on the second relative position information and sample the target object within the sampling region.
9. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the method of any of claims 1 to 7 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202210895165.4A 2022-07-26 2022-07-26 Sampling method and device, equipment and storage medium Pending CN115229793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210895165.4A CN115229793A (en) 2022-07-26 2022-07-26 Sampling method and device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210895165.4A CN115229793A (en) 2022-07-26 2022-07-26 Sampling method and device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115229793A true CN115229793A (en) 2022-10-25

Family

ID=83677893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210895165.4A Pending CN115229793A (en) 2022-07-26 2022-07-26 Sampling method and device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115229793A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661728A (en) * 2022-12-29 2023-01-31 北京正大创新医药有限公司 Virus sampling in-place judgment method based on image recognition and virus sampling system
CN116109982A (en) * 2023-02-16 2023-05-12 哈尔滨星云智造科技有限公司 Biological sample collection validity checking method based on artificial intelligence
CN116168385A (en) * 2023-02-22 2023-05-26 哈尔滨星云智造科技有限公司 Sample acquisition result evaluation method based on visual three-dimensional scene reconstruction
CN116168385B (en) * 2023-02-22 2023-10-27 哈尔滨星云智造科技有限公司 Sample acquisition result evaluation method based on visual three-dimensional scene reconstruction

Similar Documents

Publication Publication Date Title
CN115229793A (en) Sampling method and device, equipment and storage medium
CN112288742B (en) Navigation method and device for ultrasonic probe, storage medium and electronic equipment
CN111200973B (en) Fertility monitoring based on intelligent ultrasound
US20190239851A1 (en) Position correlated ultrasonic imaging
CN109924994B (en) Method and system for automatically calibrating detection position in x-ray shooting process
US11464490B2 (en) Real-time feedback and semantic-rich guidance on quality ultrasound image acquisition
US11250580B2 (en) Method, system and computer readable storage media for registering intraoral measurements
JP7006776B2 (en) Analytical instruments, analytical methods, programs and aquatic organism monitoring systems
CN111511288B (en) Ultrasound lung assessment
CN113030987B (en) Laser emergent angle measuring method and system for multi-line laser radar and electronic equipment
CN115590584A (en) Hair follicle hair taking control method and system based on mechanical arm
Salau et al. 2.3. Development of a multi-Kinect-system for gait analysis and measuring body characteristics in dairy cows
CN113116384A (en) Ultrasonic scanning guidance method, ultrasonic device and storage medium
CN113297882A (en) Intelligent morning check robot, height measuring method and application
WO2021003711A1 (en) Ultrasonic imaging apparatus and method and device for detecting b-lines, and storage medium
EP4310549A1 (en) Sensing system
CN113219450B (en) Ranging positioning method, ranging device and readable storage medium
US20230186477A1 (en) System and methods for segmenting images
JP2021012043A (en) Information processing device for machine learning, information processing method for machine learning, and information processing program for machine learning
CN116687445B (en) Automatic positioning and tracking method, device, equipment and storage medium for ultrasonic fetal heart
WO2023216594A1 (en) Ultrasonic imaging system and method
CN109447044B (en) Scanning control method and system
US20240260934A1 (en) Ultrasound imaging system
CN111562592B (en) Object edge detection method, device and storage medium
KR20210112530A (en) Measuring System for Fish Based On Deep Learnin Using TOF camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination