CN112967796A - Non-contact control method and device for in-vitro diagnostic equipment and storage medium - Google Patents


Info

Publication number
CN112967796A
Authority
CN
China
Prior art keywords
target
content
control
motion state
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911289014.9A
Other languages
Chinese (zh)
Inventor
刘鹏昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd filed Critical Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority to CN201911289014.9A priority Critical patent/CN112967796A/en
Publication of CN112967796A publication Critical patent/CN112967796A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 - ICT specially adapted for the management or operation of medical equipment or devices, for the operation of medical equipment or devices
    • G16H40/63 - ICT specially adapted for the management or operation of medical equipment or devices, for the operation of medical equipment or devices, for local operation

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses a non-contact control method for an in-vitro diagnostic device, comprising the following steps: tracking a target object of an operating user to obtain the motion state of the target object; determining target content among the display content on a display interface according to the motion state of the target object; and executing a control operation based on the target content. The embodiment of the invention also discloses an in-vitro diagnostic device and a storage medium.

Description

Non-contact control method and device for in-vitro diagnostic equipment and storage medium
Technical Field
The embodiment of the invention relates to an in-vitro diagnosis technology, in particular to a non-contact control method and device of in-vitro diagnosis equipment and a storage medium.
Background
In the related art, after the in-vitro diagnostic device in an in-vitro diagnostic system obtains a detection result, the examining physician needs to confirm the result manually, performing several operations on a display interface before the final detection report can be generated. However, the laboratory has strict hygiene requirements and examining physicians wear rubber gloves, so interacting with the display interface through a mouse or touch screen while gloved is very inconvenient. Moreover, since several people may operate one device, shared mice and touch screens carry a risk of cross-transmission and biological contamination.
Disclosure of Invention
To solve the above technical problem, embodiments of the present invention provide a non-contact control method and device for an in-vitro diagnostic apparatus, and a storage medium.
The embodiment of the invention provides a non-contact control method of in-vitro diagnostic equipment, which comprises the following steps: tracking a target object of an operation user to obtain a motion state of the target object; determining target content in display content on a display interface according to the motion state of the target object; and executing control operation based on the target content.
The embodiment of the invention further provides an in-vitro diagnostic device, comprising: an image acquisition device for dynamically tracking a target object of an operating user to obtain the motion state of the target object; a display device for displaying a display interface related to sample detection; and a control device for determining target content in the display content on the display interface according to the motion state of the target object, and for executing a control operation based on the target content.
The storage medium provided by the embodiment of the invention stores a non-contact control program for the in-vitro diagnostic device; when executed by a processor, the program implements the steps of the non-contact control method of the in-vitro diagnostic device.
In the embodiment of the invention, a target object of an operating user is tracked to obtain its motion state; target content is determined in the display content on a display interface according to that motion state; and a control operation is executed based on the target content. Because the display content is controlled by tracking the target object, the operating user's hands never touch input devices such as the touch screen, mouse, or keyboard of the in-vitro diagnostic device. Non-contact control is thus realized, the interaction between the user and the interface becomes more convenient, and the risk of cross-transmission and biological contamination is avoided.
Drawings
Fig. 1 is a schematic structural diagram of an alternative IVD device provided in an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an alternative IVD device provided in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an alternative IVD device provided in an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an alternative IVD device provided by an embodiment of the present invention;
fig. 5 is a schematic flow chart of an alternative non-contact control method of an in vitro diagnostic apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an optional application scenario provided in the embodiment of the present invention;
FIG. 7 is a schematic diagram of an alternative display interface provided by an embodiment of the present invention;
FIG. 8 is a schematic view of an alternative calibration interface provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of an alternative display interface provided by an embodiment of the invention;
FIG. 10 is a schematic view of an alternative display interface provided by an embodiment of the present invention;
FIG. 11 is a schematic view of an alternative display interface provided by an embodiment of the present invention;
fig. 12 is a flowchart illustrating an alternative contactless control method for an extracorporeal diagnostic apparatus according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the examples provided herein merely illustrate the present invention and are not intended to limit it. In addition, the following embodiments are some, not all, of the embodiments for implementing the present invention, and the technical solutions described in them may be combined in any manner that does not conflict.
In the related art, after the in vitro diagnostic device obtains the detection result of a sample, the examining physician needs to confirm the result manually through a mouse, touch screen, or similar input device. However, the space where the in vitro diagnostic device is located has strict hygiene requirements and examining physicians must wear rubber gloves, which makes such interaction with the software very inconvenient. Moreover, when several people operate one computer, the mouse, touch screen, and similar devices are shared, so there is a risk of cross-transmission of biological contamination.
Based on the above technical problem, in various embodiments of the present invention: a target object of an operating user is tracked to obtain its motion state; target content is determined in the display content on a display interface according to that motion state; and a control operation is executed based on the target content. Because the display content is controlled by tracking the target object, the operating user's hands never touch input devices such as the touch screen, mouse, or keyboard of the in-vitro diagnostic device. Non-contact control is thus realized, the interaction between the user and the interface becomes more convenient, and the risk of cross-transmission and biological contamination is avoided.
An embodiment of the present invention provides a non-contact control method applied to an In Vitro Diagnostic (IVD) device. The IVD device in the embodiments of the present invention can be divided into a detection device and a data management device. In one example, the detection device and the data management device are implemented on separate electronic devices; the data management device then runs on an electronic device separate from the detection device, such as a mobile terminal, computer, digital broadcast terminal, information transceiver, tablet device, in-vehicle device, or personal digital assistant. In another example, the detection device and the data management device are integrated on one electronic device.
The detection device is an in vitro diagnostic instrument that detects human samples (blood, body fluid, tissue, and the like) outside the body to acquire clinical diagnostic information and thereby assess disease or bodily function; different detection devices can detect different test items. The data management device manages the data of the detection device.
The IVD device may be an in vitro detection assembly line, or part of one. The detection devices on the line may include a slide maker, a C-reactive protein (CRP) detector, a glycation analyzer, a hematology analyzer, a slide reader, and so on, and there may be one or more of each type. The data management device on the line records data from the detection devices, such as sample information and sample detection results, and manages the recorded data. One data management device can manage the data of one or more detection devices.
In one example, the structure of the IVD device may be as shown in fig. 1, including: an image acquisition device 101, a display device 102 and a control device 103. In yet another example, the structure of the IVD device may be as shown in fig. 2, including: an image acquisition device 101, a display device 102, a control device 103, and a sample analysis device 104. The image acquiring device 101, the display device 102, and the control device 103 may constitute a data management apparatus, and the sample analyzing device 104 may constitute a detection apparatus.
The image acquisition device 101 dynamically tracks a target object of the operating user, such as a hand, head, or eyeball, using motion-sensing technology, and determines the motion state of the target object.
The display device 102 is used for displaying a software interface related to sample detection; the software interface related to the sample detection comprises a sample detection result display interface, a sample detection management interface, a quality control/calibration interface and the like.
The control device 103 processes data in the IVD device.
The sample analysis device 104 performs sample detection. During detection, a reagent is added to a sample such as a blood sample and mixed evenly, and the mixture is then detected and analyzed.
In one example, the structure of the sample analysis device 104 can be as shown in fig. 3, including: a sample analysis section 1041a, a tube rack recovery section 1042a, a tube rack transport track 1043a, and a tube rack placing section 1044a. The tube rack placing section 1044a holds sample racks, on which test tube racks are placed; a test tube rack is transported along the tube rack transport track 1043a to the sample analysis section 1041a for testing (reagent is added and mixed before testing), and is transported to the recovery section 1042a after testing is finished. A sample rack can carry multiple rows of test tube racks, and each test tube rack is provided with a plurality of tube positions.
In one example, as shown in fig. 4, the sample analysis device 104 may include: a cell image analysis device 104-1 and a blood analyzer 104-2. In the embodiment of the present invention, the number of the cell image analyzing devices 104-1 in the blood cell detecting system may be one or more, and the number of the blood analyzers 104-2 may also be one or more.
The cell image analysis device 104-1 photographs and analyzes the cells in a smear to obtain an interpretation result, and the blood analyzer 104-2 performs routine blood detection on the blood sample to be tested to obtain a blood routine result. The control device 103 is communicatively connected with both the cell image analysis device 104-1 and the blood analyzer 104-2; through these connections it collects the interpretation result from the cell image analysis device 104-1 and the blood routine result from the blood analyzer 104-2, and processes them.
In practice, the IVD device may further comprise a loading station and an unloading station that cooperate with the detection device, so that samples are loaded at the loading station before detection and unloaded at the unloading station after detection.
The functional modules in the IVD device may be cooperatively implemented by hardware resources of a device (e.g., a terminal device, a server or a server cluster), such as computing resources like a processor, and communication resources (e.g., for supporting various communication modes such as optical cable and cellular).
Of course, the embodiments of the present invention are not limited to methods and hardware; various implementations are possible, such as a storage medium storing a program or instructions for executing the non-contact control method of the in-vitro diagnostic device provided by the embodiments of the present invention.
Fig. 5 is a schematic flow chart of an implementation of a non-contact control method of an in-vitro diagnostic apparatus according to an embodiment of the present invention, and as shown in fig. 5, the non-contact control method of the in-vitro diagnostic apparatus includes:
s501, tracking a target object of an operation user to obtain the motion state of the target object.
An image acquisition device in the in-vitro diagnostic device dynamically tracks a target object of the user operating the in-vitro diagnosis, obtaining the motion state of the target object. The motion state indicates whether the target object is moving and, when it moves, its movement trajectory.
In the embodiment of the present invention, the target object may include movable external body parts such as the hands, head, eyeballs, and mouth.
The image acquisition device can acquire the motion state of the target object through motion-sensing technology. In one example, the image acquisition device includes an image sensor or the like capable of capturing images. The in-vitro diagnostic device captures and tracks the state and changes of the target object through the image acquisition device to obtain its motion state.
In one example, when the target object is a hand, the motion state of the hand may include waving, making a fist, moving the hand to the left, moving the hand up, and so on. In one example, when the target object is the head, the motion state of the head may include nodding, shaking, and so on. In one example, when the target object is an eyeball, the motion state of the eyeball may include gazing, eyeball rotation, blinking (which makes the eyeball appear intermittently), moving the gaze to the left or downward, and so on.
In the embodiment of the present invention, operations on the display interface may be controlled according to the motion state of the target object. For example, if the eyeball gazes at a control (virtual button) on the display interface for more than 2 seconds, that control is selected and the corresponding operation is executed; if the control is a "start test" button, gazing at it for more than 2 seconds is equivalent to clicking it, and the controller executes the start-test instruction. Alternatively, if the eyeball gazes at an object on the display interface for more than 0.5 seconds, the gazed object is selected; when the eyeball then moves to the left, the selected object follows until the eyeball stops. For example, in the slide-reader result display interface, if the user gazes at a cell image for more than 0.5 seconds, that cell image is selected; the left side of the interface lists cell classification items (such as neutrophil, lymphocyte, and monocyte), and when the user's gaze moves to the monocyte item on the left, the selected cell image is reclassified as a monocyte.
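The dwell-selection rule described above (a control fires once the gaze has rested on it longer than a threshold) can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the function and control names are hypothetical, and the 2-second threshold is taken from the example.

```python
# Minimal sketch of dwell-time gaze selection (hypothetical names).
# A control is "clicked" once the gaze has stayed on it continuously
# for longer than a dwell threshold, and fires only once per dwell.

DWELL_THRESHOLD_S = 2.0  # the 2-second threshold from the example above

def update_dwell(state, control_id, timestamp):
    """Feed one gaze sample; return a control id to activate, or None.

    `state` is a dict carrying the currently gazed control, the time the
    gaze first landed on it, and whether the activation already fired.
    """
    if state.get("control") != control_id:
        # Gaze moved to a different control: restart the dwell timer.
        state["control"] = control_id
        state["since"] = timestamp
        state["fired"] = False
        return None
    if not state["fired"] and timestamp - state["since"] >= DWELL_THRESHOLD_S:
        state["fired"] = True   # fire once per dwell
        return control_id       # e.g. the "start test" button
    return None

# Simulated gaze samples: (control under gaze, time in seconds).
samples = [("start_test", 0.0), ("start_test", 1.0),
           ("start_test", 2.1), ("start_test", 2.5)]
state = {}
activations = []
for control, t in samples:
    fired = update_dwell(state, control, t)
    if fired:
        activations.append(fired)
# The button fires exactly once, at the first sample past 2 seconds.
```

The one-shot `fired` flag mirrors the idea that a sustained gaze is one click, not a stream of clicks.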
The image acquisition device in the embodiment of the invention may track only a target object of an operating user located in the operation area of the in-vitro diagnostic device and ignore target objects of users outside the operation area. Here, the operation area is the part of the image acquisition device's tracking area whose distance from the in-vitro diagnostic device is smaller than a distance threshold.
For example, in the scenario shown in fig. 6, region 601 is the operation area of the in-vitro diagnostic device 602. The image acquisition device in the in-vitro diagnostic device tracks only the target object of operating user 603, who is inside region 601; it does not track the target objects of users 604 and 605, who are outside region 601. User 604 is within the tracking area of the image acquisition device but at a distance D' from the device 602 greater than the distance threshold D, while user 605 is outside the tracking area entirely. The region between line 606 and line 607 in front of the device 602 is the tracking area of the image acquisition device.
In the embodiment of the present invention, the image acquisition device may judge whether the operating user is in the operation area from the size of the area the user occupies in a captured image, or through a depth information sensor; the embodiment does not limit the judging method.
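The operation-area filtering just described can be sketched as a simple predicate: a user is tracked only when inside the camera's tracking region and closer than the distance threshold D. All values and names below are illustrative assumptions; the patent leaves the measurement method open (depth sensor or apparent image area).

```python
# Sketch of the operation-area check from Fig. 6 (illustrative only).
# A user is the operating user only when inside the tracking region
# AND closer to the device than the distance threshold D.

DISTANCE_THRESHOLD_M = 1.5   # hypothetical value of the threshold D

def in_operation_area(distance_m, in_tracking_region=True,
                      threshold_m=DISTANCE_THRESHOLD_M):
    """True when this user should be tracked as the operating user."""
    return in_tracking_region and distance_m < threshold_m

# The scene of fig. 6: user 603 inside the area, user 604 too far away,
# user 605 outside the tracking region (distances are made up).
users = {603: (1.0, True), 604: (2.4, True), 605: (1.2, False)}
tracked = [uid for uid, (d, inside) in users.items()
           if in_operation_area(d, inside)]
# Only user 603 is tracked.
```

With a plain camera, `distance_m` could instead be estimated from the user's bounding-box size in the image, matching the patent's alternative judging method.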
In the embodiment of the invention, the in-vitro diagnostic device may first judge whether an operating user is present, and trigger the image acquisition device to track the user's target object only when one is. Judging whether an operating user is present includes at least one of the following: detecting whether a user approaches through a proximity sensor, determining whether a user appears in a captured image through an image sensor, and detecting whether the captured sound includes a specified voice instruction through a sound sensor. The embodiment of the present invention does not limit the manner of making this judgment.
Take the sound-sensor case as an example: the specified voice instruction is voice information preset by the user. When the in-vitro diagnostic device receives this voice instruction, it indicates that someone currently needs to operate the device, and the device triggers the image acquisition device to track the target object of the current operating user.
In one example, the specified voice instruction is "start operation", and when the in-vitro diagnostic apparatus receives the voice instruction of "start operation", the image acquisition device is triggered to start tracking the target object of the currently operating user.
The image acquisition device in the present invention tracks the motion state of the target object continuously, obtaining multiple motion states at different times. For example, the tracked motion states may include: motion state 1 at time t1, motion state 2 between times t2 and t3, and motion state 3 between times t4 and t5. Note that t1 and t2, and likewise t3 and t4, may be adjacent or separated in time.
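The continuous tracking described here effectively turns a time-stamped stream of raw state readings into a sequence of state intervals. A minimal sketch of that grouping, with hypothetical data (the patent only states that tracking is continuous):

```python
# Sketch: collapse a time-stamped stream of raw motion-state readings
# into (state, t_start, t_end) intervals, as in the t1..t5 example.

def segment_states(stream):
    """`stream` is a list of (time, state). Consecutive readings of the
    same state are merged into one interval."""
    intervals = []
    for t, s in stream:
        if intervals and intervals[-1][0] == s:
            # Same state as the previous reading: extend its interval.
            intervals[-1] = (s, intervals[-1][1], t)
        else:
            intervals.append((s, t, t))
    return intervals

# Mirrors the example: state 1 at t1, state 2 over t2..t3, state 3 over t4..t5.
stream = [(1.0, "state1"), (2.0, "state2"), (3.0, "state2"),
          (4.0, "state3"), (5.0, "state3")]
segments = segment_states(stream)
```

A momentary state (a single reading) simply yields an interval whose start and end coincide, like motion state 1 at t1.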
S502, determining target content in display content on a display interface according to the motion state of the target object.
The detection result is displayed on the display interface. A display interface is shown on the display device of the in-vitro diagnostic device, presenting the result of the detection device's analysis of the sample. The in-vitro diagnostic device hosting the display device can collect and display the detection results of several detection devices, and the detection results form part of the display content.
The display content on the display interface includes text, images, controls, and the like. When the display content is editable text or an editable image, it is movable content.
The control device of the in-vitro diagnostic apparatus determines the motion state of the target object and determines the target content in the display content according to the motion state of the target object.
Here, the in-vitro diagnostic device may store an association relationship between motion states of the target object and operation instructions. The control device determines the operation instruction corresponding to the current motion state based on this relationship, and determines the target content from the operation instruction and the current cursor position. The cursor may indicate the content to be operated on, or already operated on.
In the embodiment of the present invention, the operation instructions in the association relationship may include selection, cursor movement, and the like. The motion states of the target object in the association relationship include nodding, shaking the head, waving, making a fist, moving the hand to the left, blinking, and so on. One or more motion states correspond to each operation instruction. The embodiment of the present invention does not limit the association relationship between motion states and operation instructions.
In practical application, a user can set the association relationship between the motion state of the target object and the operation instruction based on the operation instruction setting interface.
Taking the selection instruction as an example, the control device of the in-vitro diagnostic device determines the display content at the current cursor position as the target content. When the operation instruction is cursor movement, the control device determines the cursor position after the movement from the current cursor position and the movement, and takes the display content at the new cursor position as the target content.
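One straightforward way to hold such an association relationship is a lookup table from motion states to operation instructions, with target content resolved from the instruction plus the cursor position. The table entries and function names below are hypothetical illustrations, not the patent's specification:

```python
# Sketch of the association relationship as a plain lookup table
# (all state and instruction names are hypothetical examples).

ASSOCIATION = {
    "fist": "select",            # hand makes a fist -> selection
    "head_right": "cursor_move", # head tilts right  -> cursor movement
    "eye_sweep_right": "cursor_move",
    "nod": "confirm",
}

def instruction_for(motion_state):
    """Return the operation instruction for a motion state, or None when
    the state has no associated instruction (it is simply ignored)."""
    return ASSOCIATION.get(motion_state)

def target_for(instruction, cursor_pos, content_at):
    """Resolve target content for a selection instruction.

    `content_at` maps a cursor position to the display content there."""
    if instruction == "select":
        return content_at(cursor_pos)
    return None

# Example display content addressed by cursor position.
grid = {(0, 0): "image_1", (1, 0): "image_2"}
```

A user-editable settings interface (as the next paragraph describes) would then amount to editing this table.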
When the operation instruction corresponding to the motion state is cursor movement, the direction and magnitude of the displacement can be determined from the current motion state. In one example, when the target object is a hand and the motion state is making a fist, the corresponding operation instruction is selection. In another example, when the target object is the head and the head tilts to the right, the corresponding operation instruction is cursor movement.
In one example, when the operation instruction is cursor movement, the movement direction of the cursor movement may be determined according to the movement direction of the motion state. Such as: when the target object is the head and the head deviates to the right, the corresponding operation instruction is that the cursor moves to the right. For another example: when the target object is an eyeball and the eyeball sweeps to the right, the corresponding operation instruction is that the cursor moves to the right.
In one example, when the operation instruction is cursor movement, the displacement of the cursor movement is determined according to the size of the motion trajectory of the target object. Such as: when the size of the motion trail of the target object is L1 and the motion trail L1 can be mapped to a displacement L2, the displacement of the cursor movement is L2.
In one example, when the operation instruction is cursor movement, the displacement of the cursor movement is a fixed displacement. Such as: the target object is the head, and if the head is turned right once, the displacement of the cursor is Δ L, and if the head is turned right twice, the displacement of the cursor is 2 Δ L.
In an example, when the operation instruction is cursor movement, the cursor moves to a position where a next display object is located. Such as: the display content comprises: images 1 to 5, and image 1 at the position of the current cursor, wherein images 1 to 5 are arranged from left to right in sequence; when the target object is the head and the head is turned right once, the cursor moves from the position of the image 1 to the position of the image 2, and when the head is turned right once again, the cursor moves from the position of the image 2 to the position of the image 3.
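The three cursor-displacement variants above (displacement scaled from the motion trajectory, a fixed step per gesture, and snapping to the next display object) can be sketched side by side. The scale factor, step size, and names are illustrative assumptions, not values from the patent:

```python
# Sketch of the cursor-movement variants described above (hypothetical
# constants and names, for illustration only).

SCALE = 2.0        # maps a motion trajectory of size L1 to displacement L2
FIXED_STEP = 50    # fixed displacement per gesture, e.g. in pixels

def scaled_displacement(trajectory_len):
    """Variant 1: displacement proportional to the motion trajectory,
    i.e. L1 is mapped to L2 = SCALE * L1."""
    return SCALE * trajectory_len

def fixed_displacement(gesture_count):
    """Variant 2: fixed displacement per gesture; turning the head right
    twice moves the cursor by two steps."""
    return FIXED_STEP * gesture_count

def next_object(objects, current):
    """Variant 3: snap the cursor to the next display object in order,
    staying on the last one at the end of the row."""
    i = objects.index(current)
    return objects[min(i + 1, len(objects) - 1)]

# Images 1..5 arranged left to right, as in the example.
images = ["image_1", "image_2", "image_3", "image_4", "image_5"]
```

Each variant trades precision for effort differently: the scaled mapping gives fine control, the fixed step is robust to noisy tracking, and object snapping needs no displacement estimate at all.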
In an embodiment, the image acquisition device tracks multiple motion states of the target object, and one or more operation instructions are then determined from those states. In one example, the tracked motion states include motion state 1, motion state 2, and motion state 3; motion state 1 corresponds to operation instruction A, motion state 2 corresponds to no operation instruction, and motion state 3 corresponds to operation instruction B. The control device determines target content A based on operation instruction A and target content B based on operation instruction B.
In an embodiment, the target object comprises an eyeball; the implementation of S502 includes: when the motion state comprises a first motion state meeting a first motion condition, determining a target gaze position of the eyeball on the display interface; and determining the display content corresponding to the target gaze position in the display content on the display interface as the target content.
Here, when the motion state of the eyeball tracked by the image acquisition device includes the first motion state, the corresponding operation instruction is selection: the current gaze position of the eyeball on the display interface is taken as the target gaze position, and the display content at that position is selected as the target content. For example, in the display interface shown in fig. 7, the displayed interpretation result includes analytical information and white blood cell, red blood cell, and platelet counts. The content displayed under the white-blood-cell menu includes detection results for white blood cells and for non-white-blood cells. The white-blood-cell results include the count, ratio, and cell images of the following cells: segmented neutrophils, band neutrophils, lymphocytes, monocytes, eosinophils, basophils, neutrophils, promyelocytes, blasts, atypical lymphocytes, and plasma cells. The non-white-blood-cell results include the counts and ratios of nucleated red blood cells, giant platelets, platelet aggregates, smear cells, degenerated cells, megakaryocytes, and sediments. The interpretation-result interface shown in fig. 7 shows, by way of example, only the cell images of band neutrophils, lymphocytes, monocytes, eosinophils, basophils, and neutrophils.
When the target gaze position of the eyeball is position 701, the target content is the first image among the band neutrophil images corresponding to position 701; when the gaze moves left to position 702, the cell image at position 701 can be reclassified as an atypical lymphocyte. Alternatively, if the gaze position of the eyeball is position 702, the target content is the text "atypical lymphocyte" corresponding to position 702; and if the gaze position of the eyeball is position 703, the target content is the "edit" control corresponding to position 703.
When the current gaze position of the eyeball on the display interface is determined as the target gaze position, the cursor can be moved to the position of the target gaze position, and the display content of the position of the cursor is determined as the target content, so that the position of the cursor and the position of the target content are kept synchronous.
In practical applications, when the current gaze position of the eyeball on the display interface is determined as the target gaze position and the display content at that position is determined as the target content, the cursor position may instead be left unchanged, so that determining the target content does not affect the position of the cursor.
In the embodiment of the present invention, the motion state satisfying the first condition is referred to as a first motion state. The first condition may include at least one of the following conditions:
condition 1: the gaze position of the eyeball is fixed and the fixed time is greater than a time threshold;
condition 2: the motion track of the eyeball is a first designated track.
When the first condition is condition 1 and the motion state comprises a fixed state, representing that the gaze position of the eyeball is fixed, whose duration is greater than the time threshold, the in-vitro diagnostic device determines that the motion state includes a first motion state satisfying the first motion condition. In other words, when the user gazes continuously at a fixed position, the display content at that gaze position is determined as the target content. The time threshold may be set by the user as required, for example to 2 seconds.
When the first condition is condition 2 and the motion state of the eyeball comprises a movement state representing that the motion trajectory of the eyeball is a first designated trajectory, the in-vitro diagnostic device determines that the motion state includes a first motion state satisfying the first motion condition. Here, the first designated trajectory may be a trajectory set in advance, such as the trajectory in which the eyeball intermittently disappears and reappears as the user blinks, a "C-shaped" trajectory, or the like.
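The two conditions above can be sketched as a single predicate. The 2-second dwell threshold matches the example given for condition 1; the trajectory labels are illustrative assumptions:

```python
# Hedged sketch of the two first-motion-condition checks described above.
# The threshold value and trajectory labels are assumptions for illustration.

DWELL_THRESHOLD_S = 2.0  # "fixed time greater than a time threshold"
DESIGNATED_TRAJECTORIES = {"blink_blink", "c_shape"}  # "first designated trajectory"

def satisfies_first_condition(fixed_duration_s=0.0, trajectory=None):
    # Condition 1: gaze held at a fixed position longer than the threshold.
    if fixed_duration_s > DWELL_THRESHOLD_S:
        return True
    # Condition 2: the eyeball's motion trace matches a designated trajectory.
    return trajectory in DESIGNATED_TRAJECTORIES
```

Either condition alone is sufficient, matching "at least one of the following conditions" in the text.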
In the embodiment of the invention, the non-contact interaction process between the user and the in-vitro diagnostic equipment can be assisted by the visual technology, the control on the display content in the display interface is realized by tracking the vision, and the convenient and quick control on the display content is realized.
In one embodiment, the method for contactless control of an extracorporeal diagnostic apparatus further comprises: displaying a reference gaze location based on a correction interface, and outputting prompt information prompting the operating user to gaze at the reference gaze location; determining a position difference between the gaze position of the operating user in the correction interface and the reference gaze position, the position difference being used to correct the target gaze position.
Here, the in-vitro diagnosis apparatus provides a correction page and displays the reference gaze position on the correction page, and outputs prompt information prompting the user to gaze at the reference gaze position when the correction page is displayed. Here, the manner of outputting the prompt information may be a page prompt, a voice prompt, or the like. When the prompting mode is page prompting, a prompting area can be divided in the correction page, and the prompting information is output in the prompting area, or a prompting page is provided outside the correction page, and the prompting information is output in the prompting page. When the prompt information is output in the correction page, the correction area can be divided in the correction page, and the reference gaze position is shown in the correction area.
When presenting a reference gaze position, the correction page may mark it for the user with a gaze identifier. The gaze identifier may be a cross, a point, or the like; the embodiment of the present invention places no limit on the form of the gaze identifier.
Multiple reference gaze locations may be shown in the correction page. Ways to present multiple reference gaze locations include:
displaying a plurality of reference gaze positions simultaneously in a first display mode;
and a second display mode is that the plurality of reference gaze positions are displayed in sequence.
In the first display mode, the correction page displays all the reference gaze positions at one time and prompts the user of the reference gaze position needing to be gazed at present. The prompting mode for prompting the current reference gaze position may be: a gaze identification flashing, a gaze identification having a different color than other reference gaze locations, etc., in a manner that can distinguish the current reference gaze location from the other reference gaze locations.
In an example, as shown in fig. 8, a correction page 801 includes a correction area 802 and a prompt area 803. The correction area 802 displays reference gaze positions 8021, 8022, 8023, and 8024, each marked with the gaze identifier "x". The identifier of reference gaze position 8023 is darker in color than those of the other reference gaze positions, indicating that 8023 is the reference gaze position currently to be gazed at. Prompt information is output in the prompt area 803; its content may be "please gaze at the prompted position".
In the second display mode, the correction page displays one reference gaze position at a time, so that each reference gaze position is displayed in a time division manner, and the reference gaze position displayed in the correction page is the reference gaze position at which the user needs to gaze currently. At this time, the in-vitro diagnostic apparatus replaces the reference gaze position in the correction page with another reference gaze position after exhibiting one reference gaze position and detecting the gaze of the operating user to the reference gaze position.
The in-vitro diagnostic device obtains, from the difference between the operating user's gaze position for each reference gaze position and the corresponding reference gaze position, a position difference that characterizes the deviation between the detected gaze position and the position at which the user is actually gazing. The user's target gaze position can then be corrected using this position difference, so that factors such as strabismus, which cause the detected gaze position to differ from the actual gaze position, do not affect the target gaze position.
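The correction described above can be sketched as averaging the offset between detected and reference gaze positions, then subtracting that offset from later target gaze positions — a minimal illustration with invented coordinates, not the patent's actual correction algorithm:

```python
# Minimal sketch of gaze calibration: average the (dx, dy) offset between
# each detected gaze point and its reference point, then subtract that
# offset from subsequent target gaze positions. Coordinates are illustrative.

def calibration_offset(detected, reference):
    """Mean (dx, dy) between detected gaze points and reference points."""
    dx = sum(d[0] - r[0] for d, r in zip(detected, reference)) / len(detected)
    dy = sum(d[1] - r[1] for d, r in zip(detected, reference)) / len(detected)
    return dx, dy

def correct(gaze, offset):
    """Apply the calibration offset to a raw target gaze position."""
    return gaze[0] - offset[0], gaze[1] - offset[1]
```

For instance, if the user's detected gaze consistently lands 2 px right and 2 px above each reference point, the offset (2, -2) is removed from every later target gaze position.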
And S503, executing control operation based on the target content.
After the control device of the extracorporeal diagnostic apparatus determines the target content based on S502, the control device performs a control operation on the target content. In the embodiment of the present invention, the execution mode for executing the control operation based on the target content includes:
In a first execution mode, when the target content is a control, a first control instruction associated with the control and the content to be processed are determined, and the control operation corresponding to the first control instruction is executed on the content to be processed.
In a second execution mode, when the motion state comprises a second motion state satisfying a second condition, the second condition being associated with a second designated trajectory, a second control instruction corresponding to the second designated trajectory is determined, and the control operation corresponding to the second control instruction is executed on the target content.
In a third execution mode, voice information input by the target operation subject is received, a third control instruction corresponding to the voice information is determined, and the control operation corresponding to the third control instruction is executed on the target content.
In a fourth execution mode, when the target content comprises movable content, the position of the movable content in the display interface is controlled according to the motion trajectory of the target object.
In the first execution mode, the target content is a control and the control instruction corresponding to the control is the first control instruction; the control function corresponding to the control is then executed. For example, when the target content is an audit control, the audit content corresponding to the control is audited. For another example, when the target content is a report-issuing control, the generated report is issued. For another example, when the target content is a recheck control, the sample to which the detection result corresponding to the control belongs is rechecked.
In the second execution mode, a control operation corresponding to the motion state of the tracked target object is executed on the target content. Here, the second control instruction may include a delete instruction, an exit instruction, a move instruction, a mark instruction, and the like, and the corresponding control operations may include deleting, exiting, moving, marking, and the like. Different second control instructions may correspond to different designated trajectories, and the designated trajectory used to determine the control operation is referred to as the second designated trajectory. For example, if the motion trajectory of a gesture is a "C-shaped" trajectory and the corresponding control instruction is deletion, the current target content is deleted. For another example, if the movement trajectory of the eyeball is a circle within a designated range, the corresponding control instruction is marking, and the target content is data, the data can be marked.
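The trajectory-to-instruction dispatch in the second execution mode can be sketched as a lookup table; the trajectory labels and instruction names here are assumptions for illustration only:

```python
# Illustrative dispatch from a recognized second designated trajectory to a
# control instruction. Labels are invented, not defined by the patent.

TRAJECTORY_TO_INSTRUCTION = {
    "c_shape": "delete",          # "C-shaped" trajectory -> delete
    "small_circle": "mark",       # circle within a designated range -> mark
    "sweep_left_right": "select",
}

def control_instruction(trajectory):
    """Return the instruction for a trajectory, or None if unrecognized."""
    return TRAJECTORY_TO_INSTRUCTION.get(trajectory)
```

An unrecognized trajectory yields no instruction, so no control operation is executed on the target content.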
In the third execution mode, the control instruction determined based on the received voice information is referred to as the third control instruction. For example, the user says "audit" and the target content is audited. For another example, the user says "recheck" and the sample to which the target content belongs is rechecked.
In the fourth execution mode, after the target content is determined and found to be movable content, the target content is dragged based on the motion trajectory of the target object. For example, when the target object is an eyeball and the target content is an image, the image is moved based on the gaze position of the eyeball until the gaze comes to rest at the target position, so that the image is moved from its starting position to the target position.
In an example, as shown in fig. 9, when the user finds during review that an image 901 belonging to lymphocytes has been classified as an eosinophil, with the eyeball as the target object and the image 901 as the target content, the movement trajectory of the eyeball's gaze position is shown by the dotted line from position 902 to position 903. Then, as shown in fig. 10, the image 901 moves from position 902 to position 903 along with the movement trajectory of the gaze position, and as a result of the movement, as shown in fig. 11, the classification to which the image 901 belongs is corrected from eosinophil to lymphocyte.
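The fourth execution mode can be sketched as the movable item following each successive gaze point until the gaze comes to rest — a minimal illustration with invented coordinates:

```python
# Sketch of the fourth execution mode: a movable item tracks the gaze
# trajectory and ends up at the final gaze point. Coordinates are invented.

def move_with_gaze(start, gaze_trajectory):
    """Return the item's final position: the last gaze point, or the
    starting position if the trajectory is empty."""
    position = start
    for point in gaze_trajectory:  # the item follows each successive gaze point
        position = point
    return position
```

In the fig. 9 scenario, the image starts at the coordinates of position 902 and, after the gaze trace ends, sits at the coordinates of position 903.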
In the embodiment of the present invention, in addition to the above four execution modes, other control operations such as page switching and information input may be performed on the target object, and the operation type of the control operation performed on the target content in the embodiment of the present invention is not limited at all.
The non-contact control method for an in-vitro diagnostic device provided by the embodiment of the invention tracks a target object of an operating user to obtain the motion state of the target object; determines target content among the display content on a display interface according to that motion state, the display interface displaying a detection result; and executes a control operation based on the target content. The display content on the display interface is thus controlled by tracking the operating user's target object, and the user's hands need not touch input devices of the in-vitro diagnostic device such as a touch screen, mouse, or keyboard during the control process. Non-contact control is thereby realized, the convenience of the interaction between the user and the interactive interface is improved, and the possibility of cross-infection and biological contamination is avoided.
In an embodiment, as shown in fig. 12, before S501, the method further includes:
and S1201, acquiring an image of the target object.
The in-vitro diagnostic apparatus may acquire an image of the target object based on the image acquisition device before the image acquisition device tracks the target object.
And S1202, matching the acquired image with a pre-stored image of the target object of the operation user.
The in-vitro diagnostic device includes pre-stored images, which are images of target objects of authenticated operating users. For the same operating user, the device may store pre-stored images of a plurality of the user's target objects, or may store a pre-stored image of only one target object. In one example, an in-vitro diagnostic device includes pre-stored images of user A's hand, face, and eyeball, and a pre-stored image of user B's hand.
After the image of the target object is acquired based on S1201, the acquired image is matched with the pre-stored image. Here, the acquired image is matched with a target image in the pre-stored image, and when the matching degree is greater than the threshold value of the matching degree, it may be determined that the acquired image is matched with the pre-stored image, otherwise, the acquired image is not matched with the pre-stored image.
And S1203, when the acquired image is matched with the pre-stored image, determining that the identity authentication of the operation user is successful.
Here, when the acquired image is matched with the pre-stored image, the current operating user is represented as an authenticated user, and the identity authentication of the current operating user is determined to be successful. And when the acquired image is not matched with the pre-stored image, representing that the current operation user is not an authenticated user, and determining that the identity authentication of the current operation user fails.
In the case where the identity authentication of the current operating user succeeds, the in-vitro diagnostic device executes S501; in the case where the authentication fails, the device may refrain from executing S501 and output prompt information indicating that the current operating user does not have permission to interact with the display interface.
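Steps S1201 to S1203 can be sketched as comparing the acquired image against each pre-stored image and succeeding when any similarity score exceeds the matching-degree threshold; the scoring function and threshold value here are assumptions, not the patent's actual matcher:

```python
# Hedged sketch of S1202-S1203: authentication succeeds when the acquired
# image's match score against any pre-stored image exceeds the threshold.
# The threshold value and the score callable are illustrative assumptions.

MATCH_THRESHOLD = 0.8  # "matching degree greater than the threshold"

def authenticate(acquired, prestored_images, score):
    """score(a, b) -> similarity in [0, 1]; any score above the
    threshold authenticates the operating user."""
    return any(score(acquired, img) > MATCH_THRESHOLD for img in prestored_images)
```

In practice the score callable would be a real image-matching routine; here a trivial equality-based stand-in suffices to show the control flow.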
In practical applications, the execution of S1201 to S1203 may also be after S501, and the embodiment of the present invention does not limit the sequence of the identity authentication and the contactless interaction.
The non-contact control method of the in-vitro diagnostic equipment provided by the embodiment of the invention can authenticate the identity of the operation user based on the target object of the operation user before or during the non-contact interaction of the operation user, and only allow the current operation user to operate the content of the display interface under the condition of successful authentication.
The non-contact control method of the in-vitro diagnostic device provided by the embodiment of the invention can be applied in the following scenario: the in-vitro diagnostic device receives a sample and analyzes and detects it to obtain a detection result. The device collects the detection results and displays them on the display interface so that the user can review them; the detection results may include numerical results, alarm information, graphical results, and the like. While reviewing, the operating user can audit the detection results to determine whether any are abnormal. When an abnormal result exists, the user can confirm it by consulting associated information such as the diagnosis information, medication information, and historical results of the patient to whom the result belongs, can mark the abnormality by image annotation or the like, and, if deemed necessary, can initiate a recheck of the detection result and issue an examination report.
In the above scenario, the operating user can control the content of the display interface in a non-contact manner, for example by initiating a query for the associated information of the patient to whom a detection result belongs, marking an abnormality, initiating a recheck, and the like. Direct contact between the user and the in-vitro diagnostic device through input devices such as a mouse or keyboard is thereby avoided, which makes operation more convenient while also preventing cross-infection.
In the following, a non-contact control method of an in-vitro diagnostic apparatus provided by an embodiment of the present invention is described with an application scenario in which a target object is taken as an eyeball and software interaction is assisted by a visual assistance technology.
The in-vitro diagnostic device uses a camera to calibrate the examining physician's eye orientation against the system's mouse position, obtaining the deviation between where the physician's line of sight falls on the display screen and the actual gaze position. The physician's gaze position can be calibrated using this deviation, so that the system's mouse position can be computed and updated in real time by capturing video of the physician's head and eyes in real time.
When the examining physician's gaze stays on an object for a long time, that object is taken as clicked. For example, when the in-vitro diagnostic device detects that the physician's line of sight has stayed on an audit button for longer than a time threshold of 0.5 seconds, the audit button is taken as clicked and the sample result is audited. The duration of the time threshold can be set.
The in-vitro diagnostic device may define gaze patterns such as sweeping left and right or circling within a small range as operations such as a right mouse click or dragging. For example, when the device tracks the examining physician's gaze sweeping left and right, the slide-reading image result at the current cursor position is selected; when the gaze is tracked circling within a small range, the slide-reading image result is reclassified by dragging or the like, the image being dragged from classification A to classification B, thereby changing the classification to which the image result belongs.
In the embodiment of the present invention, in addition to gazing at an object for a certain period of time, the operating user may define specific gaze movements, such as slight left-right or up-down gaze movements, a quick movement in a specific direction, or blinking, as the manner of selecting an object.
In the embodiment of the invention, the software interactive operation can be carried out by means of auxiliary visual technology in a manner of finger action, sound and the like. For example, after the operator visually selects a certain sample result, the operator performs an audit operation on the sample by making an "audit" sound.
An embodiment of the present invention provides an in-vitro diagnostic apparatus, fig. 1 is a schematic diagram of a composition structure of the in-vitro diagnostic apparatus according to the embodiment of the present invention, and as shown in fig. 1, the apparatus includes:
the image acquisition device 101 is used for dynamically tracking a target object of an operation user to obtain a motion state of the target object;
the display device 102 is used for displaying a display interface related to sample detection, wherein the interface related to sample detection comprises a sample detection result display interface, a sample detection management interface, a quality control/calibration interface and the like;
the control device 103 is used for determining target content in the display content on the display interface according to the motion state of the target object;
and the control device 103 is also used for executing control operation based on the target content.
In an embodiment, the image acquiring device 101 is further configured to acquire an image of the target object;
the control device 103 is further configured to: and matching the acquired image with a prestored image of the target object of the operating user, and determining that the identity authentication of the operating user is successful when the acquired image is matched with the prestored image.
In an embodiment, the control device 103 is further configured to:
when the motion state comprises a first motion state meeting a first motion condition, determining a target gaze position of the eyeball on the display interface;
and determining the display content corresponding to the target gaze position in the display content on the display interface as the target content.
In an embodiment, the control device 103 is further configured to: and when the motion state comprises a fixed state which represents that the gaze position of the eyeball is fixed and the duration of the fixed state is greater than a time threshold, determining that the motion state comprises a first motion state which meets a first motion condition.
In an embodiment, the control device 103 is further configured to:
and when the motion state of the eyeball comprises a moving state which represents that the motion track of the eyeball is a first appointed track, determining that the motion state comprises a first motion state which meets a first motion condition.
In one embodiment, the display device 102 is further configured to: displaying a reference gaze location based on a correction interface, and outputting prompt information prompting the operating user to gaze at the reference gaze location;
the control device 103 is further used for determining a position difference between the gaze position of the operating user in the correction interface and the reference gaze position, wherein the position difference is used for correcting the target gaze position.
In an embodiment, the control device 103 is further configured to:
when the target content is a control, determining a first control instruction and content to be processed which are associated with the control;
and executing the control operation corresponding to the first control instruction on the content to be processed.
In an embodiment, the control device 103 is further configured to:
when the motion state comprises a second motion state meeting a second condition, the second condition is associated with a second designated trajectory;
determining a second control instruction corresponding to the second designated track;
and executing the control operation corresponding to the second control instruction on the target content.
In an embodiment, the apparatus further comprises:
the input device is used for receiving voice information input by the target operation body;
the control device 103 is further configured to determine a third control instruction corresponding to the voice information, and execute the control operation corresponding to the third control instruction on the target content.
In an embodiment, the control device 103 is further configured to:
when the target content comprises movable content, controlling the position of the movable content in the display interface according to the motion trail of the target object.
Accordingly, an embodiment of the present invention further provides a storage medium, namely a computer-readable storage medium, where a non-contact control program of an extracorporeal diagnosis apparatus is stored, and the non-contact control program of the extracorporeal diagnosis apparatus, when executed by a processor, implements the steps of the non-contact control method of the extracorporeal diagnosis apparatus described above.
The above descriptions of the in-vitro diagnostic device and medium embodiments are similar to the description of the method embodiments above and have similar advantageous effects. For technical details not disclosed in the embodiments of the in-vitro diagnostic device and the storage medium of the invention, reference is made to the description of the method embodiments of the invention.
In the embodiment of the present invention, if the non-contact control method of the in-vitro diagnostic apparatus is implemented in the form of a software functional module and is sold or used as a standalone product, the method may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any person skilled in the art can readily conceive of changes or substitutions within the technical scope of the present invention, and all such changes or substitutions fall within that scope. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (21)

1. A method of contactless control of an in vitro diagnostic device, characterized in that the method comprises:
tracking a target object of an operation user to obtain a motion state of the target object;
determining target content in display content on a display interface according to the motion state of the target object;
and executing control operation based on the target content.
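Stepping outside the claim language, the three claimed steps (track the target object, resolve target content from the motion state, execute a control operation) could be sketched as a minimal control loop. All class and function names below are hypothetical illustrations, not part of the patent:

```python
class GazeTracker:
    """Hypothetical stand-in for the image-acquisition device; yields motion states."""
    def __init__(self, samples):
        self.samples = iter(samples)

    def track_target_object(self):
        # A motion state here is simply a gaze coordinate (x, y).
        return next(self.samples)


class Display:
    """Maps a gaze position to the display content shown at that position."""
    def __init__(self, regions):
        # regions: {content_name: (x0, y0, x1, y1)} bounding boxes
        self.regions = regions

    def resolve_target_content(self, gaze):
        x, y = gaze
        for name, (x0, y0, x1, y1) in self.regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return None  # gaze landed outside any content


def contactless_step(tracker, display, execute):
    """One iteration of the claimed method: track, resolve target content, act."""
    state = tracker.track_target_object()
    target = display.resolve_target_content(state)
    if target is not None:
        execute(target)
    return target
```

A gaze sample falling inside a content region resolves that region as the target content and triggers the control callback; samples outside every region do nothing.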
2. The method of claim 1, further comprising:
acquiring an image of the target object;
matching the acquired image with a prestored image of a target object of the operating user;
and when the acquired image is matched with the prestored image, determining that the identity authentication of the operation user is successful.
3. The method of claim 1, wherein the target object comprises an eyeball, and wherein determining the target content in the display content on the display interface according to the motion state of the target object comprises:
when the motion state comprises a first motion state meeting a first motion condition, determining a target gaze position of the eyeball on the display interface;
and determining the display content corresponding to the target gaze position in the display content on the display interface as the target content.
4. The method of claim 3, further comprising:
and when the motion state comprises a fixed state indicating that the gaze position of the eyeball is stationary, and the duration of the fixed state is greater than a time threshold, determining that the motion state comprises a first motion state meeting the first motion condition.
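As an illustration of the dwell-time logic in claim 4, a minimal fixation detector might compare gaze dispersion against a radius and the dwell duration against a time threshold. The radius and threshold values here are invented for the example and are not taken from the patent:

```python
def detect_fixation(gaze_samples, radius=15.0, dwell_threshold=0.8):
    """Return True if the gaze stays within `radius` pixels of its first
    sample for longer than `dwell_threshold` seconds.

    `gaze_samples` is a list of (timestamp_s, x, y) tuples.
    """
    if not gaze_samples:
        return False
    t0, x0, y0 = gaze_samples[0]
    for _, x, y in gaze_samples:
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > radius:
            return False  # gaze left the fixation window
    # Fixed state held: check that its duration exceeds the time threshold.
    return gaze_samples[-1][0] - t0 > dwell_threshold
```

A production eye tracker would use a more robust fixation filter (e.g. velocity-based classification), but the claim only requires that a stationary gaze outlast a time threshold.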
5. The method of claim 3, further comprising:
and when the motion state of the eyeball comprises a moving state indicating that the motion trajectory of the eyeball follows a first designated trajectory, determining that the motion state comprises a first motion state meeting the first motion condition.
6. The method of claim 3, further comprising:
displaying a reference gaze position on a correction interface, and outputting prompt information prompting the operating user to gaze at the reference gaze position;
determining a position difference between the gaze position of the operating user on the correction interface and the reference gaze position, the position difference being used to correct the target gaze position.
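The correction step in claim 6 amounts to storing the offset between where the user actually looked and the reference point shown on the correction interface, then applying that offset to later gaze estimates. A minimal sketch, with invented helper names and a simple 2-D coordinate convention:

```python
def calibration_offset(measured, reference):
    """Offset between the user's measured gaze position and the reference
    gaze position displayed on the correction interface."""
    return (reference[0] - measured[0], reference[1] - measured[1])


def correct_gaze(raw, offset):
    """Apply the stored offset to a raw target gaze position."""
    return (raw[0] + offset[0], raw[1] + offset[1])
```

Real calibration routines typically sample several reference points and fit an affine or polynomial mapping; a single constant offset is the simplest case consistent with the claim.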
7. The method of claim 1, wherein performing the control operation based on the target content comprises:
when the target content is a control, determining a first control instruction and content to be processed which are associated with the control;
and executing the control operation corresponding to the first control instruction on the content to be processed.
8. The method of claim 1, wherein performing control operations based on the target content comprises:
when the motion state comprises a second motion state meeting a second condition, wherein the second condition is associated with a second designated trajectory:
determining a second control instruction corresponding to the second designated track;
and executing the control operation corresponding to the second control instruction on the target content.
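One way to realize claim 8's mapping from a second designated trajectory to a second control instruction is to classify the tracked trajectory and look the label up in a table. This sketch uses net horizontal displacement as a deliberately simple classifier; the labels, instructions, and threshold are all invented for illustration:

```python
# Hypothetical mapping from a designated-trajectory label to a control instruction.
TRAJECTORY_INSTRUCTIONS = {
    "swipe_right": "next_page",
    "swipe_left": "previous_page",
}


def classify_trajectory(points, min_travel=50):
    """Label a trajectory (list of (x, y) points) by net horizontal displacement."""
    dx = points[-1][0] - points[0][0]
    if dx > min_travel:
        return "swipe_right"
    if dx < -min_travel:
        return "swipe_left"
    return None  # no designated trajectory matched


def instruction_for(points):
    """Resolve the control instruction associated with a gaze trajectory."""
    return TRAJECTORY_INSTRUCTIONS.get(classify_trajectory(points))
```
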
9. The method of claim 1, wherein performing control operations based on the target content comprises:
receiving voice information input by the operating user;
determining a third control instruction corresponding to the voice information;
and executing the control operation corresponding to the third control instruction on the target content.
10. The method of claim 1, wherein performing control operations based on the target content comprises:
when the target content comprises movable content, controlling the position of the movable content in the display interface according to the motion trail of the target object.
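Claim 10's drag behavior, moving content along the tracked motion trail, could be illustrated as follows. The class name and coordinate convention are assumptions made for the example:

```python
class MovableContent:
    """Illustrative movable item whose position follows the tracked motion trail."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def follow(self, trail):
        """Translate the content by the net displacement of the motion trail,
        given as a list of (x, y) points."""
        dx = trail[-1][0] - trail[0][0]
        dy = trail[-1][1] - trail[0][1]
        self.x += dx
        self.y += dy
        return (self.x, self.y)
```

An interactive implementation would update the position per frame rather than once per trail, but the net effect is the same translation.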
11. An in vitro diagnostic apparatus, characterized in that it comprises:
the image acquisition device is used for dynamically tracking a target object of an operation user to obtain the motion state of the target object;
the display device is used for displaying a display interface related to sample detection;
the control device is used for determining target content in display content on the display interface according to the motion state of the target object;
the control device is also used for executing control operation based on the target content.
12. The apparatus of claim 11,
the image acquisition device is further used for acquiring an image of the target object;
the control device is further configured to: and matching the acquired image with a prestored image of the target object of the operating user, and determining that the identity authentication of the operating user is successful when the acquired image is matched with the prestored image.
13. The apparatus of claim 11, wherein the target object comprises an eyeball, and the control device is further configured to:
when the motion state comprises a first motion state meeting a first motion condition, determine a target gaze position of the eyeball on the display interface;
and determining the display content corresponding to the target gaze position in the display content on the display interface as the target content.
14. The apparatus of claim 13, wherein the control device is further configured to: when the motion state comprises a fixed state indicating that the gaze position of the eyeball is stationary, and the duration of the fixed state is greater than a time threshold, determine that the motion state comprises a first motion state meeting the first motion condition.
15. The apparatus of claim 13, wherein the control device is further configured to:
and when the motion state of the eyeball comprises a moving state indicating that the motion trajectory of the eyeball follows a first designated trajectory, determine that the motion state comprises a first motion state meeting the first motion condition.
16. The apparatus of claim 13,
the display device is further configured to: display a reference gaze position on a correction interface, and output prompt information prompting the operating user to gaze at the reference gaze position;
the control device is further used for determining a position difference between the gaze position of the operating user on the correction interface and the reference gaze position, the position difference being used to correct the target gaze position.
17. The apparatus of claim 11, wherein the control device is further configured to:
when the target content is a control, determining a first control instruction and content to be processed which are associated with the control;
and executing the control operation corresponding to the first control instruction on the content to be processed.
18. The apparatus of claim 11, wherein the control device is further configured to:
when the motion state comprises a second motion state meeting a second condition, wherein the second condition is associated with a second designated trajectory:
determining a second control instruction corresponding to the second designated track;
and executing the control operation corresponding to the second control instruction on the target content.
19. The apparatus of claim 11, further comprising:
the input device is used for receiving voice information input by the operating user;
the control device is further configured to determine a third control instruction corresponding to the voice information, and execute the control operation corresponding to the third control instruction on the target content.
20. The apparatus of claim 11, wherein the control device is further configured to:
when the target content comprises movable content, controlling the position of the movable content in the display interface according to the motion trail of the target object.
21. A storage medium, characterized in that the storage medium has stored thereon a non-contact control program of an in vitro diagnostic device which, when executed by a processor, implements the non-contact control method of the in vitro diagnostic device according to any one of claims 1 to 10.
CN201911289014.9A 2019-12-13 2019-12-13 Non-contact control method and device for in-vitro diagnostic equipment and storage medium Pending CN112967796A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911289014.9A CN112967796A (en) 2019-12-13 2019-12-13 Non-contact control method and device for in-vitro diagnostic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112967796A (en) 2021-06-15

Family

ID=76270875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911289014.9A Pending CN112967796A (en) 2019-12-13 2019-12-13 Non-contact control method and device for in-vitro diagnostic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112967796A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502378A (en) * 2016-09-08 2017-03-15 深圳市元征科技股份有限公司 The control method at a kind of electronic equipment interface and electronic equipment
WO2017190360A1 (en) * 2016-05-06 2017-11-09 深圳迈瑞生物医疗电子股份有限公司 Medical detection system and control method therefor
CN108538288A (en) * 2018-02-12 2018-09-14 深圳迎凯生物科技有限公司 In-vitro diagnosis apparatus control method, device, computer equipment and storage medium
CN108959273A (en) * 2018-06-15 2018-12-07 Oppo广东移动通信有限公司 Interpretation method, electronic device and storage medium
CN109683705A (en) * 2018-11-30 2019-04-26 北京七鑫易维信息技术有限公司 The methods, devices and systems of eyeball fixes control interactive controls
CN110368097A (en) * 2019-07-18 2019-10-25 上海联影医疗科技有限公司 A kind of Medical Devices and its control method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周象贤 (Zhou Xiangxian): 《广告情感诉求探微》 [An Exploration of Emotional Appeals in Advertising], vol. 1, 厦门大学出版社 (Xiamen University Press), pages 231-233 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115530855A (en) * 2022-09-30 2022-12-30 先临三维科技股份有限公司 Control method and device of three-dimensional data acquisition equipment and three-dimensional data acquisition equipment
WO2024067027A1 (en) * 2022-09-30 2024-04-04 先临三维科技股份有限公司 Control method and apparatus for three-dimensional data acquisition device, and three-dimensional data acquisition device

Similar Documents

Publication Publication Date Title
US11650659B2 (en) User input processing with eye tracking
US11429194B2 (en) Cursor mode switching
JP5658500B2 (en) Information processing apparatus and control method thereof
GB2498299B (en) Evaluating an input relative to a display
CN105229582A (en) Based on the gestures detection of Proximity Sensor and imageing sensor
KR102269429B1 (en) How to improve the usefulness and accuracy of physiological measurements
EP2202506A2 (en) Cell image display apparatus, cell image display method, and computer program product
KR101631011B1 (en) Gesture recognition apparatus and control method of gesture recognition apparatus
KR20150111696A (en) Apparatus for blood testing and method for blood testing thereof
KR20160101605A (en) Gesture input processing method and electronic device supporting the same
CN112967796A (en) Non-contact control method and device for in-vitro diagnostic equipment and storage medium
WO2016131337A1 (en) Method and terminal for detecting vision
CN109330559A (en) Assessment method, device, computer equipment and the computer storage medium of Determination of cortisol
CN111684279B (en) Cell analysis method, cell analysis device and storage medium
KR101554966B1 (en) Method and apparatus for analyzing result of psychology test and recording medium thereof
WO2015164100A1 (en) Urine sediment analysis workstation
KR101550805B1 (en) Alternative Human-Computer interfacing method
CN114255833A (en) Sample analysis system
CN115904065A A mid-air gesture-operated three-dimensional model viewing system for an operating room
CN116413191A (en) Graphic result display method, device, equipment and storage medium for blood analysis
Elmanakhly et al. Interactive Gestures for Liver Angiography Operation
CN106527918A (en) Man-machine interactive system
CHOI iFinger-Study of Gesture Recognition Technologies & Its Applications (Volume II of II)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination