CN109300387B - Virtual microscope object interaction suite and application thereof - Google Patents

Virtual microscope object interaction suite and application thereof

Info

Publication number
CN109300387B
Authority
CN
China
Prior art keywords
image
microscope
user
slide
magnetic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811477589.9A
Other languages
Chinese (zh)
Other versions
CN109300387A (en)
Inventor
冯志全 (Feng Zhiquan)
杨文珍 (Yang Wenzhen)
杨旭波 (Yang Xubo)
彭群生 (Peng Qunsheng)
潘志庚 (Pan Zhigeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan
Priority to CN201811477589.9A
Publication of CN109300387A
Application granted
Publication of CN109300387B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B25/00 - Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 - Teaching not covered by other main groups of this subclass

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a virtual microscope object interaction suite and application thereof, and belongs to the field of experimental equipment. The virtual microscope object interaction suite includes: a microscope body model, and a rotation sensor, a pressure sensor, an electronic chip and a remote communication module arranged on the microscope body model; a remote computing display device and a local computing display device. The rotation sensor, the pressure sensor and the remote communication module are each connected to the electronic chip; the remote communication module can communicate with the remote computing display device and the local computing display device, respectively. The invention uses virtual-real fusion technology to enhance the information in the user's observation result, which helps the user freely explore the process, mechanism and principle of experimental phenomena; and through physical operation the user gains the operating experience of a real microscope, which helps experimenters master the related experimental skills.

Description

Virtual microscope object interaction suite and application thereof
Technical Field
The invention belongs to the field of experimental equipment, and particularly relates to a virtual microscope object interaction suite and application thereof.
Background
At present, most primary and secondary schools in China have no microscopes for experiments, so many biology and chemistry courses that use this equipment cannot be offered normally; second, even in schools that do have microscope equipment, key experiment samples such as cells and microorganisms are often lacking; third, the traditional experimental method cannot provide information enhancement, that is, it cannot reveal the mechanism of the observed sample and other things invisible to the naked eye, nor can it present the various possible situations.
Disclosure of Invention
The invention aims to solve the problems in the prior art, and provides a virtual microscope object interaction kit and application thereof, which not only solve the above-mentioned bottleneck and pain-point problems that have long been difficult to solve in microscope experiment teaching in primary and secondary schools, but also give the microscope experiment method typical characteristics such as intelligence and interactivity.
The invention is realized by the following technical scheme:
a method for monitoring interaction situation of a virtual microscope real object interaction suite comprises the virtual microscope real object interaction suite, wherein the virtual microscope real object interaction suite comprises the following steps: the microscope comprises a microscope body model, and a rotation sensor, a pressure sensor, an electronic chip and a remote communication module which are arranged on the microscope body model; a remote computing display device and a local computing display device;
the rotation sensor, the pressure sensor and the remote communication module are respectively connected with the electronic chip;
the remote communication module can be respectively communicated with remote computing display equipment and local computing display equipment;
respectively arranging rotation sensors at the coarse focusing screw and the fine focusing screw of the microscope body model;
pressure sensors are respectively arranged on the microscope arm and the microscope base of the microscope body model, as well as the tweezers, the rubber head dropper, the glass slide and the cover glass;
arranging the electronic chip and the remote communication module on a microscope base of the microscope body model;
arranging a display on an eyepiece of the microscope body model;
the rotation sensor includes: the device comprises a rotating shaft, an inner ring device and an outer ring device which are coaxially arranged, wherein the radius of the inner ring device is smaller than that of the outer ring device;
the inner ring device is fixedly connected with the rotating shaft and can rotate along with the rotating shaft;
the outer ring device is fixed;
photosensitive units are uniformly distributed on the circumference of the inner wall of the outer ring device;
a light channel is formed on the inner ring device;
a light source is fixed on the rotating shaft, and light rays of the light source are sensed by the photosensitive units on the circumference of the inner wall of the outer ring device after passing through the light ray channel on the inner ring device;
characterized in that the method comprises:
(1) acquiring the operation data of a user through the rotation sensor and the pressure sensor, and sending the data of the rotation sensor and the data of the pressure sensor to the electronic chip;
(2) processing the data of the rotation sensor on a remote computing display device;
(3) processing data of the pressure sensor on a local computing display device;
(4) monitoring the relative position of the glass slide and the light through hole;
the operation of the step (2) comprises the following steps:
(2.1) transmitting the data of the rotation sensor to a remote computing and displaying device through a remote communication module;
(2.2) after the remote computing and displaying device receives the data of the rotation sensor, recognizing the user's behavior:
calculating the rotating direction and angle size through the data of the rotating sensor at the continuous adjacent time:
Δ=θ2-θ1 (1)
wherein θ 1 and θ 2 respectively represent data of the rotation sensor at the previous time and the current time, and Δ represents a difference between the data of the rotation sensor at the current time and the previous time;
if delta is greater than 0, clockwise rotation is represented, and the semantic meaning of the user is amplification; if delta is less than 0, the rotation is anticlockwise, and the semantic meaning of the user is reduced; if Δ is 0, it means that the user does not perform scaling;
the scaling factor is:
α=kΔ (2)
wherein k is an empirical parameter;
(2.3) the remote computing display device zooms the current sample image according to α and the user's semantic, or queries a database to find the sample image corresponding to the zoom multiple:
for the coarse focusing screw, an image Yt is generated using the following steps, so that the field of view of the image changes and becomes clearer along a certain direction:
(A1) assume the original sample image is Y1 and the image after the field of view changes is Y2; the scaling factor of the image size in going from Y1 to Y2 is given by:
γ=wα (3)
wherein w is an empirical parameter controlling the speed of change of the field of view;
(A2) calculate Y2:
Y2=β[Y1] (4)
The above formula represents: construct a window of size S2=βS1 centered on Y1; the part of Y1 inside the window is Y2, and the part of Y1 outside the window is cropped away; S1 and S2 respectively denote the areas of image Y1 and image Y2;
if rotated clockwise, β=γ, making the field of view larger and larger; if rotated counterclockwise, β=1/γ, making the field of view smaller and smaller;
(A3) gradually render the image Xt as follows:
Xt=tY2+(1-t)Y1 (4)
where Xt denotes the result presented at the current instant t, t∈[0,1];
(A4) transmit Xt to the local computing display device through the remote communication module;
for the fine focusing screw, an image Xt is generated using the following steps, so that the field of view of the image is kept unchanged while the image becomes clearer along a certain direction:
(B1) assume the image obtained after the coarse focusing screw adjustment is X1 and the image after blurring or sharpening is X2; the scaling factor of the pixel values is:
v=sα (5)
wherein s is an empirical parameter controlling the zoom speed;
(B2) calculate X2:
X2=βX1 (6)
if rotated clockwise, β=v, making the image clearer and clearer; if rotated counterclockwise, β=1/v, making the image more and more blurred;
(B3) gradually render the image Xt as follows:
Xt=tX2+(1-t)X1 (7)
where Xt denotes the result presented at the current instant t, t∈[0,1];
(B4) transmit Xt to the local computing display device through the remote communication module.
The operation of the step (3) comprises:
(3.1) identifying stress behavior of the user on the local computing display device: calculating a pressure change value through data of the pressure sensor at the continuous adjacent time:
ω=p2-p1 (8)
wherein p1 and p2 represent the data of the pressure sensor at the previous time and the current time, respectively;
if ω >0, meaning the user's semantic is pressure increase;
if omega is less than 0, the semantic meaning of the user is pressure reduction;
if ω is 0, meaning that the user's semantic is pressure invariant;
and (3.2) judging the action of the human hand on the contacted part according to the semantic meaning of the user.
The first operation of the step (4) comprises:
4.1 installing a transparent material under the objective table of the microscope body model;
4.2 a closed space is formed between the transparent material and the bottom of the objective table, a camera is installed in the closed space, and the camera is vertically aligned to the plane of the transparent material; a light source is arranged in the sealed space;
4.3 putting the glass slide to a correct position, then shooting an image by using a camera, and segmenting the image of the glass slide;
4.4 calculating the center position P1 and the direction of the length side D1: solving an edge image ABCD of the slide image; according to the positions of the four corner points of A, B, C, D, the central position P1 and the direction D1 of the length side are obtained;
4.5 sensing the slide position placed by the user, comprising:
4.5.1 shooting an image I by a camera;
4.5.2 segmenting the slide image from the image I;
4.5.3 calculating the center position P2 of the slide image and the direction D2 of the length side of the slide;
4.5.4 calculate the position and angle deviations:
P=||P1-P2|| (9)
θ=cos⁻¹((D1·D2)/(||D1||×||D2||)) (10)
where · denotes the dot product of two vectors, ||·|| denotes the length of a vector, and cos⁻¹ denotes the inverse cosine;
4.5.5 for each (P, θ), a deflection image h(x, y) of the original sample image f(x, y) is created:
h(x,y)=fθ(x,y)+P (11)
where fθ(x, y) denotes rotating the original sample image f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the original sample image f(x, y) by θ about its center of gravity and then applying the translation transformation P;
4.6 display image h (x, y) on the display in real time.
The second operation of the step (4) comprises:
5.1, conducting wires are buried in an object stage of the microscope body model in a hidden mode, and two ends of each conducting wire are provided with pinholes;
5.2 embedding conducting wires on the glass slide, wherein two ends of each conducting wire are respectively connected with a conducting pin; when the glass slide is placed on the objective table according to a correct method, the conductive pin is just inserted into the pinhole on the objective table to form a loop;
5.3 arranging a micro power supply on the objective table, and connecting the micro power supply with a lead on the objective table; meanwhile, a current detector is arranged on each wire;
5.4 arranging corresponding lead wires, pinholes, a micro power supply and a current detector on the objective table according to various possible deflection positions P and deflection angles theta of the glass slide; setting a unique current detector for each deflection situation, wherein each current detector has a unique number, so that a one-to-one correspondence relationship between the (P, theta) and the current detectors is established;
5.5 for each (P, θ), a deflection image h (x, y) of the original sample image f (x, y) is created:
h(x,y)=fθ(x,y)+P (12)
wherein f isθ(x, y) represents rotating the original sample image f (x, y) by θ along its center of gravity; f. ofθ(x, y) + P denotes that the original sample image f (x, y) is first rotated by θ along its center of gravity and then a translation transformation P is applied to it;
5.6 when the user places the slide on the stage, if the number of the current detector which detects the presence of the signal is N, (P, theta) is retrieved from N, and then the image h (x, y) is calculated from equation (12);
5.7 display the image h(x, y) on the display in real time.
The third operation of the step (4) comprises:
6.1 arranging a magnetic unit array on the objective table, wherein each magnetic unit has a unique identifiable position and a unique number; each magnetic unit is a closed circuit controlled by a circuit switch, the closed circuit comprises a micro power supply and a current detector, and the circuit switch is controlled by a magnetic needle; one end of the magnetic needle is an N pole, and the other end of the magnetic needle is an S pole; the magnetic needle is connected with the circuit switch to control the switch to be closed, and the circuits of all the magnetic units are closed under the condition that the glass slide is not placed;
6.2 magnetizing the upper and lower bottom surfaces of the glass slide into an N pole and an S pole respectively;
6.3 setting the magnetic poles and the magnetic sizes of the magnetic units in the magnetic unit array, so that when the glass slide is placed on the objective table, the magnetic units which are positioned on the objective table right below the glass slide can be attracted to the direction of the glass slide, thereby disconnecting the circuit of the corresponding magnetic unit and taking the magnetic unit with the disconnected circuit as an induction unit;
6.4 the current detector on each magnetic unit is connected with the electronic chip through a circuit;
6.5 the electronic chip judges the position and the number of the magnetic unit of the induction unit according to the received information of the magnetic unit;
6.6 putting the glass slide at the correct position, and calculating the gravity center P1 and the length direction D1 of the graph where all the sensing units are positioned;
6.7 when the user places the slide on the stage, calculating the center of gravity P2 and the length direction D2 of the pattern where all the sensing units are located;
6.8 calculate the position and angle deviations:
P=||P1-P2|| (9)
θ=cos⁻¹((D1·D2)/(||D1||×||D2||)) (10)
where · denotes the dot product of two vectors, ||·|| denotes the length of a vector, and cos⁻¹ denotes the inverse cosine;
6.9 for each (P, θ), a deflection image h(x, y) of the original sample image f(x, y) is created:
h(x,y)=fθ(x,y)+P (11)
where fθ(x, y) denotes rotating the original sample image f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the original sample image f(x, y) by θ about its center of gravity and then applying the translation transformation P;
6.10 display the image h(x, y) on the display in real time.
The fourth operation of the step (4) comprises:
7.1 setting small holes on the object stage, and setting a photosensitive unit in each small hole;
7.2 each photosensitive unit transmits the sensing result to the electronic chip through an electronic circuit;
7.3 each photosensitive unit has a unique position mark;
7.4 putting the slide glass at the correct position, and calculating the gravity center position P1 and the length direction D1 of the area where the photosensitive unit without the sensing signal is located;
7.5 when the user places the glass slide on the object stage, the electronic chip acquires the positions of all the photosensitive units without sensing signals according to the information transmitted by all the photosensitive units;
7.6 calculating the gravity center position P2 and the length direction D2 of the area where the photosensitive unit without the sensing signal is located;
7.7 calculate the position and angle deviations:
P=||P1-P2|| (9)
θ=cos⁻¹((D1·D2)/(||D1||×||D2||)) (10)
where · denotes the dot product of two vectors, ||·|| denotes the length of a vector, and cos⁻¹ denotes the inverse cosine;
7.8 for each (P, θ), a deflection image h(x, y) of the original sample image f(x, y) is created:
h(x,y)=fθ(x,y)+P (11)
where fθ(x, y) denotes rotating the original sample image f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the original sample image f(x, y) by θ about its center of gravity and then applying the translation transformation P;
7.9 display the image h(x, y) on the display in real time.
Compared with the prior art, the invention has the following beneficial effects: on one hand, the invention uses virtual-real fusion technology to enhance the information in the user's observation result, which helps the user freely explore the process, mechanism and principle of experimental phenomena; on the other hand, through physical operation the user gains the operating experience of a real microscope, helping experimenters master the related experimental skills.
Drawings
FIG. 1 is a schematic diagram of a virtual microscope object interaction suite according to the present invention;
FIG. 2 is a schematic view of a slide position monitoring sensor;
FIG. 3-1 shows a schematic view of a buried conductor on a stage;
FIG. 3-2 is a schematic view of a slide with a wire and a micro-fine needle;
FIG. 4-1 is a schematic view of a magnetic cell;
FIG. 4-2 is a schematic view of a slide;
FIG. 4-3 is a schematic view of the stage;
FIG. 5 is a schematic diagram of a carrier stage structure based on a photosensitive unit;
FIG. 6 is a schematic view of the structure of the rotation sensor;
FIG. 7 is a schematic diagram of the FPGA chip.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
as shown in fig. 1, the present invention includes: the microscope body model 7, the slide glass, the rotation sensor 2, the pressure sensor 3, the remote communication module 4, the remote computing and displaying device 5 and the local computing and displaying device 6. The microscope body model 7 is a physical model composed of an ocular lens, an objective lens, a lens cone, a converter, a light through hole, a rubber head dropper and the like.
Respectively arranging rotation sensors on a coarse focusing screw and a fine focusing screw of the microscope body model, wherein the rotation sensors can acquire rotation direction and angle data; pressure sensors are respectively arranged on a microscope arm, a microscope base, tweezers, a rubber head dropper, a glass slide and a cover glass of the microscope body model, and the pressure sensors acquire pressure data exerted on the pressure sensors by hands; an electronic chip 1 and a remote communication module 4 are arranged on the microscope body model; a display 8 is provided on the eyepiece.
The sensing data of the rotary sensor 2, the pressure sensor 3 and the like are connected with the electronic chip 1 through an electronic circuit, the electronic chip 1 transmits sensing and processing results to the remote computing and displaying device 5 through the remote communication module 4 (the electronic chip 1 is communicated with the remote computing and displaying device 5 through the remote communication module 4, the remote computing and displaying device 5 is communicated with the local computing and displaying device 6 through the same remote communication module 4), the remote computing and displaying device 5 analyzes and calculates the received sensing data, and the processing results are transmitted to the local computing and displaying device 6 or a display through the remote communication module 4 to be displayed. The local computing and displaying device 6 displays the specimen image on the local display in real time according to the received data.
The main tasks of the local or remote computing display device are: (1) detecting whether one hand holds the microscope arm and whether the other hand supports the microscope base; (2) sensing and responding in real time to basic situations such as "rotating", "holding", "supporting" and "squeezing"; (3) monitoring the relative position of the slide and the light through hole.
The interactive situation monitoring method comprises the following steps:
(1) acquiring operation data (behavior data) of a user through sensing equipment such as a rotation sensor and a pressure sensor;
(2) processing the rotation sensor data on a remote computing display device:
(2.1) transmitting the rotation behavior data of the user to a remote computing display device through a remote communication module (Internet or Bluetooth communication device);
and (2.2) after the remote computing display device receives the data, identifying the rotating behavior semantic of the user. Calculating the rotating direction and angle size through the sensing data of the continuous adjacent time:
Δ=θ2-θ1 (1)
where θ1 and θ2 represent the sensor readings at the previous time and the current time, respectively, and Δ represents the difference between the current and previous sensing values. If Δ>0, a clockwise rotation is represented and the semantic is "zoom in"; if Δ<0, a counterclockwise rotation is represented and the semantic is "zoom out"; if Δ=0, no zooming is performed. The scaling factor is:
α=kΔ (2)
where k is an empirical parameter; different values of k are tested so that the user can evaluate the presentation effect under the resulting α, and the k corresponding to the most satisfactory α is selected.
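As a minimal sketch of steps (2.1)-(2.2) and equation (2), the following Python fragment maps two consecutive rotation-sensor readings to the rotation semantic and the scaling factor (the function name, the default k, and the angle units are illustrative assumptions, not taken from the patent):

```python
def classify_rotation(theta1, theta2, k=0.05):
    """Map two consecutive rotation-sensor angles to a user semantic.

    theta1, theta2: sensor readings at the previous and current time.
    k: empirical parameter chosen by user evaluation (0.05 is a placeholder).
    """
    delta = theta2 - theta1          # equation (1)
    alpha = k * delta                # equation (2): scaling factor
    if delta > 0:
        semantic = "zoom in"         # clockwise rotation
    elif delta < 0:
        semantic = "zoom out"        # counterclockwise rotation
    else:
        semantic = "no scaling"
    return semantic, alpha
```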
(2.3) the remote computing display device zooms the current sample image according to α and its semantic (the sample image refers to the sample image under the current microscope; the original sample image is stored in the remote computer in advance and transmitted through the communication module), or queries a database to find the sample image corresponding to the zoom multiple (the sample images are pre-stored in the remote computing display device);
(2.4) for the coarse focusing screw, an image Yt is generated using the following method, so that the field of view of the image changes and becomes clearer along a certain direction:
(2.4.1) assume the original sample image is Y1 and the image after the field of view changes is Y2; the zoom factor of the image size in going from Y1 to Y2 is obtained by the following formula:
γ=wα (3)
where w is an empirical parameter controlling the speed of the field-of-view change; the user subjectively evaluates the image presentation effect, and the w corresponding to the most satisfactory zoom speed is chosen.
(2.4.2) calculate Y2:
Y2=β[Y1] (4)
The above formula represents: construct a window of size S2=βS1 centered on Y1 (S1 and S2 respectively denote the areas of image Y1 and image Y2); the part of Y1 inside the window is Y2, and the part of Y1 outside the window is cropped away. If rotated clockwise, β=γ, making the field of view larger and larger; if rotated counterclockwise, β=1/γ, making the field of view smaller and smaller;
(2.4.3) gradually present the image Xt according to the following equation:
Xt=tY2+(1-t)Y1 (4)
where Xt represents the result presented at the current instant t, t∈[0,1].
(2.4.4) transmit Xt to the local computing display device through the remote communication module.
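A minimal sketch of steps (2.4.1)-(2.4.3), assuming the images are numpy arrays and OpenCV is available; the window is clamped to the stored image, and all names and defaults are illustrative, not from the patent:

```python
import numpy as np
import cv2

def coarse_focus_step(y1, alpha, w=1.0, clockwise=True, t=0.5):
    """Field-of-view change for the coarse focusing screw (2.4.1-2.4.3).

    y1:    original sample image Y1 (H x W x C numpy array).
    alpha: scaling factor from equation (2).
    w:     empirical parameter controlling the speed of the change.
    t:     presentation instant in [0, 1] for the gradual rendering of Xt.
    """
    gamma = w * alpha                           # equation (3)
    beta = gamma if clockwise else 1.0 / gamma  # window area ratio S2/S1
    h, wd = y1.shape[:2]
    # S2 = beta * S1 is an area ratio, so side lengths scale by sqrt(beta);
    # the sketch clamps the window to the stored image (a larger field of
    # view would need a wider source image from the database).
    s = min(1.0, float(np.sqrt(max(beta, 1e-6))))
    nh, nw = max(1, int(h * s)), max(1, int(wd * s))
    top, left = (h - nh) // 2, (wd - nw) // 2
    y2 = cv2.resize(y1[top:top + nh, left:left + nw], (wd, h))  # Y2
    return (t * y2 + (1 - t) * y1).astype(y1.dtype)             # eq. (4): Xt
```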
(2.5) for the fine focusing screw, an image Xt is generated using the following method, so that the field of view of the image is kept unchanged while the image becomes clearer along a certain direction:
(2.5.1) assume the image obtained after adjusting the coarse focusing screw is X1 and the image after blurring/sharpening is X2; the scaling factor of the pixel values is:
v=sα (5)
where s is an empirical parameter controlling the zoom speed; the user subjectively evaluates the image presentation effect, and the s corresponding to the most satisfactory speed is chosen.
(2.5.2) calculate X2:
X2=βX1 (6)
If rotated clockwise, β=v, making the image clearer and clearer; if rotated counterclockwise, β=1/v, making the image more and more blurred;
(2.5.3) gradually present the image Xt according to the following equation:
Xt=tX2+(1-t)X1 (7)
where Xt represents the result presented at the current instant t, t∈[0,1].
(2.5.4) transmit Xt to the local computing display device through the remote communication module.
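The patent does not fix how "clearer" and "more blurred" are computed; the sketch below realizes steps (2.5.1)-(2.5.3) with Gaussian blur and unsharp masking as one concrete, assumed choice (all names and defaults are illustrative):

```python
import cv2

def fine_focus_step(x1, alpha, s=1.0, clockwise=True, t=0.5):
    """Sharpen or blur without changing the field of view (2.5.1-2.5.3).

    x1: image obtained after the coarse focusing screw adjustment (X1).
    s:  empirical parameter controlling the speed of the change.
    """
    v = s * alpha                                 # equation (5)
    blurred = cv2.GaussianBlur(x1, (0, 0), sigmaX=3)
    if clockwise:
        # beta = v: make the image clearer (unsharp masking)
        x2 = cv2.addWeighted(x1, 1.0 + v, blurred, -v, 0)
    else:
        # beta = 1/v: move toward the blurred version
        m = min(1.0, 1.0 / max(v, 1e-6))
        x2 = cv2.addWeighted(x1, 1.0 - m, blurred, m, 0)
    # equation (7): Xt = t*X2 + (1-t)*X1, gradual presentation
    return cv2.addWeighted(x2, t, x1, 1 - t, 0)
```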
(3) Processing pressure sensor data on a local computing display device:
(3.1) identifying "stress" behavior semantics of the user on the local computing display device. Calculating a pressure change value by sensing data at consecutive adjacent moments:
ω=p2-p1 (8)
where p1 and p2 represent the pressure sensor data at the previous time and the current time, respectively. If ω>0, the represented behavior semantic is "pressure increase"; if ω<0, "pressure decrease"; if ω=0, "pressure invariant".
And (3.2) judging the relation between the human hand and the contacted part according to the behavior semantics. For example, if the contacted part is a rubber-tipped dropper and the action semantic is "pressure increase," it indicates that the user is squeezing the rubber-tipped dropper head and should animate the effect of dropping a drop in the center of the slide.
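A minimal sketch of steps (3.1)-(3.2); the function name and the part identifier are illustrative assumptions, and the dropper response is the example given in the text:

```python
def classify_pressure(p1, p2, part):
    """Derive the pressure semantic and an example response (3.1-3.2).

    p1, p2: pressure readings at the previous and current time.
    part:   identifier of the touched part, e.g. "dropper" (illustrative).
    """
    omega = p2 - p1                    # equation (8)
    if omega > 0:
        semantic = "pressure increase"
    elif omega < 0:
        semantic = "pressure decrease"
    else:
        semantic = "pressure invariant"
    # Example from the text: squeezing the rubber-tipped dropper head
    if part == "dropper" and semantic == "pressure increase":
        return semantic, "animate a water drop falling on the slide center"
    return semantic, None
```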
(4) Monitoring the relative position of the glass slide and the light through hole:
method 1, as shown in fig. 2:
4.1 mounting a piece of transparent material 201 (e.g., transparent glass) under the stage 206; the objective table 206 is provided with a light through hole 202;
4.2 a closed space is formed between the transparent material 201 and the bottom of the object stage 206, the camera 204 is arranged in the closed space, and the lens of the camera is vertically aligned with the plane of the transparent material; the light source 203 is disposed inside the sealed space so that the sealed space has stable light irradiation.
4.3 manually placing the slide 205 in the correct position, then taking an image with the camera and segmenting the slide image (in the taken image, the color of the slide is different from the color of the other positions, so that the slide image can be segmented);
4.4 calculating the center position P1 and the direction of the length side D1:
4.4.1, calculating an edge image ABCD of the slide image;
4.4.2 obtaining a central position P1 and a length side direction D1 according to the positions of four corner points of A, B, C, D;
4.5 perception of slide position placed by user:
4.5.1 shooting an image I by a camera;
4.5.2 segmenting the slide image from the image I;
4.5.3 calculating the center position P2 of the slide image and the direction D2 of the length side of the slide;
4.5.4 calculate the position and angle deviations:
P=||P1-P2|| (9)
θ=cos⁻¹((D1·D2)/(||D1||×||D2||)) (10)
where · denotes the dot product of two vectors, ||·|| denotes the length of a vector, and cos⁻¹ denotes the inverse cosine.
4.5.5 for each (P, θ), a deflection image h(x, y) of the original sample image f(x, y) is created:
h(x,y)=fθ(x,y)+P (11)
where fθ(x, y) denotes rotating the function f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the image and then applying the translation transformation P;
4.6 display image h (x, y) in real time on the display device.
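A minimal sketch of equations (9)-(11), assuming numpy and OpenCV; since equation (9) yields a scalar deviation, the sketch assumes the offset has been resolved into a 2-D shift for display (function names are illustrative):

```python
import numpy as np
import cv2

def pose_deviation(P1, D1, P2, D2):
    """Equations (9)-(10): position deviation P and angle deviation theta."""
    P1, D1, P2, D2 = map(np.asarray, (P1, D1, P2, D2))
    P = np.linalg.norm(P1 - P2)                       # ||P1 - P2||
    c = np.dot(D1, D2) / (np.linalg.norm(D1) * np.linalg.norm(D2))
    theta = np.degrees(np.arccos(np.clip(c, -1, 1)))  # cos^-1, in degrees
    return P, theta

def deflection_image(f, theta, shift):
    """Equation (11): h(x,y) = f_theta(x,y) + P.

    Rotate the original sample image f about its center by theta, then
    translate by shift = (dx, dy); resolving the scalar deviation P of
    equation (9) into this 2-D offset is an assumption of the sketch.
    """
    h, w = f.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), theta, 1.0)
    M[:, 2] += np.asarray(shift, dtype=float)         # append the translation
    return cv2.warpAffine(f, M, (w, h))
```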
The method 2 comprises the following steps:
another method of obtaining the relative position of the monitoring slide and the clear aperture is shown in fig. 3-1 and 3-2:
5.1 as shown in fig. 3-1, a lead 302 is buried on the stage (i.e. the lead is embedded in the stage), and pinholes 303 are arranged at two ends of the lead 302; the lead 302 is connected with the micro power supply and current detector 301;
5.2 as shown in fig. 3-2, a conducting wire 305 is buried on the glass slide, and two ends of the conducting wire 305 are respectively connected with a needle (a micro fine needle 304) capable of conducting electricity; when the glass slide is placed on the objective table according to a correct method, the micro fine needle is just inserted into a needle hole on the objective table, and the two groups of wires can respectively form two groups of loops;
5.3 arranging a (micro) power supply on the objective table and connecting the (micro) power supply with a lead; meanwhile, a current detector is arranged on the lead;
5.4 arranging corresponding lead wires, pinhole, micro power supply and current detector on the objective table according to various deflection positions P and deflection angles theta of the slide glass provided with the lead wires and the micro fine needles; setting a unique current detector (each current detector has a unique identification number No) for each deflection situation, thereby establishing a one-to-one correspondence relationship between (P, theta) and the current detector number No;
5.5 for each (P, θ), a deflection image h (x, y) of the original sample image f (x, y) is created:
h(x,y)=fθ(x,y)+P (12)
where fθ(x, y) denotes rotating the function f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the image and then applying the translation transformation P;
5.6 when the user places the slide on the stage, if the current detector detecting the current has the number N, retrieving (P, θ) from N, and calculating the image h (x, y) from equation (12);
5.7 display the image h(x, y) in real time on the display device.
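Since method 2 wires exactly one current detector per slide pose, step 5.6 reduces to a table lookup; a minimal sketch follows, with placeholder table entries rather than calibration data:

```python
# Each detector number N was wired for exactly one slide pose (P, theta).
# The entries below are placeholders, not calibration data.
DETECTOR_TO_POSE = {
    1: (0.0, 0.0),   # N=1: slide correctly placed (P=0, theta=0)
    2: (3.5, 0.0),   # N=2: shifted but not rotated
    3: (0.0, 15.0),  # N=3: rotated but not shifted
}

def pose_from_detector(n):
    """Retrieve (P, theta) from the detector number N (step 5.6)."""
    return DETECTOR_TO_POSE.get(n)
```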
The method 3 comprises the following steps:
a third method of obtaining the relative position of the monitoring slide and the clear aperture is shown in fig. 4-1, 4-2, and 4-3:
6.1 As shown in FIGS. 4-3, an array of magnetic units is disposed on the stage, each magnetic unit 409 having a unique identifiable location and number; as shown in fig. 4-1, each of the magnetic units is a closed circuit controlled by a line switch, and includes a micro power source 406 and a current detector 405, and the line switch 404 is controlled by a magnetic needle 402; one end of the magnetic needle 402 is an N pole 401, and the other end is an S pole 403; the magnetic needle 402 is connected with a circuit switch 404 to control the switch to be closed;
6.2 As shown in FIG. 4-2, the upper and lower bottom surfaces of the slide are magnetized to an N pole 407 and an S pole 408, respectively;
6.3 setting the magnetic poles and the magnetic sizes of the magnetic units in the magnetic unit array, so that when the glass slide is placed on the objective table, the magnetic units which are positioned on the objective table right below the glass slide can be attracted to the direction of the glass slide, thereby disconnecting the circuit of the corresponding magnetic unit and taking the magnetic unit with the disconnected circuit as an induction unit;
6.4 the current detector on each magnetic unit is connected with the FPGA chip through a circuit;
6.5 the FPGA chip judges the position and the number of the magnetic unit of the induction unit according to the received information of the magnetic unit;
6.6 putting the glass slide at the correct position, and calculating the gravity center P1 and the length direction D1 of the graph where all the sensing units are positioned;
6.7 when the user places the slide on the stage, calculate the center of gravity P2 and the length direction D2 of the pattern where all the sensing units are located;
6.8 calculate the position and angle deviations:
P=||P1-P2|| (9)
θ=cos⁻¹((D1·D2)/(||D1||×||D2||)) (10)
where · denotes the dot product of two vectors, ||·|| denotes the length of a vector, and cos⁻¹ denotes the inverse cosine;
6.9 for each (P, θ), establish a deflection image h(x, y) of the original sample image f(x, y):
h(x,y)=fθ(x,y)+P (11)
where fθ(x, y) denotes rotating the function f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the image and then applying the translation transformation P;
6.10 display the image h(x, y) in real time on the display device.
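A minimal sketch of steps 6.6/6.7: the patent does not fix how the "length direction" of the sensing-unit pattern is computed, so the sketch assumes a principal-axis (PCA-style) choice:

```python
import numpy as np

def centroid_and_direction(points):
    """Center of gravity and length direction of the pattern formed by
    the triggered sensing units (steps 6.6/6.7).

    points: (N, 2) array of unit coordinates on the stage grid.
    The length direction is taken as the principal axis of the point
    set; the exact method is an assumption of this sketch.
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)                  # center of gravity P
    centered = pts - center
    # principal eigenvector of the covariance matrix = dominant direction D
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    return center, direction
```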
The method 4 comprises the following steps:
a fourth method of acquiring the relative position of the monitoring slide and the clear aperture is shown in fig. 5:
7.1 setting small holes on the object stage, and setting a photosensitive unit 501 in each small hole;
7.2 each photosensitive unit transmits the sensing result to the electronic chip through an electronic circuit;
7.3 each photosensitive unit has a unique position mark;
7.4 put the slide glass at the right position, calculate the gravity center position P1 and length direction D1 of the area without sensing signal (the light sensing unit covered by the slide glass);
7.5 when the user places the slide on the stage, the electronic chip acquires the positions of all the photosensitive units without a sensing signal, according to the information transmitted by all the photosensitive units;
7.6 calculate the center-of-gravity position P2 and the length direction D2 of the area without a sensing signal;
7.7 calculate the position and angle deviations:
P=||P1-P2|| (9)
θ=cos⁻¹((D1·D2)/(||D1||×||D2||)) (10)
where · denotes the dot product of two vectors, ||·|| denotes the length of a vector, and cos⁻¹ denotes the inverse cosine.
7.8 for each (P, θ), a deflection image h(x, y) of the original sample image f(x, y) is created:
h(x,y)=fθ(x,y)+P (11)
where fθ(x, y) denotes rotating the function f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the image and then applying the translation transformation P;
7.9 display the image h(x, y) in real time on the display device.
As shown in fig. 6, the rotation sensor of the present invention comprises a light source/rotation shaft 601, a light channel 603, an inner ring device that rotates with the rotation shaft, and an outer ring device 602 that cannot rotate (it can be connected to the non-rotating part of the coarse or fine focusing screw). The photosensitive units 604 are arranged on the non-rotating outer ring device; the light channel is formed on the rotatable inner ring device. The rotatable inner ring device is the screw that the user operates. Photosensitive units are distributed on the circumference of the inner wall of the outer ring device; each is connected to an angle encoder 605 via a unit output line, and the angle encoder 605 outputs the current angle. The screw is fixed on the rotating shaft and rotates with it. The light source is fixed on the rotating shaft and sealed inside the light channel, so light can only be emitted along the radius of the screw (away from the center of the circle) and is sensed by the photosensitive units distributed on the circumference; the sensing result is transmitted to the angle encoder through the unit output lines. In fig. 6, O is the light source/rotation axis; the closed light path is OAB, so the light source O can only illuminate the photosensitive unit at AB.
When the user turns the screw about the rotation axis, the light path rotates with it and activates the corresponding photosensitive unit. Clearly, different photosensitive units correspond to different angles. Therefore, through the encoding device, the computer can sense which photosensitive unit is activated and thus obtain the current angle data. The angle encoder can be realized by an existing encoder chip or an FPGA.
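A minimal sketch of the angle decoding: assuming the photosensitive units are evenly spaced on the outer ring (the unit count is a design choice, not specified by the patent), the index of the activated unit maps linearly to an angle:

```python
def angle_from_unit(unit_index, num_units=360):
    """Angle encoding for the rotation sensor of fig. 6.

    Units are assumed evenly spaced on the outer ring, so the activated
    unit's index maps linearly to the rotation angle; num_units=360
    (one unit per degree) is an illustrative assumption.
    """
    return unit_index * 360.0 / num_units
```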
Finally, the data of the rotation sensor and the pressure sensor are sent to the electronic chip, which is arranged in the virtual microscope body model; the electronic chip transmits the sensing results through the remote communication module to the local/remote computing display device, as shown in fig. 7.
One embodiment of the invention is as follows:
1. The microscope is taken out of the microscope box; one hand holds the microscope arm and the other hand supports the microscope base (whether one hand holds the microscope arm and the other supports the base is judged by the pressure sensors arranged on the microscope arm and the microscope base). If the user's posture is not correct, an alarm prompt is given based on the pressure sensor readings.
2. Light focusing: a large aperture is aligned with the light-transmitting hole, and the reflector is rotated to make the reflected light pass through the light-transmitting hole, the objective lens and the lens cone and then reach the ocular lens.
3. The glass slide is placed on the experiment table, a user holds the rubber head dropper with one hand, and the display device drops a drop of virtual water in the center of the glass slide through the sensing pressure sensor.
4. The coverslip was grasped with forceps and one side of the coverslip was first contacted with the water drop on the slide and then gently covered.
5. The slide to be observed is placed on the stage and clamped down with the stage clips, with the slide specimen directly facing the center of the light through hole. The user continually adjusts the slide position until the sample image is centered on the display screen (the slide position is transferred in real time to the remote computing device, which determines it using one of the methods described above and presents it on the display in real time).
6. The coarse focusing screw and the fine focusing screw are rotated in turn, and the effect of different rotation directions and angles on the image on the display screen is carefully observed (the rotation angles and directions of the coarse and fine focusing screws are transmitted to the remote computing device).
7. After using the high-power lens, the lens cone must first be raised and the objective moved away before the slide specimen is taken out, so that the lens surface is not scratched when removing the slide.
The above-described embodiment is only one embodiment of the present invention. It will be apparent to those skilled in the art that various modifications and variations can easily be made based on the application and principle of the invention disclosed herein, and the invention is not limited to the method of the above embodiment; the embodiment is therefore illustrative, not restrictive.

Claims (6)

1. A method for monitoring the interaction situation of a virtual microscope real object interaction suite, the suite comprising: a microscope body model, and a rotation sensor, a pressure sensor, an electronic chip and a remote communication module arranged on the microscope body model; a remote computing display device and a local computing display device;
the rotation sensor, the pressure sensor and the remote communication module are respectively connected with the electronic chip;
the remote communication module can be respectively communicated with remote computing display equipment and local computing display equipment;
respectively arranging rotation sensors at the coarse focusing screw and the fine focusing screw of the microscope body model;
pressure sensors are respectively arranged on the microscope arm and the microscope base of the microscope body model, as well as the tweezers, the rubber head dropper, the glass slide and the cover glass;
arranging the electronic chip and the remote communication module on a microscope base of the microscope body model;
arranging a display on an eyepiece of the microscope body model;
the rotation sensor includes: the device comprises a rotating shaft, an inner ring device and an outer ring device which are coaxially arranged, wherein the radius of the inner ring device is smaller than that of the outer ring device;
the inner ring device is fixedly connected with the rotating shaft and can rotate along with the rotating shaft;
the outer ring device is fixed;
photosensitive units are uniformly distributed on the circumference of the inner wall of the outer ring device;
a light channel is formed on the inner ring device;
a light source is fixed on the rotating shaft, and light rays of the light source are sensed by the photosensitive units on the circumference of the inner wall of the outer ring device after passing through the light ray channel on the inner ring device;
characterized in that the method comprises:
(1) acquiring the operation data of a user through the rotation sensor and the pressure sensor, and sending the data of the rotation sensor and the data of the pressure sensor to the electronic chip;
(2) processing the data of the rotation sensor on a remote computing display device;
(3) processing data of the pressure sensor on a local computing display device;
(4) monitoring the relative position of the glass slide and the light through hole;
the operation of the step (2) comprises the following steps:
(2.1) transmitting the data of the rotation sensor to a remote computing and displaying device through a remote communication module;
(2.2) after the remote computing and displaying device receives the data of the rotation sensor, recognizing the user's behavior:
calculating the rotating direction and angle size through the data of the rotating sensor at the continuous adjacent time:
Δ=θ2-θ1 (1)
wherein θ 1 and θ 2 respectively represent data of the rotation sensor at the previous time and the current time, and Δ represents a difference between the data of the rotation sensor at the current time and the previous time;
if delta is greater than 0, clockwise rotation is represented, and the semantic meaning of the user is amplification; if delta is less than 0, the rotation is anticlockwise, and the semantic meaning of the user is reduced; if Δ is 0, it means that the user does not perform scaling;
the scaling factor is:
α=kΔ (2)
wherein k is an empirical parameter;
(2.3) the remote computing display device zooms the current sample image according to α and the user's semantic, or queries a database to find the sample image corresponding to the zoom multiple:
for the coarse focusing screw, an image Yt is generated using the following steps, so that the field of view of the image changes and becomes clearer along a certain direction:
(A1) assume the original sample image is Y1 and the image after the field of view changes is Y2; the scaling factor of the image size in going from Y1 to Y2 is given by:
γ=wα (3)
wherein w is an empirical parameter controlling the speed of change of the field of view;
(A2) calculate Y2:
Y2=β[Y1] (4)
The above formula represents: construct a window of size S2=βS1 centered on Y1; the part of Y1 inside the window is Y2, and the part of Y1 outside the window is cropped away; S1 and S2 respectively denote the areas of image Y1 and image Y2;
if rotated clockwise, β=γ, making the field of view larger and larger; if rotated counterclockwise, β=1/γ, making the field of view smaller and smaller;
(A3) gradually render the image Xt as follows:
Xt=tY2+(1-t)Y1 (4)
where Xt denotes the result presented at the current instant t, t∈[0,1];
(A4) transmit Xt to the local computing display device through the remote communication module;
for the fine focusing screw, an image Xt is generated using the following steps, so that the field of view of the image is kept unchanged while the image becomes clearer along a certain direction:
(B1) assume the image obtained after the coarse focusing screw adjustment is X1 and the image after blurring or sharpening is X2; the scaling factor of the pixel values is:
v=sα (5)
wherein s is an empirical parameter controlling the zoom speed;
(B2) calculate X2:
X2=βX1 (6)
if rotated clockwise, β=v, making the image clearer and clearer; if rotated counterclockwise, β=1/v, making the image more and more blurred;
(B3) gradually render the image Xt as follows:
Xt=tX2+(1-t)X1 (7)
where Xt denotes the result presented at the current instant t, t∈[0,1];
(B4) transmit Xt to the local computing display device through the remote communication module.
2. The method of claim 1, wherein: the operation of the step (3) comprises:
(3.1) identifying stress behavior of the user on the local computing display device: calculating a pressure change value through data of the pressure sensor at the continuous adjacent time:
ω=p2-p1 (8)
wherein p1 and p2 represent the data of the pressure sensor at the previous time and the current time, respectively;
if ω >0, meaning the user's semantic is pressure increase;
if omega is less than 0, the semantic meaning of the user is pressure reduction;
if ω is 0, meaning that the user's semantic is pressure invariant;
and (3.2) judging the action of the human hand on the contacted part according to the semantic meaning of the user.
3. The method of claim 2, wherein: the operation of the step (4) comprises the following steps:
4.1 installing a transparent material under the objective table of the microscope body model;
4.2 a closed space is formed between the transparent material and the bottom of the objective table, a camera is installed in the closed space, and the camera is vertically aligned to the plane of the transparent material; a light source is arranged in the sealed space;
4.3 putting the glass slide to a correct position, then shooting an image by using a camera, and segmenting the image of the glass slide;
4.4 calculating the center position P1 and the direction of the length side D1: solving an edge image ABCD of the slide image; according to the positions of the four corner points of A, B, C, D, the central position P1 and the direction D1 of the length side are obtained;
4.5 sensing the slide position placed by the user, comprising:
4.5.1 shooting an image I by a camera;
4.5.2 segmenting the slide image from the image I;
4.5.3 calculating the center position P2 of the slide image and the direction D2 of the length side of the slide;
4.5.4 calculate the position and angle deviations:
P=||P1-P2|| (9)
θ=cos⁻¹((D1·D2)/(||D1||×||D2||)) (10)
where · denotes the dot product of two vectors, ||·|| denotes the length of a vector, and cos⁻¹ denotes the inverse cosine;
4.5.5 for each (P, θ), a deflection image h(x, y) of the original sample image f(x, y) is created:
h(x,y)=fθ(x,y)+P (11)
where fθ(x, y) denotes rotating the original sample image f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the original sample image f(x, y) by θ about its center of gravity and then applying the translation transformation P;
4.6 display image h (x, y) on the display in real time.
4. The method of claim 3, wherein: the operation of the step (4) comprises the following steps:
5.1, conducting wires are buried in an object stage of the microscope body model in a hidden mode, and two ends of each conducting wire are provided with pinholes;
5.2 embedding conducting wires on the glass slide, wherein two ends of each conducting wire are respectively connected with a conducting pin; when the glass slide is placed on the objective table according to a correct method, the conductive pin is just inserted into the pinhole on the objective table to form a loop;
5.3 arranging a micro power supply on the objective table, and connecting the micro power supply with a lead on the objective table; meanwhile, a current detector is arranged on each wire;
5.4 arranging corresponding lead wires, pinholes, a micro power supply and a current detector on the objective table according to various possible deflection positions P and deflection angles theta of the glass slide; setting a unique current detector for each deflection situation, wherein each current detector has a unique number, so that a one-to-one correspondence relationship between the (P, theta) and the current detectors is established;
5.5 for each (P, θ), a deflection image h(x, y) of the original sample image f(x, y) is created:
h(x,y)=fθ(x,y)+P (12)
where fθ(x, y) denotes rotating the original sample image f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the original sample image f(x, y) by θ about its center of gravity and then applying the translation transformation P;
5.6 when the user places the slide on the stage, if the number of the current detector which detects the presence of the signal is N, (P, theta) is retrieved from N, and then the image h (x, y) is calculated from equation (12);
5.7 display the image h (x, y) on the display in real time.
5. The method of claim 4, wherein: the operation of the step (4) comprises the following steps:
6.1 arranging a magnetic unit array on the objective table, wherein each magnetic unit has a unique identifiable position and a unique number; each magnetic unit is a closed circuit controlled by a circuit switch, the closed circuit comprises a micro power supply and a current detector, and the circuit switch is controlled by a magnetic needle; one end of the magnetic needle is an N pole, and the other end of the magnetic needle is an S pole; the magnetic needle is connected with the circuit switch to control the switch to be closed, and the circuits of all the magnetic units are closed under the condition that the glass slide is not placed;
6.2 magnetizing the upper and lower bottom surfaces of the glass slide into an N pole and an S pole respectively;
6.3 setting the magnetic poles and the magnetic sizes of the magnetic units in the magnetic unit array, so that when the glass slide is placed on the objective table, the magnetic units which are positioned on the objective table right below the glass slide can be attracted to the direction of the glass slide, thereby disconnecting the circuit of the corresponding magnetic unit and taking the magnetic unit with the disconnected circuit as an induction unit;
6.4 the current detector on each magnetic unit is connected with the electronic chip through a circuit;
6.5 the electronic chip judges the position and the number of the magnetic unit of the induction unit according to the received information of the magnetic unit;
6.6 putting the glass slide at the correct position, and calculating the gravity center P1 and the length direction D1 of the graph where all the sensing units are positioned;
6.7 when the user places the slide on the stage, calculating the center of gravity P2 and the length direction D2 of the pattern where all the sensing units are located;
6.8 calculate the position and angle deviations:
P=||P1-P2|| (9)
θ=cos⁻¹((D1·D2)/(||D1||×||D2||)) (10)
where · denotes the dot product of two vectors, ||·|| denotes the length of a vector, and cos⁻¹ denotes the inverse cosine;
6.9 for each (P, θ), a deflection image h(x, y) of the original sample image f(x, y) is created:
h(x,y)=fθ(x,y)+P (11)
where fθ(x, y) denotes rotating the original sample image f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the original sample image f(x, y) by θ about its center of gravity and then applying the translation transformation P;
6.10 display the image h(x, y) on the display in real time.
6. The method of claim 4, wherein: the operation of the step (4) comprises the following steps:
7.1 setting small holes on the object stage, and setting a photosensitive unit in each small hole;
7.2 each photosensitive unit transmits the sensing result to the electronic chip through an electronic circuit;
7.3 each photosensitive unit has a unique position mark;
7.4 putting the slide glass at the correct position, and calculating the gravity center position P1 and the length direction D1 of the area where the photosensitive unit without the sensing signal is located;
7.5 when the user places the glass slide on the object stage, the electronic chip acquires the positions of all the photosensitive units without sensing signals according to the information transmitted by all the photosensitive units;
7.6 calculating the gravity center position P2 and the length direction D2 of the area where the photosensitive unit without the sensing signal is located;
7.7 calculate the position and angle deviations:
P=||P1-P2|| (9)
θ=cos⁻¹((D1·D2)/(||D1||×||D2||)) (10)
where · denotes the dot product of two vectors, ||·|| denotes the length of a vector, and cos⁻¹ denotes the inverse cosine;
7.8 for each (P, θ), a deflection image h(x, y) of the original sample image f(x, y) is created:
h(x,y)=fθ(x,y)+P (11)
where fθ(x, y) denotes rotating the original sample image f(x, y) by θ about its center of gravity; fθ(x, y)+P denotes first rotating the original sample image f(x, y) by θ about its center of gravity and then applying the translation transformation P;
7.9 display the image h (x, y) on the display in real time.
CN201811477589.9A 2018-12-05 2018-12-05 Virtual microscope object interaction suite and application thereof Expired - Fee Related CN109300387B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811477589.9A CN109300387B (en) 2018-12-05 2018-12-05 Virtual microscope object interaction suite and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811477589.9A CN109300387B (en) 2018-12-05 2018-12-05 Virtual microscope object interaction suite and application thereof

Publications (2)

Publication Number Publication Date
CN109300387A CN109300387A (en) 2019-02-01
CN109300387B true CN109300387B (en) 2020-09-29

Family

ID=65141574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811477589.9A Expired - Fee Related CN109300387B (en) 2018-12-05 2018-12-05 Virtual microscope object interaction suite and application thereof

Country Status (1)

Country Link
CN (1) CN109300387B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103083089A (en) * 2012-12-27 2013-05-08 广东圣洋信息科技实业有限公司 Virtual scale method and system of digital stereo-micrography system
CN107833513A (en) * 2017-12-04 2018-03-23 哈尔滨工业大学深圳研究生院 A kind of ESEM demenstration method and device without using optical lens

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170961A (en) * 2005-03-11 2008-04-30 布拉科成像S.P.A.公司 Methods and devices for surgical navigation and visualization with microscope
CN1925549A (en) * 2005-08-30 2007-03-07 麦克奥迪实业集团有限公司 Virtual microscopic section method and system
EP2171641A4 (en) * 2007-06-21 2012-11-14 Univ Johns Hopkins Manipulation device for navigating virtual microscopy slides/digital images and methods related thereto
JP2013174709A (en) * 2012-02-24 2013-09-05 Olympus Corp Microscope device and virtual microscope device
CN103092346B (en) * 2013-01-14 2016-01-27 哈尔滨工业大学 A kind of Virtual force field based on scanning electron microscope is distant receives operating platform and realize the method for virtual dynamic sensing interexchanging
CN203133378U (en) * 2013-02-01 2013-08-14 山东大学 Virtual binocular microscope
CN107205779A (en) * 2014-12-29 2017-09-26 助视会有限公司 Surgical simulation device system and method
CN108652824B (en) * 2018-05-18 2020-10-20 深圳市莫廷影像技术有限公司 Ophthalmic surgery microscope system


Also Published As

Publication number Publication date
CN109300387A (en) 2019-02-01

Similar Documents

Publication Publication Date Title
Ge et al. A bimodal soft electronic skin for tactile and touchless interaction in real time
US10235928B2 (en) Wearable display device
CN103731602B (en) Multi-display equipment and its image pickup method
TWI533025B (en) Portable microscope
CN104620197B (en) Transparent display device and its object selection method
RU2719108C1 (en) Tablet/smartphone holder with wired connection in form of equipment control panel
CN107045190A (en) sample carrier module and portable microscope device
US10587787B2 (en) Displacement sensor and camera module having the same
CN109495724B (en) Virtual microscope based on visual perception and application thereof
CN107077285A (en) Operation device, the information processor with operation device and the operation acceptance method for information processor
CN203367220U (en) Precipitation auxiliary device for assisting in manually preparing MALDI sample
CN109300387B (en) Virtual microscope object interaction suite and application thereof
Zhang et al. Multidimensional tactile sensor with a thin compound eye-inspired imaging system
Ohba et al. Microscopic vision system with all-in-focus and depth images
US20170363854A1 (en) Augmented reality visual rendering device
CN111833689A (en) Grading method and device for electrical experiment
EP3767359A1 (en) Liquid iris, optical device comprising same, and mobile terminal
CN208366322U (en) Ranging group robot and system
EP3443734B1 (en) Camera module with displacement sensor
US7432942B2 (en) Electric display media
JP2007309852A (en) Prepared slide, and apparatus and method for controlling the prepared slide
CN111237664A (en) Control method and device of rotary table lamp and rotary table lamp
KR101835384B1 (en) Image enlargement apparatus for low vision person
CN109085947A (en) A kind of electric answer card, electric marking device and electric marking system
CN111899615A (en) Scoring method, device, equipment and storage medium for experiment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200929
Termination date: 20211205