CN109326166B - Virtual microscope object kit and application thereof

Info

Publication number
CN109326166B
Authority
CN
China
Prior art keywords: sequence, theta, image, rotating, spiral
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811477671.1A
Other languages
Chinese (zh)
Other versions
CN109326166A (en
Inventor
冯志全
Current Assignee
University of Jinan
Original Assignee
University of Jinan
Priority date
Filing date
Publication date
Application filed by University of Jinan
Priority to CN201811477671.1A
Publication of CN109326166A
Application granted
Publication of CN109326166B
Expired - Fee Related
Anticipated expiration

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 23/00 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Microscopes, Condensers (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a virtual microscope object kit and an application thereof, belonging to the field of experimental equipment. The virtual microscope object kit includes: a microscope body model; a spiral sensor, a pressure sensor and a slide position sensor arranged on the microscope body model; an FPGA chip; and a local display device. The spiral sensor, the pressure sensor and the slide position sensor are each connected to the FPGA chip, and the FPGA chip communicates with the local display device in a wired or wireless manner. On the one hand, the invention uses virtual-real fusion technology to enhance the information in the user's observation result, helping the user freely explore the process, mechanism and principle behind an experimental phenomenon; on the other hand, operating the physical kit gives the user hands-on experience comparable to a real microscope, helping experimenters master the relevant experimental skills.

Description

Virtual microscope object kit and application thereof
Technical Field
The invention belongs to the field of experimental equipment, and particularly relates to a virtual microscope object kit and application thereof.
Background
At present, most primary and secondary schools in China have no microscopes for experiments, so many biology and chemistry courses that require this equipment cannot be offered normally. Second, even where schools do have microscope equipment, key experimental samples such as cells and microorganisms are often lacking. Third, the traditional experimental method cannot provide information enhancement: the mechanism of the observed sample and other things invisible to the naked eye cannot be observed, nor can the various possible situations be explored.
Disclosure of Invention
The invention aims to solve the problems in the prior art, and provides a virtual microscope object kit and an application thereof, which not only address some of the long-standing bottlenecks and pain points in microscope-based experimental teaching in primary and secondary schools, but also give the microscope experiment method typical characteristics such as intelligence and interactivity.
The invention is realized by the following technical scheme:
a virtual microscope physical kit comprising: the microscope comprises a microscope body model, and a spiral sensor, a pressure sensor and a slide glass position sensor which are arranged on the microscope body model; the system comprises an FPGA chip and local display equipment;
the spiral sensor, the pressure sensor and the slide glass position sensor are respectively connected with the FPGA chip;
the FPGA chip is communicated with the local display equipment in a wired or wireless mode.
A spiral sensor is arranged at each of the coarse focusing spiral and the fine focusing spiral of the microscope body model;
pressure sensors are arranged on the microscope arm and the microscope base of the microscope body model, as well as on the tweezers, the rubber-tipped dropper, the slide and the cover slip;
and slide position sensors are arranged on the stage and the slide of the microscope body model.
The spiral sensor includes: the device comprises a rotating shaft, a rotating rod, a chain rod, a sliding block and a fixed track;
the rotating shaft can rotate around the axis of the rotating shaft;
a plane perpendicular to the rotating shaft is a rotating plane, and the intersection point of the rotating shaft and the rotating plane is an axis;
one end of the rotating rod is fixedly connected with the rotating shaft at the axis, the other end of the rotating rod is connected with one end of the chain rod through a hinge, and the other end of the chain rod is connected with the sliding block;
the rotating rod can rotate along with the rotating shaft in the rotating plane;
the chain bar can rotate around the hinge in the rotation plane;
the fixed track is positioned in the rotating plane, one end of the fixed track is fixed at the axle center, and the other end of the fixed track extends out along the radius direction;
the light source is arranged on the sliding block, and the sliding block can slide on the fixed track.
The length of the rotating rod is R and the length of the chain rod is Z, with Z ≥ R;
the minimum position of the sliding block on the fixed track is point A, whose coordinate is:
Pmin = Z - R (1)
the maximum position of the sliding block on the fixed track is point B, whose coordinate is:
Pmax = Z + R (2)
the sliding block moves in the region between point A and point B;
a plurality of light-transmitting small holes are arranged between point A and point B, and a light detector is arranged in each light-transmitting small hole;
the included angle between the rotating rod and the fixed track is θ; if θ gradually increases, the rotating direction D of the rotating rod is counterclockwise; if θ gradually decreases, the rotating direction D of the rotating rod is clockwise.
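The one-to-one correspondence between θ and the slider position that the sensor relies on can be made explicit. The following derivation is an editorial sketch under the geometry just described, placing the axis at the origin with the fixed track along the positive coordinate direction (coordinates the text itself does not introduce). The rod end lies at distance R from the axis, at angle θ to the track, so the law of cosines in the triangle formed by the axis, the rod end and the sliding block gives Z² = R² + p² - 2Rp·cosθ for the slider coordinate p, hence

p(θ) = R·cosθ + √(Z² - R²·sin²θ)

with p(0) = Z + R and p(π) = Z - R, in agreement with equations (1) and (2). Conversely, cosθ = (R² + p² - Z²)/(2Rp), so θ ∈ [0, π] is recovered from the detected position of the sliding block.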
The slide position sensor includes: the current detection device comprises conductive pins arranged according to a rectangular lattice on an object stage, concave holes arranged according to a rectangular lattice on a glass slide, a plurality of micro batteries arranged on the object stage and/or the glass slide, and a plurality of current detection devices; each conductive pin has a unique position coordinate;
any one conductive pin can be inserted into any one concave hole;
a conductive pin, a concave hole, a micro battery and a current detection device can be connected in series to form a current loop, and the conductive pin and the concave hole that form a current loop constitute a response unit;
all the current detection devices are connected with the FPGA chip.
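As an illustration of how the FPGA chip might reduce the current-detection readings to response-unit coordinates, the following is a minimal Python sketch; the boolean detection matrix, the row/column layout and the pin pitch are illustrative assumptions, not details given in the text.

```python
from typing import List, Tuple

def response_unit_positions(detections: List[List[bool]],
                            pitch: float = 1.0) -> List[Tuple[float, float]]:
    """Map the rectangular lattice of current detectors to the coordinates of
    the conductive pins whose loops are closed (i.e., pins seated in recesses).
    detections[row][col] is True when that loop's current detector fires."""
    positions = []
    for row, row_vals in enumerate(detections):
        for col, closed in enumerate(row_vals):
            if closed:
                positions.append((col * pitch, row * pitch))
    return positions
```

The resulting position list corresponds to the inputs P1, P2, …, PM used by the slide-monitoring step described later.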
The interaction method realized by applying the virtual microscope object suite comprises the following steps:
(1) acquiring control data of a user through a spiral sensor and a pressure sensor, and sending the data of the spiral sensor and the data of the pressure sensor to an FPGA chip;
(2) the FPGA chip processes data of the spiral sensor at the coarse focusing spiral position to obtain an interactive behavior at the coarse focusing spiral position;
(3) the FPGA chip processes data of the spiral sensor at the fine focusing spiral to obtain the interactive behavior at the fine focusing spiral;
(4) the FPGA chip processes the data of the pressure sensors to obtain the pressure at each position where the pressure sensors are arranged;
(5) the FPGA chip monitors the position of the slide.
The operation of the step (2) comprises the following steps:
A. for the current image, two consecutive image sequences are generated: one is an image sequence for increasing the current visual field range, namely a large visual field sequence, and the other is an image sequence for reducing the current visual field range, namely a small visual field sequence;
the steps for generating a sequence of consecutive images are as follows:
a.1, constructing a visual field radius function r:
r = h·tan(α) (6)
h = ωθ (7)
where tan represents the tangent function, θ is the rotation angle measured by the spiral sensor, ω is an empirical parameter, h is the height of the lens barrel of the microscope body model, and α is the field-of-view angle of the objective lens of the microscope body model;
a.2 Generation of Large-View sequences:
A.2.1 θ takes the values θ(0), θ(0)+1, θ(0)+2, …, θ(0)+L in turn, and r0, r1, r2, …, rL are calculated according to equations (6) and (7), where L represents the sequence length;
A.2.2 for each ri (0 ≤ i ≤ L), a circle is drawn with the center of the original sample image as its center and ri as its radius;
A.2.3 the original sample image outside the circle is cut off and only the part inside the circle is kept, giving an image I(i), which is stored; I(0), I(1), …, I(L) form the large-field-of-view sequence;
A.3 generation of the small-field-of-view sequence:
A.3.1 θ takes the values θ(0), θ(0)-1, θ(0)-2, …, θ(0)-L in turn, and r0, r1, r2, …, rL are calculated according to equations (6) and (7), where L represents the sequence length;
A.3.2 for each ri (0 ≤ i ≤ L), a circle is drawn with the center of the original sample image as its center and ri as its radius;
A.3.3 the original sample image outside the circle is cut off and only the part inside the circle is kept, giving an image I(i), which is stored; I(0), I(1), …, I(L) form the small-field-of-view sequence;
B. if the rotation direction D is clockwise, displaying a small view sequence on the screen; if the rotation direction D is a counterclockwise direction, displaying a large-field sequence on the screen;
C. if a new rotation direction D and rotation angle θ are received while the large-field-of-view or small-field-of-view sequence is being displayed, the following processing is carried out:
C.1 if the rotation direction D is unchanged, the current sequence is allowed to finish presenting, and then the next new rotation direction D and rotation angle θ are awaited;
C.2 if the rotation direction D changes, the presentation of the current sequence is terminated and the process returns to step B.
The operation of the step (3) comprises:
A. two continuous image sequences are generated for the current image: one is a continuous visual-field image sharpening sequence; the other is a continuous visual-field image blurring sequence;
the steps for generating a sequence of consecutive images are as follows:
a.1 the steps of generating a sequence of successive field-of-view image sharpening are as follows:
A.1.1 construct the relation between the spiral angle parameter θ and the image resolution:
4K = λθ (8)
where K (K > 1) represents the number of pixels added between adjacent pixels, and λ is an empirical parameter;
A.1.2 solve for K according to equation (8);
A.1.3 linear interpolation is carried out between every two adjacent pixels of the original sample image I, adding K pixel points, as follows:
assuming that the position of a pixel point in I is P and the position of an adjacent pixel point is Q, K pixel points X are added between P and Q according to the following formula:
X = (1-t)P + tQ (9)
where t takes K equally spaced values in [0, 1], t ∈ [0, 1];
A.1.4 assuming that the larger θ is, the clearer the image, the current value of θ is increased in turn to obtain a parameter sequence: θ^(1), θ^(2), …, θ^(M), where M is an empirical parameter representing the length of the sequence;
A.1.5 the parameters θ^(1), θ^(2), …, θ^(M) are substituted into equation (8) one by one, and the image corresponding to each θ is calculated according to equation (9), giving a sequence of M images: I^(1), I^(2), …, I^(M); this sequence is the continuous visual-field image sharpening sequence;
A.2 the steps of generating the continuous visual-field image blurring sequence are as follows:
A.2.1 generate a blurred image h(x, y) from the current image f(x, y) using the following equations:
h(x,y) = f(x,y) * g(x,y) (10)
g(x,y) = (1/(2πθ²))·exp(-(x²+y²)/(2θ²)) (11)
where (x, y) represents the position coordinates of a point on the image and * is the convolution operator;
A.2.2 generate a continuous parameter sequence:
the current value of θ is increased in turn to obtain a parameter sequence: θ^(1), θ^(2), …, θ^(N), where N is an empirical parameter representing the length of the sequence;
A.2.3 generate the continuous image blurring sequence:
the parameters θ^(1), θ^(2), …, θ^(N) are substituted into equations (10) and (11) one by one, and the image corresponding to each θ is calculated, giving a sequence of N images: h^(1), h^(2), …, h^(N); this sequence is the continuous visual-field image blurring sequence;
B. if the rotation direction D is clockwise, displaying a continuous visual field image sharpening sequence on the screen; if the rotation direction D is a counterclockwise direction, displaying a continuous blurred sequence of the visual field images on the screen;
C. if a new rotation direction D and rotation angle θ are received while the continuous visual-field image sharpening or blurring sequence is being displayed, the following processing is carried out:
C.1 if the rotation direction D is unchanged, the current sequence is allowed to finish presenting, and then the next new rotation direction D and rotation angle θ are awaited;
C.2 if the rotation direction D changes, the presentation of the current sequence is terminated and the process returns to step B.
The operation of the step (5) comprises the following steps:
E. the positions P1, P2, …, PM of all response units are retrieved;
F. the direction vectors Q1 and Q2 of the length and width directions of the slide are obtained by a linear fitting method;
G. the angle between Q1 and Q2 is calculated:
θ = arccos(Q1·Q2/(|Q1|·|Q2|)) (3)
H. the relation between the degree of deflection of the image and the slide placement position is established:
h(x,y) = G_θ(f(x,y)) (4)
where f(x, y) represents the original sample image, and G_θ(f(x, y)) means that the original sample image f(x, y) is rotated by θ counterclockwise or clockwise: if θ > 0, it is rotated counterclockwise by θ; if θ < 0, it is rotated clockwise by θ.
Compared with the prior art, the invention has the following beneficial effects: on the one hand, the invention uses virtual-real fusion technology to enhance the information in the user's observation result, helping the user freely explore the process, mechanism and principle behind an experimental phenomenon; on the other hand, operating the physical kit gives the user hands-on experience comparable to a real microscope, helping experimenters master the relevant experimental skills.
Drawings
FIG. 1 is a schematic diagram of a virtual microscope object interaction suite according to the present invention;
FIG. 2 is a schematic diagram of a spiral sensor according to the present invention;
FIG. 3-1 is a schematic view of the structure of the stage of the present invention;
FIG. 3-2 is a schematic view of the structure of a slide in the present invention;
FIG. 4 is a schematic diagram of the (conical) field-of-view generation model.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
as shown in fig. 1, the virtual microscope object kit provided by the present invention includes: the microscope body model 1 (including thick accurate focus spiral, thin accurate focus spiral, mirror arm, clear aperture, film clamp, mirror post, eyepiece, copper mirror, objective, converter, objective table, reflector, mirror base, tweezers, rubber head burette, slide glass, cover glass, etc.), spiral sensor 2, pressure sensor 3, FPGA chip 5, local computing device 6.
A spiral sensor 2 is arranged at each of the coarse focusing spiral and the fine focusing spiral of the microscope body model 1; pressure sensors 3 are arranged on the models of the microscope arm, microscope base, tweezers, rubber-tipped dropper, slide and cover slip; and a slide position sensor 4 (i.e., the conductive pins and recessed holes shown in fig. 3-1 and 3-2) is arranged on the stage and the slide.
The values of the pressure sensors 3, spiral sensors 2, slide position sensor 4 and any other sensors are transmitted to the FPGA chip 5 for unified recognition; the sensing and recognition results of the FPGA chip 5 are transmitted to the local display device in a wired or Bluetooth wireless manner, and the interaction process and results between the user and the microscope are displayed on the local display device 6 in real time.
The spiral sensor of the present invention is shown in fig. 2. A rotating rod of length R is mounted on the rotating shaft O of the spiral; one end of the rod is the axis O and the other end is J. One end of a chain rod of length Z is connected to J, and its other end is connected to a slider P that moves on a fixed track L passing through the axis. Segment OJ and the chain rod thus form a linkage (i.e., a hinge is mounted at point J) that can rotate about point J in the plane perpendicular to the rotating shaft of the spiral. A light source is mounted on the slider P. Assume Z ≥ R. Taking the origin of the track L at O, the minimum position of the slider P (corresponding to point A in fig. 2) is
Pmin=Z-R (1)
The maximum position of the slider P (corresponding to point B in FIG. 2) is
Pmax=Z+R (2)
P moves within the region [Pmin, Pmax], i.e., within [A, B]. Light-transmitting apertures K are arranged at fixed intervals along [A, B], and a light detector C is arranged in each aperture. Each light detector C sends its response to the FPGA chip, which calculates the current angle θ of OJ and the rotation direction D; since θ corresponds one-to-one with the position of P, the value of θ can be determined from the position of P. If θ gradually increases, the rotation is counterclockwise; if θ gradually decreases, the rotation is clockwise.
In actual use, both the coarse and fine focusing spirals are replaced by spiral sensors.
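As a minimal sketch of the decoding the FPGA chip performs, the following Python snippet recovers θ from the index of the light detector that currently sees the slider's light source, and D from successive values of θ; the aperture indexing and spacing are illustrative assumptions, and the inversion applies the law of cosines to the triangle with sides R (OJ), Z (JP) and p (OP).

```python
import math

def slider_position(aperture_index: int, spacing: float, z: float, r: float) -> float:
    """Track position p of slider P; aperture 0 is assumed to sit at Pmin = Z - R."""
    return (z - r) + aperture_index * spacing

def angle_from_position(p: float, r: float, z: float) -> float:
    """Invert the linkage geometry: Z^2 = R^2 + p^2 - 2*R*p*cos(theta).
    Assumes Z > R, so p > 0 everywhere on the track."""
    cos_theta = (r * r + p * p - z * z) / (2.0 * r * p)
    return math.acos(max(-1.0, min(1.0, cos_theta)))  # theta in [0, pi]

def rotation_direction(theta_prev: float, theta_now: float) -> str:
    """Per the text: theta increasing means counterclockwise, decreasing means clockwise."""
    return "counterclockwise" if theta_now > theta_prev else "clockwise"
```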
The interaction method of the coarse focusing spiral is as follows:
For the current image, two continuous image sequences are predicted: one increases the current field of view (the large-field-of-view sequence for short); the other reduces the current field of view (the small-field-of-view sequence for short).
The method for generating the continuous image sequence comprises the following steps:
A.1 construct the field-of-view radius function r, as shown in FIG. 4:
r = h·tan(α) (6)
h = ωθ (7)
where tan represents the tangent function; θ is the rotation angle (calculated by the spiral sensor); ω is an empirical parameter (found through repeated experiments as the value for which the user judges the on-screen image presentation satisfactory); h is the barrel height; and α is an objective model parameter that describes, in a geometric sense, the field-of-view range of the objective (a known parameter of the microscope). FIG. 4 is a schematic axial-section model of the objective, where O is the viewpoint position and the vertical direction is the optical axis direction.
A.2 construction of the large-field-of-view sequence:
A.2.1 θ takes the values θ(0), θ(0)+1, θ(0)+2, …, θ(0)+L in turn, and r0, r1, r2, …, rL are calculated according to equations (6) and (7).
A.2.2 for each ri (0 ≤ i ≤ L), a circle is drawn with the center of the original sample image as its center and ri as its radius;
A.2.3 the original sample image outside the circle is cut off and only the part inside the circle is kept, giving an image I(i), which is stored;
A.2.4 (I(0), I(1), …, I(L)) is the large-field-of-view sequence.
Here θ(0) corresponds to the current visual-field image and L represents the sequence length.
If θ in step A.2.1 instead takes the values θ(0), θ(0)-1, θ(0)-2, …, θ(0)-L in turn, with the other steps unchanged, the small-field-of-view sequence is obtained.
The large- and small-field-of-view sequences of each current image can be calculated in real time before executing the next step; alternatively they can be pre-calculated, stored in the computing device in a suitable data structure, retrieved according to (D, θ), and the next step then executed.
B. If the rotation direction D is clockwise, displaying a small view sequence on the screen; displaying a large-field-of-view sequence on the screen if the rotation direction is a counterclockwise direction;
C. during the presentation of the large/small-field-of-view sequence, if a new (D, θ) instruction is received:
C.1 if the direction D is unchanged, the current sequence is allowed to finish presenting, and the next instruction is awaited;
C.2 if the direction D changes, the current sequence presentation is terminated; go to step B.
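The following Python sketch pulls steps A-C together for the coarse focusing spiral, assuming the sample image is a numpy array; ω, α and L are the empirical parameters named above, and the function names are editorial, not from the patent.

```python
import numpy as np

def field_radius(theta: float, omega: float, alpha: float) -> float:
    """Equations (6)-(7): h = omega * theta, r = h * tan(alpha)."""
    return omega * theta * np.tan(alpha)

def circular_crop(image: np.ndarray, r: float) -> np.ndarray:
    """Keep the sample image inside a circle of radius r about its centre,
    cutting off (zeroing) everything outside the circle."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2 <= r ** 2
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out

def view_sequence(image: np.ndarray, theta0: float, L: int,
                  omega: float, alpha: float, enlarge: bool = True):
    """Large-field sequence for theta(0), theta(0)+1, ..., theta(0)+L, or the
    small-field sequence (theta decreasing) when enlarge=False."""
    step = 1 if enlarge else -1
    return [circular_crop(image, field_radius(theta0 + step * i, omega, alpha))
            for i in range(L + 1)]

# Step B: a clockwise (D, theta) reading selects view_sequence(..., enlarge=False),
# a counterclockwise reading selects the large-field sequence.
```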
The interaction method of the fine focusing spiral is as follows:
A. two continuous image sequences are predicted for the current image: one is the continuous visual-field image sharpening sequence; the other is the continuous visual-field image blurring sequence.
The method for generating the continuous image sequence comprises the following steps:
a.1 sequential visual field image sharpening sequence
A.1.1, constructing a relation between a spiral angle parameter theta and image resolution:
4K=λθ (8)
where K (K > 1) represents the number of pixels added between adjacent pixels (only the horizontal and vertical "four-neighborhood" ranges are considered for the moment), and λ is an empirical parameter (based on the user's subjective impression of the image presentation, the most satisfactory value is selected from several candidate values of λ).
A.1.2, solving K according to an expression (8);
A.1.3 linear interpolation is carried out between every two adjacent pixels of the original sample image I, adding K pixel points:
assuming that the position of a pixel point in I is P and that of an adjacent pixel point is Q, K pixel points are added between P and Q according to the following formula:
X=(1-t)P+tQ (9)
where t takes K equally spaced values in [0, 1].
A.1.4 assuming that the larger θ is, the sharper the image, the current value of θ is increased in turn (in the same manner as "θ(0), θ(0)+1, θ(0)+2, …, θ(0)+L" above) to obtain a parameter sequence (θ^(1), θ^(2), …, θ^(M)), where M is an empirical parameter (determined by the user through subjective evaluation) representing the length of the sequence.
A.1.5 the parameters (θ^(1), θ^(2), …, θ^(M)) are substituted into equation (8) one by one, and an image is calculated for each parameter θ according to equation (9), giving a sequence of M images (I^(1), I^(2), …, I^(M)). This sequence is the continuous visual-field image sharpening sequence.
A.2 continuous field-of-view image blurring sequence
A.2.1 generate a blurred image h(x, y) from the current image f(x, y):
h(x,y) = f(x,y) * g(x,y) (10)
g(x,y) = (1/(2πθ²))·exp(-(x²+y²)/(2θ²)) (11)
where (x, y) represents the position coordinates of a point on the image, * is the convolution operator, and π is a constant.
A.2.2 generates a continuous sequence of parameters:
the current value of θ is increased in turn to obtain a parameter sequence (θ^(1), θ^(2), …, θ^(N)) (generated in the same manner as above). Here N is an empirical parameter representing the length of the sequence (different values of N give different presentation effects; the value the user finds most satisfactory is selected).
A.2.3 generating a continuous sequence of image blurring
Will (theta)(1),θ(2),…,θ(N)) The parameters in (a) are substituted into expressions (10) and (11) one by one, and one image is calculated for each parameter theta, thereby obtaining a sequence (h) of N images(1),h(2),…,h(N)). This sequence is the sequence of blurring of the successive views.
The visual-field image sharpening and blurring sequences of each current image can be calculated in real time before executing the next step; alternatively they can be pre-calculated, stored in the computing device in a suitable data structure, retrieved according to (D, θ), and the next step then executed.
B. If the rotation direction D is clockwise, displaying a visual field image sharpening sequence on the screen; if the rotation direction is a counterclockwise direction, displaying a blurred sequence of the visual field image on the screen;
C. during the presentation of the sharpening/blurring sequence, if a new (D, θ) instruction is received:
C.1 if the direction D is unchanged, the current sequence is allowed to finish presenting, and the next instruction is awaited;
C.2 if the direction D changes, the current sequence presentation is terminated; go to step B.
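In the same vein, a sketch of the fine focusing spiral sequences: sharpening by the linear interpolation of equations (8)-(9) and blurring per equations (10)-(11). A 2-D grayscale array is assumed, the Gaussian form of g(x, y) follows the reconstruction of equation (11), and taking sigma = θ directly is an illustrative scaling rather than a calibration from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def sharpen_step(image: np.ndarray, theta: float, lam: float) -> np.ndarray:
    """Equations (8)-(9): K = lam * theta / 4 points are interpolated between
    neighbouring pixels; zoom with order=1 performs exactly this linear
    interpolation along each axis."""
    K = max(1, int(round(lam * theta / 4.0)))
    return zoom(image, K + 1, order=1)

def blur_step(image: np.ndarray, theta: float) -> np.ndarray:
    """Equations (10)-(11): convolve with a Gaussian whose width grows with theta."""
    return gaussian_filter(image, sigma=theta)

def focus_sequence(image: np.ndarray, theta0: float, length: int,
                   lam: float, sharpening: bool = True):
    """Sharpening sequence I^(1)..I^(M) or blurring sequence h^(1)..h^(N)."""
    return [sharpen_step(image, theta0 + i, lam) if sharpening
            else blur_step(image, theta0 + i)
            for i in range(1, length + 1)]
```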
As shown in fig. 3-1 and 3-2, the slide position sensor 4 is structured as follows: conductive pins 303 (e.g., iron pins) and light-transmitting holes 301 are arranged on the stage 302 in a rectangular lattice, with each conductive pin perpendicular to the stage, and recessed holes 305 into which the conductive pins 303 can be inserted are provided in the slide 304. Micro batteries are provided on the stage 302 and the slide 304, so that inserting any conductive pin 303 into a recessed hole 305 forms a current loop (one conductive pin and one recessed hole form one loop, treated as a unit), and a current detection device is provided on each loop. The detection results of all current loops are sent for processing to the FPGA chip embedded in the virtual microscope body. Each conductive pin on the stage has a unique position coordinate.
The processing method of the FPGA chip comprises the following steps:
I. based on the response units (i.e., the conductive pins and recessed holes that form loops), the positions P1, P2, …, PM of all response units are retrieved;
J. the shape expressed by P1, P2, …, PM (generally a rectangle) is fitted with a rectangle or another shape by a linear fitting method, giving the direction vectors Q1 and Q2 of the length and width directions of the slide.
K. Calculate the angle between Q1 and Q2:
θ = arccos(Q1·Q2/(|Q1|·|Q2|)) (3)
l. establishing the relationship between the degree of deflection of the image and the slide placement position:
h(x,y)=Gθ(f(x,y)) (4)
where f(x, y) represents the original sample image, and G_θ(f(x, y)) means that the original sample image f(x, y) is rotated by θ (counterclockwise if θ > 0, clockwise if θ < 0); θ is the angle between the length and width direction vectors of the slide, and its physical meaning is the inclination angle of the slide when it is placed askew.
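A sketch of steps I-L follows. Since the text does not name a specific linear fitting method, principal component analysis via the SVD stands in for it here, and scipy's image rotation stands in for G_θ (its sign convention depends on the image coordinate system). Note that orthogonal fitted directions make equation (3) return 90°; measuring Q1 against the stage's own axis with the same arccos formula gives the tilt directly.

```python
import numpy as np
from scipy.ndimage import rotate

def slide_directions(points):
    """Fit the direction vectors Q1, Q2 (length and width directions of the
    slide) to the response-unit positions P1..PM by PCA."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[0], vt[1]

def angle_between(q1, q2) -> float:
    """Equation (3): theta = arccos(Q1.Q2 / (|Q1| |Q2|)), in radians."""
    c = np.dot(q1, q2) / (np.linalg.norm(q1) * np.linalg.norm(q2))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def deflect_image(f: np.ndarray, theta_rad: float) -> np.ndarray:
    """Equation (4): h(x, y) = G_theta(f(x, y)); positive theta rotates
    counterclockwise in the array coordinate convention."""
    return rotate(f, np.degrees(theta_rad), reshape=False)
```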
One embodiment of the invention is as follows:
1. The microscope is taken out of the microscope box; one hand holds the microscope arm and the other hand holds the microscope base. The pressure sensors arranged on the microscope arm and the microscope base detect whether the arm is held by one hand and the base by the other.
2. The converter is rotated so that the low-power objective is aligned with the light-transmitting hole.
3. Focusing the light: a large aperture is aligned with the light-transmitting hole, and the reflector is rotated so that the reflected light passes through the light-transmitting hole, the objective and the lens barrel to reach the eyepiece, until a bright circular field of view is seen through the eyepiece.
4. The slide is placed on the laboratory bench, and a drop of water is applied to the center of the slide with the rubber-tipped dropper pinched in one hand.
5. The cover slip is gripped with the tweezers so that one edge of the cover slip first touches the water drop on the slide and is then lowered gently to cover it (from the way the tweezers grip the cover slip and which edge touches first, the relative position of the cover slip and the slide is determined, and the user is monitored for erroneous operation and for the creation of air bubbles).
6. The prepared slide to be observed is placed on the stage and clamped with the slide clips, with the specimen directly facing the center of the light-transmitting hole.
7. The coarse focusing spiral is rotated to lower the lens barrel slowly (the rotation gesture: clockwise rotation lowers the barrel and counterclockwise rotation raises it, with the rotation angle corresponding to the descent distance of the barrel) until the objective approaches the slide specimen. The field of view is observed while the coarse and fine focusing spirals are rotated counterclockwise until the observed image is clear (the image corresponds to the type of prepared slide, and data on the image being observed is transmitted remotely).
The above embodiment is only one embodiment of the present invention. It will be apparent to those skilled in the art that various modifications and variations can easily be made based on the application and principles of the invention disclosed herein, and the invention is not limited to the method described in the above embodiment; the embodiment is therefore preferred rather than restrictive.

Claims (5)

1. An interaction method implemented by applying a virtual microscope object kit, wherein the virtual microscope object kit comprises: a microscope body model; a spiral sensor, a pressure sensor and a slide position sensor arranged on the microscope body model; an FPGA chip; and a local display device;
the spiral sensor, the pressure sensor and the slide glass position sensor are respectively connected with the FPGA chip;
the FPGA chip is communicated with local display equipment in a wired or wireless mode;
respectively arranging a spiral sensor at the coarse focusing spiral and the fine focusing spiral of the microscope body model;
pressure sensors are respectively arranged on the microscope arm and the microscope base of the microscope body model, as well as the tweezers, the rubber head dropper, the glass slide and the cover glass;
arranging slide position sensors on the object stage and the slide of the microscope body model;
the spiral sensor includes: the device comprises a rotating shaft, a rotating rod, a chain rod, a sliding block and a fixed track;
the rotating shaft can rotate around the axis of the rotating shaft;
a plane perpendicular to the rotating shaft is a rotating plane, and the intersection point of the rotating shaft and the rotating plane is an axis;
one end of the rotating rod is fixedly connected with the rotating shaft at the axis, the other end of the rotating rod is connected with one end of the chain rod through a hinge, and the other end of the chain rod is connected with the sliding block;
the rotating rod can rotate along with the rotating shaft in the rotating plane;
the chain bar can rotate around the hinge in the rotation plane;
the fixed track is positioned in the rotating plane, one end of the fixed track is fixed at the axle center, and the other end of the fixed track extends out along the radius direction;
a light source is disposed on the slider, the slider being capable of sliding on a fixed track, the method comprising:
(1) acquiring control data of a user through a spiral sensor and a pressure sensor, and sending the data of the spiral sensor and the data of the pressure sensor to an FPGA chip;
(2) the FPGA chip processes data of the spiral sensor at the coarse focusing spiral position to obtain an interactive behavior at the coarse focusing spiral position;
(3) the FPGA chip processes data of the spiral sensor at the fine focusing spiral to obtain the interactive behavior at the fine focusing spiral;
(4) the FPGA chip processes the data of the pressure sensors to obtain the pressure at each position where the pressure sensors are arranged;
(5) the FPGA chip monitors the position of the glass slide;
the operation of the step (2) comprises the following steps:
A. for the current image, two consecutive image sequences are generated: one is an image sequence for increasing the current visual field range, namely a large visual field sequence, and the other is an image sequence for reducing the current visual field range, namely a small visual field sequence;
the steps for generating a sequence of consecutive images are as follows:
a.1, constructing a visual field radius function r:
r = h·tan(α) (6)
h=ωθ (7)
where tan represents the tangent function, θ is the rotation angle measured by the spiral sensor, ω is an empirical parameter, h is the height of the lens barrel of the microscope body model, and α is the field-of-view angle of the objective lens of the microscope body model;
a.2 Generation of Large-View sequences:
A.2.1 θ takes the values θ(0), θ(0)+1, θ(0)+2, …, θ(0)+L in turn, and r0, r1, r2, …, rL are calculated according to equations (6) and (7), where L represents the sequence length;
A.2.2 for each ri (0 ≤ i ≤ L), a circle is drawn with the center of the original sample image as its center and ri as its radius;
A.2.3 the original sample image outside the circle is cut off and only the part inside the circle is kept, giving an image I(i), which is stored; I(0), I(1), …, I(L) form the large-field-of-view sequence;
A.3 generation of the small-field-of-view sequence:
A.3.1 θ takes the values θ(0), θ(0)-1, θ(0)-2, …, θ(0)-L in turn, and r0, r1, r2, …, rL are calculated according to equations (6) and (7), where L represents the sequence length;
A.3.2 for each ri (0 ≤ i ≤ L), a circle is drawn with the center of the original sample image as its center and ri as its radius;
A.3.3 the original sample image outside the circle is cut off and only the part inside the circle is kept, giving an image I(i), which is stored; I(0), I(1), …, I(L) form the small-field-of-view sequence;
B. if the rotation direction D is clockwise, displaying a small view sequence on the screen; if the rotation direction D is a counterclockwise direction, displaying a large-field sequence on the screen;
C. if a new rotation direction D and rotation angle θ are received while the large-field-of-view or small-field-of-view sequence is being displayed, the following processing is carried out:
C.1 if the rotation direction D is unchanged, the current sequence is allowed to finish presenting, and then the next new rotation direction D and rotation angle θ are awaited;
C.2 if the rotation direction D changes, the presentation of the current sequence is terminated and the process returns to step B.
2. The interaction method of claim 1, wherein: the length of the rotating rod is R, the length of the chain rod is Z, and Z is larger than or equal to R;
the minimum position of the slide block on the fixed track is a point A, and the coordinate of the point A is as follows:
Pmin=Z-R (1)
the maximum position of the slide block on the fixed track is a point B, and the coordinate of the point B is
Pmax=Z+R (2)
The slide block moves in the area between the point A and the point B;
a plurality of light-transmitting small holes are arranged between the point A and the point B, and a light ray detector is arranged in each light-transmitting small hole;
the included angle between the rotating rod and the fixed track is θ; if θ gradually increases, the rotating direction D of the rotating rod is counterclockwise; if θ gradually decreases, the rotating direction D of the rotating rod is clockwise.
3. The interaction method of claim 2, wherein: the slide position sensor includes: the current detection device comprises conductive pins arranged according to a rectangular lattice on an object stage, concave holes arranged according to a rectangular lattice on a glass slide, a plurality of micro batteries arranged on the object stage and/or the glass slide, and a plurality of current detection devices; each conductive pin has a unique position coordinate;
any one conductive pin can be inserted into any one concave hole;
a conductive pin, a concave hole, a micro battery and a current detection device can be connected in series to form a current loop, and the conductive pin and the concave hole that form a current loop constitute a response unit;
all the current detection devices are connected with the FPGA chip.
4. The method of claim 3, wherein: the operation of the step (3) comprises:
A. two continuous image sequences are generated for the current image: one is a continuous visual-field image sharpening sequence; the other is a continuous visual-field image blurring sequence;
the steps for generating a sequence of consecutive images are as follows:
A.1 the steps of generating the continuous visual-field image sharpening sequence are as follows:
A.1.1 construct the relation between the spiral angle parameter θ and the image resolution:
4K=λθ (8)
where K (K > 1) represents the number of pixels added between adjacent pixels, and λ is an empirical parameter;
A.1.2 solve for K according to equation (8);
A.1.3 linear interpolation is carried out between every two adjacent pixels of the original sample image I, adding K pixel points, as follows:
assuming that the position of a pixel point in I is P and the position of an adjacent pixel point is Q, K pixel points X are added between P and Q according to the following formula:
X=(1-t)P+tQ (9)
where t takes K equally spaced values in [0, 1], t ∈ [0, 1];
A.1.4 assuming that the larger θ is, the clearer the image, the current value of θ is increased in turn to obtain a parameter sequence: θ^(1), θ^(2), …, θ^(M), where M is an empirical parameter representing the length of the sequence;
A.1.5 the parameters θ^(1), θ^(2), …, θ^(M) are substituted into equation (8) one by one, and the image corresponding to each θ is calculated according to equation (9), giving a sequence of M images: I^(1), I^(2), …, I^(M); this sequence is the continuous visual-field image sharpening sequence;
A.2 the steps of generating the continuous visual-field image blurring sequence are as follows:
A.2.1 generate a blurred image h(x, y) from the current image f(x, y) using the following equations:
h(x,y)=f(x,y)*g(x,y) (10)
g(x,y) = (1/(2πθ²))·exp(-(x²+y²)/(2θ²)) (11)
where (x, y) represents the position coordinates of a point on the image and * is the convolution operator;
a.2.2 generates a continuous sequence of parameters:
the current value of θ is increased in turn to obtain a parameter sequence: θ^(1), θ^(2), …, θ^(N), where N is an empirical parameter representing the length of the sequence;
A.2.3 generate the continuous image blurring sequence:
the parameters θ^(1), θ^(2), …, θ^(N) are substituted into equations (10) and (11) one by one, and the image corresponding to each θ is calculated, giving a sequence of N images: h^(1), h^(2), …, h^(N); this sequence is the continuous visual-field image blurring sequence;
B. if the rotation direction D is clockwise, displaying a continuous visual field image sharpening sequence on the screen; if the rotation direction D is a counterclockwise direction, displaying a continuous blurred sequence of the visual field images on the screen;
C. if a new rotation direction D and rotation angle θ are received while the continuous visual-field image sharpening or blurring sequence is being displayed, the following processing is carried out:
C.1 if the rotation direction D is unchanged, the current sequence is allowed to finish presenting, and then the next new rotation direction D and rotation angle θ are awaited;
C.2 if the rotation direction D changes, the presentation of the current sequence is terminated and the process returns to step B.
5. The method of claim 4, wherein: the operation of the step (5) comprises the following steps:
A. retrieving the positions P1, P2, … PM of all response units;
B. obtaining direction vectors Q1 and Q2 of the length direction and the width direction of the glass slide by a linear fitting method;
C. calculate the angle between Q1 and Q2:
θ = arccos(Q1·Q2/(|Q1|·|Q2|)) (3)
D. establishing a relationship between the degree of deflection of the image and the slide placement position:
h(x,y)=Gθ(f(x,y)) (4)
where f(x, y) represents the original sample image, and G_θ(f(x, y)) means that the original sample image f(x, y) is rotated by θ counterclockwise or clockwise: if θ > 0, it is rotated counterclockwise by θ; if θ < 0, it is rotated clockwise by θ.
CN201811477671.1A 2018-12-05 2018-12-05 Virtual microscope object kit and application thereof Expired - Fee Related CN109326166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811477671.1A CN109326166B (en) 2018-12-05 2018-12-05 Virtual microscope object kit and application thereof


Publications (2)

Publication Number Publication Date
CN109326166A CN109326166A (en) 2019-02-12
CN109326166B (en) 2020-11-06

Family

ID=65256759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811477671.1A Expired - Fee Related CN109326166B (en) 2018-12-05 2018-12-05 Virtual microscope object kit and application thereof

Country Status (1)

Country Link
CN (1) CN109326166B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090271715A1 (en) * 2008-01-29 2009-10-29 Tumuluri Ramakrishna J Collaborative augmented virtuality system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170961A (en) * 2005-03-11 2008-04-30 布拉科成像S.P.A.公司 Methods and devices for surgical navigation and visualization with microscope
CN1925549A (en) * 2005-08-30 2007-03-07 麦克奥迪实业集团有限公司 Virtual microscopic section method and system
CN201803700U (en) * 2010-07-29 2011-04-20 麦克奥迪实业集团有限公司 Optical mouse sensor microscope stage position detecting device
CN102368283A (en) * 2011-02-21 2012-03-07 麦克奥迪实业集团有限公司 Digital slice-based digital remote pathological diagnosis system and method
CN105892030A (en) * 2016-06-08 2016-08-24 麦克奥迪实业集团有限公司 Internet communication based digital microscope and interaction method of digital microscope
CN108154778A (en) * 2017-12-28 2018-06-12 深圳科创广泰技术有限公司 Based on motion-captured and mixed reality ophthalmologic operation training system and method

Also Published As

Publication number Publication date
CN109326166A (en) 2019-02-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201106

Termination date: 20211205