CN110196642B - Navigation type virtual microscope based on intention understanding model - Google Patents

Navigation type virtual microscope based on intention understanding model

Info

Publication number
CN110196642B
Authority
CN
China
Prior art keywords
image
sample
microscope
user
integration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910543153.3A
Other languages
Chinese (zh)
Other versions
CN110196642A (en)
Inventor
冯志全
王康
田京兰
冯仕昌
Current Assignee
University of Jinan
Original Assignee
University of Jinan
Priority date
Filing date
Publication date
Application filed by University of Jinan
Priority to CN201910543153.3A
Publication of CN110196642A
Application granted
Publication of CN110196642B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 21/00 Microscopes
    • G02B 21/24 Base structure
    • G02B 21/241 Devices for focusing
    • G02B 21/242 Devices for focusing with coarse and fine adjustment mechanism
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a navigation type virtual microscope based on an intention understanding model, comprising a multi-modal input and perception module, a multi-modal information integration module and an interactive application module. The multi-modal input and perception module acquires the user's voice information through a microphone and captures the user's operation behavior. The multi-modal information integration module processes image information through the visual channel and operation behavior through the tactile channel, and then fuses the processed information through multi-channel integration to complete the interaction between the microscope and the user. Through multi-modal information acquisition and integration, the invention uses simple sensing elements together with multi-modal signal input and intelligent perception technology, so that, while retaining the advantages of the digital microscope, students at ordinary and under-resourced middle schools can also study microscope operation, deepening their cognitive experience of the microscopic world on an intelligent microscope.

Description

Navigation type virtual microscope based on intention understanding model
Technical Field
The invention belongs to the technical field of intelligent microscope design, and particularly relates to a navigation type virtual microscope based on an intention understanding model.
Background
Using the microscope is a core topic in middle-school biology and one that students frequently get wrong in examinations; mastering the rules of microscope operation greatly benefits both theoretical study and exploration of the microscopic world. However, the microscopes commonly used in middle schools and introduced by textbooks are traditional monocular microscopes. Students often spend much of a lesson adjusting the focal length and brightness, the imaging quality is poor, and their mood frequently shifts from initial excitement to fatigue by the end of an experimental class. In a traditional teaching experiment the teacher usually demonstrates at the podium while the students observe below; even when a student's procedure is largely correct, good results are not guaranteed, and as the teacher moves among different students, teacher-student communication is inefficient, leaving students with a biased perceptual understanding of biology. How to let students correctly observe real biological structures and phenomena therefore becomes important. A further restriction is that each microscope can be used by only one person at a time: a teacher who wants to see the students' results must check the image under each student's microscope one by one, so experimental results cannot be shared among students.
Digital microscopes are now becoming popular. A digital microscope is connected to a computer and uses software to display the structures or phenomena observed in the field of view on the computer screen, making them clearer and allowing several people to watch at the same time. Its advantages are: first, the object image appears on the computer screen, which reduces the difficulty of locating it; second, teachers can monitor students' screens in real time, quickly diagnose operating problems and correct them promptly; third, if a student's experiment fails, the teacher can share screenshots from groups that succeeded and guide the student in analysing the cause of the failure. Using a digital microscope simplifies experiment assessment, improves feasibility and enlarges coverage. However, digital microscopes remain expensive, with the most common models costing around 2000 yuan, so it is difficult for ordinary or under-resourced middle schools to purchase them in quantity.
Disclosure of Invention
The invention provides a navigation type virtual microscope based on an intention understanding model, which enables students at ordinary and under-resourced middle schools to study microscope operation, deepens their cognitive experience of the microscopic world and lets them experience an intelligent microscope.
To achieve this purpose, the invention provides a navigation type virtual microscope based on an intention understanding model, comprising a multi-modal input and perception module, a multi-modal information integration module and an interactive application module;
the multi-modal input and perception module is used for acquiring the user's voice information through a microphone and capturing the user's operation behavior; the operation behavior includes obtaining, through a rotation sensor, the direction in which the user turns the coarse and fine focusing screws of the microscope, identifying the sample to be observed through an image acquisition device, and detecting image information of the movement of the sample to be observed;
the multi-modal information integration module is used for processing image information through the visual channel and operation behavior through the tactile channel, and then fusing the processed information through multi-channel integration to complete the interaction between the microscope and the user;
the interactive application module predicts the user's intention and gives operation guidance through visual display and auditory prompts.
Further, the method for identifying the sample to be observed through the image acquisition device is as follows: a two-dimensional code picture representing the sample to be observed is placed on the upper surface of the glass slide; the image acquisition device recognises the different two-dimensional codes, and the corresponding sample image is then retrieved from the database.
Further, the method for processing the visual channel information is as follows: identify the original RGB image of the sample, where D is the sample image collection; to observe changes of the sample, let the currently observed sample image be P, the displacement transformation function be PM() and the newly generated image be P'; the displacement transformation is P'(X, Y) = PM(X − ΔX, Y − ΔY);
where P ∈ D, (X, Y) ∈ P, and ΔX, ΔY are the offsets of the pixel points in image P.
Further, the method for detecting the image information of the movement of the sample to be observed is as follows: the lower surface of the glass slide carries a circular surface with a detection mark; the sample moves during operation, the positional shift of the sample is calculated from the marked circular surface using the displacement transformation formula, and the sample image is observed according to the offsets ΔX and ΔY.
Further, the method for processing the tactile channel information is as follows: the image sharpness is adjusted by turning the coarse focusing screw or the fine focusing screw, whose rotation change function is PV(). Turning the coarse focusing screw yields a discrete value t ∈ T = {1, 2, 3, 4, 5, 6}. Let Sxy denote the filter window of size m × n centred at point (x, y); PV is the arithmetic mean filter, so the newly generated image P' is obtained from the arithmetic mean filtering function
P'(x, y) = (1/(m·n)) Σ_{(u,v)∈Sxy} P(u, v).
Turning the fine focusing screw yields a discrete value s ∈ S = {1, 2, 3, 4, 5, 6}; Sxy again denotes the m × n filter window centred at (x, y), and the same arithmetic mean filtering function yields the newly generated image P'.
Here P(x, y) is the image before the current coarse or fine focusing adjustment, and the window size m × n varies with the discrete value t or s.
Further, the multi-channel information integration comprises two-channel information integration and three-channel information integration;
the two-channel information integration covers adjusting either the coarse or the fine focusing screw, moving the sample, and integrating the observation data of the sample; with the multi-channel integration function Muf(), the unadjusted image P, the displacement transformation function PM() and the rotation transformation function PV(), the multi-channel integration function can be expressed as:
Muf(P) = αPM(P) + (1 − α)PV(P);
where α is a selection parameter, α ∈ {0, 1}; when the rotation sensor turns and one adjustment is completed, P''' = Muf(P); P''', as the currently viewed image, can then be adjusted further;
the three-channel information integration takes image information, operation behavior and voice information as inputs of the integration strategy; given the current state, the system enters different system states according to different user behaviors.
Further, the operation guidance is as follows: when the user uses the navigation type virtual microscope, voice prompts describe the operation steps or methods so that the sample to be examined comes to lie within the field of view of the microscope.
Further, the image is saved as follows: while the coarse or fine focusing screw is rotated, when the sharpness of the sample in the field of view reaches a set threshold the user is prompted by voice to save the image, and after the user confirms, the virtual microscope system saves the image in the current field of view to a designated folder.
The effects described in this summary are only those of the embodiments, not all effects of the invention. One of the above technical solutions has the following advantages or beneficial effects:
the embodiment of the invention provides a navigation type virtual microscope based on an intention understanding model, which comprises a multi-mode input and perception module, a multi-mode information integration module and an interactive application module; the multi-mode input and perception module is used for acquiring voice information of a user through a microphone and acquiring operation behaviors of the user; the operation behavior comprises the steps of obtaining the direction of a user adjusting the thickness and the focus of the microscope through a rotation sensor, identifying a sample to be observed through image acquisition equipment and detecting image information of the movement of the sample to be observed; the multi-mode information integration module is used for processing the voice information through a visual channel and processing the operation behavior through a tactile channel, and then integrating the processed voice information and the operation behavior through multi-channel information to finish the interaction between the microscope and a user. The interactive application module carries out prediction on user intention and gives operation guidance through visual display and auditory guidance. According to the invention, through multi-mode information acquisition and integration, in the adjusting process of the microscope, the thickness and the focal length of the microscope are adjusted through acquisition of voice information and operation behaviors, the slide glass is moved, and different ocular lenses and objective lenses are switched to observe sample images under different multiples. For students, voice and visual presentation are natural interaction modes, the deep learning technology enables voice recognition, voice conversion into text information and the voice synthesis technology to be more accurate, and the human-computer interaction process is more flexible and intelligent. 
In the aspect of image processing, the intelligent microscope can accurately identify different sample labels by means of an image processing technology, calculate the position of a tracking point in real time, and control the movement of a sample image according to a certain rule. The invention utilizes simple sensing elements, adds signal input of various modes and intelligent sensing technology, and enables general and poor middle school students to study the microscope conditionally on the basis of ensuring the advantages of the digital microscope, thereby increasing the cognitive experience of the micro world and experiencing the intelligent microscope.
Drawings
FIG. 1 is a general framework diagram of a navigation type virtual microscope multi-modal interaction based on an intention understanding model proposed in embodiment 1 of the present invention;
fig. 2 is a schematic diagram of a three-channel information integration strategy in a navigation type virtual microscope based on an intention understanding model according to embodiment 1 of the present invention;
FIG. 3 is a schematic structural diagram of an implementation of a navigation virtual microscope based on an intention understanding model according to embodiment 1 of the present invention;
FIG. 4 is a schematic view of an eyepiece structure of a navigation type virtual microscope based on an intention understanding model according to embodiment 1 of the present invention;
wherein: 1 - light through hole; 2 - object stage; 3 - camera; 4 - lower surface of the glass slide; 5 - upper surface of glass slide 1; 6 - upper surface of glass slide 2.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, are merely for convenience of description of the present invention, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
Example 1
Embodiment 1 of the invention provides a navigation type virtual microscope based on an intention understanding model, comprising a multi-modal input and perception module, a multi-modal information integration module and an interactive application module;
the multi-modal input and perception module is used for acquiring the user's voice information through a microphone and capturing the user's operation behavior; the operation behavior includes obtaining, through a rotation sensor, the direction in which the user turns the coarse and fine focusing screws of the microscope, identifying the sample to be observed through an image acquisition device and detecting image information of the movement of the sample to be observed;
the multi-modal information integration module is used for processing image information through the visual channel and operation behavior through the tactile channel, and then fusing the processed information through multi-channel integration to complete the interaction between the microscope and the user;
the interactive application module predicts the user's intention and gives operation guidance through visual display and auditory prompts.
Fig. 1 is a general framework diagram of a navigation type virtual microscope multi-modal interaction based on an intention understanding model according to embodiment 1 of the present invention;
the multi-modal input and perception module comprises multi-modal input and interaction equipment, a microphone is used for acquiring voice information of a user, the microphone is connected with voice recognition equipment, an operation line of the user is transmitted through touch sense and vision, namely, the operation line is transmitted from a rotation sensor and image acquisition equipment, the rotation sensor is used for capturing the direction of a user for adjusting the thickness and the focusing screw, the rotation sensor is used for adjusting the definition of a sample, and the image acquisition equipment is used for identifying the sample and detecting the movement of the sample. Wherein the image capturing device uses a camera.
The method for identifying the sample to be observed through the image acquisition device is as follows: a two-dimensional code picture representing the sample to be observed is placed on the upper surface of the glass slide; the image acquisition device recognises the different two-dimensional codes, and the corresponding sample image is then retrieved from the database.
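A minimal sketch of this label-to-image lookup, assuming a plain dict stands in for the database; the payload strings and file paths are invented for illustration only:

```python
# Hypothetical sample "database": decoded two-dimensional-code payload -> stored image.
# In the patent this lookup is backed by a real database of sample images.
SAMPLE_DB = {
    "onion_epidermis": "samples/onion_epidermis.png",
    "paramecium": "samples/paramecium.png",
}

def lookup_sample(qr_payload):
    """Return the stored sample-image path for a decoded slide label, or None."""
    return SAMPLE_DB.get(qr_payload)
```

An unknown label simply yields no sample, which would trigger a voice prompt rather than a display.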
The multi-modal information integration module is used for intention understanding: it processes image information through the visual channel and operation behavior through the tactile channel, and then fuses the processed information through multi-channel integration. The method for processing the visual channel information is as follows: identify the original RGB image of the sample, where D is the sample image collection; to observe changes of the sample, let the currently observed sample image be P, the displacement transformation function be PM() and the newly generated image be P'; the displacement transformation is P'(X, Y) = PM(X − ΔX, Y − ΔY);
where P ∈ D, (X, Y) ∈ P, and ΔX, ΔY are the offsets of the pixel points in image P.
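The displacement transformation P'(X, Y) = PM(X − ΔX, Y − ΔY) can be sketched as a pixel shift on a 2-D array; the fill value for pixels shifted in from outside the frame is an assumption, since the patent does not specify it:

```python
def shift_image(p, dx, dy, fill=0):
    """Displacement transform P'(X, Y) = PM(X - dx, Y - dy) on a 2-D list.

    Pixels whose source lies outside the image get the value `fill`
    (an assumption; the patent leaves border handling unspecified).
    """
    h, w = len(p), len(p[0])
    return [[p[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w else fill
             for x in range(w)]
            for y in range(h)]
```

Shifting a 2x2 image by dx = 1 moves each pixel one column to the right, matching the formula's reading that the new pixel at (X, Y) comes from (X − ΔX, Y − ΔY).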
The method for processing the operation behavior through the tactile channel information is as follows: the image sharpness is adjusted by turning the coarse focusing screw or the fine focusing screw, whose rotation change function is PV(). Turning the coarse focusing screw yields a discrete value t ∈ T = {1, 2, 3, 4, 5, 6}. Let Sxy denote the filter window of size m × n centred at point (x, y); PV is the arithmetic mean filter, so the newly generated image P' is obtained from the arithmetic mean filtering function
P'(x, y) = (1/(m·n)) Σ_{(u,v)∈Sxy} P(u, v).
Turning the fine focusing screw yields a discrete value s ∈ S = {1, 2, 3, 4, 5, 6}; Sxy again denotes the m × n filter window centred at (x, y), and the same arithmetic mean filtering function yields the newly generated image P'.
Here P(x, y) is the image before the current coarse or fine focusing adjustment, and the window size m × n varies with the discrete value t or s.
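The arithmetic mean filter above can be sketched directly; clipping the window at the image border is an assumption made so the sketch is self-contained:

```python
def mean_filter(p, m, n):
    """Arithmetic mean filter: each output pixel is the mean of the m x n
    window S_xy centred on it. The window is clipped at the image border
    (an assumption; the patent does not state its border handling)."""
    h, w = len(p), len(p[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [p[v][u]
                    for v in range(max(0, y - n // 2), min(h, y + n // 2 + 1))
                    for u in range(max(0, x - m // 2), min(w, x + m // 2 + 1))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

A larger window blurs more, so mapping the discrete knob value t (or s) to the window size m × n reproduces the coarse/fine defocus effect described above.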
The processed information is fused through multi-channel integration, which comprises two-channel information integration and three-channel information integration. The two-channel information integration covers adjusting either the coarse or the fine focusing screw, moving the sample, and integrating the observation data of the sample; with the multi-channel integration function Muf(), the unadjusted image P, the displacement transformation function PM() and the rotation transformation function PV(), the multi-channel integration function can be expressed as:
Muf(P) = αPM(P) + (1 − α)PV(P);
where α is a selection parameter, α ∈ {0, 1}; when the rotation sensor turns and one adjustment is completed, P''' = Muf(P); P''', as the currently viewed image, can then be adjusted further.
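Because α is restricted to {0, 1}, Muf(P) = αPM(P) + (1 − α)PV(P) degenerates to selecting one channel per adjustment step, which a short sketch makes explicit (the channel functions passed in are placeholders for PM and PV):

```python
def muf(p, alpha, pm, pv):
    """Two-channel integration Muf(P) = a*PM(P) + (1-a)*PV(P) with a in {0, 1}:
    a = 1 selects the displacement channel PM (sample movement),
    a = 0 selects the rotation channel PV (focus adjustment)."""
    if alpha not in (0, 1):
        raise ValueError("alpha is a selection parameter in {0, 1}")
    return pm(p) if alpha == 1 else pv(p)
```

Repeatedly feeding the returned image back in as P models the "P''' can continue to be adjusted" loop.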
Fig. 2 is a schematic diagram of the three-channel information integration strategy in the navigation type virtual microscope based on the intention understanding model according to embodiment 1 of the present invention. Each circle represents a current execution state, a directed edge points to a state that may be entered, and the condition the transition must satisfy is attached to the edge. The inputs of the integration strategy are the image information (P0, P1), the operation behavior (obtained by the rotation sensor) and the voice information; the outputs are the displayed sample images P and P2 together with voice prompts. Here P0 is the sample label image, P1 is the marker image that controls sample movement, and P and P2 are the observed sample images. Given the current state, the system enters different system states according to different user behaviors. In Fig. 2, for example, after the sample label P0 is successfully identified, the blurred sample P is displayed; the user may choose to move the sample to observe it, or to bring the current sample into focus. When α = 1 the system enters the sample-moving state: the camera detects the current sample movement, calculates the offsets (ΔX, ΔY) in real time and displays the moved sample P2. In this state the user can keep interacting with the picture-move state, alternating between states, or transition to the focus-adjustment state according to the user's behavior. C is an intention predictor variable that is not a user input; rather, the system generates this control condition from the current sharpness of P2. When P2 meets the sharpness requirement, the user is considered to have obtained the sample image he most wants to observe, i.e., it is inferred that the user intends to pause and study the sample, so the system guides the user to save the current image for later observation.
The system can then enter a voice interaction state that asks the user whether to save the current image; if the user chooses to save, the system enters the saving state, otherwise it returns to the previous state. When, according to the confidence of the speech recognition, the voice input is judged to have failed because it was not intelligible, the user is asked to repeat the voice interaction.
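The transition behavior described above can be sketched as a small finite-state machine; the state and event names below are illustrative stand-ins, not identifiers from the patent:

```python
# Hedged sketch of the three-channel integration strategy as a state machine.
# (current_state, event) -> next_state; unknown events leave the state unchanged.
TRANSITIONS = {
    ("label_recognized", "move_sample"): "sample_moving",
    ("label_recognized", "rotate_knob"): "focus_adjusting",
    ("sample_moving", "move_sample"): "sample_moving",
    ("sample_moving", "rotate_knob"): "focus_adjusting",
    ("focus_adjusting", "move_sample"): "sample_moving",
    ("focus_adjusting", "sharp_enough"): "voice_interaction",  # predictor C fires
    ("voice_interaction", "say_save"): "saved",
    ("voice_interaction", "say_continue"): "focus_adjusting",
}

def step(state, event):
    """Return the next system state for a user behavior or system condition."""
    return TRANSITIONS.get((state, event), state)
```

The "sharp_enough" event models the intention predictor C, which the system, not the user, generates once the sharpness of P2 crosses its threshold.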
Unlike a traditional microscope, the navigation type virtual microscope based on the intention understanding model is no longer built from optical elements; instead it uses various sensors and communication modules to recognise specially designed, labelled simple samples. This spares students from spending excessive effort on sample preparation, avoids observations made difficult by badly prepared samples, and solves the problems that some samples are hard to source and cannot be reused. On top of the traditional microscope functions the invention adds an interactive application module, which predicts the user's intention and gives operation guidance through visual display and auditory prompts.
When using a traditional microscope, the sample cells being observed easily move out of the field of view as the user shifts the sample up, down, left or right; likewise, with the intelligent microscope the detection mark under the glass slide can move out of the camera's field of view. The system therefore prompts in this situation. For example: if the user keeps moving the sample to the left, the image in the field of view keeps moving to the right, and when the sample image is about to leave the field of view the user is prompted by voice: "image will exceed left area and move right".
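A minimal sketch of this out-of-view warning, assuming a pixel margin threshold and prompt strings that are illustrative rather than the patent's own values:

```python
def edge_prompt(x, dx, width, margin=10):
    """Return a voice-prompt string when the next horizontal offset would push
    the sample image out of the field of view, else None.

    `margin` (pixels from the edge at which to warn) is an assumed parameter.
    """
    nx = x + dx
    if nx < margin:
        return "image will exceed left area and move right"
    if nx > width - margin:
        return "image will exceed right area and move left"
    return None
```

The same check applied to the vertical offset would cover up/down movement.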
Images are saved through human-machine dialogue. The user turns the coarse or fine focusing screw at a given magnification to adjust the sharpness of the sample in the field of view; when the sharpness reaches the set threshold, the system judges that the user has obtained the sample he wants to observe and prompts by voice: "The sample image is now clear; you may choose to save the image." The user may then say "save", "save this image", and so on, and the system saves the image in the current field of view to the designated folder. The user may also decline, continue turning the coarse or fine focusing screws, and keep adjusting the observed sample.
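The save dialogue reduces to a threshold test plus a check of the recognised reply; the 0-1 sharpness score, the threshold value and the accepted phrases below are illustrative assumptions:

```python
def save_decision(sharpness, reply, threshold=0.8):
    """Sketch of the save dialogue: prompt only once the sharpness score
    reaches the threshold, then save only if the user's recognised reply
    asks for it. Score range, threshold and phrases are assumed values."""
    if sharpness < threshold:
        return "keep_adjusting"          # no prompt yet; user keeps focusing
    if reply in ("save", "save this image"):
        return "saved_to_folder"         # write current field of view to disk
    return "continue_observing"          # user declined; adjustment continues
```

In the real system the "saved_to_folder" branch would write the current field-of-view image into the designated folder.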
Fig. 3 is a schematic structural diagram of an implementation of a navigation virtual microscope based on an intention understanding model according to embodiment 1 of the present invention. In the intelligent microscope, the mirror and stage are replaced by a cube, and the glass slide is specially treated. A miniature camera mounted inside the cube faces the light through hole to observe the slide sample. The upper surface of the glass slide carries a two-dimensional code, different for each sample, and the lower surface carries a circular surface with a mark. The image acquisition device identifies the sample to be observed: each slide's upper surface bears a two-dimensional code picture representing the sample, the different two-dimensional codes are recognised from the image, and the sample image is retrieved from the database. The sample image is initialised, and a blurred sample P can be observed.
The sample to be examined moves during operation; its positional shift is calculated from the marked circular surface using the displacement transformation formula P'(X, Y) = PM(X − ΔX, Y − ΔY), and the sample image P2 is observed according to the offsets ΔX and ΔY.
The two-dimensional code on the upper surface of the glass slide faces the light through hole; the camera identifies the sample label so that the sample picture is seen under the microscope. The lower surface of the slide covers the light through hole and is moved left and right, and according to the detected position of the marked circular surface the camera moves the sample observed under the microscope in the opposite direction.
The coarse or fine focusing screw consists of two concentric cylinders of different sizes; one end is sealed and the other end carries a rotation shaft that can block light. Six photosensitive sensors are mounted evenly inside the inner cylinder; turning the shaft produces, after processing, the discrete values t or s in sequence, which measure how far the user has rotated the scale. The sharpness of the sample image P3 is then adjusted according to the arithmetic mean filtering formula
P'(x, y) = (1/(m·n)) Σ_{(u,v)∈Sxy} P(u, v).
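Decoding the six photosensitive sensors can be sketched as follows, under the assumption (not stated explicitly in the patent) that the shaft shades one sensor at a time and the index of the shaded sensor is the discrete scale value:

```python
def decode_rotation(sensor_bits):
    """Hypothetical decoding of the six photosensitive sensors inside the
    focusing-screw cylinder: the index (1..6) of the sensor currently shaded
    by the rotating shaft gives the discrete scale value t (or s)."""
    for i, shaded in enumerate(sensor_bits, start=1):
        if shaded:
            return i
    return None  # shaft between sensors: no reading this cycle
```

The returned value would then select the blur-window size m × n used by the mean filtering formula above.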
As shown in Fig. 4, a schematic diagram of the eyepiece structure of the navigation type virtual microscope based on the intention understanding model according to embodiment 1 of the present invention, the top of the eyepiece is an intelligent display that shows the effect of sample adjustment on the screen in real time. Three sensor buttons on the lens barrel represent magnifications of 5x, 10x and 40x; selecting a different button magnifies the current sample image accordingly.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the present invention as defined in the accompanying claims.

Claims (7)

1. A navigation type virtual microscope based on an intention understanding model is characterized by comprising a multi-modal input and perception module, a multi-modal information integration module and an interactive application module;
the multi-modal input and perception module is used for acquiring voice information of a user through a microphone and acquiring operation behaviors of the user; the operation behaviors comprise obtaining, through a rotation sensor, the direction in which the user adjusts the coarse and fine focusing screws of the microscope, identifying the sample to be observed through image acquisition equipment, and detecting image information of the movement of the sample to be observed;
the multi-modal information integration module is used for processing image information through a visual channel and processing the operation behaviors through a tactile channel, and then completing the interaction between the microscope and the user through multi-channel information integration of the processed information; the multi-channel information integration comprises two-channel information integration and three-channel information integration;
the two-channel information integration is used for data integration when adjusting either the coarse or the fine focusing screw, moving the sample, and observing the sample; the multi-channel integration function is Muf(), the unadjusted image is P, the shift transformation function is PM(), and the rotation transformation function is PV(), so the multi-channel integration function can be expressed as:
Muf(P) = αPM(P) + (1 − α)PV(P);
wherein α is a selection parameter, α ∈ {0, 1}; when the rotation sensor rotates and one adjustment is completed, P' = Muf(P); P', as the currently viewed image, can continue to be adjusted;
the three-channel information integration is used for integrating image information, operation behaviors and voice information under an integration strategy; given the current state, the system enters different system states according to different user behaviors;
the interactive application module predicts the user's intention and gives operation guidance through visual display and auditory guidance.
2. The virtual microscope of claim 1, wherein the sample to be observed is identified by the image acquisition equipment as follows: a two-dimensional code picture representing the sample to be observed is arranged on the upper surface of the slide, and the image acquisition equipment identifies the different two-dimensional codes so as to retrieve the sample image from the database.
3. The virtual microscope of claim 1, wherein the visual channel information is processed by: identifying an original RGB image of the sample, wherein D is the sample image collection; and observing the change of the sample, wherein the currently observed image of the sample is P, the motion transformation function is PM(), the newly generated image is P', and the displacement transformation process formula is P'(X, Y) = PM(X − ΔX, Y − ΔY);
wherein P ∈ D, (X, Y) ∈ P, and ΔX and ΔY are respectively the offsets of the pixel points in the image P.
4. The virtual microscope of claim 1, wherein the image information of the movement of the sample to be observed is detected as follows: the lower surface of the slide is provided with a circular surface with a detection mark; when the sample to be observed is moved during operation, the positional shift of the sample is calculated from the circular surface with the detection mark using the displacement transformation process formula, and the sample image is observed according to the offsets ΔX and ΔY.
5. The virtual microscope of claim 1, wherein the tactile channel information is processed as follows: the image definition is adjusted by turning the coarse or the fine focusing screw, the rotation transformation function of which is PV(); turning the coarse focusing screw yields a discrete value t ∈ T = {11, 12, 13, 14, 15, 16}; S_xy denotes the filter window centered at the point (x, y), the size of the filter window is m × n, and the mean filter function is PV, so that a newly generated image P' can be obtained from the arithmetic mean filtering function

P'(x, y) = (1 / (m·n)) Σ_{(i, j) ∈ S_xy} P(i, j);

turning the fine focusing screw yields a discrete value s ∈ S = {1, 2, 3, 4, 5, 6}; S_xy again denotes the filter window centered at (x, y), the size of the filter window is m × n, and the mean filter function is PV, so that a newly generated image P' is likewise obtained from the arithmetic mean filtering function

P'(x, y) = (1 / (m·n)) Σ_{(i, j) ∈ S_xy} P(i, j);

wherein P(x, y) is the image that has not yet been adjusted by the coarse or fine focusing screw.
6. The virtual microscope of claim 1, wherein the operation guidance comprises voice-prompting the operation steps or methods for positioning the sample in the field of view of the microscope when the user uses the virtual microscope.
7. The virtual microscope of claim 1, wherein, when the coarse or fine focusing screw is adjusted and the definition of the sample in the field of view reaches a set threshold, the user is prompted by voice to save the image, and after confirmation by the user, the virtual microscope system saves the image in the current field of view into a designated folder.
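The multi-channel integration function of claim 1, Muf(P) = αPM(P) + (1 − α)PV(P) with α ∈ {0, 1}, reduces to selecting which channel's transform produces the next viewed image. A minimal sketch, with the concrete PM/PV transforms passed in as callables (the function names are illustrative, not from the patent):

```python
def muf(P, alpha, pm, pv):
    """Multi-channel integration Muf(P) = a*PM(P) + (1-a)*PV(P), a in {0, 1}.

    Because alpha is restricted to 0 or 1, one adjustment applies either
    the shift transform pm (alpha = 1) or the rotation transform pv
    (alpha = 0) to the current image P.
    """
    assert alpha in (0, 1), "alpha is a binary selection parameter"
    return pm(P) if alpha == 1 else pv(P)
```

After each adjustment, the returned image becomes the new currently viewed image P' and can be fed back into `muf` for the next adjustment.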
CN201910543153.3A 2019-06-21 2019-06-21 Navigation type virtual microscope based on intention understanding model Active CN110196642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910543153.3A CN110196642B (en) 2019-06-21 2019-06-21 Navigation type virtual microscope based on intention understanding model


Publications (2)

Publication Number Publication Date
CN110196642A CN110196642A (en) 2019-09-03
CN110196642B true CN110196642B (en) 2022-05-17

Family

ID=67755017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910543153.3A Active CN110196642B (en) 2019-06-21 2019-06-21 Navigation type virtual microscope based on intention understanding model

Country Status (1)

Country Link
CN (1) CN110196642B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694476B (en) * 2020-05-15 2022-07-08 平安科技(深圳)有限公司 Translation browsing method and device, computer system and readable storage medium
CN111999880A (en) * 2020-09-15 2020-11-27 亚龙智能装备集团股份有限公司 Virtual microscope system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002332910A1 (en) * 2001-09-07 2003-03-24 Arizona Board Of Regents On Behalf Of The University Of Arizona Multimodal miniature microscope
CN102354349A (en) * 2011-10-26 2012-02-15 华中师范大学 Human-machine interaction multi-mode early intervention system for improving social interaction capacity of autistic children
CN102405463A (en) * 2009-04-30 2012-04-04 三星电子株式会社 Apparatus and method for user intention inference using multimodal information
CN104965592A (en) * 2015-07-08 2015-10-07 苏州思必驰信息科技有限公司 Voice and gesture recognition based multimodal non-touch human-machine interaction method and system
CN106997236A (en) * 2016-01-25 2017-08-01 亮风台(上海)信息科技有限公司 Based on the multi-modal method and apparatus for inputting and interacting
CN107239139A (en) * 2017-05-18 2017-10-10 刘国华 Based on the man-machine interaction method and system faced
CN109495724A (en) * 2018-12-05 2019-03-19 济南大学 A kind of virtual microscopic of view-based access control model perception and its application



Similar Documents

Publication Publication Date Title
US10578851B2 (en) Automated hardware and software for mobile microscopy
JP4970869B2 (en) Observation apparatus and observation method
Kim et al. The interaction experiences of visually impaired people with assistive technology: A case study of smartphones
US11594051B2 (en) Microscope system and projection unit
US20210191101A1 (en) Microscope system, projection unit, and image projection method
CN110196642B (en) Navigation type virtual microscope based on intention understanding model
JP2019532352A (en) System for histological examination of tissue specimens
EP1350156A1 (en) Positioning an item in three dimensions via a graphical representation
US20170285319A1 (en) Digital microscope and focusing method thereof
US11112952B2 (en) Interface for display of multi-layer images in digital microscopy
EP4043999A1 (en) Eye tracking system for smart glasses and method therefor
US7954069B2 (en) Microscopic-measurement apparatus
Gallas et al. Evaluation environment for digital and analog pathology: a platform for validation studies
US11262572B2 (en) Augmented reality visual rendering device
US10429632B2 (en) Microscopy system, microscopy method, and computer-readable recording medium
US20160005337A1 (en) Microscope-based learning
CN111656247A (en) Cell image processing system, cell image processing method, automatic film reading device and storage medium
Sankaran et al. A survey report on the emerging technologies on assistive device for visually challenged people for analyzing traffic rules
JPH04116512A (en) In-microscopic visual field display device and microscopic image processor equipped with said device
CN109326166B (en) Virtual microscope object kit and application thereof
Giannopoulos Supporting Wayfinding Through Mobile Gaze-Based Interaction
JP2003030638A (en) Image display device, image display method and control program therefor
JP2018073310A (en) Display system and display program
CN110031960A (en) Automatic adjustment method and device
Li A human-machine interaction system for visual acuity test based on gesture recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant