CN100351057C - Method and equipment for depth information extraction for a micro-operation tool based on microscopic image processing


Info

Publication number
CN100351057C
Authority
CN
China
Legal status
Expired - Fee Related
Application number
CNB2005100162967A
Other languages
Chinese (zh)
Other versions
CN1693037A
Inventor
Zhao Xin
Lu Guizhang
Sun Mingzhu
Huang Dagang
Current Assignee
TAIYANG HIGH-TECH DEVELOPMENT Co Ltd NANKAI UNIV TIANJIN
Nankai University
Original Assignee
TAIYANG HIGH-TECH DEVELOPMENT Co Ltd NANKAI UNIV TIANJIN
Nankai University
Application filed by TAIYANG HIGH-TECH DEVELOPMENT Co Ltd NANKAI UNIV TIANJIN and Nankai University
Priority to CNB2005100162967A
Publication of CN1693037A
Application granted
Publication of CN100351057C
Status: Expired - Fee Related
Anticipated expiration

Abstract

The present invention relates to a method and equipment for extracting the depth information of a micro-operation tool, in particular for operations on fine biological objects such as cells and chromosomes, based on microscopic image processing; it belongs to the technical field of micro-manipulation robots. The prior art obtains depth information by adding extra sensing devices, which is costly and hinders practical application. The technical scheme of the invention is as follows: the point spread parameter sigma of the microscope is extracted by microscopic image processing according to the imaging model of the microscope; the depth information of the operation tool is then obtained from the relation between the point spread parameter and the defocus amount established by a neural network. The extraction equipment comprises a microscope, a microneedle, a CCD camera, a micro-operation manipulator, a computer, etc. Under a 40X microscope, the depth-extraction standard deviation is within 2 micrometers, and the maximum effective range is 70 micrometers above and below the focal zone of the microscope; computing one depth value takes about 0.2 second, so the invention reaches the level of on-line application in both precision and efficiency.

Description

Method and device for extracting the depth information of a micro-manipulation tool based on microscopic image processing
Technical field
The present invention relates to the extraction of depth information for a micro-manipulation tool, and in particular to a method and device, based on microscopic image processing and intended for biomedical experiments, for extracting the depth information of fine biological objects such as cells and chromosomes. It belongs to the technical field of micro-manipulation robots.
Background art
Micro-manipulation robotics extends robotics into operation at fine scales. The basic mode of such robots is operation in Cartesian coordinates [Lu Guizhang, Zhang Jianxun, Zhao Xin. A micro-manipulation robot for bioengineering experiments. Journal of Nankai University, 1999, 32(3): 42-46]. Whether a micro-manipulation robot system is carrying out biomedical experiments, driving an operation tool, or planning a trajectory to a target point, high accuracy is required, so accurate positioning of the micro-manipulation tool is crucial during operation. The microscopic image used as visual feedback in a micro-operation system is two-dimensional, and there are many methods for obtaining the X and Y coordinates by microscopic image processing; the difficulty lies in obtaining the Z coordinate, i.e., the depth information.
There are two ways to obtain depth information: adding extra sensing devices, or processing the existing microscopic images. One kind of extra sensing device adds another microscope lens and CCD camera system perpendicular to the Z-X or Y-Z plane, so that the Z coordinate can be obtained directly by microscopic image processing, just like the X and Y coordinates. Another way is to add micro-sensors, mainly used in the assembly of micro-electro-mechanical systems (MEMS). In micro-manipulation robot systems for biomedical engineering, the objects are fine biological bodies (such as cells and chromosomes), and the operation tool is a glass needle drawn from a glass tube, with a tip diameter of 1 micron to tens of microns; this makes it very difficult to attach a sensor to the tool tip and obtain a feedback signal. In addition, adding a microscope lens perpendicular to the Z-X or Y-Z plane inevitably increases the cost of the whole system and may hinder practical application.
Obtaining depth information by image processing is a research direction that has attracted wide attention [Pentland, A. A New Sense for Depth of Field. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1987, 9(4): 523-531] [Grossmann, P. Depth from Focus. Pattern Recognition Letters, 1987, 5(1): 63-69]. A representative method, proposed by Pentland et al., uses the information in defocused images to compute the camera spread parameter σ and thereby recover the depth of the object [Pentland, A. A New Sense for Depth of Field. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1987, 9(4): 523-531]. The core of the method is computing the camera spread parameter σ from pixel gray-level gradients. It solves the above problem well, with a final precision "comparable to stereo measurement" (a maximum error of 40 cm over an 18 m range). Its only restriction is "guaranteeing enough high-frequency information to obtain the variation between images."
As research on micro-manipulation robot systems develops more widely, obtaining depth information by microscopic image processing has received more and more attention. Our earlier experiments verified that microscopic images differ from macroscopic ones: because pixel gray-level gradients are sensitive to noise, they cannot correctly reflect the defocus degree of a microscopic image [Xie Shaorong. Research on the virtual environment of micro-systems. Doctoral dissertation, Tianjin University, 2001], so computing the camera spread parameter σ from gray-level gradients as in [Pentland, 1987] is infeasible. The "enough high-frequency information" required in [Pentland, 1987] means high-frequency information in the image content, that is, content with "sharp edges"; microscopic images, especially those in biomedical applications, can hardly satisfy this condition. Consider: in a macroscopic scene the depth of field can be several meters, so a transition narrower than about 0.1 millimeter already counts as an edge; with the 40X microscope commonly used for biological cell injection, the depth of field is only 2 microns, so an edge would have to be at the nanometer scale. In biomedical microscopic images, therefore, the high-frequency information consists mainly of image noise, high-frequency content of the scene itself does not exist, and computing the camera spread parameter σ from pixel gray-level gradients is impossible.
The document [Zhang Jianxun, Xue Daqing, Lu Guizhang, Li Bin. Obtaining vertical information of micro-operation targets by microscopic image feature extraction. Robot, 2001, 23(1): 73-77] gives a method for recovering depth information based on auto-focusing. The basic procedure is: 1) use the Fourier transform to define a blur criterion for microscopic images; 2) use this criterion to find the sharpest, i.e. focused, microscopic image, and define the current Z position of the observed object in this focused state as the Z origin; 3) after the observed object moves along Z, automatically record the relative displacement and convert it into a Z coordinate. Since the high-frequency components of a microscopic image are noise and the image content lies in the low-frequency part, the treatment differs from the macroscopic case: low-pass filtering is applied after the Fourier transform to remove the high-frequency components.
This method of [Zhang Jianxun et al., Robot, 2001, 23(1): 73-77], which uses the sharpest focused microscopic image as the positioning reference, has two problems. 1) Auto-focusing derives from optical technology, whose final purpose is imaging; a large depth of field means a fairly large region can image sharply. A micro-manipulation robot system, however, needs positioning information, and since the Z coordinate is derived from the microscope focal zone, the positioning accuracy cannot be better than the depth of field. For a 40X system with a depth of field of 2 microns, the attainable accuracy is about 4-5 microns. The main way to improve accuracy would be to choose a microscope with a smaller depth of field, which is obviously undesirable. 2) When the observed object is defocused, its coordinate is converted from the automatically recorded Z displacement, so positioning accuracy depends on the Z displacement accuracy, and the Z positioning accuracy in the defocused state is further reduced.
Meanwhile, some researchers have started from the microscope imaging side, designing various specific light-source conditions to obtain and analyze the microscope point spread parameter σ, and much good work has been done [Xie Shaorong. Research on the virtual environment of micro-systems. Doctoral dissertation, Tianjin University, 2001]; its purpose is to study the fundamental properties of microscope imaging. However, since the main means is identifying the spread parameter with point-object imaging models, using various point light sources or superpositions of equivalent point sources, this approach suffices for studying the properties of an imaging system but cannot obtain the spread parameter σ of an arbitrary given microscopic image, and hence cannot obtain the corresponding depth information. The problem of "extracting the depth information of tiny objects from microscopic images" has not yet been solved well.
Summary of the invention
Summarizing the above methods: the depth-recovery approach based on auto-focusing uses only the focused microscopic image, leaving the large amount of defocused image information unused. Although the method of [Pentland, 1987], computing the camera spread parameter σ from pixel gray-level gradients, is inapplicable to microscopic images, obtaining depth by computing the spread parameter σ remains a good idea: σ is an intrinsic parameter of the optical system and does not change with conditions such as illumination, and for a given optical system the relation between σ and the defocus amount is fixed. The present invention is therefore devoted to establishing an appropriate method that obtains the microscope spread parameter σ by microscopic image processing, establishes its relation to the Z coordinate experimentally, and finally obtains the depth information of the object from microscopic image processing. Applied in the micro-operation process to extract the depth information of the micro-manipulation tool, the microneedle, the method has achieved good results.
The purpose of the invention is to provide a depth-information extraction method for micro-manipulation tools based on image processing, which converts the depth extraction problem for the observed object into the extraction of the microscope point spread parameter. First, according to the microscope imaging model, the microscope point spread parameter is extracted on line by microscopic image processing; then, the mapping between the point spread parameter and the defocus amount (the Z-direction depth) is established by a neural network; next, the depth information of the micro-manipulation tool is obtained through this mapping; finally, the method is applied to a micro-manipulation robot system, and its validity is verified by a two-needle mutual-positioning experiment in the defocused state.
Technical scheme of the present invention:
In this device for extracting the depth information of a micro-manipulation tool based on microscopic image processing, the light emitted by the inverted-microscope light source passes through a condenser into a beam-splitting prism, which divides the beam into two mutually perpendicular beams. One beam passes through the prism and illuminates the microneedle lying in the XOY plane; the microneedle is imaged through the inverted-microscope objective onto a charge-coupled device (CCD) camera mounted on the microscope, and microneedle images in different defocus states are sent into a computer through an image-capture interface. The other beam is reflected by the prism onto a vertical scale lying in the YOZ plane; a local image of the scale is formed through another micro-objective in the CCD camera of a micro-imaging device, from which the actual defocus amount of the microneedle is read. The micro-operation manipulator is controlled by the computer.
This depth-information extraction method based on microscopic image processing proceeds through the following steps: according to the microscope imaging model, the microscope point spread parameter σ is extracted by microscopic image processing; the depth information of the operation tool is then obtained from the relation between the point spread parameter σ and the defocus amount established by a neural network.
Some explanations of the above scheme follow:
1. Microscope imaging model
As shown in Fig. 1, by the principle of geometric optical imaging, if there is some distance between the actual imaging plane and the focal imaging plane, the image of an object point P on the image plane is no longer a sharp point but a blur circle, i.e., a focal spot. If the positions of the lens and the imaging plane are fixed and only the object point moves, the relation between the diameter d of the blur circle and the object distance is:
d(\epsilon) = \frac{\epsilon f D}{(u_0 + \epsilon)(u_0 - f)}    (1)
where u_0 is the object distance giving a sharp image; ε is the offset of the object distance from u_0, also called the defocus amount, taken positive in the direction away from the lens and negative otherwise; f is the focal length of the lens; and D is the diameter of the lens. For a given microscope system, f, D and u_0 are all fixed values that can be obtained by calibration.
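As a minimal illustration, formula (1) can be coded directly. A sketch in Python follows; the calibration values in the usage comment are placeholders, not figures from the patent.

```python
# Sketch of formula (1): blur-circle diameter as a function of the defocus
# amount epsilon, for calibrated constants f, D, u0 of the microscope.

def blur_circle_diameter(eps: float, f: float, D: float, u0: float) -> float:
    """d(eps) = eps*f*D / ((u0 + eps)*(u0 - f)); eps > 0 away from the lens."""
    return eps * f * D / ((u0 + eps) * (u0 - f))

# Usage with placeholder calibration values (units must simply be consistent):
# print(blur_circle_diameter(eps=0.01, f=4.0, D=5.0, u0=4.5))
```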
Because of light diffraction and the imperfection of lens imaging, the light intensity distribution within the focal spot can be approximated by a two-dimensional Gaussian function:
h(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}    (2)
where h(x, y) is the image of a point object, called the point spread function of the imaging system, and σ is the point spread parameter [Pentland, A. A New Sense for Depth of Field. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1987, 9(4): 523-531].
Convolution describes the imaging of a microscope system well [Zhao Xin, Yu Bin, et al. A microscope point spread parameter extraction method based on system identification and its application. Chinese Journal of Computers, 2004, 27(1): 140-144]. If a given observed object f(u, v) is imaged through the optical system as g(x, y), then
g(x, y) = f * h = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(u, v)\, h(x - u, y - v)\, du\, dv    (3)
where * denotes convolution and h(x, y) is the point spread function of the system; it determines the spread after the observed object defocuses, i.e., the degree of "blur" of the microscopic image g(x, y). h(x, y) is given by formula (2); the variables x, y are image-plane positions, and the point spread parameter σ uniquely determines the spread, that is, the "blur" degree, of the microscopic image after the observed object defocuses.
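To make the model concrete, the following sketch blurs a toy object with the Gaussian point spread function of formula (2) via the convolution of formula (3). It is illustrative only; the object array and sizes are assumptions, and scipy's FFT-based convolution stands in for the integral.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma: float, radius: int) -> np.ndarray:
    """Point spread function of formula (2), sampled on a (2*radius+1)^2 grid."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    h = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return h / h.sum()  # normalize so total intensity is preserved

def image_object(f: np.ndarray, sigma: float) -> np.ndarray:
    """g = f * h, formula (3): the defocused image of the object f."""
    h = gaussian_psf(sigma, radius=max(1, int(3 * sigma)))
    return fftconvolve(f, h, mode="same")

# Toy object: a bright bar on a dark background (a stand-in for the needle).
f = np.zeros((64, 64))
f[30:34, :] = 1.0
g = image_object(f, sigma=2.5)  # larger sigma -> blurrier image
```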
2. Microscope point spread parameter identification
For an optical imaging system, the point spread parameter σ is usually solved using formula (2). Since an ideal point object is very difficult to obtain, and considering the particularity of microscopic images, the planar-object imaging model of formula (3) is chosen to identify σ. Substituting formula (2) into formula (3) gives:
g(x, y) = \frac{1}{2\pi\sigma^2} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(u, v)\, e^{-\frac{(x-u)^2 + (y-v)^2}{2\sigma^2}}\, du\, dv    (4)
Given the observed object f(x, y) and a point spread parameter σ, formula (4) yields the image of the observed object, in one-to-one correspondence with σ (to distinguish it from the actually acquired image, the computed image is called the virtual image and the acquired one the image of the object). Therefore, when the image of the observed object f(x, y) is known, the corresponding point spread parameter can be determined by comparison with virtual images. On this basis, given an image of the observed object whose σ is to be determined, the microscope point spread parameter is found by the following steps:
1) for each candidate σ, generate the virtual image of the observed object with formula (4);
2) compare the virtual image with the image of the object, and compute the sum of squared differences at corresponding points:

\sum_{x,y} \left( g(x, y) - \frac{1}{2\pi\sigma^2} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(u, v)\, e^{-\frac{(x-u)^2 + (y-v)^2}{2\sigma^2}}\, du\, dv \right)^2    (5)

3) search over σ for the estimate \hat{\sigma} that best matches the virtual image to the image of the object:

J = \min \left\{ \sum_{x,y} \left( g(x, y) - \frac{1}{2\pi\hat{\sigma}^2} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(u, v)\, e^{-\frac{(x-u)^2 + (y-v)^2}{2\hat{\sigma}^2}}\, du\, dv \right)^2 \right\}    (6)

4) by the principle of least squares, the virtual image found in the previous step is the "equivalent result" of the image of the object; the corresponding \hat{\sigma} is then the point spread parameter of the imaging system at the moment the object was imaged.
3. Fast extraction of the point spread parameter
Directly solving for the spread parameter by the above method requires a large number of two-dimensional convolutions and is computationally very inefficient. For efficiency, suppose the observed object is an infinitely long slit parallel to the X axis. Since its imaging result is identical along the X direction, only the gray values of the pixels on a line parallel to the Y direction need to be computed; we call this chosen line the scan line. For the same reason, computing the gray value of each pixel on the scan line only requires convolving with the corresponding scan line of the observed object. The imaging model is thus reduced to one dimension, and formula (4) becomes:
g(x, y) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{y_{\min}}^{y_{\max}} f(x, v)\, e^{-\frac{(y-v)^2}{2\sigma^2}}\, dv    (7)
where x is a fixed value. Correspondingly, formulas (5) and (6) become:
\sum_{y} \left( g(x, y) - \frac{1}{\sqrt{2\pi}\,\sigma} \int_{y_{\min}}^{y_{\max}} f(x, v)\, e^{-\frac{(y-v)^2}{2\sigma^2}}\, dv \right)^2    (8)
J = \min \left\{ \sum_{y} \left( g(x, y) - \frac{1}{\sqrt{2\pi}\,\hat{\sigma}} \int_{y_{\min}}^{y_{\max}} f(x, v)\, e^{-\frac{(y-v)^2}{2\hat{\sigma}^2}}\, dv \right)^2 \right\}    (9)
In fact, as long as the spread of the observed object along the scan line does not extend beyond the two X-direction ends, the gray-level distribution of a scan line in image space is identical to that of any scan line of the corresponding ideal infinite slit. For a micro-manipulation tool, the microneedle tip is only about 1 micron wide while its length is much greater, so a reasonably chosen scan line position fully satisfies the above assumption. When computing the spread parameter, therefore, only the data on a vertical scan line need be considered.
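A sketch of the one-dimensional model follows: it generates the virtual scan line of formula (7) for a trial σ and scores it against the observed scan line as in formula (8). The names f_line (the object's gray profile from the sharp image), g_line (the observed defocused scan line) and region (the needle region of section 4) are assumptions for illustration.

```python
import numpy as np

def virtual_scanline(f_line: np.ndarray, sigma: float) -> np.ndarray:
    """Discrete form of formula (7): Gaussian spread of the object scan line."""
    v = np.arange(f_line.size)
    y = v[:, None]  # output pixel positions
    kernel = np.exp(-(y - v[None, :])**2 / (2.0 * sigma**2)) \
             / (np.sqrt(2.0 * np.pi) * sigma)
    return kernel @ f_line  # the sum over v plays the role of the integral

def evaluation(g_line: np.ndarray, f_line: np.ndarray, sigma: float,
               region: slice) -> float:
    """Sum of squared differences, formula (8), restricted to the needle region."""
    diff = g_line[region] - virtual_scanline(f_line, sigma)[region]
    return float(np.sum(diff**2))
```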
Since the microneedle itself is an object of finite size, its parts do not lie in a single plane when imaged under the microscope; that is, the defocus degree, and hence the spread parameter, varies across the image. Because the final purpose is to obtain a sharp needle tip, the scan line should be chosen as close to the tip as possible; taking it across the middle of the roughly rectangular tip works best, since this both satisfies the above assumption and keeps the local spread parameters close to one another, which helps reduce error. The position of the scan line is therefore tied to the tip position: the invention first obtains the tip position of the microneedle by image processing, and then selects a reasonable scan line position from it.
Choosing the scan line also raises an "alignment" problem: the vertical scan line chosen from each defocused image must correspond directly to the scan line chosen from the sharp image of the observed object, the former being the image generated by the latter. This requires the needle tip position to remain unchanged throughout the imaging process. The invention solves the tip alignment problem well by image template matching, so that the chosen scan lines support the algorithm properly.
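The patent does not give the matching details; as one plausible sketch, the tip template taken from the sharp image can be located in each frame by normalized cross-correlation (a library version exists as skimage.feature.match_template; below is a plain numpy loop).

```python
import numpy as np

def match_template_ncc(image: np.ndarray, template: np.ndarray) -> tuple:
    """Return the top-left corner of the best normalized cross-correlation match."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            score = float((t * wn).mean())
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos
```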
4. Microneedle boundary extraction
In the search for the point spread parameter, the sum of squared pointwise gray differences between the virtual image and the sample must be computed; the size of this sum determines the quality of the match between them. The virtual image is the same size as the sample, but when forming the sum of squares the whole scan line cannot be counted, because the key information of the microneedle imaging is contained in the needle image rather than in the surrounding background. Most background points must therefore be ignored during the computation, and only the microneedle image considered; the result then truly reflects the degree of match between the virtual image and the image of the object.
Dividing out the region containing the microneedle in a microscopic image is essentially a simplified boundary-extraction problem: the needle boundary need not be marked precisely; it suffices to guarantee that the divided region fully contains the needle. After many tests, the invention provides a method based on median filtering that extracts the region containing the microneedle from the scan line.
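The patent leaves the filter parameters unspecified; the sketch below is one way to realize the idea: median-filter the scan line to suppress noise, then take the run of pixels deviating from the background level, padded with a safety margin so the region surely contains the needle. The window size and threshold factor are assumptions.

```python
import numpy as np

def needle_region(scanline: np.ndarray, win: int = 5, k: float = 3.0) -> slice:
    """Return a slice of the scan line that fully contains the needle."""
    pad = win // 2
    padded = np.pad(scanline.astype(float), pad, mode="edge")
    smoothed = np.array([np.median(padded[i:i + win])
                         for i in range(scanline.size)])
    background = np.median(smoothed)
    noise = np.median(np.abs(smoothed - background)) + 1e-12
    idx = np.flatnonzero(np.abs(smoothed - background) > k * noise)
    if idx.size == 0:
        return slice(0, scanline.size)  # fall back to the whole line
    margin = win  # slack so the region is guaranteed to include the needle
    return slice(max(0, idx[0] - margin),
                 min(scanline.size, idx[-1] + 1 + margin))
```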
5. Search method for the point spread parameter
The golden-section search algorithm is adopted when searching for the point spread parameter. As a line-search algorithm, it requires the searched function to be unimodal over the given search interval. Analysis of formula (9) shows that, with the point spread parameter as the independent variable, the evaluation function is unimodal in general; the search for the point spread parameter can therefore be carried out by golden-section search.
Moreover, the search efficiency of the algorithm is high. From the analysis, searching in the interval [1, 100] (the reasonable range of the point spread parameter) with a search precision of 0.005, the algorithm gives a result in about 10 iterations; at a precision of 0.1, only 5 iterations are needed. In fact, when the spread parameter is less than 1/6, a pixel's influence on other pixels is negligible, so the search precision need not be set very fine, which further improves the search efficiency.
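A minimal sketch of the golden-section search over σ follows, assuming the evaluation function J(σ) of formulas (8)-(9) is unimodal on [1, 100]; the interval and tolerance defaults mirror the figures above.

```python
import math

PHI = (math.sqrt(5.0) - 1.0) / 2.0  # golden ratio conjugate, about 0.618

def golden_section_min(J, lo: float = 1.0, hi: float = 100.0,
                       tol: float = 0.1) -> float:
    """Minimize a unimodal function J on [lo, hi] to interval width tol."""
    a, b = lo, hi
    x1, x2 = b - PHI * (b - a), a + PHI * (b - a)
    f1, f2 = J(x1), J(x2)
    while b - a > tol:
        if f1 < f2:  # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - PHI * (b - a)
            f1 = J(x1)
        else:        # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + PHI * (b - a)
            f2 = J(x2)
    return 0.5 * (a + b)

# Usage with the evaluation sketch above:
# sigma_hat = golden_section_min(lambda s: evaluation(g_line, f_line, s, region))
```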
6. Computing the virtual image data
From formula (3), computing the virtual image data requires realizing a convolution by integration: first obtain the integration limits, convert the discrete integrand (the observed object) to continuous form by interpolation, and finally carry out the integration. Results show that with this approach, searching for one point spread parameter takes roughly 10-20 seconds.
In fact, since the acquired sampled images are themselves discrete, there is no need to use continuous convolution: it cannot improve the precision of the virtual data, yet greatly increases the amount of computation. The invention therefore replaces continuous convolution by discrete convolution and, using the properties of the Fourier transform, realizes the discrete convolution as "Fourier transform - product - inverse Fourier transform". With this improvement, and other conditions unchanged, searching for one point spread parameter takes only about 0.1 second.
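The discrete shortcut can be sketched as follows: the object scan line is convolved with a sampled Gaussian kernel via "FFT - product - inverse FFT", with zero padding to avoid circular wrap-around. It computes the same virtual scan line as the direct sum in the earlier sketch, only faster.

```python
import numpy as np

def virtual_scanline_fft(f_line: np.ndarray, sigma: float) -> np.ndarray:
    """Discrete convolution of formula (7) realized with FFTs."""
    n = f_line.size
    radius = max(1, int(4 * sigma))
    v = np.arange(-radius, radius + 1)
    kernel = np.exp(-v**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
    size = n + kernel.size - 1  # pad so the convolution is linear, not circular
    G = np.fft.rfft(f_line, size) * np.fft.rfft(kernel, size)
    full = np.fft.irfft(G, size)
    return full[radius:radius + n]  # central, 'same'-sized part
```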
7. Establishing the relation between point spread parameter and defocus amount by a neural network
The invention establishes the mapping between the point spread parameter and the defocus amount with the principal tool of current nonlinear-system research, the artificial neural network.
The BP network is the most widely used class of artificial neural network: a multilayer network with differentiable nonlinear transfer functions. The neuron is the most basic element of a neural network; a neuron model with R inputs is shown in Fig. 13, where P is the input vector, w the weight vector, b the threshold, f the transfer function, and a the neuron output. All inputs P are weighted by w and summed, the threshold b is added, and the result is passed through the transfer function f to give the neuron output a, that is:
a = f\left( \sum_{i} P_i w_i + b \right)    (10)
The transfer function may be any differentiable function; the sigmoid and the linear transfer function are commonly used. Fig. 14 shows the topology of a two-layer BP network, formed by interconnecting an input layer, one hidden layer and an output layer. Training the network means repeatedly applying the input-output pairs to it and continually adjusting its weights and thresholds so that the network error function reaches a minimum, thereby realizing the nonlinear mapping between input and output.
It has been proven in theory that a two-layer BP network with thresholds, a sigmoid hidden layer and a linear output layer can approximate any function with a finite number of discontinuities. Exactly this kind of two-layer BP network is adopted in practice, and the results show that it describes the relation between the point spread parameter and the depth information well.
Beneficial effects of the invention: the invention proposes and realizes a depth-information extraction method for micro-manipulation tools based on microscopic image processing. Under a 40X microscope, the depth-extraction standard deviation of the method is within 2 microns, and the maximum effective range is 70 microns on each side of the microscope focal zone; on an ordinary computer, computing one depth value takes about 0.2 second. In both precision and efficiency, the method reaches the level of on-line application.
Further, the method has been applied to a micro-manipulation robot system in a designed and implemented two-needle mutual-positioning experiment, which verified its validity and at the same time expanded the workspace of the robot, so that operations can be carried out in both the focused and the defocused state.
Description of drawings
Fig. 1. Microscope imaging model
Fig. 2. Module division of the depth-information extraction program
Fig. 3. Processing flow of depth-information extraction
Fig. 4. Flow chart of the point spread parameter identification program
Fig. 5. Microneedle microscopic images and scale images
Fig. 6. Point spread parameter fitting results
Fig. 7. Standard deviation of the point spread parameter fits
Fig. 8. Relation between the microneedle point spread parameter and the defocus amount
Fig. 9. Verification of the depth-information extraction results
Fig. 10. Two-needle mutual-positioning experiment of the micro-manipulation robot system
Fig. 11. Structural diagram of the micro-objective spread-function test device
Fig. 12. Partial enlarged view of the multi-group standard slit plate
Fig. 13. Neuron model
Fig. 14. Topology of the two-layer BP neural network
In the figures: 1. inverted-microscope light source; 2. condenser; 3. beam-splitting prism; 4. micro-manipulation tool (microneedle); 5. objective of the inverted microscope; 6. charge-coupled device (CCD) camera on the microscope; 7. vertical scale; 8. micro-objective for acquiring longitudinal position information; 9. CCD camera for acquiring longitudinal position information; 10. micro-operation manipulator; 11. computer.
Specific embodiments
The invention is described in detail below with reference to the drawings:
In this device for extracting the depth information of a micro-manipulation tool based on microscopic image processing, the inverted-microscope light source 1 emits light that passes through the condenser 2 into the beam-splitting prism 3, which divides the beam into two mutually perpendicular beams. One beam passes through the prism and illuminates the microneedle 4 lying in the XOY plane; the microneedle is imaged through the inverted-microscope objective 5 onto the CCD camera 6 mounted on the microscope, and microneedle images in different defocus states are sent into the computer 11 through the image-capture interface. The other beam is reflected by the prism onto the vertical scale 7 lying in the YOZ plane; a local image of the scale is formed through another micro-objective 8 in the CCD camera 9 of the micro-imaging device, from which the actual defocus amount of the microneedle is read. The micro-operation manipulator 10 is controlled by the computer 11.
This depth-information extraction method based on microscopic image processing proceeds through the following steps:
1) Acquire the sampled images and the scale images.
2) Read and process the sharp image, obtain the scan line position, and save the needed data.
3) In order, read and process the sampled images one by one, saving the needed data.
4) Find the microneedle boundary on the scan line of the sampled image.
5) Compute the point spread parameter of the sampled image.
6) Check whether all sampled images have been processed; if so, continue, otherwise go to 3).
7) Read the scale information of all sampled images to obtain the defocus amounts.
8) Establish the mapping between point spread parameter and defocus amount by a neural network, and keep the result.
9) Acquire the blurred image from which depth information is to be extracted.
10) Read and process this image, saving the needed data.
11) Find the microneedle boundary on the scan line of the blurred image.
12) Compute the point spread parameter of the blurred image.
13) Use the neural network result to find the defocus amount of the object in the blurred image.
In the extraction method described above, step 5) comprises the following sub-steps (a sketch of this alternating search is given after the list):
1) Randomly set the point spread parameter of the image.
2) Compute the virtual image data according to the imaging model.
3) Compute the evaluation value.
4) Adjust the point spread parameter according to the change of the evaluation value.
5) Check whether the parameter precision meets the requirement; if so, continue, otherwise go to 2).
6) Keep the spread parameter that gives the smaller evaluation value.
7) Randomly set the offset in the y direction.
8) Add the offset and recompute the virtual image data.
9) Compute the evaluation value.
10) Adjust the offset according to the change of the evaluation value.
11) Check whether the precision meets the requirement; if so, continue, otherwise go to 8).
12) Keep the offset that gives the smaller evaluation value.
13) Check whether the offset has changed; if so, continue, otherwise keep the result and end the search.
14) Compare with the evaluation before the offset; if the evaluation value has decreased, continue, otherwise keep the result and end the search.
15) Keep the offset and search for the point spread parameter again; go to 1).
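The following sketch expresses the alternating search above in code, reusing golden_section_min from the earlier sketch; evaluate(sigma, offset) is an assumed callable that scores a virtual scan line shifted by offset pixels against the observed one, in the spirit of formula (8). The offset bounds and starting values are illustrative.

```python
def fit_sigma_and_offset(evaluate, sigma0: float = 10.0, offset0: float = 0.0,
                         tol: float = 1e-3, max_rounds: int = 20):
    """Alternate between optimizing sigma and the y offset until stable."""
    sigma, offset = sigma0, offset0
    best = evaluate(sigma, offset)
    for _ in range(max_rounds):
        sigma = golden_section_min(lambda s: evaluate(s, offset))
        new_offset = golden_section_min(lambda d: evaluate(sigma, d),
                                        lo=-5.0, hi=5.0)
        new_best = evaluate(sigma, new_offset)
        if abs(new_offset - offset) < tol or new_best >= best:
            break  # offset stable, or evaluation no longer decreasing
        offset, best = new_offset, new_best
    return sigma, offset
```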
In the microscope imaging model, because of light diffraction and the imperfection of lens imaging, the point spread function of the imaging system can be approximated by a two-dimensional Gaussian function:
h(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}    (2)
where h(x, y), the image of a point object, is called the point spread function, and σ is the point spread parameter;
Convolution describes the imaging of a microscope system well. If a given observed object f(u, v) is imaged through the optical system as g(x, y), then
g(x, y) = f * h = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(u, v)\, h(x - u, y - v)\, du\, dv    (3)
where * denotes convolution and h(x, y) is the point spread function of the system; it determines the spread after the observed object defocuses, i.e., the degree of "blur" of the microscopic image g(x, y). h(x, y) is given by formula (2); the variables x, y are image-plane positions, and the point spread parameter σ uniquely determines the spread, that is, the "blur" degree, of the microscopic image after the observed object defocuses.
The point spread parameter σ is usually solved using formula (2); since an ideal point object is very difficult to obtain, and considering the particularity of microscopic images, the planar-object imaging model of formula (3) is chosen to identify σ. Substituting formula (2) into formula (3) gives:
g(x, y) = \frac{1}{2\pi\sigma^2} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(u, v)\, e^{-\frac{(x-u)^2 + (y-v)^2}{2\sigma^2}}\, du\, dv    (4)
Given the observed object f(x, y) and a point spread parameter σ, formula (4) yields the image of the observed object, in one-to-one correspondence with σ; to distinguish it from the actually acquired image, the computed image is called the virtual image and the acquired one the image of the object. Therefore, when the image of the observed object f(x, y) is known, the corresponding point spread parameter can be determined by comparison with virtual images. On this basis, given an image of the observed object whose σ is to be determined, the microscope point spread parameter can be found by the following steps:
1) for each candidate σ, generate the virtual image of the observed object with formula (4);
2) compare the virtual image with the image of the object, and compute the sum of squared differences at corresponding points:

\sum_{x,y} \left( g(x, y) - \frac{1}{2\pi\sigma^2} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(u, v)\, e^{-\frac{(x-u)^2 + (y-v)^2}{2\sigma^2}}\, du\, dv \right)^2    (5)

3) search over σ for the estimate \hat{\sigma} that best matches the virtual image to the image of the object:

J = \min \left\{ \sum_{x,y} \left( g(x, y) - \frac{1}{2\pi\hat{\sigma}^2} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(u, v)\, e^{-\frac{(x-u)^2 + (y-v)^2}{2\hat{\sigma}^2}}\, du\, dv \right)^2 \right\}    (6)

4) by the principle of least squares, the virtual image found in the previous step is the "equivalent result" of the image of the object; the corresponding \hat{\sigma} is then the point spread parameter of the imaging system at the moment the object was imaged.
If the micro-manipulation tool is a slender microneedle, the point spread parameter σ can be determined quickly by the following steps:
1) for each candidate σ, generate the virtual image of the observed object with formula (7):

g(x, y) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{y_{\min}}^{y_{\max}} f(x, v)\, e^{-\frac{(y-v)^2}{2\sigma^2}}\, dv    (7)

2) compare the virtual image with the image of the object, and compute the sum of squared differences at corresponding points:

\sum_{y} \left( g(x, y) - \frac{1}{\sqrt{2\pi}\,\sigma} \int_{y_{\min}}^{y_{\max}} f(x, v)\, e^{-\frac{(y-v)^2}{2\sigma^2}}\, dv \right)^2    (8)

3) search over σ for the estimate \hat{\sigma} that best matches the virtual image to the image of the object:

J = \min \left\{ \sum_{y} \left( g(x, y) - \frac{1}{\sqrt{2\pi}\,\hat{\sigma}} \int_{y_{\min}}^{y_{\max}} f(x, v)\, e^{-\frac{(y-v)^2}{2\hat{\sigma}^2}}\, dv \right)^2 \right\}    (9)

4) by the principle of least squares, the virtual image found in the previous step is the "equivalent result" of the image of the object; the corresponding \hat{\sigma} is then the point spread parameter σ of the imaging system at the moment the object was imaged.
The scan line is to be chosen at the microneedle tip, best across the middle of the roughly rectangular tip, and the vertical scan line chosen in each defocused image must correspond directly to the scan line chosen from the sharp image of the observed object; this requires that the tip position remain unchanged throughout the imaging process.
During the search for the point spread parameter σ, when computing the sum of squared pointwise gray differences between the virtual image and the sample, most points of the background region must be ignored and only the microneedle image considered; the key information of the microneedle imaging is contained in the needle image.
The region containing the microneedle image need not be bounded precisely; it suffices to guarantee that the divided region fully contains the needle. After many tests, the invention provides a method based on median filtering that extracts the region containing the microneedle from the scan line.
The search for the point spread parameter σ can adopt the golden-section search algorithm. As a line-search algorithm, it requires the searched function to be unimodal over the given search interval; analysis of formula (9) shows that, with the point spread parameter as the independent variable, the evaluation function is unimodal in general. The search for the point spread parameter can therefore be carried out by golden-section search, whose efficiency is high.
When computing the virtual image data, discrete convolution can replace continuous convolution, realized in computation as "Fourier transform - product - inverse Fourier transform".
The mapping between the point spread parameter σ and the defocus amount can be established by the principal tool of current nonlinear-system research, the artificial BP neural network; the invention adopts a two-layer BP network with thresholds, a sigmoid hidden layer and a linear output layer.
Embodiment
1. Depth-information extraction for the micro-manipulation tool
The module division and processing flow of the depth-information extraction program are shown in Figs. 2 and 3. The concrete steps are described in detail below.
1) Acquisition of the sampled microscopic images
The collected microscopic images comprise two parts: microneedle images and side-scale images. The objective magnification of the micro-operation system is 40X; under the control of the micro-manipulation robot, the manipulator moves in steps of 2 microns. The collected images cover both near defocus and far defocus; the start and end points of collection are chosen at the moments when the microneedle just becomes indistinguishable in the image. The near- and far-defocus extents of the microneedle are about 70 microns; beyond this range the needle merges with the background and is hard to distinguish. The microneedle images and corresponding scale images in Fig. 5 are representative frames from the image data set, which contains 100 microscopic images. The numbers below the figures are the defocus amounts read from the scale; only the near-defocus case is shown here, where the defocus amount is negative; the far-defocus case is similar, with positive defocus amounts.
2) Point spread parameter identification
With the fast extraction method of the point spread parameter introduced above, together with the special treatment applied to the microneedle images, the point spread parameter of a microneedle microscopic image can be obtained accurately and quickly. Fig. 4 gives the flow chart of the point spread parameter identification program; Fig. 6 shows the fitting results for part of the images, in one-to-one correspondence with Fig. 5. The line traced by '+' represents the gray values on the sampled-image scan line (value range [0, 255]); the line traced by '.' is the virtual image computed with the optimal spread parameter σ.
The same method yields the point spread parameters of all the images. Fig. 7 gives the fitting standard deviation between each image's sampled scan line and its virtual image (gray value range [0, 1]); as can be seen from the figure, all standard deviations are below 4%, so the fitting results meet the predetermined requirement.
3) Establishing the mapping between point spread parameter and defocus amount
From the point spread parameters obtained by the identification method, together with the defocus amounts read from the scale images, the mapping between the two can be fitted. The relation between point spread parameter and defocus amount is divided into two segments: one describes the trend of the microneedle under near defocus, the other its trend under far defocus.
The invention obtains the relation between point spread parameter and defocus amount with a two-layer BP neural network, configured as follows: the input layer has a single node, the point spread parameter of the image; the number of hidden nodes is configurable, 30 being used in practice, with sigmoid transfer functions; the output layer also has a single node with a linear transfer function, whose output is the defocus amount of the microneedle. The table below lists the weights and thresholds of the 30 hidden nodes for the near-defocus and far-defocus cases.
No.  Near-defocus weight  Near-defocus threshold  Far-defocus weight  Far-defocus threshold
1 -0.980000 98.000000 -0.980000 98.000000
2 0.980000 -95.117647 -0.980000 95.117647
3 0.980000 -92.235294 -0.980000 92.235294
4 0.980000 -89.352941 -0.980000 89.352941
5 -0.980000 86.470588 -0.980000 86.470588
6 0.980000 -83.588235 -0.980000 83.588235
7 0.980000 -80.705882 0.980000 -80.705882
8 0.980000 -77.823529 0.980000 -77.823529
9 -0.980000 74.941176 0.980000 -74.941176
10 -0.980000 72.058824 0.980000 -72.058824
11 -0.980000 69.176471 -0.980000 69.176471
12 -0.980000 66.294118 0.980000 -66.294118
13 0.980000 -63.411765 0.980000 -63.411765
14 0.980000 -60.529412 0.980000 -60.529412
15 -0.980000 57.647059 -0.980000 57.647059
16 0.980000 -54.764706 0.980030 -54.764706
17 -0.980000 51.882353 0.986107 -51.882212
18 0.979947 -49.000002 5.323811 -233.107915
19 2.498959 -96.439338 -4.028866 -46.229822
20 -0.846112 43.238769 -0.623575 43.108117
21 -5.874143 -40.529506 -4.140310 70.527853
22 -1.243301 -37.783145 -2.900080 119.154198
23 0.109782 -34.745844 1.107216 -36.352941
24 -1.886418 15.147839 -0.932524 20.563595
25 -3.973573 147.667518 -16.114888 293.220243
26 -0.070673 2.070272 3.458688 -142.172420
27 -0.354659 6.042087 4.451287 -87.171526
28 -0.459049 11.282148 -0.261895 10.823594
29 -0.814922 4.490244 -0.364621 4.838316
30 -5.519722 -14.393637 26.630394 -111.474262
Table 1. Neural network training results
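For illustration, a minimal numpy sketch of such a 1-30-1 network follows: one input (the point spread parameter), 30 sigmoid hidden nodes with thresholds, one linear output (the defocus amount). Plain batch gradient descent stands in for the actual training procedure, which the patent does not detail; the table's output-layer parameters are not reproduced.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BPNet:
    def __init__(self, hidden: int = 30, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 1.0, hidden)  # input -> hidden weights
        self.b1 = rng.normal(0.0, 1.0, hidden)  # hidden thresholds
        self.w2 = rng.normal(0.0, 1.0, hidden)  # hidden -> output weights
        self.b2 = 0.0                           # output threshold

    def forward(self, sigma: np.ndarray):
        h = sigmoid(np.outer(sigma, self.w1) + self.b1)  # (n, hidden)
        return h @ self.w2 + self.b2, h                  # linear output

    def train(self, sigma: np.ndarray, eps: np.ndarray,
              lr: float = 0.01, epochs: int = 5000):
        n = len(eps)
        for _ in range(epochs):
            y, h = self.forward(sigma)
            err = y - eps                              # output error
            self.w2 -= lr * h.T @ err / n
            self.b2 -= lr * err.mean()
            dh = np.outer(err, self.w2) * h * (1 - h)  # back-propagated error
            self.w1 -= lr * dh.T @ sigma / n
            self.b1 -= lr * dh.mean(axis=0)

# net = BPNet(); net.train(sigma_samples, defocus_samples)  # one net per branch
```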
Fig. 8 shows the training result of the neural network; the abscissa is the microscope point spread parameter in pixels, and the ordinate is the corresponding defocus amount in microns. As can be seen from the figure, when the microneedle is in the near-defocus state (ε < 0) and the defocus amount is not very large, the mapping between point spread parameter and defocus amount exhibits an abrupt jump; analysis indicates this is caused by the imaging of the microneedle itself. Study of the other sampled image groups shows that this jump exists consistently in every group under such conditions.
4) Depth information extraction and verification
Once the functional relation between point spread parameter and defocus amount has been established, for any defocused image one computes its spread parameter and substitutes it into the relation to obtain the defocus amount, i.e., the depth information, of the microneedle in that image.
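Tying the sketches together, extraction for one defocused image might read as follows; all names refer to the earlier sketches and remain assumptions.

```python
import numpy as np

def extract_depth(g_line, f_line, region, net) -> float:
    """Scan line of a blurred image -> sigma -> defocus amount via the network."""
    sigma_hat = golden_section_min(lambda s: evaluation(g_line, f_line, s, region))
    depth, _ = net.forward(np.array([sigma_hat]))
    return float(depth[0])  # defocus amount, sign per near/far-defocus branch
```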
To verify the correctness of the depth extraction, another group of sampled images was used as test samples; their point spread parameters were substituted into the above relation to compute the defocus amounts ε′, which were compared with the defocus amounts ε observed on the scale. The results are shown in Fig. 9: the left plot contrasts the original sampling results (traced by '.') with the depth-extraction results (traced by '+'); the right plot shows the difference between ε and ε′.
In this verification, 67 test samples with defocus amounts within ±70 μm were used in total. The results show a mean defocus-amount error of 1.012 μm, with more than 80% of the depth deviations within the error range (±2 μm). Further, the mean square deviation of the depth extraction is 3.2279 and the standard deviation is 1.797.
Processing the other groups of sampled images in the same way gives the results in the table below, from which it can be seen that the above depth-information extraction method is effective.
Sample group          1       2       3       4
Within ±2 μm          82.5%   79.4%   80.6%   90.6%
Standard deviation    1.797   2.002   1.994   1.5689
Table 2. Depth extraction verification results
2. Two-needle mutual-positioning experiment of the micro-manipulation robot system
To further and intuitively verify the depth-extraction results, a two-needle mutual-positioning experiment was also designed for the micro-manipulation robot system. Two microneedles are used, placed on either side of the screen, parallel to the x axis, as shown in Fig. 10(a). The left needle tip is thicker, with an inner diameter of about 20 μm; the right needle tip is finer, about 2 μm wide. The object of the experiment is to insert the right needle into the left needle, by the following steps:
1) Adjust the positions of both needle tips so that they remain sharp on the screen; save the sharp image of the right needle as the observed object for imaging.
2) Move the right needle randomly along the z direction so that it defocuses and blurs (the left needle remains sharp).
3) Perform depth-information extraction on the right needle to obtain its defocus amount. The information shown in the dialog box is: computed point spread parameter 23.65; depth-extraction result 46.8 μm.
4) Apply this defocus amount to the left needle, moving it by this distance along the z direction; both needles are now blurred and, in theory, lie in the same horizontal plane.
5) Complete the two-needle mutual-positioning operation in the defocused state.
6) Adjust the microscope and observe the result.
The experiment carried out according to these steps is shown in Fig. 10; it can be completed smoothly within the defocus range of ±70 μm.

Claims (11)

1. A device for extracting the depth information of a micro-manipulation tool based on microscopic image processing, characterized in that the extraction device comprises: an inverted-microscope light source (1); arranged in sequence along its illumination direction, a condenser (2) and a beam-splitting prism (3) for dividing the light into two mutually perpendicular beams, one transmitted and one reflected; arranged in sequence on the transmitted-light side of the beam-splitting prism (3), a microneedle (4) lying in the XOY plane, an inverted-microscope objective (5) for imaging the microneedle (4), a charge-coupled device (CCD) camera (6) for recording the image, and a computer (11) receiving, through an image-capture interface, the images recorded by the CCD camera (6); arranged in sequence on the reflected-light side of the beam-splitting prism (3), a vertical scale (7) lying in the YOZ plane, a micro-objective (8) for local imaging of the scale (7), and a CCD camera (9) for recording the image and sending it into the computer; and a micro-operation manipulator (10) controlled by the computer (11).
2. A method for extracting the depth information of a micro-manipulation tool based on microscopic image processing, characterized by the following steps:
1) acquire the sampled images and the scale images;
2) read and process the sharp image, obtain the scan line position, and save the needed data;
3) in order, read and process the sampled images one by one, saving the needed data;
4) find the microneedle boundary on the scan line of the sampled image;
5) compute the point spread parameter of the sampled image;
6) check whether all sampled images have been processed; if so, continue, otherwise go to 3);
7) read the scale information of all sampled images to obtain the defocus amounts;
8) establish the mapping between point spread parameter and defocus amount by a neural network, and keep the result;
9) acquire the blurred image from which depth information is to be extracted;
10) read and process this blurred image, saving the needed data;
11) find the microneedle boundary on the scan line of the blurred image;
12) compute the point spread parameter of the blurred image;
13) use the neural network result to find the defocus amount of the object in the blurred image.
3. The extraction method according to claim 2, characterized in that step 5) comprises the following steps:
1) randomly set the point spread parameter of the image;
2) compute the virtual image data according to the imaging model;
3) compute the evaluation value;
4) adjust the point spread parameter according to the change of the evaluation value;
5) check whether the parameter precision meets the requirement; if so, continue, otherwise go to 2);
6) keep the spread parameter that gives the smaller evaluation value;
7) randomly set the offset in the y direction;
8) add the offset and recompute the virtual image data;
9) compute the evaluation value;
10) adjust the offset according to the change of the evaluation value;
11) check whether the precision meets the requirement; if so, continue, otherwise go to 8);
12) keep the offset that gives the smaller evaluation value;
13) check whether the offset has changed; if so, continue, otherwise keep the result and end the search;
14) compare with the evaluation before the offset; if the evaluation value has decreased, continue, otherwise keep the result and end the search;
15) keep the offset and search for the point spread parameter again; go to 1).
4. extracting method according to claim 3 is characterized in that: the microscope imaging model description is as follows: because imperfectization of diffraction of light effect and lens imaging, the point spread function of imaging system can be similar to two-dimensional Gaussian function:
h ( x , y ) = 1 2 πσ 2 e - x 2 + y 2 2 σ 2 - - - ( 2 )
H in the formula (x, y) meaning is called point spread function for the picture of an object, and σ is a point spread parameter;
Convolution can be described the imaging of microscopic system well, and (u v), is that (x y), then has g through an optical system imaging as if given observed object f
g ( x , y ) = f * h = ∫ - ∞ + ∞ ∫ - ∞ + ∞ f ( u , v ) h ( x - u , y - v ) dudv - - - ( 3 )
* is a convolution operation in the formula, and (x y) is the point spread function of system to h, and it determines the diffusion after the observed object out of focus, just becomes micro-image g (x, " bluring " degree y); (x y) is provided by formula (2) h, and variable x, y are the image space positional informations, diffusion, i.e. " bluring " degree of micro-image after the unique definite observed object out of focus of point spread parameter σ; Point spread parameter σ utilizes formula (2) to find the solution usually, because it is very difficult to obtain a desirable some object, considers the particularity of micro-image, selects to utilize the face object image-forming model of formula (3) to come identification σ, with formula (2) substitution formula (3), obtains:
g ( x , y ) = 1 2 πσ 2 ∫ - ∞ + ∞ ∫ - ∞ + ∞ f ( u , v ) e - ( x - u ) 2 + ( y - v ) 2 2 σ 2 dudv - - - ( 4 )
(x y) with point spread parameter σ, utilizes formula (4) can obtain observed object imaging to given observed object f, picture is corresponding one by one with point spread parameter σ, for showing difference, the picture that calculates gained is called virtual representation, adopt to such an extent that image is called the picture of object with actual; Therefore, (x when picture y) is known, by contrasting with virtual representation, can determine the point spread parameter of its correspondence as observed object f; Based on this, the picture of the observed object of a given σ to be determined, can determine the microscope point spread parameter by the following step:
1), utilize formula (4) to generate the virtual representation of observed object for each possible σ;
2) picture with virtual representation and object compares, and tries to achieve the quadratic sum of corresponding points difference:
Σ x , y ( g ( x , y ) - 1 2 πσ 2 ∫ - ∞ + ∞ ∫ - ∞ + ∞ f ( u , v ) e - ( x - u ) 2 + ( y - v ) 2 2 σ 2 dudv ) 2 - - - ( 5 )
3) search σ, it is best to seek the picture coupling make virtual representation and object
Figure C2005100162960003C5
J = Min { Σ x , y ( g ( x , y ) - 1 2 π σ ^ 2 ∫ - ∞ + ∞ ∫ - ∞ + ∞ f ( u , v ) e - ( x - u ) 2 + ( y - v ) 2 2 σ ^ 2 dudv ) 2 } - - - ( 6 )
4) by the principle of least squares, the virtual image found in the previous step is the "equivalent result" of the image of the object, and the \hat{\sigma} at that point is the point spread parameter of the imaging system at the moment the object image was captured.
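A minimal sketch of the four-step identification above (Python with NumPy/SciPy): the discrete Gaussian blur of `scipy.ndimage.gaussian_filter` stands in for the continuous convolution of formula (4), and the candidate grid for σ is an illustrative choice rather than anything specified in the claim.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def identify_sigma(f, g, sigmas):
    """Steps 1)-4): pick the sigma whose virtual image of the sharp object f
    best matches the captured image g, in the least-squares sense of
    formulas (5)-(6)."""
    f = np.asarray(f, dtype=float)
    g = np.asarray(g, dtype=float)
    best_sigma, best_J = None, np.inf
    for sigma in sigmas:
        virtual = gaussian_filter(f, sigma)     # step 1): discretised formula (4)
        J = float(((g - virtual) ** 2).sum())   # step 2): formula (5)
        if J < best_J:                          # step 3): minimisation of formula (6)
            best_sigma, best_J = sigma, J
    return best_sigma                           # step 4): the identified parameter

# e.g. sigma_hat = identify_sigma(sharp_image, defocused_image,
#                                 np.linspace(0.5, 10.0, 96))
```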
5. The extraction method according to claim 4, characterized in that, for the case where the micromanipulation tool is a slender micropin, the point spread parameter σ is determined quickly by the following steps (a sketch follows the list):
1) for each possible σ, generate the virtual image of the observed object using formula (7):
g(x, y) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{y_{\min}}^{y_{\max}} f(x, v)\, e^{-\frac{(y-v)^{2}}{2\sigma^{2}}}\, dv   (7)
2) compare the virtual image with the image of the object and compute the sum of squared differences at corresponding points:
\sum_{y} \left( g(x, y) - \frac{1}{\sqrt{2\pi}\,\sigma} \int_{y_{\min}}^{y_{\max}} f(x, v)\, e^{-\frac{(y-v)^{2}}{2\sigma^{2}}}\, dv \right)^{2}   (8)
3) search over σ for the \hat{\sigma} that makes the virtual image best match the image of the object:

J = \min\left\{ \sum_{y} \left( g(x, y) - \frac{1}{\sqrt{2\pi}\,\hat{\sigma}} \int_{y_{\min}}^{y_{\max}} f(x, v)\, e^{-\frac{(y-v)^{2}}{2\hat{\sigma}^{2}}}\, dv \right)^{2} \right\}   (9)
4) by the principle of least squares, the virtual image found in the previous step is the "equivalent result" of the image of the object, and the \hat{\sigma} at that point is the point spread parameter σ of the imaging system at the moment the object image was captured.
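The slender-needle variant replaces the 2-D convolution with the 1-D column convolution of formula (7). A sketch under the same assumptions (a scan line is taken to be a single image column, and `scipy.ndimage.gaussian_filter1d` discretises the integral):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def identify_sigma_1d(f_col, g_col, sigmas):
    """Fast 1-D search: f_col is a scan-line column from the sharp image,
    g_col the corresponding column from the defocused image."""
    f_col = np.asarray(f_col, dtype=float)
    g_col = np.asarray(g_col, dtype=float)
    J = [float(((g_col - gaussian_filter1d(f_col, s)) ** 2).sum())
         for s in sigmas]                       # formula (8) per candidate
    return sigmas[int(np.argmin(J))]            # minimiser of formula (9)
```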
6. The extraction method according to claim 5, characterized in that the scan line must be chosen at the micropin tip, and the vertical scan line selected in each defocused image must correspond directly to the scan line selected in the sharply focused image of the observed object; that is, the position of the needle tip must remain constant throughout the imaging process.
7. The extraction method according to claim 5, characterized in that, during the search for the point spread parameter σ, when computing the pointwise sum of squared gray-level differences between the virtual image and the sample, most points in the background region must be ignored and only the micropin image considered, since the key information is contained in the micropin image.
8. The extraction method according to claim 7, characterized in that the region occupied by the micropin is extracted from the scan line by a method based on median filtering.
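A hedged sketch of one way such a median-filtering extraction could look (Python/SciPy); the claim names only the median-filtering principle, so the window size, the residual threshold, and the thresholding rule are illustrative assumptions:

```python
import numpy as np
from scipy.signal import medfilt

def needle_region(scan_line, kernel=9, k=3.0):
    """Flag the points on a scan line that deviate strongly from the
    median-filtered background, i.e. the pixels of the micropin image."""
    scan_line = np.asarray(scan_line, dtype=float)
    background = medfilt(scan_line, kernel_size=kernel)
    residual = np.abs(scan_line - background)
    return residual > k * residual.std()   # boolean mask of the needle region
```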
9. The extraction method according to claim 5, characterized in that a golden-section search algorithm is used when searching for the point spread parameter σ.
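Golden-section search is a standard bracketing minimiser; a self-contained sketch follows (Python). The objective passed in would be the sum-of-squares criterion of formula (6) or (9); the bracketing interval and the `ssd_for_sigma` wrapper in the usage line are hypothetical.

```python
import math

def golden_section_min(J, a, b, tol=1e-3):
    """Minimise a unimodal objective J over [a, b] by golden-section search."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0     # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if J(c) < J(d):        # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                  # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

# e.g. sigma_hat = golden_section_min(lambda s: ssd_for_sigma(s), 0.5, 10.0)
```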
10. The extraction method according to claim 4, characterized in that, when computing the virtual image data, discrete convolution can replace continuous convolution; in the computation, the discrete convolution is realized as "Fourier transform-product-inverse Fourier transform".
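The "Fourier transform-product-inverse Fourier transform" realisation is the standard convolution theorem; a sketch with NumPy (zero-padding to the full linear-convolution size is an implementation choice the claim does not specify):

```python
import numpy as np

def fft_convolve(f, h):
    """Discrete 2-D convolution via the convolution theorem:
    conv(f, h) = IFFT( FFT(f) * FFT(h) )."""
    shape = (f.shape[0] + h.shape[0] - 1,
             f.shape[1] + h.shape[1] - 1)      # zero-pad to avoid wrap-around
    F = np.fft.fft2(f, shape)
    H = np.fft.fft2(h, shape)
    return np.real(np.fft.ifft2(F * H))
```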
11. The extraction method according to claim 2, characterized in that the mapping between the point spread parameter σ and the defocus amount is established by an artificial neural network: a two-layer BP neural network with biases, whose hidden layer uses a sigmoid (S-type) transfer function and whose output layer uses a linear transfer function.
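A hedged sketch of such a network for the σ-to-defocus mapping, with scikit-learn standing in for the original implementation; the hidden-layer width and the calibration pairs are placeholders, since the claim fixes only the structure (sigmoid hidden layer with biases, linear output layer):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Calibration pairs (sigma, known defocus) would come from imaging the tool
# at known stage depths; the values below are placeholders.
sigma_train = np.array([[1.2], [2.5], [3.9], [5.4], [7.0]])
depth_train = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # micrometres

# One logistic (S-type) hidden layer with biases; MLPRegressor's output
# layer is linear, matching the structure named in the claim.
net = MLPRegressor(hidden_layer_sizes=(8,), activation='logistic',
                   solver='lbfgs', max_iter=5000, random_state=0)
net.fit(sigma_train, depth_train)

depth = net.predict([[4.2]])   # defocus amount for a measured sigma
```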
CNB2005100162967A 2005-03-14 2005-03-14 Method and equipment for deep information extraction for micro-operation tool based-on microscopic image processing Expired - Fee Related CN100351057C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100162967A CN100351057C (en) 2005-03-14 2005-03-14 Method and equipment for deep information extraction for micro-operation tool based-on microscopic image processing

Publications (2)

Publication Number Publication Date
CN1693037A CN1693037A (en) 2005-11-09
CN100351057C true CN100351057C (en) 2007-11-28

Family

ID=35352254

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100162967A Expired - Fee Related CN100351057C (en) 2005-03-14 2005-03-14 Method and equipment for deep information extraction for micro-operation tool based-on microscopic image processing

Country Status (1)

Country Link
CN (1) CN100351057C (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102347193B (en) * 2010-08-02 2015-09-30 北京中科信电子装备有限公司 Optimization algorithm for fast beam adjustment of large-angle ion implanter
JP5644447B2 (en) * 2010-12-06 2014-12-24 ソニー株式会社 Microscope, region determination method, and program
CN103959307B (en) * 2011-08-31 2017-10-24 Metaio有限公司 The method of detection and Expressive Features from gray level image
US10607350B2 (en) 2011-08-31 2020-03-31 Apple Inc. Method of detecting and describing features from an intensity image
EP2933327A4 (en) * 2012-12-12 2016-08-03 Hitachi Chemical Co Ltd Cancer cell isolation device and cancer cell isolation method
CN108364274B (en) * 2018-02-10 2020-02-07 东北大学 Nondestructive clear reconstruction method of optical image under micro-nano scale
CN111239999B (en) * 2020-01-08 2022-02-11 腾讯科技(深圳)有限公司 Optical data processing method and device based on microscope and storage medium
CN111652848B (en) * 2020-05-07 2023-06-09 南开大学 Roboticized adherent cell three-dimensional positioning method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1037035A * 1988-04-08 1989-11-08 Neuromedical Systems, Inc. Automated cytological specimen classification system and method based on neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Method for extracting microscope point spread parameters based on system identification and its applications. Zhao Xin, Yu Bin, Li Min, Lu Guizhang, Liu Jingtai. Chinese Journal of Computers, Vol. 27, No. 1, 2004 *
Depth information criterion for objects in micromanipulation microscopic images. Xie Shaorong, Luo Jun, Zhao Xin. Mechanical Science and Technology, Vol. 21, No. 6, 2002 *
A large-range three-dimensional calibration method for micromanipulation robot systems. Huang Dagang, Lu Guizhang, Zhao Xin, Zhang Jianxun. Robot, Vol. 24, No. 4, 2002 *
Obtaining longitudinal information of micromanipulation targets by microscopic image feature extraction. Zhang Jianxun, Xue Daqing, Lu Guizhang, Li Bin. Robot, Vol. 23, No. 1, 2001 *

Also Published As

Publication number Publication date
CN1693037A (en) 2005-11-09

Similar Documents

Publication Publication Date Title
CN100351057C (en) Method and equipment for deep information extraction for micro-operation tool based-on microscopic image processing
US20210327064A1 (en) System and method for calculating focus variation for a digital microscope
TWI829694B (en) Systems, devices, and methods for providing feedback on and improving the accuracy of super-resolution imaging
US10944896B2 (en) Single-frame autofocusing using multi-LED illumination
US8000511B2 (en) System for and method of focusing in automated microscope systems
US9297995B2 (en) Automatic stereological analysis of biological tissue including section thickness determination
US8502146B2 (en) Methods and apparatus for classification of defects using surface height attributes
CN1651905A (en) Quantitative analyzing method for non-metal residue in steel
CN1818927A (en) Fingerprint identifying method and system
CN1552041A (en) Face meta-data creation and face similarity calculation
US20120249770A1 (en) Method for automatically focusing a microscope on a predetermined object and microscope for automatic focusing
TWI811758B (en) Deep learning model for auto-focusing microscope systems, method of automatically focusing a microscope system, and non-transitory computer readable medium
US10475198B2 (en) Microscope system and specimen observation method
Yu et al. Autofocusing algorithm comparison in bright field microscopy for automatic vision aided cell micromanipulation
CN116579958A (en) Multi-focus image fusion method of depth neural network guided by regional difference priori
CN112069735B (en) Full-slice digital imaging high-precision automatic focusing method based on asymmetric aberration
WO2023031622A1 (en) System and method for identifying and counting biological species
Wang et al. Simultaneous depth estimation and localization for cell manipulation based on deep learning
CN114384681A (en) Rapid and accurate automatic focusing method and system for microscope, computer equipment and medium
CN109856015B (en) Rapid processing method and system for automatic diagnosis of cancer cells
Song et al. A new auto-focusing algorithm for optical microscope based automated system
US20240062988A1 (en) Machine vision-based automatic focusing and automatic centering method and system
Redondo et al. Evaluation of autofocus measures for microscopy images of biopsy and cytology
CN116551701B (en) Robot control method, apparatus, electronic device and storage medium
US20210286972A1 (en) Determination method, elimination method and apparatus for electron microscope aberration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee