CN1987346A - Method and device for quick high precision positioning light spot image mass center - Google Patents
- Publication number: CN1987346A
- Authority: CN (China)
- Prior art keywords: value, pixel, marking, mark, light spot
- Legal status: Granted
- Classification: Image Analysis (AREA)
Abstract
The method includes the steps of: performing a Gaussian convolution operation on the pixel gray values; determining whether each pixel's gray value after the operation exceeds a preset threshold; if so, marking the current pixel, identifying the light spot to which it belongs, computing the product of the current pixel's gray value and its coordinate values, accumulating these products over all pixels of the same spot, and storing the accumulated values; otherwise, marking the current pixel as a background pixel. Once the whole output image has been processed, the method divides, for each spot, the accumulated gray-value-by-coordinate products by the accumulated gray values, and outputs the quotients as the coordinates of the spot's centroid. The invention also discloses a centroid positioning device. Advantages include higher data-processing speed, stronger noise immunity, and the ability to process multiple light spot images.
Description
Technical Field
The invention relates to machine vision inspection technology, and in particular to a method and a device for fast, high-precision positioning of the centroid of a light spot image.
Background
The light spot image is a common form of image information in machine vision and pattern recognition, and the spot center is a key feature of it. Spot-center positioning is widely applied to target tracking in machine vision, feature-point extraction for high-precision three-dimensional measurement in visual inspection, positioning of the laser spot center in deep-space laser communication, star-point positioning in the star sensor of an attitude-measurement component, and sun-spot positioning in a sun sensor.
Currently, methods for locating the center of a light spot fall into two main categories: grayscale-based positioning and edge-based positioning. Grayscale-based methods position the spot using the gray-level distribution of the target spot image, e.g. the centroid method or surface fitting; edge-based methods use the edge-shape information of the target spot image, e.g. edge circle (ellipse) fitting and the Hough transform.
Grayscale-based positioning is generally more accurate than edge-based positioning. Surface-fitting methods typically fit a Gaussian surface to the gray-level distribution of the target spot image, but evaluating the usual two-dimensional Gaussian surface function is comparatively expensive, so the centroid method, being simpler to implement while still highly accurate, is the most widely used. The centroid method has several improved forms, chiefly the thresholded centroid method and the squared-weighted centroid method. The thresholded centroid method is equivalent to subtracting a background threshold from the original image and computing the centroid of the pixels above the threshold; the squared-weighted centroid method uses the square of the gray value in place of the gray value as the weight, emphasizing the influence of high-gray-value pixels near the center on the computed position.
In the prior art, space applications with high real-time requirements for dynamic visual tracking, measurement, and miniaturization require spot-center positioning to process images with large data volumes, and the processing exhibits great parallelism (parallel operations, parallel images, parallel neighborhoods, parallel pixel positions, and so on). However, current spot-center positioning methods are mainly implemented in software on a computer, executing serially, instruction by instruction, so spot-center positioning becomes the bottleneck of image-data preprocessing. To address real-time spot-center positioning, the Jet Propulsion Laboratory (JPL) in the United States proposed a window-based centroid positioning device, implemented with analog circuitry and embedded in an image sensor chip. That device can position image centroids in several windows simultaneously, but because it mainly uses analog circuitry and a window-based data-processing mode, its implementation has the following defects:
1) the light spot processing window cannot be set flexibly: the window must not be too large, otherwise two or more light spots that may exist within it are treated as a single spot, and the result is erroneous;
2) the approximate position and extent of each light spot in the image must be known in advance in order to set the windows;
3) processing and transmission speed limit how many windows can be set, so when the image contains many light spots, not all of them can be acquired;
4) because the device is implemented with analog circuitry, it is sensitive to noise, and noise can introduce large positioning errors.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a fast, high-precision method for positioning the centroid of a light spot image, which improves data-processing speed and noise immunity in spot-image centroid positioning and can process any number of light spots of any size.
Another objective of the present invention is to provide a fast, high-precision device for positioning the centroid of a light spot image, which resolves the preprocessing bottleneck for large-data-volume images and the noise-sensitivity problem in spot-image centroid positioning, and can likewise process any number of light spots of any size.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a fast high-precision spot image centroid positioning method comprises the following steps:
A. performing Gaussian convolution operation on the pixel gray value, and judging whether the pixel gray value subjected to the Gaussian convolution operation is larger than a preset threshold value or not, if so, executing the step B, otherwise, executing the step C;
B. marking the current read pixel, identifying the light spot to which the current pixel belongs, calculating the product of the gray value and the coordinate value of the current pixel and the accumulated value of the products of the gray value and the coordinate value of all pixels of the same processed light spot, storing the obtained accumulated value, and executing the step D;
C. marking the current pixel as a background pixel, and judging whether to adjust the storage data of the light spot to which the current pixel belongs, if so, adjusting the storage data of the light spot to which the current pixel belongs, otherwise, executing the step D;
D. judging whether the whole output image has been processed; if not, returning to step A; if so, calculating for each light spot the quotient of the accumulated gray-value-by-coordinate products obtained in step B and the accumulated gray values, and outputting the quotients as the coordinates of each spot's centroid.
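As an illustration only, steps A to D can be sketched in software as a single raster scan over an already-filtered image (the function name, the left/up neighborhood, and the union-find equivalence table below are assumptions; the patent implements the flow in parallel FPGA hardware rather than this sequential form):

```python
import numpy as np

def spot_centroids(img, threshold):
    """Raster-scan spot centroid extraction (software sketch of steps A-D).

    `img` is assumed to be already Gaussian-filtered (step A).
    Returns {label: (x_centroid, y_centroid)}, one entry per spot.
    """
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}                 # equivalence table (union-find forest)
    sums = {}                   # label -> [sum(g*x), sum(g*y), sum(g)]
    next_label = 1

    def find(a):                # follow the equivalence chain to its root
        while parent[a] != a:
            a = parent[a]
        return a

    for y in range(h):
        for x in range(w):
            g = img[y, x]
            if g <= threshold:              # step C: background pixel
                continue
            left = labels[y, x - 1] if x > 0 else 0
            up = labels[y - 1, x] if y > 0 else 0
            if left:                        # step B: inherit the left mark
                lab = left
                if up and find(up) != find(left):
                    parent[find(up)] = find(left)  # merge equivalent marks
            elif up:                        # otherwise inherit the upper mark
                lab = up
            else:                           # new spot: issue a fresh mark
                lab = next_label
                parent[lab] = lab
                sums[lab] = [0.0, 0.0, 0.0]
                next_label += 1
            labels[y, x] = lab
            s = sums[find(lab)]             # accumulate g*x, g*y, g
            s[0] += g * x
            s[1] += g * y
            s[2] += g

    # step D: fold merged classes together, then divide the accumulators
    merged = {}
    for lab, (sx, sy, sg) in sums.items():
        m = merged.setdefault(find(lab), [0.0, 0.0, 0.0])
        m[0] += sx
        m[1] += sy
        m[2] += sg
    return {lab: (sx / sg, sy / sg) for lab, (sx, sy, sg) in merged.items()}
```

For a two-pixel spot of equal gray values at (1, 1) and (2, 1), this yields the expected centroid (1.5, 1.0), and U-shaped spots whose branches start under separate marks are folded into one spot by the equivalence table.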
Wherein step B further includes merging equivalent marks within the same light spot while marking. Performing the Gaussian convolution operation further comprises: reading the gray value of the current pixel and caching it.
In the above method, the marking the current pixel in step B further includes:
b11, judging whether the marking value of the left pixel of the current pixel is zero, if not, marking the current pixel as the marking value of the left pixel, and executing the step B13, otherwise, executing the step B12;
b12, judging whether the marking value of the pixel above the current pixel is not zero, if so, marking the current pixel as the marking value of the pixel above, executing the step B13, otherwise, marking the current pixel as a new marking value, and updating the new marking value;
and B13, assigning the current pixel mark value to the corresponding mark parameter in the left mark parameter and the upper mark parameter group.
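The neighbor test of steps B11 to B13 can be sketched as a small helper (an illustrative sketch; the names are assumptions, 0 denotes background, and the step-B13 assignment to the left and upper marking parameters is left to the caller):

```python
def mark_pixel(left_mark, up_mark, new_mark):
    """Steps B11-B13: pick the current pixel's mark from its neighbors.

    Returns (mark_for_current_pixel, next_new_mark_value).
    """
    if left_mark != 0:        # B11: inherit the left neighbor's mark
        return left_mark, new_mark
    if up_mark != 0:          # B12: otherwise inherit the upper neighbor's
        return up_mark, new_mark
    # neither neighbor belongs to a spot: issue a new mark and update it
    return new_mark, new_mark + 1
```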
In the above method, the merging of equivalent marks within the same light spot further includes:
b21, judging the marking values of the pixels to the left of and above the current pixel: if both are zero, setting the equivalent marking parameter corresponding to the current pixel to a new equivalent marking value, updating the new equivalent marking value, and executing step B22; if both are non-zero and unequal, incrementing the merged-mark count by 1 and executing step B22;
b22, judging whether the number of the merged marks is 1, if so, merging the equivalent marks of the pixels on the left side of the current pixel into the equivalent marks of the pixels above the current pixel, and updating the new equivalent mark value to be the previous new equivalent mark value; if the merging flag number is not 1, executing step B23;
b23, judging whether the equivalent mark value of the left pixel of the current pixel is equal to the equivalent mark value of the upper pixel of the current pixel, if not, merging the equivalent data, merging the equivalent mark of the upper pixel of the current pixel into the equivalent mark of the left pixel of the current pixel, and if so, not processing.
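A simplified software equivalent of the merging in steps B21 to B23 (a sketch under assumptions: the patent's merged-mark counter and the roll-back of the new equivalence value are hardware bookkeeping; here a dictionary mapping marks to equivalence classes is folded directly, which has the same net effect):

```python
def merge_equivalence(left_mark, up_mark, equiv, new_equiv):
    """Maintain the equivalence table `equiv` (mark -> equivalence class).

    Marks of 0 denote background. Returns the next new equivalence value.
    """
    if left_mark == 0 and up_mark == 0:
        # B21: an isolated start of a spot opens a new equivalence class
        return new_equiv + 1
    if left_mark and up_mark and equiv[left_mark] != equiv[up_mark]:
        # B23: two different classes meet: fold the upper pixel's class
        # into the left pixel's class
        src, dst = equiv[up_mark], equiv[left_mark]
        for mark, cls in equiv.items():
            if cls == src:
                equiv[mark] = dst
    return new_equiv
```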
In the above method, step C further includes: clearing the upper marking parameter group and the left marking parameter. The judgment in step C is as follows: judging whether the marking value of the pixel to the left of the current pixel is greater than zero; if so, adjusting the stored data of the light spot to which the current pixel belongs, otherwise not adjusting. The adjustment is: accumulating the value of the accumulator into the data memory corresponding to the equivalent mark value, and clearing the accumulator.
The invention also provides a rapid high-precision light spot image centroid positioning device, which comprises a Gaussian filtering unit, a light spot identification unit and a light spot centroid calculation unit, wherein the Gaussian filtering unit is used for performing Gaussian filtering on the gray value of the pixel of the output image and sending the pixel gray value subjected to the Gaussian filtering to the light spot centroid calculation unit; the light spot identification unit is used for receiving a control signal for light spot identification input by the light spot mass center calculation unit and finishing pixel marking of a light spot image; and the light spot centroid calculating unit is used for calculating the centroids of different light spot images according to the pixel marking values and outputting the final calculation result.
The spot identification unit further includes: the system comprises a mark judger, a left mark register, an upper mark register group, a current mark register and a new mark register; the mark judger is used for marking the pixels; the left marking register, the upper marking register group, the current marking register and the new marking register are used for storing and providing the marking value of the left pixel of the current pixel, the marking value of the pixel above the current pixel, the marking value of the current pixel and the new marking value for the marking judger.
The light spot identification unit further includes: a merging equivalent mark judger, a merging mark register, a new equivalent mark register and an equivalent mark buffer; the merging equivalent mark judger is used for merging equivalent marks in the same light spot; the equivalent mark buffer is used for storing the combined equivalent mark value; a merge flag register for storing a merge flag value; a new equivalence mark register for providing a new equivalence mark value to the merging equivalence mark judger; the left marking register, the upper marking register group and the current marking register are further used for providing the marking value of the left pixel of the current pixel, the marking value of the upper pixel of the current pixel and the marking value of the current pixel for the merging equivalent marking judger.
In the above apparatus, the gaussian filtering unit further includes: the image cache is used for caching the gray value of the read pixel and providing the gray value to the Gaussian convolution operation unit; and the Gaussian convolution operation unit is used for completing Gaussian convolution operation on the cached pixel gray value and outputting the calculation result to the light spot mass center calculation unit.
The spot centroid calculating unit further includes: the row-column counter is used for calculating and providing the coordinate value of each pixel point; the threshold comparator is used for comparing the pixel gray value subjected to the Gaussian convolution operation and output by the Gaussian convolution operation unit with a preset threshold, and outputting a comparison result as a control signal; the first multiplier and the second multiplier are respectively used for calculating the product of the pixel gray value and the x coordinate value and the product of the pixel gray value and the y coordinate value; the first adder and the second adder are respectively used for calculating the accumulated value of the product of the pixel gray value and the x coordinate value and the accumulated value of the product of the pixel gray value and the y coordinate value and respectively sending the obtained accumulated values to the first data memory and the second data memory for storage; the third adder is used for calculating the accumulated value of the pixel gray value and sending the obtained accumulated value to the third data memory for storage; the first data memory, the second data memory and the third data memory are respectively used for storing the accumulated value of the product of the pixel gray value and the x coordinate value, the accumulated value of the product of the pixel gray value and the y coordinate value and the accumulated value of the pixel gray value; the first divider and the second divider are respectively used for calculating the quotient of the pixel gray value and the x coordinate value multiplication accumulated value and the pixel gray value accumulated value and the quotient of the pixel gray value and the y coordinate value multiplication accumulated value and the pixel gray value accumulated value.
With the fast, high-precision spot-image centroid positioning method and device provided by the invention, after Gaussian filtering of the output image's pixel gray values, every pixel of the output image is marked and processed simultaneously, so one or more light spot images can be identified and processed quickly and automatically. The invention has the following advantages:
1) the invention adopts a Gaussian weighted centroid positioning method, and performs Gaussian convolution operation on the output image data to complete Gaussian filtering and then perform spot identification, thereby improving the anti-noise capability of the method and the device and realizing high-precision positioning.
2) The invention marks and processes each pixel of the whole output image, but not adopts a window form, so that any plurality of light spots in the image can be identified and processed, and the size and the shape of the light spots are not limited.
3) When the light spot is initially marked, more than one equivalent mark may exist on the same light spot, so that the image data of the same light spot is stored in more than one data buffer, the equivalent marks belonging to the same light spot are merged while the pixels are marked, and the image data belonging to the same light spot is buffered in the same data buffer by merging the equivalent marks and compressing the equivalent mark values, so that the data storage space can be greatly saved.
4) Because the invention performs pixel marking, equivalent-mark merging, and pixel accumulation in parallel, and implements them in real time in an FPGA hardware device, the bottleneck of preprocessing large-data-volume images is removed; the data update rate can reach up to 30 MHz, enabling real-time centroid extraction.
Drawings
FIG. 1 is a flowchart of an embodiment of a spot image centroid locating method according to the present invention;
FIG. 2 is a flow chart of pixel labeling in the flow chart of FIG. 1;
FIG. 3 is a schematic view of a spot image marked;
FIG. 4 is a flow chart of merging equivalence labels in the flow chart shown in FIG. 1;
FIG. 5 is a schematic view of a spot image after equivalent mark merging of the spot image shown in FIG. 3;
fig. 6 is a schematic structural diagram of a spot image centroid positioning device according to an embodiment of the present invention.
Detailed Description
The basic idea of the invention is: firstly, Gaussian filtering processing is carried out on the pixel gray value of the output image, and then marking and calculating processing are carried out on each pixel of the output image at the same time, so that automatic identification and processing can be carried out on one or more than one light spot images.
Here, the labeling and calculation processing specifically is: comparing each output pixel, marking each light spot pixel, and equivalently marking and combining different pixels in the same light spot when needed, so as to ensure that each pixel of the same light spot is given the same mark and different light spot pixels are marked differently; while marking and combining, carrying out accumulation of the product of the gray value and the coordinate value and accumulation of the gray value on the pixels with the same mark; and after the whole image data is output, dividing the accumulated value of the product of the gray value and the coordinate value of the pixel with the same mark by the accumulated value of the gray value to obtain the centroid positioning coordinate of each light spot. Therefore, the fast high-precision light spot centroid positioning of the light spots with multiple light spots and unlimited sizes and shapes can be realized.
As can be seen from prior-art centroid positioning, noise strongly affects positioning accuracy. The invention therefore adopts a Gaussian-weighted centroid positioning method: instead of computing with the raw gray values of the original image pixels, the gray values are first Gaussian-filtered according to formula (1), and the filtered gray values are then used in the centroid calculation.
I(x, y) = Σi Σj g(i, j) · F(x + i, y + j)    (1)

In formula (1), F(x, y) is the gray value of the output image data, I(x, y) is the gray value after the Gaussian convolution processing, g(i, j) is the Gaussian filter coefficient, and the sums run over the convolution template window.
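A software sketch of the filtering of formula (1) (illustrative only; the edge padding and the 3 × 3 normalized template below are assumptions made for brevity, whereas the embodiment described later uses a 7 × 7 template):

```python
import numpy as np

def gaussian_filter(img, template):
    """Formula (1): 2-D convolution of the image with a Gaussian template."""
    k = template.shape[0] // 2
    padded = np.pad(img, k, mode="edge")   # assumed border handling
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * k + 1, x:x + 2 * k + 1]
            out[y, x] = np.sum(window * template)
    return out

# an assumed 3x3 normalized Gaussian template
g = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0
```

Because the template is normalized, a uniform image passes through unchanged, while an isolated bright pixel is spread over its neighborhood, which is what suppresses single-pixel noise before thresholding.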
Fig. 1 shows a processing procedure of a specific embodiment of the spot image centroid positioning method of the present invention, referring to fig. 1, the spot image centroid positioning method of the present embodiment includes the following processing steps:
step 101: and reading the gray value of the current pixel, and caching the gray value of the currently read pixel.
Here, the number of buffered rows of pixel gray values is generally determined by the size of the Gaussian convolution template, i.e. by how many rows of output image data it spans. For example, with a 7 × 7 Gaussian convolution template, 6 rows of data are cached each time, and subsequent processing starts once the 7th row has been read.
Step 102-103: performing Gaussian convolution operation on the cached gray value, comparing the pixel gray value obtained after the Gaussian convolution operation with a preset threshold, judging whether the pixel gray value is greater than the preset threshold after the operation, if so, indicating that the current pixel is a light spot, and executing step 104 to perform light spot identification; otherwise, indicating that the current pixel is background, step 107 is performed.
Here, the Gaussian convolution operation is likewise determined by the Gaussian convolution template. For a 7 × 7 template, 7 rows and 7 columns of output image data are processed at a time, usually scanning from left to right and top to bottom starting from the image origin. The convolution follows the calculation of formula (1); any concrete implementation that achieves the computation of formula (1) may be used. The threshold is generally chosen from the contrast between the spot gray level and the background: the lower the spot contrast, the lower the threshold; the higher the spot contrast, the higher the threshold.
Step 104: and marking the currently read pixel, and identifying the light spot to which the current pixel belongs.
Here, background pixels may be marked with zero and non-background pixels with non-zero values. In practice the background may be marked with any other value, with non-background pixels marked differently, as long as background and non-background can be distinguished and different light spots can be told apart. For convenience of calculation and marking, zero and positive integers are typically used as marking values, although negative or fractional values could also be used. The following steps take background pixels marked as zero and non-background pixels marked with non-zero positive integers as the example.
Specifically, the marking process of each pixel is shown in fig. 2, and includes the following steps:
Steps 104a to 104b: judging whether the marking value of the pixel to the left of the current pixel is zero; if not, marking the current pixel with the left pixel's marking value and executing step 104f; if so, executing step 104c.
Steps 104c to 104e: judging whether the marking value of the pixel above the current pixel is zero; if not, marking the current pixel with the upper pixel's marking value and executing step 104f; if so, marking the current pixel with a new marking value and updating the new marking value.
Here, the new flag value may be stored using a special register for providing the pixel with a new flag value, which may be updated in different ways as long as it is ensured that the new flag value provided each time is not repeated. Such as: after each new mark value is used, the new mark value is added with 1 and is stored again for the next pixel marking.
Step 104 f: and assigning the current pixel marking value to the corresponding marking parameter in the left marking parameter group and the upper marking parameter group for marking the next pixel and the next line of pixels.
Here, the upper flag parameter group may be stored by a buffer, and the left flag parameter group may be stored by a register. The left marking parameter is a marking value, and is set to zero during initialization, the upper marking parameter group is used to store a group of marking parameter values, an array may be adopted, and each mark in the group corresponds to a pixel, for example: one line has 10 pixels, the upper mark parameter group is a mark group consisting of 10 marks, each mark corresponds to one pixel in the line, and the initial values of the group of mark parameters are all zero. Accordingly, in the case of assigning, the marking value of the current pixel is assigned to a marking parameter in the set of marking parameters corresponding to the current pixel, for example: there are 10 pixels in a row, the upper marking parameter group includes 10 marking parameters, and the current pixel is the 5 th pixel in the row, then the assigning means assigning the marking value of the current pixel to the 5 th marking parameter in the upper marking parameter group. When the judgment is carried out, the marking value of the pixel above the current pixel is also judged by finding the marking parameter corresponding to the serial number of the current pixel in the upper marking parameter group.
Steps 104a to 104f constitute the marking process for one pixel; repeating them marks every pixel in the output image. For example, for the pixel at row 2, column 4 in fig. 3, the marking value of the pixel to its left is checked first; since it equals zero, the marking value of the pixel above is checked next, and since that also equals zero, the current pixel is given a new marking value and the new marking value is updated. Likewise, for the pixel at row 2, column 5 in fig. 3, the marking value of the pixel to its left is checked first; since it equals 2, the current pixel is directly marked 2.
Step 105: the equivalent marks in the same spot are combined.
Fig. 3 is a schematic diagram of an image marked by the method of fig. 2, wherein the area covered by the shadow in fig. 3 is a light spot, and four light spots are shown in fig. 3. As can be seen from fig. 3, for the same spot, there may be a plurality of different marks which are equivalent for the same spot, so in order to unify all marks in the same spot, the present invention adopts the flow shown in fig. 4 to merge equivalent marks, each spot is assigned with the same equivalent mark value, and in the case of zero background mark, the equivalent mark value is also a positive integer starting from 1. The specific process of merging equivalent labels is shown in fig. 4, and includes:
Steps 105a to 105c: judging the marking values of the pixels to the left of and above the current pixel. If both are zero, the equivalent marking parameter corresponding to the current pixel is set to a new equivalent marking value, the new equivalent marking value is updated, and step 105d is executed. If both are non-zero and unequal, the two marks are equivalent, so the merged-mark count is incremented by 1 and step 105d is executed.
Here, the new equivalent mark value may be stored using a dedicated register for providing the pixel with a new equivalent mark value, which may be updated in different ways as long as it is ensured that the new equivalent mark value provided each time is not repeated. Such as: after each time a new equivalent mark value is used, the new equivalent mark value is added with 1 and is saved again for the next pixel marking. The number of the merged marks is used for recording the number of equivalent marks to be merged, the value of the number of the merged marks can be stored by a register, and the finally obtained equivalent mark value can be stored by a special buffer.
Steps 105d to 105h: judging whether the merged-mark count equals 1; if so, the equivalent mark of the pixel to the left of the current pixel is merged into the equivalent mark of the pixel above it, and the new equivalent mark value is rolled back to the previous one. If the update in step 105b increments the new equivalent mark value by 1 each time, rolling back here means decrementing the current new equivalent mark value by 1.
Updating the new equivalent mark value to the previous one during merging compresses the range of equivalent mark values. Since the equivalent mark value is the address of the corresponding data memory, compressing this range greatly saves data memory units. As shown in fig. 3, without equivalent-mark compression 19 data storage units are used, one per pixel mark, most of them empty and wasted; after compression, as shown in fig. 5, the spot image needs only 4 data storage units.
If the merged-mark count is not equal to 1, it is further judged whether the equivalent mark value of the pixel to the left of the current pixel equals that of the pixel above it. If not, the equivalence data are merged along with the equivalent marks. Here, equivalence data means the data in the storage space corresponding to an equivalent mark. Specifically: the data in the memory space corresponding to the upper pixel's equivalent mark are merged into the memory space corresponding to the left pixel's equivalent mark, the upper pixel's memory space is cleared, and the upper pixel's equivalent mark is merged into the left pixel's equivalent mark. If the equivalent marks are equal, nothing is done.
The result of applying this merging process to the image of fig. 3 is shown in fig. 5. Note that the mark on each pixel in fig. 5 is in fact the data memory address where the data of the light spot to which that pixel belongs is finally stored. For example, the upper-left spot in fig. 5 carries mark 1, and the image data of that spot is stored in the data memory at equivalent mark (address) 1.
In practical applications, if all pixels within one light spot already carry the same mark value, no equivalent-mark merging is needed; likewise, if reducing memory occupation is not a concern, this step may be skipped. Step 105 is therefore optional.
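The marking of step 104 and the equivalence merging of step 105 together amount to a single raster-scan connected-component labeling pass. The following is a minimal software sketch, assuming 4-connectivity via the left and upper neighbors as in fig. 2; the name `label_spots` and the dictionary-based equivalence table are illustrative assumptions, not the hardware design:

```python
def label_spots(binary):
    """Raster-scan labeling: inherit the left neighbor's mark if set,
    else the upper neighbor's; otherwise open a new mark. Record an
    equivalence when the left and upper marks disagree."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}          # equivalence table: mark -> representative
    next_mark = 1

    def find(m):
        while parent[m] != m:
            m = parent[m]
        return m

    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            left = labels[y][x - 1] if x > 0 else 0
            up = labels[y - 1][x] if y > 0 else 0
            if left:
                labels[y][x] = left
                if up and find(up) != find(left):
                    parent[find(up)] = find(left)   # merge equivalent marks
            elif up:
                labels[y][x] = up
            else:
                labels[y][x] = next_mark
                parent[next_mark] = next_mark
                next_mark += 1
    # resolve equivalences so each spot carries one final mark
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

A U-shaped spot first receives two marks on its upper row; the merge at the row where the arms join reduces them to one, mirroring the compression illustrated by figs. 3 and 5.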
Step 106: add the product of the current pixel's gray value and coordinate values to the running sum of such products over all previously processed pixels of the same light spot, add the current pixel's gray value to the running gray-value sum of the same light spot, store the resulting sums, and execute step 110.
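Step 106's accumulators can be sketched as follows; the dictionary keyed by equivalent mark stands in for the mark-addressed data memories, and the name `accumulate` is illustrative:

```python
def accumulate(acc, mark, x, y, gray):
    """Add one pixel's contributions to its spot's accumulators:
    sum(x*I), sum(y*I) and sum(I), keyed by the spot's equivalent mark."""
    sx, sy, sg = acc.get(mark, (0, 0, 0))
    acc[mark] = (sx + x * gray, sy + y * gray, sg + gray)

acc = {}
accumulate(acc, 1, 2, 3, 10)   # pixel (x=2, y=3) with gray value 10
accumulate(acc, 1, 4, 3, 30)   # pixel (x=4, y=3) with gray value 30
# acc[1] == (2*10 + 4*30, 3*10 + 3*30, 10 + 30) == (140, 120, 40)
```

In the device these three sums are produced in parallel by the two multiplier/adder pairs and the third adder, one update per pixel clock.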
In the present invention, steps 104, 105 and 106 are carried out in parallel for each pixel, which greatly increases the processing speed.
Step 107: mark the current pixel as a background pixel (zero in this embodiment), and clear the upper marking parameter group and the left marking parameter.
Here, the upper and left marking parameter groups are defined as described in step 104f.
Steps 108-109: judge whether the mark value of the pixel to the left of the current pixel is greater than zero. If not, execute step 110 directly; if so, adjust the stored data of the light spot to which that pixel belongs, specifically: add the accumulator's value into the data memory addressed by the equivalent mark value, and clear the accumulator.
Step 110: judge whether the entire output image has been processed. If so, execute step 111; otherwise, return to step 101. Whether the output image has been fully processed can be determined by checking whether the end flag of the current output image has been read.
Step 111: according to formula (2), divide the accumulated product of gray values and coordinate values obtained in step 106 by the accumulated gray value, and output the resulting quotient as the centroid coordinate value of the light spot image.
In formula (2), I(x, y) denotes the gray value of the output image data after Gaussian convolution processing; x0 and y0 denote the x and y coordinate values of the light spot image centroid.
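Formula (2) is the gray-weighted centroid, x0 = Σ(x·I)/ΣI and y0 = Σ(y·I)/ΣI. A minimal sketch of the division performed in step 111 (the function name `centroid` is an assumption):

```python
def centroid(pixels):
    """Gray-weighted centroid of one spot per formula (2):
    x0 = sum(x*I) / sum(I), y0 = sum(y*I) / sum(I).
    `pixels` is a list of (x, y, gray) triples."""
    sx = sum(x * g for x, _, g in pixels)
    sy = sum(y * g for _, y, g in pixels)
    sg = sum(g for _, _, g in pixels)
    return sx / sg, sy / sg

# symmetric 3-pixel spot: the centroid falls on the middle pixel
print(centroid([(1, 0, 50), (2, 0, 100), (3, 0, 50)]))  # (2.0, 0.0)
```

Because the sums are formed incrementally during the scan, only one division per spot is needed at the end, which is why the dividers sit last in the fig. 6 data path.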
To implement the above method, the present invention provides a corresponding light spot image centroid positioning device. As shown in fig. 6, the device comprises a light spot identification unit 61, a gaussian filtering unit 62 and a light spot centroid calculating unit 63. The light spot identification unit 61 receives a control signal for light spot identification from the threshold comparator 632 in the light spot centroid calculating unit 63, and completes the marking of light spot image pixels and the merging of equivalent marks within the same light spot. The light spot identification unit 61 further comprises a mark determiner 611, a merging equivalent mark determiner 612, a left mark register 613, an upper mark register group 614, a current mark register 615, a new mark register 616, a merge mark register 617, a new equivalent mark register 618 and an equivalent mark buffer 619.
The mark determiner 611 marks each pixel following the process shown in fig. 2, completing the marking of the current pixel using the mark values stored in the current mark register 615, the left mark register 613, the upper mark register group 614 and the new mark register 616.
The merging equivalent mark determiner 612 merges equivalent marks within the same light spot following the process shown in fig. 4, completing the merging for different pixels of the same spot using the left mark register 613, the upper mark register group 614, the merge mark register 617 and the new equivalent mark register 618, and stores the resulting equivalent mark values in the equivalent mark buffer 619.
The current mark register 615, left mark register 613, upper mark register group 614, new mark register 616, merge mark register 617, new equivalent mark register 618 and equivalent mark buffer 619 store and provide, respectively, the mark value of the current pixel, the mark value of the pixel to its left, the mark values of the pixels above it, the new mark value, the merge mark value, the new equivalent mark value and the final equivalent mark values to the mark determiner 611 and the merging equivalent mark determiner 612. The upper mark register group 614 and the equivalent mark buffer 619 each hold a group of mark parameter values, for example the mark values of one row of pixels; the remaining registers each hold a single mark value. The equivalent mark buffer 619 also supplies the merged equivalent mark of each light spot, as an address, to the data memories 635a, 635b and 635c in the light spot centroid calculating unit 63, so that the image data of each light spot is finally stored in the data memory location corresponding to its merged equivalent mark.
If no equivalent-mark merging is performed, the merging equivalent mark determiner 612, the merge mark register 617, the new equivalent mark register 618 and the equivalent mark buffer 619 may be omitted.
The gaussian filtering unit 62 is configured to perform gaussian filtering on the gray value of the output image pixel, and send the pixel gray value subjected to the gaussian filtering to the light spot centroid calculating unit 63; the gaussian filtering unit 62 further includes an image buffer 621 and a gaussian convolution operation unit 622, wherein the image buffer 621 is used for buffering the gray value of the read pixel and providing the gray value to the gaussian convolution operation unit 622; the gaussian convolution operation unit 622 is configured to perform gaussian convolution operation on the buffered pixel gray scale value, and output the calculation result to the threshold comparator 632, the multipliers 633a and 633b, and the adder 634c in the light spot centroid calculation unit 63.
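The gaussian convolution of unit 622 can be illustrated in software. The patent does not specify the kernel, so the 3x3 binomial approximation with integer weights below (hardware-friendly, since division by 16 is a shift) is an assumption, as is the name `gaussian3x3`:

```python
# 3x3 binomial approximation of a Gaussian kernel; the kernel size and
# the integer weights are assumptions, not taken from the patent.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]          # weights sum to 16

def gaussian3x3(img):
    """Convolve a grayscale image with the 3x3 kernel above,
    normalizing by the weight sum (border pixels left unfiltered)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(KERNEL[j][i] * img[y - 1 + j][x - 1 + i]
                    for j in range(3) for i in range(3))
            out[y][x] = s // 16   # divide by 16 = right-shift by 4
    return out
```

Smoothing before thresholding suppresses single-pixel noise, so fewer spurious marks reach the identification unit.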
The light spot centroid calculating unit 63 is configured to calculate a centroid of the light spot image, and output a final calculation result. The spot centroid calculating unit 63 further includes: a row-column counter 631 for calculating and providing coordinate values of each pixel point, and inputting the x and y coordinate values of each pixel to the multipliers 633a and 633b, respectively; the threshold comparator 632 is configured to receive the pixel grayscale value subjected to the gaussian convolution operation and output by the gaussian convolution operation unit 622 and a preset threshold value input separately, compare the two values, and send the comparison result as a control signal to the mark determiner 611, the merge equivalence mark determiner 612, and the adders 634a, 634b, and 634 c.
The spot centroid calculating unit 63 further includes multipliers 633a and 633b, adders 634a, 634b and 634c, data memories 635a, 635b and 635c, and dividers 636a and 636b. Multipliers 633a and 633b receive the gaussian-filtered pixel gray value output by the gaussian convolution operation unit 622 and the x and y coordinate values from the row-column counter, and output the products of the pixel gray value with those coordinate values. Adders 634a, 634b and 634c, gated by the output of the threshold comparator 632, accumulate the outputs of multipliers 633a and 633b and of the gaussian convolution operation unit 622, respectively, together with their previously accumulated results. In practice, multiplier 633a and adder 634a compute the accumulated product of the x coordinate values and gray values of all pixels of the same light spot and store the result in data memory 635a; multiplier 633b and adder 634b do the same for the y coordinate values, storing the result in data memory 635b; and adder 634c accumulates the gray values of all pixels of the same light spot, storing the result in data memory 635c. Dividers 636a and 636b then divide the accumulated x-product and y-product by the accumulated gray value, respectively, yielding the x and y coordinate values of the light spot centroid.
Multipliers 633a and 633b, adders 634a, 634b and 634c, data memories 635a, 635b and 635c, and dividers 636a and 636b together perform the calculation of formula (2).
The spot image centroid positioning device can be realized by a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC).
When performing light spot centroid positioning on an output image, the device shown in fig. 6 reads the current pixel gray value from the output image and buffers it in the image buffer 621. The buffered gray value is then sent to the gaussian convolution operation unit 622 for gaussian convolution, completing the gaussian filtering. The filtered gray value is input to the threshold comparator 632 and compared with the separately input preset threshold; the comparison result determines whether light spot identification is performed. If so, the comparison result is input as a control signal to the mark determiner 611 and the merging equivalent mark determiner 612 in the light spot identification unit 61, which begin the marking of light spot pixels and the merging of equivalent marks; the specific marking and merging follow the flows of figs. 2 and 4, through the cooperation of the mark determiner 611, the merging equivalent mark determiner 612, the left mark register 613, the upper mark register group 614, the current mark register 615, the new mark register 616, the merge mark register 617, the new equivalent mark register 618 and the equivalent mark buffer 619. Meanwhile, the multipliers 633a and 633b, the adders 634a, 634b and 634c and the data memories 635a, 635b and 635c accumulate and store the products of pixel gray values and coordinate values as well as the pixel gray values themselves. Once the entire output image has been processed, the dividers 636a and 636b compute the x and y coordinate values of each light spot centroid. The x and y coordinate values of each pixel are provided throughout by the row-column counter 631.
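The complete data path of fig. 6 — threshold comparison, marking with equivalence merging, mark-addressed accumulation, and a final division per spot — can be sketched as a single-pass software analogue. This is illustrative only: the name `spot_centroids` and the dictionary-based memories are assumptions, and the real device performs these stages in parallel hardware rather than sequentially:

```python
def spot_centroids(img, threshold):
    """One raster scan: threshold each (Gaussian-filtered) gray value,
    label via left/up inheritance, merge equivalent marks together with
    their accumulated data, and accumulate sum(x*I), sum(y*I), sum(I);
    finish with one division per spot (formula (2))."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent, acc = {}, {}     # equivalence table and mark-addressed memory
    next_mark = 1

    def find(m):
        while parent[m] != m:
            m = parent[m]
        return m

    def union(a, b):
        # merge mark b's spot (and its accumulated data) into mark a's
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        parent[rb] = ra
        sx, sy, sg = acc.pop(rb, (0, 0, 0))
        ax, ay, ag = acc.get(ra, (0, 0, 0))
        acc[ra] = (ax + sx, ay + sy, ag + sg)

    for y in range(h):
        for x in range(w):
            g = img[y][x]
            if g <= threshold:
                continue                      # background pixel
            left = labels[y][x - 1] if x > 0 else 0
            up = labels[y - 1][x] if y > 0 else 0
            if left:
                m = left
                if up:
                    union(left, up)
            elif up:
                m = up
            else:
                m = next_mark                 # open a new mark
                parent[m] = m
                next_mark += 1
            labels[y][x] = m
            r = find(m)
            sx, sy, sg = acc.get(r, (0, 0, 0))
            acc[r] = (sx + x * g, sy + y * g, sg + g)

    # one division per spot: (sum(x*I)/sum(I), sum(y*I)/sum(I))
    return {m: (sx / sg, sy / sg) for m, (sx, sy, sg) in acc.items()}
```

Feeding in a frame containing two separated spots returns two centroid pairs, one per surviving equivalent mark, just as the device stores one result per mark-addressed memory location.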
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
Claims (11)
1. A fast high-precision spot image centroid positioning method is characterized by comprising the following steps:
A. performing Gaussian convolution operation on the pixel gray value, and judging whether the pixel gray value subjected to the Gaussian convolution operation is larger than a preset threshold value or not, if so, executing the step B, otherwise, executing the step C;
B. marking the current read pixel, identifying the light spot to which the current pixel belongs, calculating the product of the gray value and the coordinate value of the current pixel and the accumulated value of the products of the gray value and the coordinate value of all pixels of the same processed light spot, storing the obtained accumulated value, and executing the step D;
C. marking the current pixel as a background pixel, and judging whether to adjust the storage data of the light spot to which the current pixel belongs, if so, adjusting the storage data of the light spot to which the current pixel belongs, otherwise, executing the step D;
D. and C, judging whether the whole output image is processed or not, if not, returning to the step A, and if so, calculating the quotient of the accumulated value of the product of the gray value and the coordinate value of each light spot obtained in the step B and the accumulated value of the gray value, and outputting the obtained quotient as the coordinate value of the centroid of each light spot image.
2. The method for locating the centroid of the spot image according to claim 1, wherein the step B further comprises a step of merging equivalent marks in the same spot while marking.
3. The method for positioning the centroid of the spot image according to claim 1 or 2, wherein performing the gaussian convolution operation further comprises: reading the gray value of the current pixel and caching the gray value of the current pixel.
4. The method for locating the centroid of the light spot image according to claim 1 or 2, wherein the marking the current pixel in step B further comprises:
B11. judging whether the mark value of the pixel to the left of the current pixel is zero; if not, marking the current pixel with the left pixel's mark value and executing step B13; otherwise, executing step B12;
B12. judging whether the mark value of the pixel above the current pixel is non-zero; if so, marking the current pixel with the upper pixel's mark value and executing step B13; otherwise, marking the current pixel with a new mark value and updating the new mark value;
B13. assigning the current pixel's mark value to the corresponding parameter in the left marking parameter and the upper marking parameter group.
5. The spot image centroid positioning method according to claim 2, wherein said merging of equivalent marks in the same spot further comprises:
B21. judging the mark values of the pixels to the left of and above the current pixel; if both are zero, setting the equivalent marking parameter corresponding to the current pixel to a new equivalent mark value, updating the new equivalent mark value, and executing step B22; if both are non-zero and unequal, incrementing the merged-mark count by 1 and executing step B22;
B22. judging whether the merged-mark count is 1; if so, merging the equivalent mark of the pixel to the left of the current pixel into the equivalent mark of the pixel above it, and rolling the new equivalent mark value back to its previous value; if the merged-mark count is not 1, executing step B23;
B23. judging whether the equivalent mark value of the pixel to the left of the current pixel equals that of the pixel above it; if not, merging the equivalent data and merging the upper pixel's equivalent mark into the left pixel's equivalent mark; if equal, performing no processing.
6. The spot image centroid positioning method according to claim 1 or 2, wherein the step C further comprises: clearing the upper marking parameter group and the left marking parameter;
the judging in step C is as follows: judging whether the mark value of the pixel to the left of the current pixel is greater than zero; if so, adjusting the stored data of the light spot to which the current pixel belongs, otherwise making no adjustment;
the adjustment is as follows: and accumulating the value of the accumulator into the data memory corresponding to the equivalent mark value, and clearing the accumulator.
7. A fast high-precision light spot image centroid positioning device is characterized by comprising a Gaussian filter unit, a light spot identification unit and a light spot centroid calculation unit, wherein,
the Gaussian filtering unit is used for carrying out Gaussian filtering on the gray value of the pixel of the output image and sending the pixel gray value subjected to the Gaussian filtering processing to the light spot centroid calculating unit;
the light spot identification unit is used for receiving a control signal for light spot identification input by the light spot centroid calculating unit, and completing the pixel marking of a light spot image;
and the light spot centroid calculating unit is used for calculating the centroids of different light spot images according to the pixel marking values and outputting the final calculation result.
8. The spot image centroid locating device according to claim 7, wherein said spot identification unit further comprises: the system comprises a mark judger, a left mark register, an upper mark register group, a current mark register and a new mark register; wherein,
a marking judger for marking the pixels;
the left marking register, the upper marking register group, the current marking register and the new marking register are used for storing and providing the marking value of the left pixel of the current pixel, the marking value of the pixel above the current pixel, the marking value of the current pixel and the new marking value for the marking judger.
9. The spot image centroid locating device according to claim 8, wherein said spot identification unit further comprises: a merging equivalent mark judger, a merging mark register, a new equivalent mark register and an equivalent mark buffer; wherein,
a merging equivalence mark judger for merging equivalence marks in the same light spot;
the equivalent mark buffer is used for storing the combined equivalent mark value;
a merge flag register for storing a merge flag value;
a new equivalence mark register for providing a new equivalence mark value to the merging equivalence mark judger;
the left marking register, the upper marking register group and the current marking register are further used for providing the marking value of the left pixel of the current pixel, the marking value of the upper pixel of the current pixel and the marking value of the current pixel for the merging equivalent marking judger.
10. The spot image centroid locating device according to any one of claims 7 to 9, wherein said gaussian filtering unit further comprises: an image buffer and a gaussian convolution operation unit, wherein,
the image buffer is used for buffering the gray value of the read pixel and providing the gray value to the Gaussian convolution operation unit;
and the Gaussian convolution operation unit is used for completing Gaussian convolution operation on the cached pixel gray value and outputting the calculation result to the light spot mass center calculation unit.
11. The spot image centroid positioning device according to any one of claims 7 to 9, wherein the spot centroid calculating unit further comprises:
the row-column counter is used for calculating and providing the coordinate value of each pixel point;
the threshold comparator is used for comparing the pixel gray value subjected to the Gaussian convolution operation and output by the Gaussian convolution operation unit with a preset threshold, and outputting a comparison result as a control signal;
the first multiplier and the second multiplier are respectively used for calculating the product of the pixel gray value and the x coordinate value and the product of the pixel gray value and the y coordinate value;
the first adder and the second adder are respectively used for calculating the accumulated value of the product of the pixel gray value and the x coordinate value and the accumulated value of the product of the pixel gray value and the y coordinate value and respectively sending the obtained accumulated values to the first data memory and the second data memory for storage;
the third adder is used for calculating the accumulated value of the pixel gray value and sending the obtained accumulated value to the third data memory for storage;
the first data memory, the second data memory and the third data memory are respectively used for storing the accumulated value of the product of the pixel gray value and the x coordinate value, the accumulated value of the product of the pixel gray value and the y coordinate value and the accumulated value of the pixel gray value;
the first divider and the second divider are respectively used for calculating the quotient of the accumulated product of the pixel gray value and the x coordinate value and the accumulated pixel gray value, and the quotient of the accumulated product of the pixel gray value and the y coordinate value and the accumulated pixel gray value.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2006101618026A CN100371676C (en) | 2006-11-01 | 2006-12-01 | Method and device for quick high precision positioning light spot image mass center |
US11/687,338 US8068673B2 (en) | 2006-12-01 | 2007-03-16 | Rapid and high precision centroiding method and system for spots image |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200610114199 | 2006-11-01 | ||
CN200610114199.6 | 2006-11-01 | ||
CNB2006101618026A CN100371676C (en) | 2006-11-01 | 2006-12-01 | Method and device for quick high precision positioning light spot image mass center |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1987346A true CN1987346A (en) | 2007-06-27 |
CN100371676C CN100371676C (en) | 2008-02-27 |
Family
ID=38184230
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2006101618026A Expired - Fee Related CN100371676C (en) | 2006-11-01 | 2006-12-01 | Method and device for quick high precision positioning light spot image mass center |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100371676C (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2889659B2 (en) * | 1990-05-31 | 1999-05-10 | 株式会社リコー | Optical function element |
JP3111434B2 (en) * | 1992-03-31 | 2000-11-20 | オムロン株式会社 | Image processing device |
CN100491899C (en) * | 2005-11-22 | 2009-05-27 | 北京航空航天大学 | Quick and high-precision method for extracting center of structured light stripe |
- 2006-12-01: CN CNB2006101618026A patent/CN100371676C/en not_active Expired - Fee Related
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101349541B (en) * | 2007-07-20 | 2010-09-29 | 华硕电脑股份有限公司 | Method for searching specific image and method for compensating image bias position |
CN100580365C (en) * | 2008-09-17 | 2010-01-13 | 北京航空航天大学 | Two-way mass center tracking imaging method and device |
CN102193819A (en) * | 2010-08-03 | 2011-09-21 | 北京航空航天大学 | Single-point noise resistant method and device for positioning light spot center |
CN101968342A (en) * | 2010-09-21 | 2011-02-09 | 哈尔滨工业大学 | Orthogonal fine scanning based method for measuring mass centers of tiny light spots |
CN101968342B (en) * | 2010-09-21 | 2012-07-25 | 哈尔滨工业大学 | Orthogonal fine scanning based method for measuring mass centers of tiny light spots |
CN102081738B (en) * | 2011-01-06 | 2012-11-21 | 西北工业大学 | Method for positioning mass center of spatial object star image |
CN102081738A (en) * | 2011-01-06 | 2011-06-01 | 西北工业大学 | Method for positioning mass center of spatial object star image |
CN102331795A (en) * | 2011-08-26 | 2012-01-25 | 浙江中控太阳能技术有限公司 | Method for controlling sunlight reflecting device to automatically track sun based on facula identification |
CN102496015A (en) * | 2011-11-22 | 2012-06-13 | 南京航空航天大学 | High-precision method for quickly positioning centers of two-dimensional Gaussian distribution spot images |
CN102496015B (en) * | 2011-11-22 | 2013-08-21 | 南京航空航天大学 | High-precision method for quickly positioning centers of two-dimensional Gaussian distribution spot images |
CN103353387A (en) * | 2013-06-28 | 2013-10-16 | 哈尔滨工业大学 | Light-spot image processing detection system and method for detecting light-spot gray scale centroid and conventional gray-scale image-noise removal effect |
CN103353387B (en) * | 2013-06-28 | 2015-08-19 | 哈尔滨工业大学 | Light spot image process detection system and adopt the method for this systems axiol-ogy hot spot gray scale barycenter and existing gray level image noise remove effect |
CN103630299A (en) * | 2013-11-29 | 2014-03-12 | 北京航空航天大学 | Positioning method and device for real time centroid of large-pixel light spot image |
CN103630299B (en) * | 2013-11-29 | 2015-10-28 | 北京航空航天大学 | A kind of real-time method for positioning mass center of large pixel count light spot image and device |
CN104034353A (en) * | 2014-06-06 | 2014-09-10 | 中国科学院长春光学精密机械与物理研究所 | Computing method of digital sun sensor centroid based on detecting window |
CN104316049A (en) * | 2014-10-28 | 2015-01-28 | 中国科学院长春光学精密机械与物理研究所 | High-precision and low-signal-to-noise-ratio elliptic star spot subdivision location method |
CN107133627A (en) * | 2017-04-01 | 2017-09-05 | 深圳市欢创科技有限公司 | Infrared light spot center point extracting method and device |
WO2018176938A1 (en) * | 2017-04-01 | 2018-10-04 | 深圳市欢创科技有限公司 | Method and device for extracting center of infrared light spot, and electronic device |
US10719954B2 (en) | 2017-04-01 | 2020-07-21 | Shenzhen Camsense Technologies Co., Ltd | Method and electronic device for extracting a center position of an infrared spot |
CN107796323A (en) * | 2017-11-06 | 2018-03-13 | 东南大学 | A kind of micro- change detecting system of bridge based on hot spot vision signal intellectual analysis |
CN109949204A (en) * | 2019-03-29 | 2019-06-28 | 江苏亿通高科技股份有限公司 | The asterism mass center of pipeline organization extracts circuit |
CN109949204B (en) * | 2019-03-29 | 2023-08-15 | 江苏亿通高科技股份有限公司 | Star point centroid extraction circuit of pipeline structure |
CN113658241A (en) * | 2021-08-16 | 2021-11-16 | 北京的卢深视科技有限公司 | Monocular structured light depth recovery method, electronic device and storage medium |
CN116381708A (en) * | 2023-06-07 | 2023-07-04 | 深圳市圳阳精密技术有限公司 | High-precision laser triangular ranging system |
CN117788269A (en) * | 2024-02-27 | 2024-03-29 | 季华实验室 | FPGA-based spot centroid quick positioning method and related equipment thereof |
CN117788269B (en) * | 2024-02-27 | 2024-05-07 | 季华实验室 | FPGA-based spot centroid quick positioning method and related equipment thereof |
Also Published As
Publication number | Publication date |
---|---|
CN100371676C (en) | 2008-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100371676C (en) | Method and device for quick high precision positioning light spot image mass center | |
CN111178250B (en) | Object identification positioning method and device and terminal equipment | |
CN108629231B (en) | Obstacle detection method, apparatus, device and storage medium | |
CN110378297B (en) | Remote sensing image target detection method and device based on deep learning and storage medium | |
EP2858030A1 (en) | Performing a histogram using an array of addressable registers | |
CN108428248B (en) | Vehicle window positioning method, system, equipment and storage medium | |
CN100580365C (en) | Two-way mass center tracking imaging method and device | |
CN109871829B (en) | Detection model training method and device based on deep learning | |
CN103530590A (en) | DPM (direct part mark) two-dimensional code recognition system | |
CN101852616A (en) | Method and device for realizing extraction of star target under high dynamic condition | |
CN110570442A (en) | Contour detection method under complex background, terminal device and storage medium | |
Gluhaković et al. | Vehicle detection in the autonomous vehicle environment for potential collision warning | |
CN110689134A (en) | Method, apparatus, device and storage medium for performing machine learning process | |
CN104574312A (en) | Method and device of calculating center of circle for target image | |
US20080131002A1 (en) | Rapid and high precision centroiding method and system for spots image | |
CN115239700A (en) | Spine Cobb angle measurement method, device, equipment and storage medium | |
CN113034497A (en) | Vision-based thermos cup weld positioning detection method and system | |
CN111507340A (en) | Target point cloud data extraction method based on three-dimensional point cloud data | |
CN112861870A (en) | Pointer instrument image correction method, system and storage medium | |
CN110007764B (en) | Gesture skeleton recognition method, device and system and storage medium | |
CN109801428B (en) | Method and device for detecting edge straight line of paper money and terminal | |
CN113033593B (en) | Text detection training method and device based on deep learning | |
CN110490865B (en) | Stud point cloud segmentation method based on high light reflection characteristic of stud | |
CN116740375A (en) | Image feature extraction method, system and medium | |
CN111199228A (en) | License plate positioning method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20080227 Termination date: 20201201 |
CF01 | Termination of patent right due to non-payment of annual fee |