WO2019000653A1 - Image target identification method and apparatus - Google Patents
Image target identification method and apparatus
- Publication number
- WO2019000653A1 · PCT/CN2017/101704 · CN2017101704W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- pixel
- area
- target
- threshold
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Definitions
- The invention relates to an image target recognition method and apparatus.
- Target recognition in images is the process of using algorithms to let a machine distinguish specific targets or features in an image, providing a basis for further processing of the distinguished targets.
- The human eye is slow at recognizing specific targets, and identifying a large volume of data or a large number of images by hand requires considerable manpower and material resources. Replacing human inspection with machine recognition, and human effort with computer computation, increases speed and reduces cost, which is highly beneficial in the field of image recognition.
- Image target recognition generally follows this pipeline: image preprocessing, image segmentation, feature extraction, and feature recognition or matching.
- Existing methods generally assume a relatively clear input image; few methods handle low-contrast images, in which it is difficult to segment and extract effective target features.
- The technical problem addressed by the present invention is to remedy the above deficiencies of the prior art and to provide an image target recognition method and apparatus that can effectively recognize each target object even in a low-contrast image.
- An image target recognition method includes the following steps. S1: binarize each pixel in the image, dividing the pixels into effective pixels and background points, thereby converting the image into a binarized picture. S2: set a third threshold according to the total number of pixels in the image and the size range of the targets to be identified; compare the number of effective pixels in each connected region of the binarized picture with the third threshold, and if the count is less than the third threshold, set all pixels in that region as background points, thereby removing the region. S3: determine a circumscribed rectangular frame for each remaining connected region to form framed regions, the four sides of each circumscribed rectangle being parallel to the four sides of the image. S4: treat connected regions whose framed areas overlap as one merged whole region and determine the circumscribed rectangular frame of the whole region, again with its four sides parallel to the four sides of the image; in the image, the content inside each circumscribed rectangular frame is a recognized target.
- An image target recognition apparatus includes a binarization processing module, a region removal module, a region framing module, and a region merging module. The binarization processing module binarizes each pixel in the image, dividing the pixels into effective pixels and background points and thereby converting the image into a binarized picture. The region removal module sets a third threshold according to the total number of pixels in the image and the size range of the targets to be identified, compares the number of effective pixels in each connected region of the binarized picture with the third threshold, and, if the count is less than the third threshold, sets the pixels in that region as background points, thereby removing the region.
- The region framing module determines a circumscribed rectangular frame for each remaining connected region to form framed regions, the four sides of each circumscribed rectangle being parallel to the four sides of the image. The region merging module treats connected regions whose framed areas overlap as one merged whole region and determines the circumscribed rectangular frame of the whole region.
- The image target recognition method and apparatus of the present invention convert the image into a binarized picture, compare the pixel count of each region against a threshold derived from the size range of the targets to be identified, and thereby effectively discard background regions. The image is then segmented and merged using connected-domain analysis, effectively identifying the locations and the number of targets in the image.
- The present invention improves recognition accuracy for images with low contrast and unclear image features.
- FIG. 1 is a flowchart of an image target recognition method according to an embodiment of the present invention;
- FIG. 2 is an effect diagram of a whole image converted to a binarized picture according to an embodiment of the present invention;
- FIG. 3 is an effect diagram of FIG. 2 after optimization to remove scatter noise;
- FIG. 4 is an effect diagram after removing the interference areas in FIG. 3;
- FIG. 5 is an effect diagram of determining a circumscribed rectangular frame in an image according to an embodiment of the present invention;
- FIG. 6 is an effect diagram of determining a circumscribed rectangular frame after merging partial regions in an image according to an embodiment of the present invention;
- FIG. 7 is a schematic diagram of binary classification by a support vector machine according to an embodiment of the present invention;
- FIG. 8 is a schematic diagram of multi-class classification by a support vector machine according to an embodiment of the present invention;
- FIG. 9 is a flowchart of the first classification process of a specific embodiment of the present invention;
- FIG. 11 is an image of the region of interest extracted from FIG. 10;
- FIG. 12 is an image obtained after feature point extraction from FIG. 11;
- FIG. 13 is a schematic diagram of the region division used in the feature point statistical method in a specific embodiment of the present invention.
- FIG. 1 is a flowchart of the image target recognition method in a specific embodiment, which includes the following steps.
- The binarization conversion facilitates subsequent identification of the target's location.
- For each pixel, a first window is set centered on the pixel.
- A first threshold is set from the average and standard deviation of the pixel values within the first window.
- The first threshold is compared with the pixel's value; if the pixel value is greater than the first threshold, the pixel is set as an effective pixel; otherwise, the pixel is set as a background point.
- The first threshold can be obtained according to the following formula: T(x, y) = m(x, y) · [1 + k · (δ(x, y)/R − 1)], where, taking pixel (x, y) as the center, T(x, y) denotes the first threshold corresponding to pixel (x, y); R denotes the dynamic range of the standard deviation of the pixel gray values over the entire image; k is a set deviation coefficient, taking a positive value; m(x, y) denotes the average of the pixel values within the first window; and δ(x, y) denotes the standard deviation of the pixel gray values within the first window.
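As a minimal sketch of this per-pixel local-threshold rule, the routine below binarizes an 8-bit grayscale image using the Sauvola-style form T = m · (1 + k·(δ/R − 1)), which matches the behavior described here (threshold near the mean when δ approaches R, below the mean when δ is small). The window size and the values k = 0.2 and R = 128 are assumed typical defaults, not taken from the patent:

```python
import numpy as np

def adaptive_threshold_binarize(img, window=15, k=0.2, R=128.0):
    """Binarize a grayscale image with a per-pixel local threshold.

    For each pixel, the mean m and standard deviation d of the surrounding
    window x window neighborhood set the threshold
    T = m * (1 + k * (d / R - 1)); pixels strictly above T become
    effective points (255), the rest background points (0).
    """
    img = img.astype(np.float64)
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")  # replicate borders
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            m = patch.mean()
            d = patch.std()
            T = m * (1.0 + k * (d / R - 1.0))
            out[y, x] = 255 if img[y, x] > T else 0
    return out
```

On a uniform region the local standard deviation is zero, so the threshold drops to m·(1 − k), keeping pixels at the local mean as effective, which is the blurred-area behavior the text describes.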
- The first threshold is thus adaptively adjusted according to the standard deviation of the pixel gray values within the first window.
- The window is slid across the image centered on each pixel, and the threshold is set from the average pixel value and the standard deviation of the pixels within the first window.
- When the standard deviation δ(x, y) approaches R, the threshold T(x, y) is set approximately equal to the mean m(x, y).
- The value of the central pixel (x, y) is then compared with a threshold approximating the average pixel value of the local window; a pixel greater than this threshold, i.e. greater than the local average, is confirmed as an effective pixel.
- When the standard deviation δ(x, y) is much smaller than R, the resulting threshold T(x, y) is smaller than the mean m(x, y).
- The value of the central pixel (x, y) is then compared with a threshold smaller than the local window's average pixel value, rather than always with a fixed mean, so that central pixels above this lower threshold are retained as effective, avoiding the loss of potential target pixels in blurred areas.
- Setting a threshold for each pixel from its local area as described above, and adaptively adjusting that threshold by the standard deviation of the pixels in the first window, lets the threshold track the contrast of the image, so that every pixel is divided accurately and effective pixels are not missed because of image blur.
- A pixel above its threshold is an effective pixel and can be set as a white point, as shown by the white points in FIG. 2; otherwise it is a background point, such as the pixels of the black area shown in FIG. 2. The entire image is thereby converted into a binarized picture.
- The method further includes a reconfirmation process on the binarized picture: a second window is set centered on each pixel, a second threshold is set according to the number of pixels in the second window, and the number of effective pixels in the second window is compared with the second threshold; if the count is greater than the second threshold, the pixel is set as an effective pixel; otherwise, it is set as a background point.
- The size of the second window may be the same as or different from that of the first window.
- The second threshold can be obtained according to the following formula: floor(√(2z)) − 2, where the floor function denotes rounding down and z denotes the number of pixels in the second window.
- Taking a square window as an example, √z is the side length and 2z is the square of the diagonal, so after taking the root and rounding down, the value approximates the number of pixels on the diagonal of the second window; that is, the second threshold is set to the number of pixels on the window's diagonal.
- Subtracting 2 removes the center pixel itself and allows for one more possibly effective pixel, making the threshold setting more accurate.
- Other ways of customizing this threshold are also feasible, as long as most effective pixels can be identified.
- This further optimization selects a second window centered on each pixel (the window size can be customized), treats the number of effective points in the second window as a whole, and compares it with the set threshold. If the count is larger than the threshold, the central pixel is set as an effective pixel; otherwise it is treated as noise, set as a background point, and removed. Through this comparison of the number of local effective pixels in the second window, a central pixel surrounded by many effective pixels is reconfirmed as an effective point, while a central pixel with few effective pixels around it is confirmed as a background point, effectively removing the scattered points in the image of FIG. 2.
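The reconfirmation pass can be sketched as follows, assuming a square second window and the diagonal-count threshold floor(√(2z)) − 2 described in this embodiment; the window size of 5 is an assumed example:

```python
import math
import numpy as np

def remove_scatter_noise(binary, window=5):
    """Re-confirm each pixel of a binarized picture using its neighborhood.

    z is the number of pixels in the square second window; the threshold
    floor(sqrt(2*z)) - 2 approximates the pixel count on the window
    diagonal, minus the center pixel and one allowance. A pixel is kept
    (or promoted) as effective only when the count of effective pixels
    in its window exceeds this threshold; otherwise it is treated as
    scatter noise and set as a background point.
    """
    z = window * window
    t2 = math.floor(math.sqrt(2 * z)) - 2
    pad = window // 2
    eff = (binary > 0).astype(np.int32)
    padded = np.pad(eff, pad, mode="constant")  # outside the image counts as background
    h, w = binary.shape
    out = np.zeros_like(binary)
    for y in range(h):
        for x in range(w):
            count = padded[y:y + window, x:x + window].sum()
            if count > t2:
                out[y, x] = 255
    return out
```

An isolated white pixel has only one effective point in its window and is removed, while pixels inside a solid blob easily exceed the threshold and survive, matching the FIG. 2 → FIG. 3 denoising effect.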
- The third threshold is set according to the total number of pixels in the entire image and the size range of the targets to be identified.
- The third threshold may be set according to the formula {(a·b)·c/d}/e, where a·b is the number of all pixels in the entire image, a being the number of pixels in the width direction and b the number in the length direction; c is the minimum size of the target to be identified; d is the maximum size of the target to be identified; and e is the estimated maximum number of targets to be identified that a picture of a·b pixels can contain.
- The size of plankton generally lies in the range of 20 μm to 5 cm.
- The total number of pixels in an image acquired by the plankton collection device is 2448 × 2050.
- It is estimated that one picture contains at most 10 of the largest plankton (for the estimate, the picture can be viewed at 1:1 against the organism size: the whole picture measures 3 cm × 3.5 cm, i.e. 10.5 cm², and the largest organism occupies on average about 1 cm², so rounding gives at most 10).
- The third threshold is therefore set by [(2448 × 2050) × 20/50000]/10 to 200.736.
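The third-threshold arithmetic can be checked directly; the helper name below is illustrative:

```python
def third_threshold(a, b, c, d, e):
    """Area threshold {(a*b)*c/d}/e for discarding small connected regions.

    a, b : image width and height in pixels (a*b = total pixel count)
    c, d : minimum and maximum target size, in the same unit
    e    : estimated maximum number of the largest targets per image
    """
    return (a * b) * c / d / e

# Plankton example from the text: a 2448 x 2050 image, target sizes
# 20 um to 5 cm (ratio 20/50000), at most 10 largest organisms per image.
t3 = third_threshold(2448, 2050, 20, 50000, 10)  # 200.736
```

Connected regions with fewer effective pixels than this value (about 201 here) are reset to background in step S2.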
- In step S3 above, the horizontal circumscribed rectangular frame of each region is determined to form the framed regions.
- The circumscribed rectangle is a rectangle whose four sides pass through the topmost, bottommost, leftmost, and rightmost boundary pixels of the region.
- A horizontal circumscribed rectangle is one whose four sides are parallel to the four sides of the image.
- The content inside the rectangle is the framed region. FIG. 5 shows the effect after the circumscribed rectangular frames are determined.
- Connected regions whose framed areas overlap are treated as one merged whole region, and the circumscribed rectangle of the whole region is determined, its four sides parallel to the four sides of the image; the image content inside each circumscribed rectangle is a recognized target.
- Among the framed regions, some are independent and scattered, while others overlap one another.
- The overlapping connected regions are treated as one merged whole region, and the horizontal circumscribed rectangular frame is determined for that whole region.
- FIG. 6 shows the effect of the circumscribed rectangular frames determined in the image; some of the regions in FIG. 6 are framed by a rectangle that merges several overlapping circumscribed rectangles.
- The image content inside each circumscribed rectangle is an identified target, thereby screening out the locations of the suspected targets and their number.
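Steps S3 and S4 can be sketched with a plain flood-fill connected-component pass followed by merging of overlapping rectangles. The 4-connectivity and the merge order are assumptions, since the text does not specify them:

```python
from collections import deque
import numpy as np

def bounding_boxes(binary):
    """Axis-aligned bounding box (x0, y0, x1, y1), sides parallel to the
    image edges, for each 4-connected region of effective pixels."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] == 0 or seen[sy, sx]:
                continue
            x0 = x1 = sx
            y0 = y1 = sy
            q = deque([(sy, sx)])
            seen[sy, sx] = True
            while q:  # flood fill one connected region
                y, x = q.popleft()
                x0, x1 = min(x0, x), max(x1, x)
                y0, y1 = min(y0, y), max(y1, y)
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            boxes.append((x0, y0, x1, y1))
    return boxes

def merge_overlapping(boxes):
    """Step S4: merge boxes whose areas overlap into one enclosing box,
    repeating until no two remaining boxes overlap."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                ax0, ay0, ax1, ay1 = boxes[i]
                bx0, by0, bx1, by1 = boxes[j]
                if ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1:
                    boxes[i] = (min(ax0, bx0), min(ay0, by0),
                                max(ax1, bx1), max(ay1, by1))
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```

The number of boxes returned by `merge_overlapping` is the count of recognized targets, and each box is the region cropped for later classification.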
- For a blurred image, for example one formed in highly turbid water, each pixel is compared against a local threshold and accurately binarized as an effective point or a background noise point; the binarized result is then denoised again, and the connected domains are framed and merged, so that the image is effectively segmented and the regions of interest containing targets are extracted, improving recognition for images with low contrast and unclear features.
- This target recognition method is particularly suitable for recognizing plankton photographed in water.
- The above method may further be combined with a classification step to identify the category information of the targets.
- Step M: normalize the samples of each category and hierarchically extract, by category, the features of each kind of sample, such as boundary gradient, edge density, and the distribution of feature points obtained by an edge extraction algorithm.
- Step N: normalize the detected regions, extract the features of each region, and feed them into the classifier; classify each region according to the learning of step M and tally the results, thereby identifying the category information to which each target belongs.
- The following two classification schemes classify from two aspects respectively: boundary gradient, and morphological structural unit features.
- Other, more suitable classification methods may be selected according to actual conditions.
- Each extracted region is normalized into an image of 128 × 128 pixels.
- First classification scheme: the SVM+HOG method analyzes boundary gradients for classification. After background denoising of the normalized image, the edge density and boundary gradient of the image are extracted into a histogram, and a support vector machine (SVM) combined with a histogram of oriented gradients (HOG) is used to analyze the image and identify which category the target belongs to.
- The SVM is a traditional binary classifier; its principle is shown in FIG. 7, where x1 denotes the denser sample points below the separating line and x2 the sparser sample points above it.
- The classification process consists of the following steps.
- The samples are trained before classification (samples are selected beforehand).
- The training process is as follows: the n classes of samples are divided by dichotomy into two groups, classes 1 to n/2 and classes n/2+1 to n, and edge density and boundary gradient statistics are gathered for the samples of each group; the process then continues to split and characterize each group by dichotomy until every sample has been sorted into a single class, at which point training ends.
- The schematic is shown in FIG. 8.
- For each normalized connected-domain image, the edge density and boundary gradient of the image in each region are extracted.
- These are compared with the sample statistics obtained from training, and the image is classified accordingly.
- The classification process is repeated: the image is classified from n/2 classes into n/4 classes, and so on, until the image is assigned to a single class, yielding the biological category to which it belongs. The flow of the classification is shown in FIG. 9.
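The dichotomy classification above can be sketched as a halving search over per-class statistics. In this sketch, plain Euclidean distance between feature vectors stands in for the SVM decision at each node, and `class_stats` (class name → representative edge-density/boundary-gradient vector from training) is a hypothetical structure, not an API from the patent:

```python
import numpy as np

def dichotomy_classify(features, class_stats, classes):
    """Recursively narrow n candidate classes by halving (dichotomy).

    At each level the candidate list is split in half; the half whose
    averaged training statistics lie closer to `features` is kept,
    until a single class remains (log2(n) comparisons).
    """
    classes = list(classes)
    f = np.asarray(features, dtype=float)
    while len(classes) > 1:
        half = len(classes) // 2
        left, right = classes[:half], classes[half:]
        left_mean = np.mean([class_stats[c] for c in left], axis=0)
        right_mean = np.mean([class_stats[c] for c in right], axis=0)
        # keep the half whose mean statistics are nearer the query
        if np.linalg.norm(f - left_mean) <= np.linalg.norm(f - right_mean):
            classes = left
        else:
            classes = right
    return classes[0]
```

This mirrors why the text prefers the dichotomy: each query needs only O(log₂ n) comparisons against group statistics rather than a comparison with every class.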
- The most common search and sorting approaches are bubble sort, dichotomy (binary search), and quick sort.
- The bubble sort algorithm is O(n²);
- the dichotomy is O(log₂ n);
- quick sort is O(n·log n).
- The dichotomy is therefore selected as the search means.
- Second classification scheme: the feature point distribution algorithm (shape context) analyzes morphological structural unit features for classification.
- The feature points are extracted by a fast edge extraction algorithm.
- This algorithm directly extracts the edges of the figure, so the extracted points can serve as feature points that reveal the figure's edges and feature distribution more effectively.
- The fast edge extraction algorithm is accurate but time-consuming. Taking the original image shown in FIG. 10 as an example, its size is 2448 × 2050, and the image of the plankton in the region of interest, shown in FIG. 11, is 210 × 210.
- Extracting the feature points of the suspected plankton region took 54 seconds; the image of the extracted feature points (black pixels) is shown in FIG. 12.
- The process of classifying by feature point distribution includes the following steps.
- The samples are trained first (samples are selected in advance). Training proceeds as follows: each sample is processed by the fast edge extraction algorithm to obtain the distribution of its edges and feature points, and the feature point statistics are then gathered by the method shown in FIG. 13.
- The method counts the distribution of feature points and records the distribution of each sample separately; once the feature point distributions of all samples have been tallied, training is complete.
- In FIG. 13, the plane is divided into 8 equal angular sectors centered on a feature point (each sector spans 45°, with 360° divided into 8 sectors), and 5 rings are spread outward according to the size of the figure: taking the feature point as the center, the maximum radius of the circumscribed circle that can contain all feature points is divided into five equal parts to form five circles, each of which is divided into the 8 sectors described above, so that all feature points in the figure fall into 40 regions.
- Each normalized connected-domain image is processed by the fast edge extraction algorithm to obtain the distribution of its edges and feature points, and the feature point distribution is then tallied by the method shown in FIG. 13.
- The tallied feature point distribution is compared with the feature point distribution statistics of each sample class obtained from training, thereby identifying the category to which the image under detection belongs.
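The 40-region feature point statistic (8 angular sectors × 5 radial rings around a reference point, the outermost ring at the maximum distance to any feature point) can be sketched as a histogram. The sector orientation (measured from the positive x-axis) and the handling of ring boundaries are assumptions:

```python
import math

def shape_context_histogram(points, center):
    """40-bin feature point distribution around `center`:
    8 angular sectors of 45 degrees x 5 equal radial rings, the
    outermost ring reaching the farthest feature point from `center`."""
    pts = [p for p in points if p != center]
    if not pts:
        return [0] * 40
    rmax = max(math.dist(center, p) for p in pts)
    hist = [0] * 40
    for (x, y) in pts:
        dx, dy = x - center[0], y - center[1]
        r = math.hypot(dx, dy)
        ring = min(int(r / rmax * 5), 4)            # 5 equal radial divisions
        ang = (math.degrees(math.atan2(dy, dx)) + 360.0) % 360.0
        sector = int(ang / 45.0) % 8                # 8 sectors of 45 degrees
        hist[ring * 8 + sector] += 1
    return hist
```

Comparing two such histograms (e.g. by a simple distance) is then the per-class matching step used against the training statistics.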
- An embodiment of the present invention further provides an image target recognition apparatus, including a binarization processing module, a region removal module, a region framing module, and a region merging module. The binarization processing module binarizes each pixel in the image, dividing the pixels into effective pixels and background points and thereby converting the image into a binarized picture. The region removal module sets a third threshold according to the total number of pixels in the image and the size range of the targets to be identified, compares the number of effective pixels in each connected region of the binarized picture with the third threshold, and, if the count is less than the third threshold, sets the pixels in that region as background points, thereby removing the region. The region framing module determines the circumscribed rectangular frame of each remaining connected region to form framed regions, the four sides of each circumscribed rectangle being parallel to the four sides of the image. The region merging module treats connected regions whose framed areas overlap as one merged whole region.
- The above image target recognition apparatus may further include a feature extraction module, a trainer learning module, and a classification recognition module.
- The feature extraction module acquires and tallies the features of the target regions identified in the image, and also acquires and tallies the features of the samples of each category, for example samples of the respective biological species.
- The trainer learning module imports the features of the samples of each kind obtained by the feature extraction module into the trainer, which learns from the characteristics of each class of sample.
- The classification recognition module imports into the classifier the features of the regions where targets are located, as obtained by the feature extraction module; the classifier compares these features with the results of the sample training, thereby classifying the targets within the regions and obtaining the category information to which each target belongs. With samples of biological species, the biological category information of targets of those species can be obtained.
- With the modules above, the image target recognition apparatus can further analyze an identified target of a given kind and acquire the category to which it belongs, such as its biological category information.
Abstract
Description
Claims (12)
- 一种图像目标识别方法,其特征在于:包括以下步骤:S1,将图像中各像素点二值化处理,划分为有效像素点和背景点,从而将图像转换为二值化的图片;S2,根据图像的像素点的总个数和待识别的目标的尺寸范围设定第三阈值的大小,将二值化图片中已连通的区域内的有效像素点的个数与第三阈值进行比较,如果小于所述第三阈值,则将该区域内的像素点均设置为背景点,从而去除该区域;S3,对剩余的已连通的各区域确定出其外接矩形框,形成框取区域;其中,外接矩形框的四条边分别与图像的四条边平行;S4,将框取区域有重叠的已连通区域视为合并的整体区域,确定出整体区域的外接矩形框,外接矩形框的四条边分别与图像的四条边平行;图像中,外接矩形框中的图像内容为识别到的目标。An image object recognition method, comprising: the following steps: S1, binarizing each pixel in an image into an effective pixel point and a background point, thereby converting the image into a binarized image; S2, And setting a third threshold according to a total number of pixels of the image and a size range of the target to be identified, and comparing the number of effective pixel points in the connected region in the binarized picture with a third threshold, If it is smaller than the third threshold, the pixel points in the area are set as the background point, thereby removing the area; and S3, the circumscribed rectangular frame is determined for each of the remaining connected areas to form a frame-taking area; The four sides of the circumscribed rectangular frame are respectively parallel to the four sides of the image; S4, the connected area with overlapping areas of the frame is regarded as the merged whole area, and the circumscribed rectangular frame of the whole area is determined, and the four sides of the circumscribed rectangular frame are respectively Parallel to the four sides of the image; in the image, the image content in the circumscribed rectangle is the recognized target.
- 根据权利要求1所述的图像目标识别方法,其特征在于:步骤S1中,对图像中各像素点进行如下二值化处理:以像素点为中心设定第一窗口,通过第一窗口内像素点的像素值的平均值和标准差设置第一阈值的大小,以所述第一阈值与像素点的像素值进行比较,如果像素值大于第一阈值,则将像素点设为有效像素点;否则,将像素点设为背景点。The image object recognition method according to claim 1, wherein in step S1, each pixel in the image is subjected to binarization processing: setting a first window centering on the pixel, and passing the pixel in the first window The average value and the standard deviation of the pixel values of the points are set to a size of the first threshold, the first threshold is compared with the pixel value of the pixel, and if the pixel value is greater than the first threshold, the pixel is set as the effective pixel; Otherwise, set the pixel as the background point.
- 根据权利要求2所述的图像目标识别方法,其特征在于:所述第一阈值根据如下式子设置得到:其中,以像素点(x,y)为中心时,T(x,y)表示对应于所述像素点(x,y)的第一阈值;R表示整幅图像的像素点的像素灰度值的标准差的动态范围;k为设定的偏差系数,取正值;m(x,y)表示所述第一窗口内像素点的像素值的平均值;δ(x,y)表示所述第一窗口内像素点的像素灰度值的标准差。The image object recognition method according to claim 2, wherein the first threshold is obtained according to the following formula: Wherein, when the pixel point (x, y) is centered, T(x, y) represents a first threshold corresponding to the pixel point (x, y); and R represents a pixel gray value of a pixel of the entire image. The dynamic range of the standard deviation; k is the set deviation coefficient, taking a positive value; m(x, y) represents the average value of the pixel values of the pixel points in the first window; δ(x, y) represents the The standard deviation of the pixel gray value of the pixel within the first window.
- 根据权利要求2所述的图像目标识别方法,其特征在于:步骤S1中,还包括如下步骤:在二值化处理的基础上进行再确认处理:以像素点为中心设定第二窗口,根据第二窗口内像素点的个数设置第二阈值的大小;将第二窗口内有效像素点的个数与所述第二阈值进行比较,如果大于所述第二阈值,则将该像素点设为有效像素点;否则,将该像素点设为背景点。The image object recognition method according to claim 2, wherein the step S1 further comprises the step of: performing reconfirmation processing on the basis of the binarization processing: setting the second window centering on the pixel point, according to The number of pixels in the second window is set to a size of the second threshold; the number of effective pixels in the second window is compared with the second threshold, and if it is greater than the second threshold, the pixel is set Is a valid pixel; otherwise, the pixel is set as the background point.
- 根据权利要求4所述的图像目标识别方法,其特征在于:所述第二阈值根据如下式子设置得到:其中,floor函数表示向下取整运算,z 表示所述第二窗口内像素点的个数。The image object recognition method according to claim 4, wherein the second threshold is obtained according to the following formula: Wherein, the floor function represents a rounding down operation, and z represents the number of pixels in the second window.
- 根据权利要求1所述的图像目标识别方法,其特征在于:步骤S2中,所述第三阈值根据如下式子设置得到:{(a*b)*c/d}/e,其中,a*b表示整幅图像中所有的像素点个数,a表示宽度方向的像素点个数,b表示长度方向的像素点个数;c表示待识别目标的最小尺寸;d表示待识别目标的最大尺寸;e表示估算的a*b大小的图片最多包含的待识别目标的数量。The image object recognition method according to claim 1, wherein in the step S2, the third threshold is obtained according to the following formula: {(a*b)*c/d}/e, wherein a* b represents the number of all pixels in the entire image, a represents the number of pixels in the width direction, b represents the number of pixels in the length direction; c represents the minimum size of the target to be identified; d represents the maximum size of the target to be identified ;e indicates the maximum number of targets to be identified included in the estimated a*b size picture.
- 根据权利要求1所述的图像目标识别方法,其特征在于:所述待识别的目标为待识别的浮游生物。The image object recognition method according to claim 1, wherein the object to be identified is a plankton to be identified.
- 根据权利要求1所述的图像目标识别方法,其特征在于:还包括以下步骤:M,将各个种类的样本归一化处理,并按照类别分层次地提取出每个种类的样本的特征,导入至训练器中进行学习;N,将识别到的目标所在的区域进行归一化处理,提取每个区域各自的特征,导进分类器,根据步骤M的学习情况,对各个区域进行分类,统计结果,以识别出区域中的目标所属的类别信息。The image object recognition method according to claim 1, further comprising the steps of: normalizing each type of sample, and extracting features of each type of sample hierarchically according to categories, and importing Learning in the trainer; N, normalizing the region in which the identified target is located, extracting the respective features of each region, and introducing the classifier, classifying each region according to the learning situation of step M, and counting As a result, the category information to which the target in the area belongs is identified.
- The image target identification method according to claim 1, further comprising step S5 of acquiring the category information of the identified target: S51, sample training: dividing the n classes of samples dichotomously into two groups, classes 1 to n/2 and classes n/2+1 to n, and computing edge-density and boundary-gradient statistics for the sample images of each group; repeating this process, dichotomously splitting and collecting statistics on the n/2 classes within each group, until each sample is assigned to a single class, and computing the edge density and boundary gradient of the sample images of each individual class; S52, normalizing each region in which a target is located; S53, classification: for each normalized region, extracting the edge density and boundary gradient of the image in the region and comparing them with the sample statistics obtained by training in step S51, classifying the image into n/2 of the n classes; repeating the classification process to assign the image to n/4 of those n/2 classes, and so on until the image is classified into a single class, thereby obtaining the category information of the target in the region.
- The image target identification method according to claim 1, further comprising step S6 of acquiring the category information of the identified target: S61, sample training: processing the n classes of samples with a fast edge extraction algorithm to obtain the distribution of edges and feature points, then applying a feature-point statistics method to the feature-point distribution, thereby obtaining the feature-point distribution of the samples of each class; S62, normalizing each region in which a target is located; S63, classification: processing the image of each normalized region with the fast edge extraction algorithm to obtain the distribution of edges and feature points, applying the feature-point statistics method to the feature-point distribution, and comparing the resulting statistics with the per-class sample statistics obtained by training in step S61, thereby identifying the category information of the target.
- An image target identification apparatus, comprising a binarization module, a region removal module, a region framing module, and a region merging module, wherein: the binarization module is configured to binarize each pixel of an image, dividing the pixels into effective pixels and background points, thereby converting the image into a binarized picture; the region removal module is configured to set a third threshold according to the total number of pixels of the image and the size range of the target to be identified, to compare the number of effective pixels in each connected region of the binarized picture with the third threshold, and, if the number is smaller than the third threshold, to set all pixels in that region to background points, thereby removing the region; the region framing module is configured to determine, for each remaining connected region, its bounding rectangle, forming a framed region, the four sides of the bounding rectangle being parallel to the four sides of the image; and the region merging module is configured to treat connected regions whose framed regions overlap as one merged whole region and to determine the bounding rectangle of the whole region, the four sides of which are parallel to the four sides of the image, the image content within the bounding rectangle being the identified target.
- The image target identification apparatus according to claim 11, further comprising a feature extraction module, a trainer learning module, and a classification and identification module; the feature extraction module is configured to obtain and count the features of the regions in which the targets identified by the region merging module are located, and to obtain and count the features of the samples of each category; the trainer learning module is configured to import the features of the samples of each category obtained by the feature extraction module into a trainer, the trainer being configured to learn from the features of each category of samples; and the classification and identification module is configured to import the features of the target regions obtained by the feature extraction module into a classifier, the classifier being configured to compare the features of those regions with the results of the trainer's learning on the samples and to classify the targets in the regions, so as to obtain the category information to which the targets belong.
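The third-threshold rule {(a*b)*c/d}/e claimed above is plain arithmetic; a minimal sketch follows (the function and parameter names are illustrative, not taken from the patent):

```python
def third_threshold(a, b, c, d, e):
    """Claimed rule {(a*b)*c/d}/e for the third threshold.

    a, b -- pixel counts in the width and length directions (a*b = total pixels)
    c, d -- minimum and maximum size of the target to be identified
    e    -- estimated maximum number of targets an a*b-sized image can contain
    """
    return ((a * b) * c / d) / e

# e.g. a 1000x800 image, target sizes 10..100, at most 20 targets per image:
# third_threshold(1000, 800, 10, 100, 20) -> 4000.0
```

Intuitively, (a*b)*c/d scales the total pixel count by the ratio of smallest to largest target size, and dividing by e spreads that budget across the expected number of targets.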
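Steps M and N in the claims above describe a generic train-then-classify flow without fixing a particular trainer. As a purely illustrative stand-in (a nearest-centroid model, not the patent's trainer), the flow could be sketched as:

```python
def train(samples_by_class):
    """Step M sketch: learn one mean feature vector (centroid) per class."""
    model = {}
    for label, feats in samples_by_class.items():
        dim = len(feats[0])
        model[label] = [sum(f[i] for f in feats) / len(feats) for i in range(dim)]
    return model

def classify(model, feature):
    """Step N sketch: assign the class whose learned centroid is nearest."""
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return min(model, key=lambda label: dist2(model[label], feature))

# Usage: features extracted per normalized region are compared against
# the per-class statistics learned in step M.
model = train({'a': [[0.0, 0.0], [0.0, 2.0]], 'b': [[10.0, 10.0]]})
```

Any trainer/classifier pair with this interface (learn per-class statistics, then compare region features against them) fits the claimed structure.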
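The dichotomous classification of steps S51–S53 above halves the candidate class set at every round, so an n-class decision takes about log2(n) comparisons. A schematic version, where `match_score` stands in for the edge-density/boundary-gradient comparison against the S51 training statistics (names are illustrative):

```python
def dichotomy_classify(labels, match_score):
    """S53 sketch: repeatedly split the candidate classes in two and keep
    the half whose pooled training statistics better match the image.
    match_score(group) is assumed to score how well the image's edge
    density and boundary gradient match a group's trained statistics."""
    candidates = list(labels)
    while len(candidates) > 1:
        half = len(candidates) // 2
        left, right = candidates[:half], candidates[half:]
        candidates = left if match_score(left) >= match_score(right) else right
    return candidates[0]
```

With four classes this takes two comparisons instead of four one-vs-rest tests, which is the point of the claimed dichotomy.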
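The device claim above chains four stages: binarization, removal of small connected regions, bounding-box framing, and merging of overlapping boxes. A compact sketch of the last three stages on an already-binarized image; the flood-fill connectivity and the [min_x, min_y, max_x, max_y] box layout are implementation choices, not fixed by the claim:

```python
def bounding_boxes(binary, min_pixels):
    """Find 4-connected regions, drop those with fewer than min_pixels
    effective pixels (the third threshold), return axis-aligned bounding
    boxes [min_x, min_y, max_x, max_y], merging overlapping boxes."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # flood fill one connected region
                stack, pts = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pts.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pts) >= min_pixels:  # region removal stage
                    ys = [p[0] for p in pts]
                    xs = [p[1] for p in pts]
                    boxes.append([min(xs), min(ys), max(xs), max(ys)])
    # region merging stage: overlapping boxes become one enclosing rectangle
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                if a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]:
                    boxes[i] = [min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```

The surviving rectangles are the framed regions whose image content the claim treats as the identified targets.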
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710526661.1 | 2017-06-30 | ||
CN201710526661.1A CN107330465B (en) | 2017-06-30 | 2017-06-30 | Image target identification method and apparatus
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019000653A1 true WO2019000653A1 (en) | 2019-01-03 |
Family
ID=60198065
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/101704 WO2019000653A1 (en) | 2017-06-30 | 2017-09-14 | Image target identification method and apparatus |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN107330465B (en) |
WO (1) | WO2019000653A1 (en) |
Cited By (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109977944A (en) * | 2019-02-21 | 2019-07-05 | 杭州朗阳科技有限公司 | A kind of recognition methods of digital water meter reading |
CN110070533A (en) * | 2019-04-23 | 2019-07-30 | 科大讯飞股份有限公司 | A kind of evaluating method of object detection results, device, equipment and storage medium |
CN110175563A (en) * | 2019-05-27 | 2019-08-27 | 上海交通大学 | The recognition methods of metal cutting tool drawings marked and system |
CN110180186A (en) * | 2019-05-28 | 2019-08-30 | 北京奇思妙想信息技术有限公司 | A kind of topographic map conversion method and system |
CN110189403A (en) * | 2019-05-22 | 2019-08-30 | 哈尔滨工程大学 | A kind of submarine target three-dimensional rebuilding method based on simple beam Forward-looking Sonar |
CN110348442A (en) * | 2019-07-17 | 2019-10-18 | 大连海事大学 | A kind of shipborne radar image sea oil film recognition methods based on support vector machines |
CN110443272A (en) * | 2019-06-24 | 2019-11-12 | 中国地质大学(武汉) | A kind of complicated cigarette strain image classification method based on fuzzy selecting rules |
CN110490848A (en) * | 2019-08-02 | 2019-11-22 | 上海海事大学 | Infrared target detection method, apparatus and computer storage medium |
CN111126252A (en) * | 2019-12-20 | 2020-05-08 | 浙江大华技术股份有限公司 | Stall behavior detection method and related device |
CN111191730A (en) * | 2020-01-02 | 2020-05-22 | 中国航空工业集团公司西安航空计算技术研究所 | Method and system for detecting oversized image target facing embedded deep learning |
CN111209864A (en) * | 2020-01-07 | 2020-05-29 | 上海交通大学 | Target identification method for power equipment |
CN111260629A (en) * | 2020-01-16 | 2020-06-09 | 成都地铁运营有限公司 | Pantograph structure abnormity detection algorithm based on image processing |
CN111259980A (en) * | 2020-02-10 | 2020-06-09 | 北京小马慧行科技有限公司 | Method and device for processing labeled data |
CN111507995A (en) * | 2020-04-30 | 2020-08-07 | 柳州智视科技有限公司 | Image segmentation method based on color image pyramid and color channel classification |
CN111523613A (en) * | 2020-05-09 | 2020-08-11 | 黄河勘测规划设计研究院有限公司 | Image analysis anti-interference method under complex environment of hydraulic engineering |
CN111598947A (en) * | 2020-04-03 | 2020-08-28 | 上海嘉奥信息科技发展有限公司 | Method and system for automatically identifying patient orientation by identifying features |
CN111626230A (en) * | 2020-05-29 | 2020-09-04 | 合肥工业大学 | Vehicle logo identification method and system based on feature enhancement |
CN111724351A (en) * | 2020-05-30 | 2020-09-29 | 上海健康医学院 | Helium bubble electron microscope image statistical analysis method based on machine learning |
CN111753794A (en) * | 2020-06-30 | 2020-10-09 | 创新奇智(成都)科技有限公司 | Fruit quality classification method and device, electronic equipment and readable storage medium |
CN111833398A (en) * | 2019-04-16 | 2020-10-27 | 杭州海康威视数字技术股份有限公司 | Method and device for marking pixel points in image |
CN112053399A (en) * | 2020-09-04 | 2020-12-08 | 厦门大学 | Method for positioning digestive tract organs in capsule endoscope video |
CN112232286A (en) * | 2020-11-05 | 2021-01-15 | 浙江点辰航空科技有限公司 | Unmanned aerial vehicle image recognition system and unmanned aerial vehicle are patrolled and examined to road |
CN112241466A (en) * | 2020-09-22 | 2021-01-19 | 天津永兴泰科技股份有限公司 | Wild animal protection law recommendation system based on animal identification map |
CN112241956A (en) * | 2020-11-03 | 2021-01-19 | 甘肃省地震局(中国地震局兰州地震研究所) | PolSAR image ridge line extraction method based on region growing method and variation function |
CN112488118A (en) * | 2020-12-18 | 2021-03-12 | 哈尔滨工业大学(深圳) | Target detection method and related device |
CN112508893A (en) * | 2020-11-27 | 2021-03-16 | 中国铁路南宁局集团有限公司 | Machine vision-based method and system for detecting tiny foreign matters between two railway tracks |
CN112668441A (en) * | 2020-12-24 | 2021-04-16 | 中国电子科技集团公司第二十八研究所 | Satellite remote sensing image airplane target identification method combined with priori knowledge |
CN112750136A (en) * | 2020-12-30 | 2021-05-04 | 深圳英集芯科技股份有限公司 | Image processing method and system |
CN113033400A (en) * | 2021-03-25 | 2021-06-25 | 新东方教育科技集团有限公司 | Method and device for identifying mathematical expression, storage medium and electronic equipment |
CN113221917A (en) * | 2021-05-13 | 2021-08-06 | 南京航空航天大学 | Monocular vision double-layer quadrilateral structure cooperative target extraction method under insufficient illumination |
CN113298702A (en) * | 2021-06-23 | 2021-08-24 | 重庆科技学院 | Reordering and dividing method based on large-size image pixel points |
CN113409352A (en) * | 2020-11-19 | 2021-09-17 | 西安工业大学 | Single-frame infrared image weak and small target detection method, device, equipment and storage medium |
CN113420668A (en) * | 2021-06-21 | 2021-09-21 | 西北工业大学 | Underwater target identification method based on two-dimensional multi-scale arrangement entropy |
CN113469980A (en) * | 2021-07-09 | 2021-10-01 | 连云港远洋流体装卸设备有限公司 | Flange identification method based on image processing |
CN113516611A (en) * | 2020-04-09 | 2021-10-19 | 合肥美亚光电技术股份有限公司 | Method and device for determining abnormal material removing area, and material sorting method and equipment |
CN113591674A (en) * | 2021-07-28 | 2021-11-02 | 桂林电子科技大学 | Real-time video stream-oriented edge environment behavior recognition system |
CN113588663A (en) * | 2021-08-03 | 2021-11-02 | 上海圭目机器人有限公司 | Pipeline defect identification and information extraction method |
CN113610830A (en) * | 2021-08-18 | 2021-11-05 | 常州领创电气科技有限公司 | Detection system and method for lightning arrester |
CN113689455A (en) * | 2021-07-01 | 2021-11-23 | 上海交通大学 | Thermal fluid image processing method, system, terminal and medium |
CN113688829A (en) * | 2021-08-05 | 2021-11-23 | 南京国电南自电网自动化有限公司 | Automatic transformer substation monitoring picture identification method and system |
CN113776408A (en) * | 2021-09-13 | 2021-12-10 | 北京邮电大学 | Reading method for gate opening ruler |
CN114037650A (en) * | 2021-05-17 | 2022-02-11 | 西北工业大学 | Ground target visible light damage image processing method for change detection and target detection |
CN114199262A (en) * | 2020-08-28 | 2022-03-18 | 阿里巴巴集团控股有限公司 | Method for training position recognition model, position recognition method and related equipment |
CN114871120A (en) * | 2022-05-26 | 2022-08-09 | 江苏省徐州医药高等职业学校 | Medicine determining and sorting method and device based on image data processing |
CN114998887A (en) * | 2022-08-08 | 2022-09-02 | 山东精惠计量检测有限公司 | Intelligent identification method for electric energy meter |
CN115690693A (en) * | 2022-12-13 | 2023-02-03 | 山东鲁旺机械设备有限公司 | Intelligent monitoring system and monitoring method for construction hanging basket |
CN116311543A (en) * | 2023-02-03 | 2023-06-23 | 汇金智融(深圳)科技有限公司 | Handwriting analysis method and system based on image recognition technology |
CN116403094A (en) * | 2023-06-08 | 2023-07-07 | 成都菁蓉联创科技有限公司 | Embedded image recognition method and system |
CN116740579A (en) * | 2023-08-15 | 2023-09-12 | 兰陵县城市规划设计室 | Intelligent collection method for territorial space planning data |
CN116740070A (en) * | 2023-08-15 | 2023-09-12 | 青岛宇通管业有限公司 | Plastic pipeline appearance defect detection method based on machine vision |
CN116740332A (en) * | 2023-06-01 | 2023-09-12 | 南京航空航天大学 | Method for positioning center and measuring angle of space target component on satellite based on region detection |
CN116758578A (en) * | 2023-08-18 | 2023-09-15 | 上海楷领科技有限公司 | Mechanical drawing information extraction method, device, system and storage medium |
CN112508893B (en) * | 2020-11-27 | 2024-04-26 | 中国铁路南宁局集团有限公司 | Method and system for detecting tiny foreign matters between double rails of railway based on machine vision |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110443097A (en) * | 2018-05-03 | 2019-11-12 | 北京中科晶上超媒体信息技术有限公司 | A kind of video object extract real-time optimization method and system |
CN109117845A (en) * | 2018-08-15 | 2019-01-01 | 广州云测信息技术有限公司 | Object identifying method and device in a kind of image |
CN109190640A (en) * | 2018-08-20 | 2019-01-11 | 贵州省生物研究所 | A kind of the intercept type acquisition method and acquisition system of the planktonic organism based on big data |
CN109670518B (en) * | 2018-12-25 | 2022-09-23 | 浙江大学常州工业技术研究院 | Method for measuring boundary of target object in picture |
CN110263608B (en) * | 2019-01-25 | 2023-07-07 | 天津职业技术师范大学(中国职业培训指导教师进修中心) | Automatic electronic component identification method based on image feature space variable threshold measurement |
CN109815906B (en) * | 2019-01-25 | 2021-04-06 | 华中科技大学 | Traffic sign detection method and system based on step-by-step deep learning |
CN110096991A (en) * | 2019-04-25 | 2019-08-06 | 西安工业大学 | A kind of sign Language Recognition Method based on convolutional neural networks |
CN110390313B (en) * | 2019-07-29 | 2023-03-28 | 哈尔滨工业大学 | Violent action detection method and system |
CN110415237B (en) * | 2019-07-31 | 2022-02-08 | Oppo广东移动通信有限公司 | Skin flaw detection method, skin flaw detection device, terminal device and readable storage medium |
CN110941987B (en) * | 2019-10-10 | 2023-04-07 | 北京百度网讯科技有限公司 | Target object identification method and device, electronic equipment and storage medium |
CN112991253A (en) * | 2019-12-02 | 2021-06-18 | 合肥美亚光电技术股份有限公司 | Central area determining method, foreign matter removing device and detecting equipment |
CN112890736B (en) * | 2019-12-03 | 2023-06-09 | 精微视达医疗科技(武汉)有限公司 | Method and device for obtaining field mask of endoscopic imaging system |
CN113538450B (en) | 2020-04-21 | 2023-07-21 | 百度在线网络技术(北京)有限公司 | Method and device for generating image |
CN112102288B (en) * | 2020-09-15 | 2023-11-07 | 应急管理部大数据中心 | Water body identification and water body change detection method, device, equipment and medium |
CN113900750B (en) * | 2021-09-26 | 2024-02-23 | 珠海豹好玩科技有限公司 | Method and device for determining window interface boundary, storage medium and electronic equipment |
CN114067122B (en) * | 2022-01-18 | 2022-04-08 | 深圳市绿洲光生物技术有限公司 | Two-stage binarization image processing method |
CN114821030B (en) * | 2022-04-11 | 2023-04-04 | 苏州振旺光电有限公司 | Planet image processing method, system and device |
CN115601385B (en) * | 2022-04-12 | 2023-05-05 | 北京航空航天大学 | Bubble morphology processing method, device and medium |
CN116012283B (en) * | 2022-09-28 | 2023-10-13 | 逸超医疗科技(北京)有限公司 | Full-automatic ultrasonic image measurement method, equipment and storage medium |
CN116758024B (en) * | 2023-06-13 | 2024-02-23 | 山东省农业科学院 | Peanut seed direction identification method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030108250A1 (en) * | 2001-12-10 | 2003-06-12 | Eastman Kodak Company | Method and system for selectively applying enhancement to an image |
CN101699469A (en) * | 2009-11-09 | 2010-04-28 | 南京邮电大学 | Method for automatically identifying action of writing on blackboard of teacher in class video recording |
CN102375982A (en) * | 2011-10-18 | 2012-03-14 | 华中科技大学 | Multi-character characteristic fused license plate positioning method |
CN104077777A (en) * | 2014-07-04 | 2014-10-01 | 中国科学院大学 | Sea surface vessel target detection method |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101777122B (en) * | 2010-03-02 | 2012-01-04 | 中国海洋大学 | Chaetoceros microscopic image cell target extraction method |
CN102663406A (en) * | 2012-04-12 | 2012-09-12 | 中国海洋大学 | Automatic chaetoceros and non-chaetoceros sorting method based on microscopic images |
CN103049763B (en) * | 2012-12-07 | 2015-07-01 | 华中科技大学 | Context-constraint-based target identification method |
CN104036239B (en) * | 2014-05-29 | 2017-05-10 | 西安电子科技大学 | Fast high-resolution SAR (synthetic aperture radar) image ship detection method based on feature fusion and clustering |
KR101601564B1 (en) * | 2014-12-30 | 2016-03-09 | 가톨릭대학교 산학협력단 | Face detection method using circle blocking of face and apparatus thereof |
CN105117706B (en) * | 2015-08-28 | 2019-01-18 | 小米科技有限责任公司 | Image processing method and device, character identifying method and device |
CN105261049B (en) * | 2015-09-15 | 2017-09-22 | 重庆飞洲光电技术研究院 | A kind of image connectivity region quick determination method |
CN106250901A (en) * | 2016-03-14 | 2016-12-21 | 上海创和亿电子科技发展有限公司 | A kind of digit recognition method based on image feature information |
CN105868708B (en) * | 2016-03-28 | 2019-09-20 | 锐捷网络股份有限公司 | Image target recognition method and device |
CN106407978B (en) * | 2016-09-24 | 2020-10-30 | 上海大学 | Method for detecting salient object in unconstrained video by combining similarity degree |
CN106875404A (en) * | 2017-01-18 | 2017-06-20 | 宁波摩视光电科技有限公司 | The intelligent identification Method of epithelial cell in a kind of leukorrhea micro-image |
CN106846339A (en) * | 2017-02-13 | 2017-06-13 | 广州视源电子科技股份有限公司 | A kind of image detecting method and device |
- 2017
- 2017-06-30 CN CN201710526661.1A patent/CN107330465B/en active Active
- 2017-06-30 CN CN201910576843.9A patent/CN110334706B/en active Active
- 2017-09-14 WO PCT/CN2017/101704 patent/WO2019000653A1/en active Application Filing
Cited By (96)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109977944A (en) * | 2019-02-21 | 2019-07-05 | 杭州朗阳科技有限公司 | A kind of recognition methods of digital water meter reading |
CN109977944B (en) * | 2019-02-21 | 2023-08-01 | 杭州朗阳科技有限公司 | Digital water meter reading identification method |
CN111833398B (en) * | 2019-04-16 | 2023-09-08 | 杭州海康威视数字技术股份有限公司 | Pixel point marking method and device in image |
CN111833398A (en) * | 2019-04-16 | 2020-10-27 | 杭州海康威视数字技术股份有限公司 | Method and device for marking pixel points in image |
CN110070533A (en) * | 2019-04-23 | 2019-07-30 | 科大讯飞股份有限公司 | A kind of evaluating method of object detection results, device, equipment and storage medium |
CN110189403B (en) * | 2019-05-22 | 2022-11-18 | 哈尔滨工程大学 | Underwater target three-dimensional reconstruction method based on single-beam forward-looking sonar |
CN110189403A (en) * | 2019-05-22 | 2019-08-30 | 哈尔滨工程大学 | A kind of submarine target three-dimensional rebuilding method based on simple beam Forward-looking Sonar |
CN110175563B (en) * | 2019-05-27 | 2023-03-24 | 上海交通大学 | Metal cutting tool drawing mark identification method and system |
CN110175563A (en) * | 2019-05-27 | 2019-08-27 | 上海交通大学 | The recognition methods of metal cutting tool drawings marked and system |
CN110180186A (en) * | 2019-05-28 | 2019-08-30 | 北京奇思妙想信息技术有限公司 | A kind of topographic map conversion method and system |
CN110443272A (en) * | 2019-06-24 | 2019-11-12 | 中国地质大学(武汉) | A kind of complicated cigarette strain image classification method based on fuzzy selecting rules |
CN110443272B (en) * | 2019-06-24 | 2023-01-03 | 中国地质大学(武汉) | Complex tobacco plant image classification method based on fuzzy selection principle |
CN110348442A (en) * | 2019-07-17 | 2019-10-18 | 大连海事大学 | A kind of shipborne radar image sea oil film recognition methods based on support vector machines |
CN110348442B (en) * | 2019-07-17 | 2022-09-30 | 大连海事大学 | Shipborne radar image offshore oil film identification method based on support vector machine |
CN110490848A (en) * | 2019-08-02 | 2019-11-22 | 上海海事大学 | Infrared target detection method, apparatus and computer storage medium |
CN111126252A (en) * | 2019-12-20 | 2020-05-08 | 浙江大华技术股份有限公司 | Stall behavior detection method and related device |
CN111126252B (en) * | 2019-12-20 | 2023-08-18 | 浙江大华技术股份有限公司 | Swing behavior detection method and related device |
CN111191730B (en) * | 2020-01-02 | 2023-05-12 | 中国航空工业集团公司西安航空计算技术研究所 | Method and system for detecting oversized image target oriented to embedded deep learning |
CN111191730A (en) * | 2020-01-02 | 2020-05-22 | 中国航空工业集团公司西安航空计算技术研究所 | Method and system for detecting oversized image target facing embedded deep learning |
CN111209864B (en) * | 2020-01-07 | 2023-05-26 | 上海交通大学 | Power equipment target identification method |
CN111209864A (en) * | 2020-01-07 | 2020-05-29 | 上海交通大学 | Target identification method for power equipment |
CN111260629A (en) * | 2020-01-16 | 2020-06-09 | 成都地铁运营有限公司 | Pantograph structure abnormity detection algorithm based on image processing |
CN111259980A (en) * | 2020-02-10 | 2020-06-09 | 北京小马慧行科技有限公司 | Method and device for processing labeled data |
CN111259980B (en) * | 2020-02-10 | 2023-10-03 | 北京小马慧行科技有限公司 | Method and device for processing annotation data |
CN111598947B (en) * | 2020-04-03 | 2024-02-20 | 上海嘉奥信息科技发展有限公司 | Method and system for automatically identifying patient position by identification features |
CN111598947A (en) * | 2020-04-03 | 2020-08-28 | 上海嘉奥信息科技发展有限公司 | Method and system for automatically identifying patient orientation by identifying features |
CN113516611A (en) * | 2020-04-09 | 2021-10-19 | 合肥美亚光电技术股份有限公司 | Method and device for determining abnormal material removing area, and material sorting method and equipment |
CN113516611B (en) * | 2020-04-09 | 2024-01-30 | 合肥美亚光电技术股份有限公司 | Method and device for determining abnormal material removing area, material sorting method and equipment |
CN111507995A (en) * | 2020-04-30 | 2020-08-07 | 柳州智视科技有限公司 | Image segmentation method based on color image pyramid and color channel classification |
CN111507995B (en) * | 2020-04-30 | 2023-05-23 | 柳州智视科技有限公司 | Image segmentation method based on color image pyramid and color channel classification |
CN111523613A (en) * | 2020-05-09 | 2020-08-11 | 黄河勘测规划设计研究院有限公司 | Image analysis anti-interference method under complex environment of hydraulic engineering |
CN111523613B (en) * | 2020-05-09 | 2023-03-24 | 黄河勘测规划设计研究院有限公司 | Image analysis anti-interference method under complex environment of hydraulic engineering |
CN111626230B (en) * | 2020-05-29 | 2023-04-14 | 合肥工业大学 | Vehicle logo identification method and system based on feature enhancement |
CN111626230A (en) * | 2020-05-29 | 2020-09-04 | 合肥工业大学 | Vehicle logo identification method and system based on feature enhancement |
CN111724351A (en) * | 2020-05-30 | 2020-09-29 | 上海健康医学院 | Helium bubble electron microscope image statistical analysis method based on machine learning |
CN111753794B (en) * | 2020-06-30 | 2024-02-27 | 创新奇智(成都)科技有限公司 | Fruit quality classification method, device, electronic equipment and readable storage medium |
CN111753794A (en) * | 2020-06-30 | 2020-10-09 | 创新奇智(成都)科技有限公司 | Fruit quality classification method and device, electronic equipment and readable storage medium |
CN114199262A (en) * | 2020-08-28 | 2022-03-18 | 阿里巴巴集团控股有限公司 | Method for training position recognition model, position recognition method and related equipment |
CN112053399B (en) * | 2020-09-04 | 2024-02-09 | 厦门大学 | Method for positioning digestive tract organs in capsule endoscope video |
CN112053399A (en) * | 2020-09-04 | 2020-12-08 | 厦门大学 | Method for positioning digestive tract organs in capsule endoscope video |
CN112241466A (en) * | 2020-09-22 | 2021-01-19 | 天津永兴泰科技股份有限公司 | Wild animal protection law recommendation system based on animal identification map |
CN112241956B (en) * | 2020-11-03 | 2023-04-07 | 甘肃省地震局(中国地震局兰州地震研究所) | PolSAR image ridge line extraction method based on region growing method and variation function |
CN112241956A (en) * | 2020-11-03 | 2021-01-19 | 甘肃省地震局(中国地震局兰州地震研究所) | PolSAR image ridge line extraction method based on region growing method and variation function |
CN112232286A (en) * | 2020-11-05 | 2021-01-15 | 浙江点辰航空科技有限公司 | Unmanned aerial vehicle image recognition system and unmanned aerial vehicle are patrolled and examined to road |
CN113409352A (en) * | 2020-11-19 | 2021-09-17 | 西安工业大学 | Single-frame infrared image weak and small target detection method, device, equipment and storage medium |
CN113409352B (en) * | 2020-11-19 | 2024-03-15 | 西安工业大学 | Method, device, equipment and storage medium for detecting weak and small target of single-frame infrared image |
CN112508893A (en) * | 2020-11-27 | 2021-03-16 | 中国铁路南宁局集团有限公司 | Machine vision-based method and system for detecting tiny foreign matters between two railway tracks |
CN112508893B (en) * | 2020-11-27 | 2024-04-26 | 中国铁路南宁局集团有限公司 | Method and system for detecting tiny foreign matters between double rails of railway based on machine vision |
CN112488118B (en) * | 2020-12-18 | 2023-08-08 | 哈尔滨工业大学(深圳) | Target detection method and related device |
CN112488118A (en) * | 2020-12-18 | 2021-03-12 | 哈尔滨工业大学(深圳) | Target detection method and related device |
CN112668441B (en) * | 2020-12-24 | 2022-09-23 | 中国电子科技集团公司第二十八研究所 | Satellite remote sensing image airplane target identification method combined with priori knowledge |
CN112668441A (en) * | 2020-12-24 | 2021-04-16 | 中国电子科技集团公司第二十八研究所 | Satellite remote sensing image airplane target identification method combined with priori knowledge |
CN112750136B (en) * | 2020-12-30 | 2023-12-05 | 深圳英集芯科技股份有限公司 | Image processing method and system |
CN112750136A (en) * | 2020-12-30 | 2021-05-04 | 深圳英集芯科技股份有限公司 | Image processing method and system |
CN113033400B (en) * | 2021-03-25 | 2024-01-19 | 新东方教育科技集团有限公司 | Method and device for identifying mathematical formulas, storage medium and electronic equipment |
CN113033400A (en) * | 2021-03-25 | 2021-06-25 | 新东方教育科技集团有限公司 | Method and device for identifying mathematical expression, storage medium and electronic equipment |
CN113221917A (en) * | 2021-05-13 | 2021-08-06 | 南京航空航天大学 | Monocular vision double-layer quadrilateral structure cooperative target extraction method under insufficient illumination |
CN113221917B (en) * | 2021-05-13 | 2024-03-19 | 南京航空航天大学 | Monocular vision double-layer quadrilateral structure cooperative target extraction method under insufficient illumination |
CN114037650B (en) * | 2021-05-17 | 2024-03-19 | 西北工业大学 | Ground target visible light damage image processing method for change detection and target detection |
CN114037650A (en) * | 2021-05-17 | 2022-02-11 | 西北工业大学 | Ground target visible light damage image processing method for change detection and target detection |
CN113420668B (en) * | 2021-06-21 | 2024-01-12 | 西北工业大学 | Underwater target identification method based on two-dimensional multi-scale permutation entropy |
CN113420668A (en) * | 2021-06-21 | 2021-09-21 | 西北工业大学 | Underwater target identification method based on two-dimensional multi-scale arrangement entropy |
CN113298702B (en) * | 2021-06-23 | 2023-08-04 | 重庆科技学院 | Reordering and segmentation method based on large-size image pixel points |
CN113298702A (en) * | 2021-06-23 | 2021-08-24 | 重庆科技学院 | Reordering and dividing method based on large-size image pixel points |
CN113689455A (en) * | 2021-07-01 | 2021-11-23 | 上海交通大学 | Thermal fluid image processing method, system, terminal and medium |
CN113689455B (en) * | 2021-07-01 | 2023-10-20 | 上海交通大学 | Thermal fluid image processing method, system, terminal and medium |
CN113469980A (en) * | 2021-07-09 | 2021-10-01 | 连云港远洋流体装卸设备有限公司 | Flange identification method based on image processing |
CN113469980B (en) * | 2021-07-09 | 2023-11-21 | 连云港远洋流体装卸设备有限公司 | Flange identification method based on image processing |
CN113591674A (en) * | 2021-07-28 | 2021-11-02 | 桂林电子科技大学 | Real-time video stream-oriented edge environment behavior recognition system |
CN113591674B (en) * | 2021-07-28 | 2023-09-22 | 桂林电子科技大学 | Edge environment behavior recognition system for real-time video stream |
CN113588663B (en) * | 2021-08-03 | 2024-01-23 | 上海圭目机器人有限公司 | Pipeline defect identification and information extraction method |
CN113588663A (en) * | 2021-08-03 | 2021-11-02 | 上海圭目机器人有限公司 | Pipeline defect identification and information extraction method |
CN113688829B (en) * | 2021-08-05 | 2024-02-20 | 南京国电南自电网自动化有限公司 | Automatic identification method and system for monitoring picture of transformer substation |
CN113688829A (en) * | 2021-08-05 | 2021-11-23 | 南京国电南自电网自动化有限公司 | Automatic transformer substation monitoring picture identification method and system |
CN113610830A (en) * | 2021-08-18 | 2021-11-05 | 常州领创电气科技有限公司 | Detection system and method for lightning arrester |
CN113610830B (en) * | 2021-08-18 | 2023-12-29 | 常州领创电气科技有限公司 | Detection system and method for lightning arrester |
CN113776408B (en) * | 2021-09-13 | 2022-09-13 | 北京邮电大学 | Reading method for gate opening ruler |
CN113776408A (en) * | 2021-09-13 | 2021-12-10 | 北京邮电大学 | Reading method for gate opening ruler |
CN114871120A (en) * | 2022-05-26 | 2022-08-09 | 江苏省徐州医药高等职业学校 | Medicine determining and sorting method and device based on image data processing |
CN114871120B (en) * | 2022-05-26 | 2023-11-07 | 江苏省徐州医药高等职业学校 | Medicine determining and sorting method and device based on image data processing |
CN115026839B (en) * | 2022-07-29 | 2024-04-26 | 西南交通大学 | Method for positioning swing bolster hole of inclined wedge supporting robot of railway vehicle bogie |
CN114998887A (en) * | 2022-08-08 | 2022-09-02 | 山东精惠计量检测有限公司 | Intelligent identification method for electric energy meter |
CN114998887B (en) * | 2022-08-08 | 2022-10-11 | 山东精惠计量检测有限公司 | Intelligent identification method for electric energy meter |
CN115690693A (en) * | 2022-12-13 | 2023-02-03 | 山东鲁旺机械设备有限公司 | Intelligent monitoring system and monitoring method for construction hanging basket |
CN116311543B (en) * | 2023-02-03 | 2024-03-08 | 汇金智融(深圳)科技有限公司 | Handwriting analysis method and system based on image recognition technology |
CN116311543A (en) * | 2023-02-03 | 2023-06-23 | 汇金智融(深圳)科技有限公司 | Handwriting analysis method and system based on image recognition technology |
CN116740332B (en) * | 2023-06-01 | 2024-04-02 | 南京航空航天大学 | Method for positioning center and measuring angle of space target component on satellite based on region detection |
CN116740332A (en) * | 2023-06-01 | 2023-09-12 | 南京航空航天大学 | Method for positioning center and measuring angle of space target component on satellite based on region detection |
CN116403094A (en) * | 2023-06-08 | 2023-07-07 | 成都菁蓉联创科技有限公司 | Embedded image recognition method and system |
CN116403094B (en) * | 2023-06-08 | 2023-08-22 | 成都菁蓉联创科技有限公司 | Embedded image recognition method and system |
CN116740579A (en) * | 2023-08-15 | 2023-09-12 | 兰陵县城市规划设计室 | Intelligent collection method for territorial space planning data |
CN116740070B (en) * | 2023-08-15 | 2023-10-24 | 青岛宇通管业有限公司 | Plastic pipeline appearance defect detection method based on machine vision |
CN116740070A (en) * | 2023-08-15 | 2023-09-12 | 青岛宇通管业有限公司 | Plastic pipeline appearance defect detection method based on machine vision |
CN116740579B (en) * | 2023-08-15 | 2023-10-20 | 兰陵县城市规划设计室 | Intelligent collection method for territorial space planning data |
CN116758578A (en) * | 2023-08-18 | 2023-09-15 | 上海楷领科技有限公司 | Mechanical drawing information extraction method, device, system and storage medium |
CN116758578B (en) * | 2023-08-18 | 2023-11-07 | 上海楷领科技有限公司 | Mechanical drawing information extraction method, device, system and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110334706A (en) | 2019-10-15 |
CN107330465B (en) | 2019-07-30 |
CN110334706B (en) | 2021-06-01 |
CN107330465A (en) | 2017-11-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019000653A1 (en) | Image target identification method and apparatus | |
CN107316036B (en) | Insect pest identification method based on cascade classifier | |
Savkare et al. | Automatic system for classification of erythrocytes infected with malaria and identification of parasite's life stage | |
CN104036278B (en) | The extracting method of face algorithm standard rules face image | |
CN107977682B (en) | Lymphocyte classification method and device based on polar coordinate transformation data enhancement | |
CN111738064B (en) | Haze concentration identification method for haze image | |
CN105447503B (en) | Pedestrian detection method based on rarefaction representation LBP and HOG fusion | |
WO2020248513A1 (en) | Ocr method for comprehensive performance test | |
CN104392432A (en) | Histogram of oriented gradient-based display panel defect detection method | |
US9558403B2 (en) | Chemical structure recognition tool | |
EP3848472A2 (en) | Methods and systems for automated counting and classifying microorganisms | |
Zhou et al. | Leukocyte image segmentation based on adaptive histogram thresholding and contour detection | |
CN108876795A (en) | A kind of dividing method and system of objects in images | |
Rizal et al. | Comparison of SURF and HOG extraction in classifying the blood image of malaria parasites using SVM | |
CN109858570A (en) | Image classification method and system, computer equipment and medium | |
CN115294377A (en) | System and method for identifying road cracks | |
CN113076860B (en) | Bird detection system under field scene | |
Pratap et al. | Development of Ann based efficient fruit recognition technique | |
CN108009480A (en) | A kind of image human body behavioral value method of feature based identification | |
CN109460768B (en) | Text detection and removal method for histopathology microscopic image | |
Ye et al. | A new text detection algorithm in images/video frames | |
Tian et al. | Research on artificial intelligence of accounting information processing based on image processing | |
Satish et al. | Edge assisted fast binarization scheme for improved vehicle license plate recognition | |
Kumari et al. | On the use of Moravec operator for text detection in document images and video frames | |
Fang et al. | A hybrid approach for efficient detection of plastic mulching films in cotton |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17915559 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17915559 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/07/2020) |
|