CN107330465B - Image target recognition method and device - Google Patents

Image target recognition method and device

Info

Publication number
CN107330465B
Authority
CN
China
Prior art keywords
pixel
region
image
target
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710526661.1A
Other languages
Chinese (zh)
Other versions
CN107330465A (en)
Inventor
程雪岷
毕洪生
王育琦
王嵘
张临风
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201710526661.1A priority Critical patent/CN107330465B/en
Priority to CN201910576843.9A priority patent/CN110334706B/en
Priority to PCT/CN2017/101704 priority patent/WO2019000653A1/en
Publication of CN107330465A publication Critical patent/CN107330465A/en
Application granted granted Critical
Publication of CN107330465B publication Critical patent/CN107330465B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30242 Counting objects in image

Abstract

The invention discloses an image target recognition method and device. The image target recognition method comprises the following steps: S1, each pixel in the image is binarized and classified as either an effective pixel or a background point; S2, a third threshold is set according to the total number of pixels in the image and the size range of the target to be recognized, the number of effective pixels in each connected region of the binarized picture is compared with the third threshold, and if it is less than the third threshold, all pixels in that region are set as background points so as to remove the region; S3, the circumscribed rectangular frame of each remaining connected region is determined, forming framed regions; S4, connected regions whose framed regions overlap are treated as a single merged overall region and the circumscribed rectangular frame of the overall region is determined; in the image, the picture content inside each circumscribed rectangular frame is a recognized target. For images with low contrast, the target recognition method of the invention can effectively identify each target object in the image.

Description

Image target recognition method and device
[Technical field]
The present invention relates to an image target recognition method and device.
[Background art]
Target recognition is the process of using various algorithms to let a machine distinguish a specific target or feature in an image and to provide a basis for further processing of the distinguished target. In today's informatized and networked world, it can be widely applied in many fields. The human eye is often relatively slow at identifying a specific target, and when massive data or a large number of images must be identified or distinguished, a great deal of manpower and material resources is consumed. Replacing human-eye recognition with machine recognition, that is, using the computing power of a computer instead of the eye and brain, can increase speed and reduce energy consumption, which is very advantageous for the field of image recognition. For example, when 1,000 video-frame pictures of a crossroads must be examined to determine the traffic flow, machine recognition is obviously much more advantageous than human-eye recognition; likewise, adding an image target recognition system to a robot is equivalent to giving the robot "eyes", which is also very beneficial for the development of AI technology. At present, image recognition technology is applied not only to face recognition, article recognition and the like, but also to other aspects, greatly facilitating people's lives.
Image target recognition generally follows the flow of image preprocessing, image segmentation, feature extraction, and feature recognition or matching. However, the images processed are generally relatively clear; methods for images with low contrast are scarce, and it is difficult to segment such images and extract effective target features from them.
[Summary of the invention]
The technical problem to be solved by the present invention is to make up for the above deficiencies of the prior art by proposing an image target recognition method and device that can effectively identify each target object in an image even when the image has low contrast.
The technical problem of the invention is solved by the following technical solution:
An image target recognition method, comprising the following steps: S1, each pixel in the image is binarized and classified as either an effective pixel or a background point, thereby converting the image into a binarized picture; S2, a third threshold is set according to the total number of pixels in the image and the size range of the target to be recognized, the number of effective pixels in each connected region of the binarized picture is compared with the third threshold, and if it is less than the third threshold, all pixels in that region are set as background points so as to remove the region; S3, the circumscribed rectangular frame of each remaining connected region is determined, forming framed regions, wherein the four sides of each circumscribed rectangular frame are respectively parallel to the four sides of the image; S4, connected regions whose framed regions overlap are treated as a single merged overall region, and the circumscribed rectangular frame of the overall region is determined, its four sides again being respectively parallel to the four sides of the image; in the image, the picture content inside each circumscribed rectangular frame is a recognized target.
An image target recognition device, comprising a binarization module, a region removal module, a region framing module and a region merging module. The binarization module is configured to binarize each pixel in the image and classify it as either an effective pixel or a background point, thereby converting the image into a binarized picture. The region removal module is configured to set a third threshold according to the total number of pixels in the image and the size range of the target to be recognized, to compare the number of effective pixels in each connected region of the binarized picture with the third threshold, and, if it is less than the third threshold, to set all pixels in that region as background points so as to remove the region. The region framing module is configured to determine the circumscribed rectangular frame of each remaining connected region and form framed regions, wherein the four sides of each circumscribed rectangular frame are respectively parallel to the four sides of the image. The region merging module is configured to treat connected regions whose framed regions overlap as a single merged overall region and to determine the circumscribed rectangular frame of the overall region, whose four sides are respectively parallel to the four sides of the image; the picture content inside each circumscribed rectangular frame is a recognized target.
Compared with the prior art, the beneficial effects of the present invention are:
In the image target recognition method and device of the invention, the image is converted into a binarized picture, a threshold is set according to the number of pixels in the image and the size range of the target to be recognized, and after comparison with this threshold the background regions are effectively discarded. Finally, the image is segmented and merged by a connected-domain method, so that the positions and the number of targets in the picture are effectively identified. Through the above steps, the invention improves the accuracy of recognition for images with low contrast and unclear image features.
[Description of the drawings]
Fig. 1 is a flowchart of the image target recognition method of the specific embodiment of the invention;
Fig. 2 shows the result of converting the entire image into a binarized picture in the specific embodiment of the invention;
Fig. 3 shows the result of Fig. 2 after the optimization that removes scattered-point noise;
Fig. 4 shows the result of Fig. 3 after interference regions are removed;
Fig. 5 shows the result after the circumscribed rectangular frames are determined in the image of the specific embodiment of the invention;
Fig. 6 shows the result after partial regions are merged once the circumscribed rectangular frames have been determined in the image of the specific embodiment of the invention;
Fig. 7 is a schematic diagram of support vector machine binary classification in the specific embodiment of the invention;
Fig. 8 is a schematic diagram of support vector machine multi-class classification in the specific embodiment of the invention;
Fig. 9 is a flowchart of the first classification process of the specific embodiment of the invention;
Fig. 10 is the original image from which edge information is to be extracted in the specific embodiment of the invention;
Fig. 11 is the image of the region of interest in Fig. 10;
Fig. 12 is the image obtained after feature point extraction from Fig. 11;
Fig. 13 is a schematic diagram of the partition used in the feature point statistics method of the specific embodiment of the invention.
[Specific embodiments]
The present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
As shown in Fig. 1, the image target recognition method of the present embodiment comprises the following steps:
S1, each pixel in the image is binarized and classified as either an effective pixel or a background point, thereby converting the image into a binarized picture.
In this step, the binarization facilitates the subsequent recognition of the position of the target. The binarization is preferably carried out as follows: a first window is set centered on the pixel, a first threshold is set according to the average and the standard deviation of the pixel values of the pixels within the first window, and the first threshold is compared with the pixel value of the pixel; if the pixel value is greater than the first threshold, the pixel is set as an effective pixel; otherwise, the pixel is set as a background point.
The first threshold can be set according to the following formula: T(x,y) = m(x,y) * [1 + k * (δ(x,y)/R - 1)], where, when the pixel (x,y) is the center, T(x,y) denotes the first threshold corresponding to the pixel (x,y); R denotes the dynamic range of the standard deviation of the pixel values of the pixels of the whole image; k is a set deviation factor and takes a positive value; m(x,y) denotes the average of the pixel values of the pixels in the first window; and δ(x,y) denotes the standard deviation of the pixel gray values in the first window. With this formula, the first threshold adapts to the standard deviation of the pixel gray values within the first window.
During this process, the window slides with each pixel as its center, and the threshold is set from the average and the standard deviation of the pixel values in the first window. In high-contrast regions of the image, the standard deviation δ(x,y) approaches R, so the resulting threshold T(x,y) is approximately equal to the mean m(x,y); that is, the pixel value of the central pixel (x,y) is compared with a threshold close to the average pixel value of the local window, and if it exceeds the threshold it is greater than the average pixel value and is confirmed as an effective pixel. In regions of very low local contrast, the standard deviation δ(x,y) is much smaller than R, so the resulting threshold T(x,y) is smaller than the mean m(x,y); the pixel value of the central pixel (x,y) is then compared with a threshold below the average pixel value of the local window rather than always with a fixed mean, so that central pixels above the threshold are retained as potential target pixels and omissions in blurred regions are effectively avoided. By setting a separate comparison threshold for each pixel from its local region in this way, the threshold is adaptively adjusted by the standard deviation of the pixels in the first window and thus adapts to the contrast of the image; each pixel in the image can be classified accurately, and effective pixels are not omitted because the image is blurred.
The first threshold is compared with the pixel value of the pixel. If the pixel value is greater than the threshold, the pixel is an effective pixel and can be set as a white point, as shown by the white points in Fig. 2; otherwise, it is a background point, as shown by the pixels of the black regions in Fig. 2. In this way the whole image is converted into a binarized picture.
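For illustration only (this sketch is not part of the patent text), the local-threshold binarization of step S1 can be written in Python as below. The 25-pixel first window, k = 0.2, the use of OpenCV box filters and the choice of R as the maximum local standard deviation over the image are assumptions made for the example.

```python
import cv2
import numpy as np

def binarize_local(gray, win=25, k=0.2):
    """Binarize with the per-pixel first threshold
    T(x, y) = m(x, y) * [1 + k * (delta(x, y) / R - 1)]."""
    gray = gray.astype(np.float64)
    # m(x, y): mean of the pixel values in the first window centered on each pixel.
    mean = cv2.boxFilter(gray, ddepth=-1, ksize=(win, win))
    # delta(x, y): standard deviation of the pixel values in the first window.
    mean_sq = cv2.boxFilter(gray * gray, ddepth=-1, ksize=(win, win))
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    R = std.max() if std.max() > 0 else 1.0          # dynamic range of the standard deviation
    T = mean * (1.0 + k * (std / R - 1.0))           # first threshold per pixel
    # Pixels above their threshold become effective (white) points, the rest background.
    return (gray > T).astype(np.uint8) * 255
```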
More preferably, there is also a process of re-confirming the binarized picture: a second window is set centered on a pixel, and a second threshold is set according to the number of pixels in the second window; the number of effective pixels in the second window is compared with the second threshold, and if it is greater than the second threshold, the pixel is set as an effective pixel; otherwise, the pixel is set as a background point. In this step, the size of the second window may or may not be the same as that of the aforementioned first window.
The second threshold can be set according to the following formula: floor(√(2z)) - 2, where the floor function denotes rounding down and z denotes the number of pixels in the second window. In this calculation, taking a square window as an example, √z is the side length and 2z is the square of the diagonal, so taking the square root and rounding down approximates the rounded diagonal length. That is, the second threshold is set from the number of pixels on the diagonal of the second window. Subtracting 2 removes the pixel itself and one possibly effective pixel, which makes the threshold setting more accurate. Of course, other self-defined ways of setting the threshold are also feasible, as long as the vast majority of effective pixels can be identified.
This further optimization continues on the basis of the binarization: a second window (whose size can be customized) is selected with the pixel at its center and treated as a whole, and the number of effective points in the second window is counted and compared with the threshold set above. If it is larger than the threshold, the central pixel is set as an effective pixel; otherwise it is noise, is set as a background point and is removed. Through this comparison of the number of locally effective pixels within the second window, central pixels that really are surrounded by many effective pixels are re-confirmed as effective points, while central pixels with few surrounding effective pixels are confirmed as background points, which effectively removes the scattered points in the image of Fig. 2. In addition, breakpoints produced by the aforementioned local-region processing can also be connected in this process: some black points are changed to white, so that adjacent white points connect to form connected white regions. This further optimization facilitates the subsequent accurate region identification. Fig. 3 shows the result after this optimization removes the scattered-point noise.
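A rough sketch of the re-confirmation pass, again for illustration only: the 5 by 5 second window is an assumed size, and the threshold follows the floor(√(2z)) - 2 rule above.

```python
import math
import numpy as np
from scipy.ndimage import uniform_filter

def reconfirm(binary, win=5):
    """Re-confirm each pixel by the number of effective pixels in the second window."""
    z = win * win                                    # number of pixels in the second window
    t2 = math.floor(math.sqrt(2 * z)) - 2            # second threshold (diagonal length minus 2)
    ones = (binary > 0).astype(np.float64)
    # Count of effective pixels in the window centered on each pixel.
    counts = np.rint(uniform_filter(ones, size=win) * z)
    # Centers with enough effective neighbours stay (or become) effective; others are background.
    return (counts > t2).astype(np.uint8) * 255
```

Because a background center with enough effective neighbours is turned white, this pass also bridges the breakpoints mentioned above.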
S2, a third threshold is set according to the total number of pixels in the image and the size range of the target to be recognized; the number of effective pixels in each connected region of the binarized picture is compared with the third threshold, and if it is less than the third threshold, all pixels in that region are set as background points, so that the region is removed.
In the picture after binarization, some regions contain only scattered effective pixels, while other regions have concentrated many effective pixels and form connected regions. This step screens the connected domains in the whole binarized picture so as to detect the regions where targets are located, and removes the interfering regions.
Specifically, the third threshold is set according to the total number of pixels of the entire image and the size range of the target to be recognized. It can be set by the formula {(a*b)*c/d}/e, where a*b denotes the number of all pixels in the entire image, a being the number of pixels in the width direction and b the number of pixels in the length direction; c denotes the minimum size of the target to be recognized; d denotes the maximum size of the target to be recognized; and e denotes the estimated maximum number of targets that a picture of size a*b can contain. Taking plankton as the target to be recognized as an example, the size range of plankton is generally 20 μm to 5 cm, and a picture obtained by the plankton acquisition device contains a total of 2448*2050 pixels. It is estimated that one picture contains at most 10 of the largest plankton (for this estimate the whole picture is treated at a 1:1 scale with the organism size: the picture measures 3 cm by 3.5 cm, i.e. 10.5 square centimeters, a plankton occupies on average 1 square centimeter, so at most about 10 are contained after rounding up). The third threshold is then obtained as [(2448*2050)*20/50000]/10 = 200.736.
The number of effective points in each connected region is compared with the set third threshold. If it is less than the third threshold, the effective points in that connected region are insufficient and the region is an interference region, so all its pixels are set as background points and the region is discarded. Fig. 4 shows the result after the interference regions in Fig. 3 are discarded.
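The connected-region screening of step S2 can be sketched with OpenCV connected-component statistics as follows (illustrative only; the commented call repeats the plankton example from the description, giving a third threshold of about 200.7).

```python
import cv2

def remove_small_regions(binary, a, b, c, d, e):
    """Discard connected regions whose effective-pixel count is below the
    third threshold {(a*b)*c/d}/e by resetting their pixels to background."""
    t3 = (a * b) * c / d / e
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    cleaned = binary.copy()
    for i in range(1, n):                            # label 0 is the image background
        if stats[i, cv2.CC_STAT_AREA] < t3:
            cleaned[labels == i] = 0                 # interference region: discard
    return cleaned

# Plankton example from the description (2448*2050 pixels, 20 um to 5 cm, at most 10 targets):
# cleaned = remove_small_regions(binary, 2448, 2050, 20, 50000, 10)   # t3 = 200.736
```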
S3, the circumscribed rectangular frame of each remaining connected region is determined, forming framed regions; the four sides of each circumscribed rectangular frame are respectively parallel to the four sides of the image.
Through step S2, some of the connected regions are discarded and some are retained. For each remaining connected region, step S3 determines the horizontal circumscribed rectangular frame of the region, forming the framed region. The circumscribed rectangular frame is a rectangle whose four sides pass through the four boundary pixels of the region (the topmost, bottommost, leftmost and rightmost pixels). "Horizontal" means that the four sides of the rectangular frame are respectively parallel to the four sides of the image. Once the circumscribed rectangular frame is determined, its content is the framed region. Fig. 5 shows the result after the circumscribed rectangular frames are determined.
S4, connected regions whose framed regions overlap are treated as a single merged overall region, and the circumscribed rectangular frame of the overall region is determined; its four sides are respectively parallel to the four sides of the image, and the picture content inside the circumscribed rectangular frame is a recognized target.
Among the framed regions, some are independent and scattered, while others overlap each other. Where rectangular frames overlap, the connected regions of that part are treated as one merged overall region, and the horizontal circumscribed rectangular frame of that overall region is determined.
Fig. 6 shows the result after the circumscribed rectangular frames have been determined following step S4. Compared with Fig. 5, some regions in Fig. 6 are framed jointly by one circumscribed rectangular frame. In Fig. 6, the picture content in each circumscribed rectangular frame is a recognized target, so that the positions of the suspected targets and their number are obtained.
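Steps S3 and S4 amount to taking an axis-aligned circumscribed rectangle per remaining connected region and merging rectangles that overlap. A minimal sketch is given below, for illustration only; the simple repeated pairwise merge is one possible realization, not the patented procedure itself.

```python
import cv2

def overlap(b1, b2):
    """True if two axis-aligned boxes (x, y, w, h) overlap."""
    return not (b1[0] + b1[2] <= b2[0] or b2[0] + b2[2] <= b1[0] or
                b1[1] + b1[3] <= b2[1] or b2[1] + b2[3] <= b1[1])

def union(b1, b2):
    """Circumscribed rectangle of two boxes."""
    x = min(b1[0], b2[0]); y = min(b1[1], b2[1])
    r = max(b1[0] + b1[2], b2[0] + b2[2]); btm = max(b1[1] + b1[3], b2[1] + b2[3])
    return (x, y, r - x, btm - y)

def framed_targets(cleaned):
    """S3: circumscribed rectangle per connected region; S4: merge overlapping rectangles."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(cleaned, connectivity=8)
    boxes = [tuple(int(v) for v in stats[i, :4]) for i in range(1, n)]   # (x, y, w, h)
    merged = True
    while merged:                                    # repeat until no two boxes overlap
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if overlap(boxes[i], boxes[j]):
                    boxes[i] = union(boxes[i], boxes[j])
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes                                     # each box frames one recognized target
```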
In the present embodiment, when a blurred picture (such as an image taken in water of high turbidity) is processed by the above steps, pixels are accurately divided into effective points or background noise points through the local-threshold comparison, the connected domains after binarization are then denoised again, and the connected domains are framed and merged, so that the image is effectively segmented and the regions of interest where targets are located are extracted. This improves the accuracy of recognition for images with low contrast and unclear image features. The target recognition method is particularly suitable for recognizing plankton photographed in water.
After the regions where the targets are located have been recognized, the picture content of each region can be further classified by a classification method to identify the category information of the target. In the present embodiment, the following two classification schemes classify from the two aspects of boundary gradients and of morphological constituent-element features respectively. Of course, in practical applications, other more suitable classification methods may also be selected according to the actual situation.
To facilitate classification and recognition, each extracted region is normalized into an image containing 128*128 pixels.
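As a small illustrative sketch (cv2.resize is assumed as the resampling method; it is not specified in the description):

```python
import cv2

def normalize_region(image, box):
    """Crop a framed region (x, y, w, h) and normalize it to 128*128 pixels."""
    x, y, w, h = box
    return cv2.resize(image[y:y + h, x:x + w], (128, 128))
```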
The first classification scheme analyses the boundary gradients with an SVM+HOG classification method. After the normalized image undergoes a simple background-denoising process, the edge density and boundary gradients of the figure are extracted and counted into a histogram, and the picture to be processed is analysed by a support vector machine (SVM) combined with the histogram of oriented gradients (HOG) to tell which type of target it is. The SVM is a traditional binary classifier whose principle is shown in Fig. 7: x1 denotes the denser sample points drawn with lines in the lower part and x2 the sparser sample points drawn with lines in the upper part; ω^T x + b = 0 is the hyperplane of the linear equation that separates the different samples, the values 1 and -1 on the right-hand side of the linear equation represent the two classes, and 2/||ω|| denotes the distance between the outermost parallel planes of the two classes. Taking plankton as the target to be recognized as an example, plankton species are numerous and a binary classifier alone is not enough, so the classifier is extended to multiple classes in the present embodiment.
The classification process comprises the following steps:
Before classification, the samples (which have been selected in advance) are trained. The training process is as follows: the n classes of samples are divided in the manner of dichotomy into two groups, classes 1 to n/2 and classes n/2+1 to n, and the edge density and boundary gradient of the figures of the samples contained in these two groups are counted; the two groups are then further split and counted in the same dichotomous manner, and this process is repeated until every sample has been classified into an individual class, at which point training ends. A schematic diagram is shown in Fig. 8.
During classification, for the image of each connected domain after normalization, the edge density and boundary gradient of the image in each region are extracted and compared, according to the edge density and gradient information, with the statistics of the samples obtained in training, and the image is classified into one of the n/2 classes among the n classes; the classification process is repeated to classify the image into one of the n/4 classes among those n/2 classes, and so on, until the image is classified into a single class, thereby obtaining the category to which the image belongs. The flowchart of the classification is shown in Fig. 9.
When searching for the class, the image to be detected is unknown to the classifier, so the time taken to find the class is most important. The most common search and sorting methods are bubble sort, dichotomy and quicksort. In terms of time complexity, bubble sort is O(n²), dichotomy is O(log₂ n) and quicksort is O(n·log n); dichotomy is therefore finally chosen as the search means in the present embodiment.
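A sketch of the first scheme under stated assumptions: scikit-image's HOG descriptor stands in for the edge-density and boundary-gradient histogram, scikit-learn's LinearSVC for the SVM, and the binary tree below realizes the dichotomous split into classes 1..n/2 and n/2+1..n described above; all parameter values are illustrative, not those of the patent.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_feature(img128):
    """Boundary-gradient histogram of a normalized 128*128 region (assumed HOG parameters)."""
    return hog(img128, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

class DichotomyNode:
    """Binary tree of SVMs: each node splits its remaining classes into two halves."""
    def __init__(self, classes):
        self.classes = sorted(classes)
        self.svm = self.left = self.right = None

    def train(self, feats, labels):
        if len(self.classes) == 1:
            return
        half = len(self.classes) // 2
        lo, hi = self.classes[:half], self.classes[half:]
        mask = np.isin(labels, self.classes)
        y = np.isin(labels[mask], lo).astype(int)    # 1 for the lower half, 0 for the upper half
        self.svm = LinearSVC().fit(feats[mask], y)
        self.left, self.right = DichotomyNode(lo), DichotomyNode(hi)
        self.left.train(feats, labels)
        self.right.train(feats, labels)

    def predict(self, feat):
        if len(self.classes) == 1:
            return self.classes[0]
        side = self.svm.predict(feat.reshape(1, -1))[0]
        return (self.left if side == 1 else self.right).predict(feat)
```

Training the root node on all n classes and calling predict on the HOG feature of a normalized region descends about log₂ n SVMs, which matches the O(log₂ n) search cost noted above.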
The second classification scheme analyses morphological constituent-element features with a feature point distribution algorithm (shape context). Feature points are extracted with an edge fast-extraction algorithm, which can directly extract the edges of the figure so that the extracted points serve as feature points; the edges and feature distribution of the figure can thus be found more effectively. The edge fast-extraction algorithm is accurate and also takes relatively little time. Taking the original image shown in Fig. 10 as an example, whose size is 2448*2050, the plankton image of the region of interest is shown in Fig. 11 with a size of 210*210; extracting the feature points of the suspected plankton region takes 54 seconds, and the image of the feature points obtained after extraction (black pixels) is shown in Fig. 12.
The classification process comprises the following steps:
Before classification, the samples (which have been selected in advance) are trained. The training process is as follows: each sample is processed by the edge fast-extraction algorithm to obtain the distribution of its edges and feature points, the feature point distribution is then counted by the feature point statistics method shown in Fig. 13, and the feature point distribution of each kind of sample is recorded in its own text file; once the feature point distributions of all samples have been counted, training is complete. The statistics method shown in Fig. 13 is as follows: with a feature point as the center, the plane is divided into 8 equal angular parts (45° per region, so 360° is divided into 8 regions), and 5 regions are spread outward according to the size of the graphic feature; that is, taking the feature point as the center, the maximum radius of the circumscribed circle that can contain all feature points is divided into five equal parts to form five circles, and each circle is divided into 8 regions as above, so that all feature points in the figure are divided into 40 regions.
During classification, the image of each connected domain after normalization is processed by the edge fast-extraction algorithm to obtain the distribution of its edges and feature points; the feature point distribution is then counted by the method shown in Fig. 13, and the statistical feature point distribution of the image to be detected is compared with the feature point distribution statistics of each sample obtained in training, so as to identify the class to which the image to be detected belongs.
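A sketch of the 40-region statistic of Fig. 13, for illustration only: OpenCV's Canny detector stands in for the edge fast-extraction algorithm, and for each feature point the remaining points are binned into 8 angular sectors and 5 concentric rings before the per-point histograms are summed; the Canny thresholds and the summing step are assumptions.

```python
import cv2
import numpy as np

def feature_points(img128, t_low=50, t_high=150):
    """Edge pixels used as feature points (Canny stands in for the edge fast-extraction step)."""
    ys, xs = np.nonzero(cv2.Canny(img128, t_low, t_high))
    return np.stack([xs, ys], axis=1).astype(np.float64)

def region_histogram(points):
    """Bin feature points into the 8 angular x 5 radial regions of Fig. 13 and
    return the normalized distribution, summed over all center points."""
    hist = np.zeros((5, 8))
    if len(points) < 2:
        return hist
    for i, p in enumerate(points):
        rel = np.delete(points, i, axis=0) - p       # other feature points relative to the center
        r = np.hypot(rel[:, 0], rel[:, 1])
        theta = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
        r_bin = np.minimum((5 * r / r.max()).astype(int), 4)   # five equal rings up to the max radius
        a_bin = (theta // (np.pi / 4)).astype(int) % 8          # eight 45-degree sectors
        np.add.at(hist, (r_bin, a_bin), 1)
    return hist / hist.sum()                          # compared against the trained sample statistics
```

For the comparison with the trained samples, one simple choice (an assumption, not specified in the description) is a Euclidean or chi-square distance between the normalized histograms.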
With the multi-class classifier and multi-class trainer designed above, targets such as the world's highly diverse species can be classified better.
The present embodiment also provides an image target recognition device, comprising a binarization module, a region removal module, a region framing module and a region merging module. The binarization module is configured to binarize each pixel in the image and classify it as either an effective pixel or a background point, thereby converting the image into a binarized picture. The region removal module is configured to set a third threshold according to the total number of pixels in the image and the size range of the target to be recognized, to compare the number of effective pixels in each connected region of the binarized picture with the third threshold, and, if it is less than the third threshold, to set all pixels in that region as background points so as to remove the region. The region framing module is configured to determine the circumscribed rectangular frame of each remaining connected region and form framed regions, wherein the four sides of each circumscribed rectangular frame are respectively parallel to the four sides of the image. The region merging module is configured to treat connected regions whose framed regions overlap as a single merged overall region and to determine the circumscribed rectangular frame of the overall region, whose four sides are respectively parallel to the four sides of the image; the picture content inside each circumscribed rectangular frame is a recognized target. The target recognition device of the present embodiment improves the accuracy of recognition for images with low contrast and unclear image features.
The above is a further detailed description of the invention in combination with specific preferred embodiments, but it cannot be concluded that the specific implementation of the invention is limited to these descriptions. For those of ordinary skill in the art to which the invention belongs, several substitutions or obvious variations made without departing from the concept of the invention, with the same performance or use, shall all be regarded as falling within the protection scope of the invention.

Claims (5)

1. An image target recognition method, characterized by comprising the following steps: S1, binarizing each pixel in the image so that each pixel is classified as either an effective pixel or a background point, thereby converting the image into a binarized picture; S2, setting a third threshold according to the total number of pixels in the image and the size range of the target to be recognized, comparing the number of effective pixels in each connected region of the binarized picture with the third threshold, and, if the number is less than the third threshold, setting all pixels in that region as background points so as to remove the region; S3, determining the circumscribed rectangular frame of each remaining connected region to form framed regions, wherein the four sides of each circumscribed rectangular frame are respectively parallel to the four sides of the image; S4, treating connected regions whose framed regions overlap as a single merged overall region and determining the circumscribed rectangular frame of the overall region, the four sides of which are respectively parallel to the four sides of the image; in the image, the picture content inside each circumscribed rectangular frame is a recognized target; in step S1, each pixel in the image is binarized as follows: a first window is set centered on the pixel, a first threshold is set according to the average and the standard deviation of the pixel values of the pixels within the first window, and the first threshold is compared with the pixel value of the pixel; if the pixel value is greater than the first threshold, the pixel is set as an effective pixel; otherwise, the pixel is set as a background point; the first threshold is set according to the following formula: T(x,y) = m(x,y) * [1 + k * (δ(x,y)/R - 1)], where, when the pixel (x,y) is the center, T(x,y) denotes the first threshold corresponding to the pixel (x,y); R denotes the dynamic range of the standard deviation of the pixel gray values of the pixels of the entire image; k is a set deviation factor and takes a positive value; m(x,y) denotes the average of the pixel values of the pixels in the first window; and δ(x,y) denotes the standard deviation of the pixel gray values of the pixels in the first window.
2. The image target recognition method according to claim 1, characterized in that: in step S2, the third threshold is set according to the following formula: {(a*b)*c/d}/e, where a*b denotes the number of all pixels in the entire image, a denoting the number of pixels in the width direction and b the number of pixels in the length direction; c denotes the minimum size of the target to be recognized; d denotes the maximum size of the target to be recognized; and e denotes the maximum number of targets to be recognized that a picture of the estimated a*b size can contain.
3. The image target recognition method according to claim 1, characterized in that: the target to be recognized is plankton to be recognized.
4. The image target recognition method according to claim 1, characterized by further comprising step S5 of obtaining information about the recognized target: S51, sample training: the n classes of samples are divided in the manner of dichotomy into two major groups, classes 1 to n/2 and classes n/2+1 to n, and the edge density and boundary gradient of the figures of the sample pictures contained in these two major groups are counted; the above process is repeated, continuing to split and count the n/2 classes in each of the two groups in the manner of dichotomy, until every sample has been classified into an individual class, and the edge density and boundary gradient of the figures of the samples of each individual class are counted; S52, each region where a target is located is normalized; S53, classification: for each normalized region, the edge density and boundary gradient of the image in the region are extracted and compared, according to the edge density and boundary gradient information, with the statistical information of the samples obtained by training in step S51, and the image is classified into one of the n/2 classes among the n classes; the above classification process is repeated to classify the image into one of the n/4 classes among those n/2 classes, and repeated again until the image is classified into a single individual class, so that the category information to which the target in the region belongs is obtained.
5. The image target recognition method according to claim 1, characterized by further comprising step S6 of obtaining information about the recognized target: S61, sample training: the n classes of samples are processed by the edge fast-extraction algorithm to obtain the distribution of their edges and feature points, and the distribution of the feature points is counted by the feature point statistics method, so as to obtain the feature point distribution of the samples of each class; S62, each region where a target is located is normalized; S63, classification: the image of each normalized region is processed by the edge fast-extraction algorithm to obtain the distribution of its edges and feature points, the feature point distribution is counted by the feature point statistics method, and the statistical result is compared with the statistical results of the samples of each class obtained by training in step S61, so as to identify the category information to which the target belongs.
CN201710526661.1A 2017-06-30 2017-06-30 Image target recognition method and device Active CN107330465B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201710526661.1A CN107330465B (en) 2017-06-30 2017-06-30 Image target recognition method and device
CN201910576843.9A CN110334706B (en) 2017-06-30 2017-06-30 Image target identification method and device
PCT/CN2017/101704 WO2019000653A1 (en) 2017-06-30 2017-09-14 Image target identification method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710526661.1A CN107330465B (en) 2017-06-30 2017-06-30 Image target recognition method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201910576843.9A Division CN110334706B (en) 2017-06-30 2017-06-30 Image target identification method and device

Publications (2)

Publication Number Publication Date
CN107330465A CN107330465A (en) 2017-11-07
CN107330465B true CN107330465B (en) 2019-07-30

Family

ID=60198065

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710526661.1A Active CN107330465B (en) 2017-06-30 2017-06-30 Image target recognition method and device
CN201910576843.9A Active CN110334706B (en) 2017-06-30 2017-06-30 Image target identification method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910576843.9A Active CN110334706B (en) 2017-06-30 2017-06-30 Image target identification method and device

Country Status (2)

Country Link
CN (2) CN107330465B (en)
WO (1) WO2019000653A1 (en)

Families Citing this family (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443097A (en) * 2018-05-03 2019-11-12 北京中科晶上超媒体信息技术有限公司 A kind of video object extract real-time optimization method and system
CN109117845A (en) * 2018-08-15 2019-01-01 广州云测信息技术有限公司 Object identifying method and device in a kind of image
CN109190640A (en) * 2018-08-20 2019-01-11 贵州省生物研究所 A kind of the intercept type acquisition method and acquisition system of the planktonic organism based on big data
CN109670518B (en) * 2018-12-25 2022-09-23 浙江大学常州工业技术研究院 Method for measuring boundary of target object in picture
CN110263608B (en) * 2019-01-25 2023-07-07 天津职业技术师范大学(中国职业培训指导教师进修中心) Automatic electronic component identification method based on image feature space variable threshold measurement
CN109815906B (en) * 2019-01-25 2021-04-06 华中科技大学 Traffic sign detection method and system based on step-by-step deep learning
CN109977944B (en) * 2019-02-21 2023-08-01 杭州朗阳科技有限公司 Digital water meter reading identification method
CN111833398B (en) * 2019-04-16 2023-09-08 杭州海康威视数字技术股份有限公司 Pixel point marking method and device in image
CN110070533B (en) * 2019-04-23 2023-05-30 科大讯飞股份有限公司 Evaluation method, device, equipment and storage medium for target detection result
CN110096991A (en) * 2019-04-25 2019-08-06 西安工业大学 A kind of sign Language Recognition Method based on convolutional neural networks
CN110189403B (en) * 2019-05-22 2022-11-18 哈尔滨工程大学 Underwater target three-dimensional reconstruction method based on single-beam forward-looking sonar
CN110175563B (en) * 2019-05-27 2023-03-24 上海交通大学 Metal cutting tool drawing mark identification method and system
CN110180186B (en) * 2019-05-28 2022-08-19 北京奇思妙想信息技术有限公司 Topographic map conversion method and system
CN110443272B (en) * 2019-06-24 2023-01-03 中国地质大学(武汉) Complex tobacco plant image classification method based on fuzzy selection principle
CN110348442B (en) * 2019-07-17 2022-09-30 大连海事大学 Shipborne radar image offshore oil film identification method based on support vector machine
CN110390313B (en) * 2019-07-29 2023-03-28 哈尔滨工业大学 Violent action detection method and system
CN110415237B (en) * 2019-07-31 2022-02-08 Oppo广东移动通信有限公司 Skin flaw detection method, skin flaw detection device, terminal device and readable storage medium
CN110490848B (en) * 2019-08-02 2022-09-30 上海海事大学 Infrared target detection method, device and computer storage medium
CN110941987B (en) * 2019-10-10 2023-04-07 北京百度网讯科技有限公司 Target object identification method and device, electronic equipment and storage medium
CN112991253A (en) * 2019-12-02 2021-06-18 合肥美亚光电技术股份有限公司 Central area determining method, foreign matter removing device and detecting equipment
CN112890736B (en) * 2019-12-03 2023-06-09 精微视达医疗科技(武汉)有限公司 Method and device for obtaining field mask of endoscopic imaging system
CN111126252B (en) * 2019-12-20 2023-08-18 浙江大华技术股份有限公司 Swing behavior detection method and related device
CN111191730B (en) * 2020-01-02 2023-05-12 中国航空工业集团公司西安航空计算技术研究所 Method and system for detecting oversized image target oriented to embedded deep learning
CN111209864B (en) * 2020-01-07 2023-05-26 上海交通大学 Power equipment target identification method
CN111260629A (en) * 2020-01-16 2020-06-09 成都地铁运营有限公司 Pantograph structure abnormity detection algorithm based on image processing
CN111259980B (en) * 2020-02-10 2023-10-03 北京小马慧行科技有限公司 Method and device for processing annotation data
CN111598947B (en) * 2020-04-03 2024-02-20 上海嘉奥信息科技发展有限公司 Method and system for automatically identifying patient position by identification features
CN113516611B (en) * 2020-04-09 2024-01-30 合肥美亚光电技术股份有限公司 Method and device for determining abnormal material removing area, material sorting method and equipment
CN113538450B (en) * 2020-04-21 2023-07-21 百度在线网络技术(北京)有限公司 Method and device for generating image
CN111507995B (en) * 2020-04-30 2023-05-23 柳州智视科技有限公司 Image segmentation method based on color image pyramid and color channel classification
CN111523613B (en) * 2020-05-09 2023-03-24 黄河勘测规划设计研究院有限公司 Image analysis anti-interference method under complex environment of hydraulic engineering
CN111626230B (en) * 2020-05-29 2023-04-14 合肥工业大学 Vehicle logo identification method and system based on feature enhancement
CN111724351B (en) * 2020-05-30 2023-05-02 上海健康医学院 Helium bubble electron microscope image statistical analysis method based on machine learning
CN111753794B (en) * 2020-06-30 2024-02-27 创新奇智(成都)科技有限公司 Fruit quality classification method, device, electronic equipment and readable storage medium
CN114199262A (en) * 2020-08-28 2022-03-18 阿里巴巴集团控股有限公司 Method for training position recognition model, position recognition method and related equipment
CN112053399B (en) * 2020-09-04 2024-02-09 厦门大学 Method for positioning digestive tract organs in capsule endoscope video
CN112102288B (en) * 2020-09-15 2023-11-07 应急管理部大数据中心 Water body identification and water body change detection method, device, equipment and medium
CN112241466A (en) * 2020-09-22 2021-01-19 天津永兴泰科技股份有限公司 Wild animal protection law recommendation system based on animal identification map
CN112241956B (en) * 2020-11-03 2023-04-07 甘肃省地震局(中国地震局兰州地震研究所) PolSAR image ridge line extraction method based on region growing method and variation function
CN112232286A (en) * 2020-11-05 2021-01-15 浙江点辰航空科技有限公司 Unmanned aerial vehicle image recognition system and unmanned aerial vehicle are patrolled and examined to road
CN113409352B (en) * 2020-11-19 2024-03-15 西安工业大学 Method, device, equipment and storage medium for detecting weak and small target of single-frame infrared image
CN112488118B (en) * 2020-12-18 2023-08-08 哈尔滨工业大学(深圳) Target detection method and related device
CN112668441B (en) * 2020-12-24 2022-09-23 中国电子科技集团公司第二十八研究所 Satellite remote sensing image airplane target identification method combined with priori knowledge
CN112750136B (en) * 2020-12-30 2023-12-05 深圳英集芯科技股份有限公司 Image processing method and system
CN113033400B (en) * 2021-03-25 2024-01-19 新东方教育科技集团有限公司 Method and device for identifying mathematical formulas, storage medium and electronic equipment
CN113221917B (en) * 2021-05-13 2024-03-19 南京航空航天大学 Monocular vision double-layer quadrilateral structure cooperative target extraction method under insufficient illumination
CN114037650B (en) * 2021-05-17 2024-03-19 西北工业大学 Ground target visible light damage image processing method for change detection and target detection
CN113420668B (en) * 2021-06-21 2024-01-12 西北工业大学 Underwater target identification method based on two-dimensional multi-scale permutation entropy
CN113298702B (en) * 2021-06-23 2023-08-04 重庆科技学院 Reordering and segmentation method based on large-size image pixel points
CN113689455B (en) * 2021-07-01 2023-10-20 上海交通大学 Thermal fluid image processing method, system, terminal and medium
CN113469980B (en) * 2021-07-09 2023-11-21 连云港远洋流体装卸设备有限公司 Flange identification method based on image processing
CN113591674B (en) * 2021-07-28 2023-09-22 桂林电子科技大学 Edge environment behavior recognition system for real-time video stream
CN113588663B (en) * 2021-08-03 2024-01-23 上海圭目机器人有限公司 Pipeline defect identification and information extraction method
CN113688829B (en) * 2021-08-05 2024-02-20 南京国电南自电网自动化有限公司 Automatic identification method and system for monitoring picture of transformer substation
CN113610830B (en) * 2021-08-18 2023-12-29 常州领创电气科技有限公司 Detection system and method for lightning arrester
CN113776408B (en) * 2021-09-13 2022-09-13 北京邮电大学 Reading method for gate opening ruler
CN113900750B (en) * 2021-09-26 2024-02-23 珠海豹好玩科技有限公司 Method and device for determining window interface boundary, storage medium and electronic equipment
CN114067122B (en) * 2022-01-18 2022-04-08 深圳市绿洲光生物技术有限公司 Two-stage binarization image processing method
CN114821030B (en) * 2022-04-11 2023-04-04 苏州振旺光电有限公司 Planet image processing method, system and device
CN115601385B (en) * 2022-04-12 2023-05-05 北京航空航天大学 Bubble morphology processing method, device and medium
CN114871120B (en) * 2022-05-26 2023-11-07 江苏省徐州医药高等职业学校 Medicine determining and sorting method and device based on image data processing
CN114998887B (en) * 2022-08-08 2022-10-11 山东精惠计量检测有限公司 Intelligent identification method for electric energy meter
CN116012283B (en) * 2022-09-28 2023-10-13 逸超医疗科技(北京)有限公司 Full-automatic ultrasonic image measurement method, equipment and storage medium
CN115690693B (en) * 2022-12-13 2023-03-21 山东鲁旺机械设备有限公司 Intelligent monitoring system and monitoring method for construction hanging basket
CN116311543B (en) * 2023-02-03 2024-03-08 汇金智融(深圳)科技有限公司 Handwriting analysis method and system based on image recognition technology
CN116740332B (en) * 2023-06-01 2024-04-02 南京航空航天大学 Method for positioning center and measuring angle of space target component on satellite based on region detection
CN116403094B (en) * 2023-06-08 2023-08-22 成都菁蓉联创科技有限公司 Embedded image recognition method and system
CN116758024B (en) * 2023-06-13 2024-02-23 山东省农业科学院 Peanut seed direction identification method
CN116740070B (en) * 2023-08-15 2023-10-24 青岛宇通管业有限公司 Plastic pipeline appearance defect detection method based on machine vision
CN116740579B (en) * 2023-08-15 2023-10-20 兰陵县城市规划设计室 Intelligent collection method for territorial space planning data
CN116758578B (en) * 2023-08-18 2023-11-07 上海楷领科技有限公司 Mechanical drawing information extraction method, device, system and storage medium


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7092573B2 (en) * 2001-12-10 2006-08-15 Eastman Kodak Company Method and system for selectively applying enhancement to an image
CN101699469A (en) * 2009-11-09 2010-04-28 南京邮电大学 Method for automatically identifying action of writing on blackboard of teacher in class video recording
CN101777122B (en) * 2010-03-02 2012-01-04 中国海洋大学 Chaetoceros microscopic image cell target extraction method
CN102375982B (en) * 2011-10-18 2013-01-02 华中科技大学 Multi-character characteristic fused license plate positioning method
CN102663406A (en) * 2012-04-12 2012-09-12 中国海洋大学 Automatic chaetoceros and non-chaetoceros sorting method based on microscopic images
CN103049763B (en) * 2012-12-07 2015-07-01 华中科技大学 Context-constraint-based target identification method
CN104077777B (en) * 2014-07-04 2017-01-11 中国科学院大学 Sea surface vessel target detection method
KR101601564B1 (en) * 2014-12-30 2016-03-09 가톨릭대학교 산학협력단 Face detection method using circle blocking of face and apparatus thereof
CN105868708B (en) * 2016-03-28 2019-09-20 锐捷网络股份有限公司 A kind of images steganalysis method and device
CN106846339A (en) * 2017-02-13 2017-06-13 广州视源电子科技股份有限公司 A kind of image detecting method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036239A (en) * 2014-05-29 2014-09-10 西安电子科技大学 Fast high-resolution SAR (synthetic aperture radar) image ship detection method based on feature fusion and clustering
CN105117706A (en) * 2015-08-28 2015-12-02 小米科技有限责任公司 Image processing method and apparatus and character recognition method and apparatus
CN105261049A (en) * 2015-09-15 2016-01-20 重庆飞洲光电技术研究院 Quick detection method of image connection area
CN106250901A (en) * 2016-03-14 2016-12-21 上海创和亿电子科技发展有限公司 A kind of digit recognition method based on image feature information
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN106875404A (en) * 2017-01-18 2017-06-20 宁波摩视光电科技有限公司 The intelligent identification Method of epithelial cell in a kind of leukorrhea micro-image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Moving Region Detection Based on Background Difference; Xu Yang et al.; 2014 IEEE Workshop on Electronics, Computer and Applications; 2014-12-31; pp. 518-521
Automatic Extraction Method of Moving Targets in Sequence Images; 王阿妮 et al.; 《光子学报》 (Acta Photonica Sinica); 2010-03-31; Vol. 39, No. 3; pp. 565-570

Also Published As

Publication number Publication date
CN110334706B (en) 2021-06-01
WO2019000653A1 (en) 2019-01-03
CN107330465A (en) 2017-11-07
CN110334706A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN107330465B (en) Image target recognition method and device
Jumb et al. Color image segmentation using K-means clustering and Otsu’s adaptive thresholding
CN105740945B (en) A kind of people counting method based on video analysis
CN110097034A (en) A kind of identification and appraisal procedure of Intelligent human-face health degree
Savkare et al. Automatic system for classification of erythrocytes infected with malaria and identification of parasite's life stage
CN106446952A (en) Method and apparatus for recognizing score image
CN104794708A (en) Atherosclerosis plaque composition dividing method based on multi-feature learning
Zhou et al. Leukocyte image segmentation based on adaptive histogram thresholding and contour detection
CN109636824A (en) A kind of multiple target method of counting based on image recognition technology
CN104021384B (en) A kind of face identification method and device
CN112734741B (en) Image processing method and system for pneumonia CT image
CN108257124A (en) A kind of white blood cell count(WBC) method and system based on image
CN106127735A (en) A kind of facilities vegetable edge clear class blade face scab dividing method and device
CN110458792A (en) Method and device for evaluating quality of face image
CN106529377A (en) Age estimating method, age estimating device and age estimating system based on image
CN110728185A (en) Detection method for judging existence of handheld mobile phone conversation behavior of driver
CN107392105B (en) Expression recognition method based on reverse collaborative salient region features
CN104361339B (en) Slap shape Graph Extraction and recognition methods
CN111783885A (en) Millimeter wave image quality classification model construction method based on local enhancement
Zabihi et al. Vessel extraction of conjunctival images using LBPs and ANFIS
CN109409347A (en) A method of based on facial features localization fatigue driving
CN107341487A (en) A kind of detection method and system for smearing character
CN107563287B (en) Face recognition method and device
Wang et al. Hand vein images enhancement based on local gray-level information histogram
CN106372647A (en) Image texture classification method based on Weber local binary counting

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant