CN108764325A - Image recognition method and apparatus, computer device, and storage medium - Google Patents

Image recognition method and apparatus, computer device, and storage medium Download PDF

Info

Publication number
CN108764325A
CN108764325A (application CN201810502263.0A; granted as CN108764325B)
Authority
CN
China
Prior art keywords
probability
image
region
pixel point
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810502263.0A
Other languages
Chinese (zh)
Other versions
CN108764325B (en)
Inventor
陈炳文 (Chen Bingwen)
王翔 (Wang Xiang)
周斌 (Zhou Bin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810502263.0A priority Critical patent/CN108764325B/en
Publication of CN108764325A publication Critical patent/CN108764325A/en
Application granted granted Critical
Publication of CN108764325B publication Critical patent/CN108764325B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an image recognition method and apparatus, a computer device, and a storage medium. The method includes: obtaining a current image in which a target is to be identified; obtaining a current pixel point from the current image, and determining a corresponding reference background region according to the position of the current pixel point; calculating a background similarity corresponding to the current pixel point according to the reference background region; processing image features corresponding to the current pixel point with a trained image probability recognition model to obtain a first probability corresponding to the current pixel point, the first probability being the probability that the current pixel point belongs to a target object pixel point; and, after the background similarity and first probability corresponding to each pixel point in the current image are calculated, identifying the image region where the target object is located in the current image according to the background similarity and first probability corresponding to each pixel point. The method achieves high image recognition accuracy.

Description

Image recognition method and apparatus, computer device, and storage medium
Technical field
The present invention relates to the field of image processing, and more particularly to an image recognition method and apparatus, a computer device, and a storage medium.
Background technology
With the development of science and technology, images carry more and more information. To obtain the content of an image, the image content needs to be identified; for example, when a surveillance image is analyzed, the monitored target object needs to be identified from the image.
At present, when a target object in an image needs to be identified, the region whose gray values exceed a certain threshold is often taken as the region where the target is located, on the assumption that the gray values of the image background are smaller. However, the gray value of the background may also be large, or the gray value of the target may be small, so a method that identifies the target object in an image solely from gray values is inaccurate.
Summary of the invention
In view of the above, it is necessary to address the above problem by providing an image recognition method and apparatus, a computer device, and a storage medium. Because the background similarity reflects whether a pixel point is background, while the probability obtained with the model directly reflects whether the pixel point is the target, combining each pixel point's background similarity with its probability of belonging to a target object pixel point to identify the image region where the target object is located yields high image recognition accuracy.
An image recognition method, the method including: obtaining a current image in which a target is to be identified; obtaining a current pixel point from the current image, and determining a corresponding reference background region according to the position of the current pixel point; calculating a background similarity corresponding to the current pixel point according to the reference background region; processing image features corresponding to the current pixel point with a trained image probability recognition model to obtain a first probability corresponding to the current pixel point, the first probability being the probability that the current pixel point belongs to a target object pixel point; and, after calculating the background similarity and first probability corresponding to each pixel point in the current image, identifying the image region where the target object is located in the current image according to the background similarity and first probability corresponding to each pixel point.
An image recognition apparatus, the apparatus including: a current image obtaining module, configured to obtain a current image in which a target is to be identified; a background region determining module, configured to obtain a current pixel point from the current image and determine a corresponding reference background region according to the position of the current pixel point; a similarity calculating module, configured to calculate a background similarity corresponding to the current pixel point according to the reference background region; a first probability obtaining module, configured to process image features corresponding to the current pixel point with a trained image probability recognition model to obtain a first probability corresponding to the current pixel point, the first probability being the probability that the current pixel point belongs to a target object pixel point; and a target region identifying module, configured to calculate the background similarity and first probability corresponding to each pixel point in the current image, and identify the image region where the target object is located in the current image according to the background similarity and first probability corresponding to each pixel point.
In one of the embodiments, the background region determining module includes: a first region obtaining unit, configured to obtain a first region and a second region on the current image according to the position of the current pixel point, where the second region is a sub-region of the first region and the current pixel point is located inside the second region; and a first region determining unit, configured to take the non-overlapping image region between the first region and the second region as the reference background region.
In one of the embodiments, the apparatus further includes: a training region obtaining module, configured to obtain a training image and obtain a training region corresponding to the target object in the training image; a training feature obtaining module, configured to obtain training image features corresponding to each pixel point in the training region; and a training module, configured to perform model training according to the training image features to obtain a feature mapping function that maps the training image features into a minimal feature space, together with the central value of that feature space.
In one of the embodiments, the apparatus further includes: a second region obtaining unit, configured to obtain a third region and a fourth region on the current image according to the position of the current pixel point, where the fourth region is a sub-region of the third region and the current pixel point is located inside the fourth region; a second region determining unit, configured to obtain the non-overlapping image region between the third region and the fourth region; a first statistics unit, configured to compute statistics over the gray values of the pixel points corresponding to the non-overlapping image region to obtain a first statistical result, and to compute statistics over the gray values of the pixel points corresponding to the fourth region to obtain a second statistical result; and a contrast feature obtaining unit, configured to obtain a contrast feature according to the first statistical result and the second statistical result.
In one of the embodiments, the target region identifying module includes: a second probability obtaining unit, configured to obtain a second probability corresponding to the current pixel point according to the background similarity, where the second probability is the probability that the current pixel point belongs to a target object pixel point and is negatively correlated with the background similarity; a target probability obtaining unit, configured to determine a current target probability corresponding to the current pixel point according to the first probability and the second probability, the current target probability being the probability that the current pixel point belongs to a target object pixel point; and a target region identifying unit, configured to identify the image region where the target object is located in the current image according to the target probability corresponding to each pixel point in the current image.
In one of the embodiments, the target region identifying unit is configured to: obtain first pixel points in the current image whose target probabilities exceed a first threshold; obtain the distribution characteristic of the target probabilities corresponding to the first pixel points; obtain a second threshold according to the distribution characteristic; and take the region formed by the first pixel points in the current image whose target probabilities exceed the second threshold as the image region where the target object is located in the current image.
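This two-stage thresholding can be sketched as follows. This is a minimal sketch under one loud assumption: the "distribution characteristic" used to derive the second threshold is taken here to be the mean of the first-pass probabilities, a choice the text leaves open; the probability values and the first threshold of 0.5 are invented for illustration.

```python
import numpy as np

def two_stage_threshold(target_probs, first_threshold=0.5):
    """Derive a second threshold from the distribution of first-pass probabilities."""
    first_pass = target_probs[target_probs > first_threshold]
    if first_pass.size == 0:
        return np.zeros_like(target_probs, dtype=bool)
    # Assumed distribution characteristic: the mean of the retained probabilities.
    second_threshold = first_pass.mean()
    return target_probs > second_threshold

probs = np.array([0.2, 0.55, 0.6, 0.9, 0.95])
print(two_stage_threshold(probs))
```

With these illustrative values the second threshold tightens to the mean of the four first-pass probabilities, so only the two strongest pixel points survive the second pass.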
A computer device, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the above image recognition method.
A computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to perform the steps of the above image recognition method.
With the above image recognition method and apparatus, computer device, and storage medium, a current image in which a target is to be identified is obtained; a current pixel point is obtained from the current image, and a corresponding reference background region is determined according to the position of the current pixel point; a background similarity corresponding to the current pixel point is calculated according to the reference background region; image features corresponding to the current pixel point are processed with a trained image probability recognition model to obtain a first probability corresponding to the current pixel point, the first probability being the probability that the current pixel point belongs to a target object pixel point; and, after the background similarity and first probability corresponding to each pixel point in the current image are calculated, the image region where the target object is located in the current image is identified according to them. Because the background similarity reflects whether a pixel point is background, while the model-derived probability directly reflects whether the pixel point is the target, combining the two to identify the image region where the target object is located yields high image recognition accuracy.
Description of the drawings
Fig. 1 is a diagram of an application environment of the image recognition method provided in one embodiment;
Fig. 2 is a flowchart of the image recognition method in one embodiment;
Fig. 3A is a schematic diagram of the first region and the second region in one embodiment;
Fig. 3B is a schematic diagram of the first region and the second region in one embodiment;
Fig. 4 is a flowchart of calculating the background similarity corresponding to the current pixel point according to the reference background region in one embodiment;
Fig. 5 is a schematic diagram of dividing the reference background region into multiple sub-regions in one embodiment;
Fig. 6 is a flowchart of processing the image features corresponding to the current pixel point with the trained image probability recognition model to obtain the first probability corresponding to the current pixel point in one embodiment;
Fig. 7A is a flowchart of obtaining the image probability recognition model in one embodiment;
Fig. 7B is a schematic diagram of the region corresponding to the target object in a training image in one embodiment;
Fig. 8 is a flowchart of obtaining the contrast feature corresponding to the current pixel point in one embodiment;
Fig. 9 is a flowchart of obtaining the gray gradient feature corresponding to the current pixel point in one embodiment;
Fig. 10 is a flowchart of identifying the image region where the target object is located in the current image according to the background similarity and first probability corresponding to each pixel point in one embodiment;
Fig. 11 is a flowchart of identifying the image region where the target object is located in the current image according to the target probability corresponding to each pixel point in one embodiment;
Fig. 12 is a schematic diagram of the target probabilities corresponding to the pixel points of the current image in one embodiment;
Fig. 13 is a schematic diagram of an image detection result in one embodiment;
Fig. 14 is a structural block diagram of the image recognition apparatus in one embodiment;
Fig. 15 is a structural block diagram of the background region determining module in one embodiment;
Fig. 16 is a structural block diagram of the target region identifying module in one embodiment;
Fig. 17 is a block diagram of the internal structure of a computer device in one embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the present invention and are not intended to limit it.
Fig. 1 is a diagram of an application environment of the image recognition method provided in one embodiment. As shown in Fig. 1, the application environment includes a camera device 110 and a computer device 120. After the camera device 110 captures a current image, the image is sent to the computer device 120; the computer device 120 obtains the current image in which a target is to be identified and performs the image recognition method provided in the embodiments of the present invention, identifying the image region where the target object is located in the current image.
In one embodiment, after the image region where the target object is located in the current image is obtained, the computer device 120 may display the current image and mark the image region where the target object is located. For example, an arrow may be added to the current image, the region pointed to by the arrow being the image region where the target object is located.
In one embodiment, after the image region where the target object is located in the current image is obtained, the gray values of the pixel points of that image region may be set to 1 and the gray values of the pixel points of the current image outside that region set to 0, producing a binary image corresponding to the current image.
In one embodiment, the current image is an infrared image, and the camera device 110 may be an infrared camera.
In one embodiment, the computer device 120 may be an independent physical server or terminal, a server cluster composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud servers, cloud databases, cloud storage, and CDN.
It should be noted that the above application scenario is merely an example and should not be regarded as limiting the present invention; in practical applications there may be other application scenarios. For example, the computer device 120 may obtain the current image from local storage or from another device. Alternatively, the computer device 120 and the camera device 110 may be the same device.
As shown in Fig. 2, an image recognition method is proposed in one embodiment. This embodiment is mainly described by taking the method as applied to the computer device 120 in Fig. 1 above. The method may specifically include the following steps:
Step S202: obtain a current image in which a target is to be identified.
Specifically, the current image is the image in which the target object currently needs to be identified. The current image may be transferred to the computer device by the camera device in real time or periodically, or it may be an image stored locally on the computer device. The computer device may also obtain the current image of the target to be identified in response to an image recognition request. For example, when a user needs to identify the target object in an image, the user may send an image recognition request carrying the image or an image identifier; after receiving the request, the computer device obtains the current image according to the request. The target object is the object to be identified; for example, it may be a face, a cat, or an aircraft, and may be set as required.
Step S204: obtain a current pixel point from the current image, and determine a corresponding reference background region according to the position of the current pixel point.
Specifically, a pixel point is the smallest image unit in an image represented by a digital sequence. The current image includes multiple pixel points (pixels), and image recognition on the current image is carried out pixel point by pixel point. The current pixel point is the pixel point currently obtained. Each pixel point of the current image may be taken in turn as the current pixel point and steps S204 to S208 performed for it; alternatively, multiple pixel points may be taken as current pixel points simultaneously and steps S204 to S208 performed for each of them. The reference background region is the image region corresponding to the background, determined according to the position of the current pixel point. For example, the region formed by the pixel points adjacent to the current pixel point may serve as the reference background region, or the region within a preset range of the current pixel point may serve as the reference background region.
In one embodiment, determining the corresponding reference background region according to the position of the current pixel point includes: obtaining a first region and a second region on the current image according to the position of the current pixel point, and taking the non-overlapping image region between the first region and the second region as the reference background region, where the second region is a sub-region of the first region and the current pixel point is located inside the second region.
Specifically, the sizes of the first region and the second region may be set as required, and both may be sub-regions of the current image. For example, the first region may include 7*7 pixel points and the second region may include 3*3 pixel points. That the second region is a sub-region of the first region means the second region belongs to the first region. It can be understood that, since the current pixel point is located in the second region and the second region is a sub-region of the first region, the current pixel point is also located in the first region. After the first region and the second region are obtained, the part of the first region outside the second region, i.e., the non-overlapping region between the first region and the second region, is taken as the reference background region. In the embodiments of the present invention, because the pixel points surrounding the current pixel point may themselves be pixel points corresponding to the target object, taking the non-overlapping image region between the first region and the second region as the reference background region improves the accuracy of the selected reference background region.
As shown in Fig. 3A, suppose each grid cell in Fig. 3A represents a pixel point and Pij denotes the pixel point in row i and column j, where P44 is the current pixel point. P32, P33, P34, P42, P43, P44, P52, P53, and P54 form the second region, i.e., the hatched region in Fig. 3A, and the first region is the region formed by all the pixel points in Fig. 3A. The non-overlapping region, i.e., the reference background region, is then the region in Fig. 3A other than the hatched region.
In one embodiment, at least one of the first region and the second region takes the current pixel point as its center of symmetry. For example, when the first region is rectangular, the center of symmetry is the intersection of its diagonals. As shown in Fig. 3B, P44 is the current pixel point, the second region is the hatched region, and the reference background region is the region outside the hatched region in Fig. 3B.
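The construction of the reference background region above can be sketched as follows. This is a minimal sketch assuming the illustrative sizes named earlier — a 7*7 first region and a 3*3 second region, both centered on the current pixel point as in Fig. 3B; the helper name is invented for the example.

```python
import numpy as np

def reference_background_mask(first_size=7, second_size=3):
    """True marks pixels of the first region that form the reference background."""
    mask = np.ones((first_size, first_size), dtype=bool)
    lo = (first_size - second_size) // 2
    # Carve out the centered second region (the current pixel's neighborhood).
    mask[lo:lo + second_size, lo:lo + second_size] = False
    return mask

mask = reference_background_mask()
print(mask.sum())  # -> 40 (49 pixels in the first region minus the 9 of the second)
```

In practice this mask would be slid over the image so that, for each current pixel point, only the surrounding ring of 40 pixels contributes to the background statistics.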
Step S206: calculate the background similarity corresponding to the current pixel point according to the reference background region.
Specifically, the background similarity indicates how similar the current pixel point is to the background: the greater the similarity, the more likely the current pixel point is background. After the reference background region is obtained, each of its pixel points may be taken as a target pixel point and the similarity between each such pixel point and the current pixel point calculated; alternatively, some pixel points may be selected from the reference background region as target pixel points and their similarities to the current pixel point calculated. After the similarities between the target pixel points and the current pixel point are obtained, the background similarity is derived from the calculated similarities; for example, the mean, maximum, minimum, or median of the calculated similarities may be taken as the background similarity. The similarity calculation method may be set as required; for example, pixel features corresponding to the pixel points may be obtained and the similarity between the pixel features calculated. The pixel features may be one or more of color features, texture features, and gray features.
In one embodiment, the similarity may be calculated from the gray values of the pixel points. For example, the gray value of a pixel point may be obtained and inverted to obtain a complementary gray value, and the gray value and complementary gray value of the pixel point may then form a gray value vector. The similarity between the vectors is then calculated to obtain the similarity between the two pixel points. The similarity between vectors may be calculated using a cosine similarity algorithm or a Euclidean distance similarity algorithm.
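The gray value vector comparison just described can be sketched as follows, using the cosine similarity option. One assumption is made loudly: the inversion is taken as the 8-bit complement `255 - g`, which the text implies but does not fix.

```python
import math

def gray_similarity(g1, g2, max_gray=255):
    """Cosine similarity between the [gray, complementary-gray] vectors of two pixels."""
    v1 = (g1, max_gray - g1)  # gray value and its inverted (complementary) value
    v2 = (g2, max_gray - g2)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return dot / (math.hypot(*v1) * math.hypot(*v2))

print(round(gray_similarity(100, 100), 3))  # -> 1.0 (identical gray values)
print(round(gray_similarity(0, 255), 3))    # -> 0.0 (opposite gray values)
```

Forming the two-component vector rather than comparing raw scalars gives the cosine measure something to distinguish: identical grays score 1, while a pure black pixel and a pure white pixel score 0.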
Step S208: process the image features corresponding to the current pixel point with the trained image probability recognition model to obtain a first probability corresponding to the current pixel point, the first probability being the probability that the current pixel point belongs to a target object pixel point.
Specifically, image features represent properties of the image, such as one or more of the contrast feature, color features, or gray features corresponding to a pixel point, and may be chosen as required. A target object pixel point is a pixel point corresponding to the target object. Before the image features are processed with the trained image probability recognition model, model training with training data is needed to determine the model parameters, establishing a mapping from image features to the probability that a pixel point belongs to a target object pixel point. The model training method may be supervised or unsupervised. In supervised training, whether the pixel points in the training data are target object pixel points is known; a supervised model may be, for example, a support vector machine or a deep neural learning model. In unsupervised training, whether the pixel points in the training data are target object pixel points may be unknown; an unsupervised model may be, for example, a clustering algorithm.
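As a hedged illustration of the supervised feature-to-probability mapping, the sketch below trains a toy logistic regression on synthetic two-dimensional pixel features. The text names support vector machines or deep neural models, so logistic regression is only a stand-in, and the feature values, cluster centers, and function names are all invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Gradient-descent logistic regression: pixel features -> P(target pixel)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(0)
bg = rng.normal(0.2, 0.1, size=(100, 2))  # synthetic background feature vectors
tg = rng.normal(0.8, 0.1, size=(100, 2))  # synthetic target-object feature vectors
X = np.vstack([bg, tg])
y = np.concatenate([np.zeros(100), np.ones(100)])
w, b = train_logistic(X, y)

# The "first probability" for a pixel whose features resemble the target cluster.
first_prob = float(sigmoid(np.array([0.8, 0.8]) @ w + b))
print(first_prob > 0.5)  # True: features near the target cluster
```

Any model that outputs a calibrated probability per pixel point would slot into the method at this step; the training labels here play the role of the known target/background annotations described for supervised training.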
Step S210: after the background similarity and first probability corresponding to each pixel point in the current image are calculated, identify the image region where the target object is located in the current image according to the background similarity and first probability corresponding to each pixel point.
Specifically, the background similarity and first probability corresponding to each pixel point in the current image are calculated according to steps S202 to S208, and the image region where the target object is located in the current image is identified by combining the background similarity and first probability corresponding to each pixel point.
In one embodiment, pixel points whose background similarity is less than a preset similarity and whose first probability is greater than a preset probability may be taken as target object pixel points.
In one embodiment, the number of pixel points of the image region corresponding to the target object may also be set, and the image region where the target object is located in the current image obtained by combining this number. For example, when the preset number of pixel points of the image region where the target object is located is 8, the 10 pixel points with the lowest background similarities may be obtained, the first 8 of those 10 when ranked by first probability taken as target object pixel points, and the region formed by the target object pixel points taken as the image region where the target object is located.
In one embodiment, the positions of the pixel points may also be combined to obtain the image region where the target object is located in the current image. Pixel points whose background similarity is less than the preset similarity and whose first probability is greater than the preset probability may be selected, the positions of the selected pixel points obtained, and the contiguous image region formed by the selected pixel points taken as the image region where the target object is located in the current image.
In one embodiment, a second probability may also be obtained according to the background similarity, the second probability being the probability that the current pixel point belongs to a target object pixel point. The first probability and the second probability are then multiplied to obtain a target probability, and the pixel points whose target probability is greater than a preset value are taken as the target object pixel points in the current image, yielding the image region where the target object is located.
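The fusion in this embodiment can be sketched as follows. Two assumptions are made explicitly: the second probability is taken as `1 - similarity` (any monotonically decreasing mapping satisfies the negative correlation stated elsewhere in the text), and the preset value of 0.5 is illustrative.

```python
import numpy as np

def target_mask(first_prob, bg_similarity, threshold=0.5):
    """Fuse model probability and background similarity into a target mask."""
    second_prob = 1.0 - bg_similarity       # negatively correlated with similarity
    target_prob = first_prob * second_prob  # combined target probability
    return target_prob > threshold

first_prob = np.array([0.9, 0.95, 0.3, 0.8])
bg_sim = np.array([0.1, 0.9, 0.2, 0.2])
print(target_mask(first_prob, bg_sim))  # -> [ True False False  True]
```

Note how the second pixel is rejected despite a high model probability (0.95): its background similarity of 0.9 suppresses the product, which is exactly the complementary check the method relies on.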
In one embodiment, the gray values of the pixel points determined to be target object pixel points in the current image may be set to 1 and the gray values of the other pixel points set to 0, and the binary image corresponding to the current image displayed.
With the above image recognition method, a current image in which a target is to be identified is obtained; a current pixel point is obtained from the current image and a corresponding reference background region is determined according to the position of the current pixel point; a background similarity corresponding to the current pixel point is calculated according to the reference background region; the image features corresponding to the current pixel point are processed with the trained image probability recognition model to obtain a first probability corresponding to the current pixel point, the first probability being the probability that the current pixel point belongs to a target object pixel point; and, after the background similarity and first probability corresponding to each pixel point in the current image are calculated, the image region where the target object is located in the current image is identified according to them. Because the background similarity reflects whether a pixel point is background, while the model-derived probability directly reflects whether the pixel point is the target, combining each pixel point's background similarity with its probability of belonging to a target object pixel point yields high image recognition accuracy.
The method provided in the embodiments of the present invention may be applied to target object recognition in infrared images. An infrared image is an image obtained by infrared imaging; in environments with insufficient light and poor contrast, imaging methods based on infrared detection can acquire images without depending on illumination. The target object in an infrared image is generally small: the image region corresponding to the target object usually measures between 1×1 and 6×6 pixels and is often buried in a complex background. Combined with factors such as the inhomogeneity of atmospheric thermal radiation, atmospheric attenuation under different meteorological conditions, and the internal noise of the infrared detector, the gray levels of infrared images change violently, differing greatly from the slowly changing gray levels of traditional visible-light images. Traditional image recognition methods therefore identify the target object poorly. The method provided in the embodiments of the present invention obtains the background similarity from the similarity between the current pixel point and the pixel points in the reference background region, uses the image recognition model to determine the probability that a pixel point is a target object pixel point, and combines the two approaches to decide from two complementary angles, one positive and one negative, whether a pixel point is a target object pixel point, so the image recognition effect is good.
In one embodiment, as shown in Fig. 4, step S206, computing the background similarity of the current pixel point according to the reference background region, may comprise the following steps:
Step S402: obtain target pixel points from the reference background region, and obtain the gray values of the target pixel points and the current pixel point.
Specifically, there may be one or more target pixel points. The target pixel points may be all of the pixel points in the reference background region, or they may be obtained according to a preset pixel screening rule, for example, the pixel point in the reference background region whose gray value is the median of the gray values of the pixel points in the reference background region.
In one embodiment, the reference background region may also be divided into multiple subregions, and a target pixel point obtained from each subregion, for example, the pixel point whose gray value is the median of the gray values in that subregion. As shown in Fig. 5, the pixel points crossed by the horizontal line segment Q1 through the central point of the first region form the first subregion; the pixel points crossed by the vertical line segment Q2 through the central point form the second subregion; and the pixel points crossed by the diagonals Q3 and Q4 of the first region form the third and fourth subregions, respectively. The median of the gray values of the pixel points in each subregion is then obtained, and the pixel point whose gray value equals the median in each subregion is taken as a target pixel point.
In one embodiment, when there is a second region, it can be understood that the pixel points of each subregion do not include the pixel points in the second region. For example, the fourth subregion corresponding to line segment Q4 may be the region formed by P11, P22, P66 and P77, excluding P33, P44 and P55.
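The subregion-based selection above can be sketched as follows. This is a minimal sketch: the function names are illustrative, and picking the pixel closest to the median is an assumption for subregions with an even pixel count, where the exact median may fall between two gray values.

```python
import statistics

def line_subregions(n, inner):
    """The four line-segment subregions (lists of (row, col) indices) of an
    n x n first region, excluding the central inner x inner second region."""
    c = n // 2
    lo, hi = c - inner // 2, c + inner // 2
    def keep(r, q):
        return not (lo <= r <= hi and lo <= q <= hi)
    horiz = [(c, q) for q in range(n) if keep(c, q)]                 # Q1
    vert = [(r, c) for r in range(n) if keep(r, c)]                  # Q2
    diag = [(r, r) for r in range(n) if keep(r, r)]                  # main diagonal
    anti = [(r, n - 1 - r) for r in range(n) if keep(r, n - 1 - r)]  # anti-diagonal
    return [horiz, vert, diag, anti]

def target_pixels(gray, subregions):
    """Per subregion, pick the pixel whose gray value is closest to the
    subregion's median (a tie-break assumption for even pixel counts)."""
    picked = []
    for sub in subregions:
        med = statistics.median(gray[r][q] for r, q in sub)
        picked.append(min(sub, key=lambda p: abs(gray[p[0]][p[1]] - med)))
    return picked
```

For a 7 × 7 first region with the central 3 × 3 second region excluded, the main diagonal subregion keeps exactly the pixels corresponding to P11, P22, P66 and P77 in the example above (0-indexed (0,0), (1,1), (5,5), (6,6)).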
Step S404: compute a reference gray value from the gray values of the target pixel points, take the complement of the reference gray value to obtain the complementary reference gray value, and form the reference gray value vector from the reference gray value and the complementary reference gray value.
Specifically, the reference gray value may be the median of the gray values of the target pixel points, a normalized gray value of a target pixel point, or the normalized median of the gray values of the target pixel points. For example, when the gray value of a target pixel point is 200, since the gray value range is 0~255, dividing 200 by 255 gives a normalized reference gray value of 0.784. Or, when there are 4 target pixel points with gray values 100, 250, 200 and 210, the median gray value is first obtained as (200 + 210)/2 = 205, and the corresponding reference gray value is 205/255 = 0.804. The complement operation is used to obtain the complement of an image: two complementary gray values add up to the maximum gray value, e.g. 255, or 1 after normalization. For example, when the reference gray value is 0.804, the complementary reference gray value is 1 − 0.804 = 0.196. After the reference gray value and the complementary reference gray value are obtained, they form the reference gray value vector [0.804, 0.196].
Step S406: take the complement of the gray value of the current pixel point to obtain the complementary current gray value, and form the current gray value vector from the gray value of the current pixel point and the complementary current gray value.
Specifically, the gray value of the current pixel point may be normalized or unnormalized. After the gray value of the current pixel point is obtained, its complement is taken to obtain the complementary current gray value, and the two form the current gray value vector. For example, when the gray value of the current pixel point is 0.901, the complementary current gray value is 1 − 0.901 = 0.099, and the current gray value vector is [0.901, 0.099].
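The construction of the two gray value vectors above can be sketched as follows (the function name is illustrative; the normalization follows the worked examples):

```python
import statistics

def gray_vector(gray_value, max_gray=255.0):
    """Normalize a gray value to [0, 1] and pair it with its complement;
    the two components always sum to 1."""
    g = gray_value / max_gray
    return [g, 1.0 - g]

# Reference gray value from the median of the four target pixel gray values.
ref = gray_vector(statistics.median([100, 250, 200, 210]))  # median = 205
# Current gray value, already normalized in the worked example.
cur = gray_vector(0.901, max_gray=1.0)
```

With the worked numbers, `ref` is approximately [0.804, 0.196] and `cur` is [0.901, 0.099].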
Step S408: compute the background similarity of the current pixel point from the reference gray value vector and the current gray value vector.
Specifically, the method of computing the background similarity from the reference gray value vector and the current gray value vector can be set according to actual needs; for example, a cosine similarity algorithm or a Euclidean distance computation may be used.
In one embodiment, when there are multiple reference gray value vectors, the similarity between each reference gray value vector and the current gray value vector may be computed separately, and the background similarity obtained from these similarities, for example as their median, average, maximum or minimum.
In one embodiment, the method for calculating context similarity includes:By reference gray level value vector sum current grayvalue The vector value of same position is compared in vector, obtains the corresponding minimum vector value in each position.Minimum value is combined, Obtain recombination vector, calculate recombination vector field homoemorphism square, according to recombination vector field homoemorphism square, current grayvalue vector field homoemorphism And reference gray level value vector field homoemorphism obtains the corresponding blurred background degree of membership of current pixel point, is obtained according to blurred background degree of membership To context similarity.
Specifically, blurred background degree of membership is for indicating that current pixel point is under the jurisdiction of the degree of background, according to blurred background The algorithm that degree of membership obtains context similarity can be configured as needed.For example, working as there are one blurred background degrees of membership When, it can be using fuzzy membership as context similarity.It, can be by fuzzy membership when blurred background degree of membership is multiple One in median, average value, maximum value and minimum value is used as context similarity.The acquisition methods citing of recombination vector is such as Under, if current grayvalue vector is [0.901,0.099], reference gray level value is vectorial [0.804,0.196], then recombinating vector is [0.804,0.099] assume that target pixel points are 4, then the method for above-mentioned calculating context similarity is indicated with formula (1)~(4) It is as follows:
I = [Ft(x, y), Ft(x, y)^c], Ft(x, y)^c = 1 − Ft(x, y) (1)
Wj = [wj, wj^c], wj^c = 1 − wj (2)
Pj = |I ∧ Wj|² / (|I|·|Wj| + ε) (3)
Bg(x, y) = max{Pj | j = 1...4} (4)
In the above formulas, Ft(x, y) is the gray value of the current pixel point t, Ft(x, y)^c is its complementary current gray value, and I is the current gray value vector. wj is the reference gray value of the j-th target pixel point, wj^c is its complementary reference gray value, and Wj is the reference gray value vector of the j-th target pixel point. Pj is the fuzzy membership between the current pixel point and the j-th target pixel point, and ε is a parameter set to prevent the fuzzy membership from equaling 1, which can be chosen as required. "∧" is the fuzzy intersection operator, whose result takes the minimum of the vector values at the same position in the two vectors. "| |" denotes the modulus of a vector. Bg(x, y) is the background similarity, and max denotes taking the maximum, i.e. the background similarity is the largest Pj.
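A sketch of the fuzzy membership and background similarity computation, assuming ε is added to the product of the two vector moduli in the denominator (the text only states that ε prevents the membership from equaling 1, so its exact placement is an assumption):

```python
import math

def fuzzy_membership(cur_vec, ref_vec, eps=0.01):
    """P_j: squared modulus of the fuzzy intersection (component-wise minimum)
    over the product of the two vector moduli plus eps. Placing eps in the
    denominator is an assumption; it keeps P_j strictly below 1."""
    inter = [min(a, b) for a, b in zip(cur_vec, ref_vec)]  # fuzzy AND: I ^ Wj
    num = sum(v * v for v in inter)                        # |I ^ Wj|^2
    den = math.hypot(*cur_vec) * math.hypot(*ref_vec) + eps
    return num / den

def background_similarity(cur_vec, ref_vecs, eps=0.01):
    """Bg(x, y): maximum of P_j over the target pixel points."""
    return max(fuzzy_membership(cur_vec, r, eps) for r in ref_vecs)
```

With the worked vectors [0.901, 0.099] and [0.804, 0.196], the fuzzy intersection is the recombination vector [0.804, 0.099], and the resulting membership lies strictly between 0 and 1.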
In one embodiment, the image target recognition model is a support vector clustering model. As shown in Fig. 6, step S208, processing the image feature of the current pixel point with the trained image target recognition model to obtain the first probability of the current pixel point, may specifically include:
Step S602: obtain the feature mapping function of the image target recognition model, and obtain the central value of the feature space corresponding to the feature mapping function.
Specifically, the basic idea of the support vector clustering model is: the features input for model training are mapped to a feature space by a feature mapping function, giving feature mapping values; the feature space is the smallest space that can cover the mapped values, and its central value is the central value of the feature mapping values obtained by the feature mapping function. For example, the feature space may be a hypersphere, whose center is the sphere center. During training, because requiring the feature space to completely cover all feature vectors can make the feature space too large, a model training condition can be set, and the feature space obtained when the condition is reached is taken as the minimal feature space satisfying the condition; the training method is described further below. For a trained image target recognition model, the feature mapping function and the central value of the corresponding feature space can therefore be obtained.
Step S604: compute the image feature with the feature mapping function to obtain the mapping value of the image feature.
Specifically, after the feature mapping function is obtained, the image feature is mapped to the feature space by the feature mapping function to obtain the corresponding mapping value.
Step S606: compute the first distance between the mapping value and the central value.
Specifically, after the mapping value is obtained, its distance from the central value in the feature space is computed. Assuming the image feature is s(i), the feature mapping function is Φ and the central value is a, the first distance between the mapping value and the central value can be expressed as |Φ(s(i)) − a|, where "| |" denotes the Euclidean distance.
Step S608: compute the first probability of the current pixel point from the first distance, where the first distance is negatively correlated with the first probability.
Specifically, the first distance being negatively correlated with the first probability means the first probability decreases as the first distance increases; for example, the first probability may be the reciprocal of the first distance.
In one embodiment, the second distance from the center of the feature space to its boundary may also be obtained. Computing the first probability that the current pixel point is a target pixel from the first distance then includes: computing the ratio of the first distance to the second distance, and computing the first probability of the current pixel point from the ratio, where the ratio is negatively correlated with the first probability.
Specifically, the second distance is the distance from the boundary of the feature space to its center; when the feature space is a hypersphere, it is the distance from the sphere center to the sphere surface, i.e. the radius of the sphere. A correspondence between the ratio and the first probability can be set; for example, a ratio of 0~10% may correspond to a first probability of 0.8, and a ratio of 10~20% to a first probability of 0.6.
In one embodiment, the first probability is 1 minus the ratio, expressed by formula (5), where Ht(x, y) is the first probability, Φ(s(i)) is the mapping value of the image feature of the current pixel point, a is the central value, R is the second distance, and "| |" denotes the Euclidean distance:
Ht(x, y) = 1 − |Φ(s(i)) − a| / R (5)
In the embodiments of the present invention, the first distance between the mapping value of the current pixel point and the central value corresponding to the support vector clustering model is computed, and the first distance is negatively correlated with the first probability; that is, the farther a mapping value is from the center of the feature space, the less likely its pixel point belongs to the target object. The probability that the current pixel point is a target object pixel can therefore be accurately quantified, further improving the accuracy of image recognition.
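Under formula (5), the first probability can be computed as in this sketch. It takes an already-mapped feature vector as input, since the feature mapping function Φ is model-specific; clamping the result to [0, 1] for mapping values outside the hypersphere is an assumption.

```python
import math

def first_probability(mapped, center, radius):
    """H_t = 1 - |phi(s(i)) - a| / R. `mapped` is the feature mapping value,
    `center` the central value a, `radius` the second distance R. Clamping
    to [0, 1] for points outside the hypersphere is an assumption."""
    d = math.dist(mapped, center)  # first distance |phi(s(i)) - a|
    return max(0.0, min(1.0, 1.0 - d / radius))
```

A mapping value at the center gives probability 1, one on the boundary gives 0, and the probability falls linearly with the first distance in between.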
Fig. 7 shows the implementation flow of obtaining the image target recognition model in one embodiment, which may specifically include the following steps:
Step S702: obtain a training image, and obtain the training region corresponding to the target object in the training image.
Specifically, the training image is used for model training, and the training region is the image region of the target object in the training image. The training region may be labeled manually; as shown in Fig. 7A, the region enclosed by the rectangular frame in Fig. 7A is the region of the target object obtained by manual identification.
Step S704: obtain the training image feature of each pixel point in the training region.
Specifically, an image feature represents a property of the image, such as one or more of the contrast feature, color feature or gray feature of a pixel point, and can be chosen as needed.
Step S706: perform model training on the training image features to obtain the feature mapping function that maps the training image features to the minimal feature space, and the central value of the feature space.
Specifically, the minimal feature space is determined by the model training condition: a training condition can be set, and the feature space obtained when the condition is reached is taken as the minimal feature space. For the support vector clustering model, the optimization objective of model training can be expressed as:
min R² + C·Σi ξi
s.t. |Φ(x(i)) − a|² ≤ R² + ξi and ξi ≥ 0
where min denotes minimization and s.t. denotes "subject to", i.e. the minimized formula is constrained by the formula after s.t.; R is the radius of the hypersphere, a is the sphere center of the hypersphere, Φ is the feature mapping function, ξi is a slack variable allowing the mapping values of some training samples to lie outside the hypersphere and can be set as needed, n is the number of training samples (the sum runs over i = 1...n), x(i) is an image feature, and C is a penalty function for balancing classification error against the boundary of the hypersphere. The optimization objective can thus be summarized as: under the set penalty function and slack variables, train on the training samples to obtain the feature mapping function and the minimal hypersphere, whose central value is a and radius is R.
In one embodiment, as shown in Fig. 8, the image feature includes a contrast feature, and obtaining the contrast feature of the current pixel point includes:
Step S802: obtain a third region and a fourth region on the current image according to the position of the current pixel point, where the fourth region is a subregion of the third region and the current pixel point is located inside the fourth region.
Specifically, contrast refers to the degree of light-dark difference in an image. The sizes of the third and fourth regions can be set as needed; for example, the third region may include 9 × 9 pixels and the fourth region 3 × 3 pixels. It can be understood that, since the current pixel point is located in the fourth region, which is a subregion of the third region, the current pixel point is also located in the third region.
In one embodiment, the fourth region may be identical to the second region, and the third region identical to the first region.
In one embodiment, at least one of the fourth region and the third region is centered on the current pixel point.
Step S804: obtain the non-overlapping image region between the third region and the fourth region.
Specifically, since the fourth region is a subregion of the third region, the non-overlapping image region between them is the part of the third region outside the fourth region.
Step S806: perform statistics on the gray values of the pixel points in the non-overlapping image region to obtain a first statistical result, and on the gray values of the pixel points in the fourth region to obtain a second statistical result.
Specifically, the first and second statistical results may be sums of gray values or averages of gray values. For example, the gray values of the pixel points of the non-overlapping region may be summed and divided by the number of pixel points in that region, giving the average gray value of the non-overlapping region as the first statistical result. Likewise, the gray values of the pixel points of the fourth region may be summed and divided by the number of pixel points in the fourth region, giving the average gray value of the fourth region as the second statistical result.
Step S808: obtain the contrast feature from the first statistical result and the second statistical result.
Specifically, after the first and second statistical results are obtained, they are combined to obtain the contrast feature. For example, the contrast feature may be the ratio of the first statistical result to the second statistical result, or the difference between the first statistical result and the second statistical result.
In one embodiment, when the statistical results are averages of gray values and the contrast feature is the difference between the first and second statistical results, the contrast feature is computed by formula (6), where lmci is the contrast feature of the i-th current pixel point, nin is the number of pixel points in the fourth region, nout is the number of pixel points in the third region, Ω3 and Ω4 denote the third and fourth regions, and F(x, y) is the gray value of a pixel point:
lmci = (1/(nout − nin))·Σ(x,y)∈Ω3∖Ω4 F(x, y) − (1/nin)·Σ(x,y)∈Ω4 F(x, y) (6)
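The contrast feature can be sketched as follows, with both regions centered on the current pixel point. Smaller 5 × 5 / 3 × 3 regions are used for brevity, and the sign convention follows the text's order: first statistical result minus second statistical result.

```python
def local_contrast(gray, r0, c0, outer, inner):
    """Contrast feature of the pixel at (r0, c0): mean gray of the
    non-overlapping ring between the outer (third) and inner (fourth)
    regions, minus the mean gray of the inner region. The sign convention
    (first statistical result minus second) follows the text."""
    ho, hi = outer // 2, inner // 2
    ring, core = [], []
    for r in range(r0 - ho, r0 + ho + 1):
        for c in range(c0 - ho, c0 + ho + 1):
            if abs(r - r0) <= hi and abs(c - c0) <= hi:
                core.append(gray[r][c])
            else:
                ring.append(gray[r][c])
    return sum(ring) / len(ring) - sum(core) / len(core)
```

For a bright 3 × 3 patch on a dark background the feature is strongly negative under this convention; its magnitude is what separates a small target from flat background.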
In one embodiment, as shown in Fig. 9, the image feature includes a gray gradient feature, and obtaining the gray gradient feature of the current pixel point includes:
Step S902: obtain a third region and a fourth region on the current image according to the position of the current pixel point, where the fourth region is a subregion of the third region and the current pixel point is located inside the fourth region.
Specifically, the gray gradient feature is a feature related to the gray difference between neighboring pixel points. The third and fourth regions can be obtained with reference to step S802, which is not repeated here.
Step S904: obtain the non-overlapping image region between the third region and the fourth region.
Specifically, since the fourth region is a subregion of the third region, the non-overlapping image region between them is the part of the third region outside the fourth region.
Step S906: obtain the first gray difference value between each pixel point in the non-overlapping image region and its neighboring pixel points, and the second gray difference value between each pixel point in the fourth region and its neighboring pixel points.
Specifically, a neighboring pixel point is a pixel point sharing a boundary with the given pixel point; the gray difference values may be computed with all neighboring pixel points or with only some of them. In one embodiment, the gray difference value can be divided into a horizontal gray difference value and a vertical gray difference value, and the gray difference value may be either of the two or the sum of the two. Taking the first gray difference value of pixel point P44 of Fig. 3A as an example: the absolute value of the gray difference between P44 and P45 may be taken as the horizontal gray difference value of P44, the absolute value of the gray difference between P44 and P54 as its vertical gray difference value, and their sum as the first gray difference value of P44. The computation of the gray difference value is formulated as:
Gh(x, y)=| F (x, y)-F (x+1, y) | (7)
Gv(x, y)=| F (x, y)-F (x, y+1) | (8)
G (x, y)=Gh(x,y)+Gv(x,y) (9)
Wherein, Gh(x, y) indicates the gray scale difference value in horizontal direction, Gv(x, y) indicates the gray scale difference value in vertical direction, G (x, y) is the corresponding gray scale difference value of pixel, and F (x, y) refers to the corresponding gray value of pixel (x, y), and x can indicate capable, and y is indicated Row.
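Formulas (7)~(9) can be sketched directly, with the gray image supplied as a lookup function F so the row/column indexing convention stays with the caller:

```python
def gradient(F, x, y):
    """G(x, y) per formulas (7)-(9): absolute gray differences with the
    (x+1, y) and (x, y+1) neighbours, summed. F is a gray-value lookup."""
    gh = abs(F(x, y) - F(x + 1, y))  # formula (7)
    gv = abs(F(x, y) - F(x, y + 1))  # formula (8)
    return gh + gv                   # formula (9)
```

On a linear ramp F(x, y) = 3x + y, the gray difference value is the constant 3 + 1 = 4 everywhere.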
Step S908: perform statistics on the first gray difference values of the pixel points in the non-overlapping image region to obtain a third statistical result, and on the second gray difference values of the pixel points in the fourth region to obtain a fourth statistical result.
Specifically, the third and fourth statistical results may be sums of gray difference values or averages of gray difference values. For example, the first gray difference values of the pixel points of the non-overlapping region may be summed and divided by the number of pixel points in that region, giving the average gray difference value of the non-overlapping region as the third statistical result. Likewise, the second gray difference values of the pixel points of the fourth region may be summed and divided by the number of pixel points in the fourth region, giving the average gray difference value of the fourth region as the fourth statistical result.
Step S910: obtain the gray gradient feature from the third statistical result and the fourth statistical result.
Specifically, after the third and fourth statistical results are obtained, they are combined to obtain the gray gradient feature. For example, the gray gradient feature may be the ratio of the third statistical result to the fourth statistical result, or the difference between the third statistical result and the fourth statistical result. When the gray gradient feature is the difference between the third and fourth statistical results and both are averages of gray difference values, the gray gradient feature is computed by formula (10), where lmgi is the gray gradient feature of the i-th current pixel point, nin is the number of pixel points in the fourth region, nout is the number of pixel points in the third region, Ω3 and Ω4 denote the third and fourth regions, and G(x, y) is the gray difference value:
lmgi = (1/(nout − nin))·Σ(x,y)∈Ω3∖Ω4 G(x, y) − (1/nin)·Σ(x,y)∈Ω4 G(x, y) (10)
In the embodiments of the present invention, when the image feature of the current pixel point is computed, it is computed over the region corresponding to the current pixel point and divided across two regions, so the obtained feature can reflect the environment around the current pixel point, and the resulting first probability is accurate.
In one embodiment, the image feature may include one or both of the contrast feature and the gray gradient feature.
In one embodiment, as shown in Fig. 10, step S210, identifying the image region of the target object in the current image from the background similarity and the first probability of each pixel point, includes:
Step S1002: obtain the second probability of the current pixel point from the background similarity, where the second probability is the probability that the current pixel point belongs to a target object pixel, and the second probability is negatively correlated with the background similarity.
Specifically, the background similarity being negatively correlated with the second probability means the second probability decreases as the background similarity increases. For example, the second probability may be the reciprocal of the background similarity, or a correspondence between ranges of the background similarity and the second probability may be set; e.g. a background similarity of 0~10% corresponds to a second probability of 90%, and 11~20% to a second probability of 80%. Alternatively, the second probability may be expressed by the following formula, where Bg(x, y) is the background similarity and Hb(x, y) is the second probability of pixel point (x, y):
Hb(x, y) = 1 − Bg(x, y) (11)
Step S1004: determine the current target probability of the current pixel point from the first probability and the second probability, where the current target probability is the probability that the current pixel point belongs to a target object pixel.
Specifically, the method of determining the current target probability from the first probability and the second probability can be set as needed. For example, the first probability and the second probability may be multiplied and the product used as the current target probability, or the average of the two used instead; a correspondence between the product and the target probability may also be set. A third probability that the current pixel point belongs to a target object pixel, obtained by another method, may also be combined; for example, the target probability may be the product of the first, second and third probabilities.
Step S1006: identify the image region of the target object in the current image from the target probabilities of the pixel points in the current image.
Specifically, the current image includes multiple pixel points, and the image region of the target object is obtained from their target probabilities. For example, pixel points whose target probability is greater than a preset probability may be taken as target object pixels, and the region they form taken as the image region of the target object.
In one embodiment, the image region of the target object in the current image may also be obtained taking the positional relationships of pixel points into account. For example, for a target object whose region is continuous, a pixel point whose target probability exceeds the preset probability but which exists in isolation, i.e. no surrounding pixel point has a target probability above the preset probability, may not be a target object pixel.
In one embodiment, as shown in Fig. 11, step S1006, identifying the image region of the target object in the current image from the target probabilities of the pixel points, includes:
Step S1102: obtain the first pixel points whose target probability in the current image is greater than a first threshold.
Specifically, the first threshold can be set according to the required recognition accuracy; experiments show that recognition accuracy is high when the first threshold is 0.85.
In one embodiment, a pixel point whose target probability is less than the first threshold may be treated as a background pixel point and its target probability updated to 0.
Step S1104: obtain the distribution feature of the target probabilities of the first pixel points.
Specifically, the distribution feature reflects the distribution of the target probabilities and may include one or more of the maximum, minimum and average of the target probabilities and the proportion or number of target probabilities falling in each numerical range; it can be set as needed. The numerical ranges can be preset, e.g. 0.85~0.88 as the first numerical range and 0.89~0.92 as the second.
Step S1106: obtain a second threshold from the distribution feature.
Specifically, after the distribution feature is obtained, the second threshold is obtained from it. The method of obtaining the second threshold can be set as needed; for example, a threshold segmentation algorithm may be used to compute the second threshold, such as one or more of OTSU (the maximum between-class variance algorithm), iterative variance maximization and the maximum entropy method. When an image segmentation algorithm is used to obtain the second threshold, the gray value features are replaced by the target probabilities of the first pixel points in the computation.
Step S1108: obtain the region formed by the first pixel points whose target probability is greater than the second threshold, as the image region of the target object in the current image.
Specifically, after the second threshold is obtained, the first pixel points whose target probability is greater than the second threshold are taken as target object pixels, and the region they form is taken as the image region of the target object in the current image.
In the embodiments of the present invention, the second threshold is obtained from the distribution feature of the target probabilities, and the image is segmented again according to the second threshold. Since the pixel points are first screened preliminarily with the first threshold, and the target object pixels are then further filtered according to the probability distribution of the pixel points of the specific image, the accuracy of image recognition is further improved.
The method provided by the embodiments of the present invention is illustrated with a specific example, including the following steps:
1. Obtain the current image of the target to be identified, assuming the current image is an image of 7 × 7 pixels, i.e. 49 pixel points in 7 rows and 7 columns.
2, using each pixel as current pixel point, the method provided according to embodiments of the present invention calculates separately each The corresponding context similarity of current pixel point and corresponding first probability.Wherein, when calculating context similarity, first area And the size in third region is 5*5 pixels, the size of second area and the fourth region is 3*3 pixels.
3. Calculate the second probability corresponding to each pixel point according to the background similarity corresponding to each pixel point, where second probability = 1 - background similarity.

4. Obtain the target probability corresponding to each pixel point according to the first probability and the second probability, where the target probability is the product of the first probability and the second probability. As shown in FIG. 12, each grid cell in FIG. 12 represents a pixel point, and the number in the cell is the target probability corresponding to that pixel point.
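Steps 3 and 4 can be sketched in a few lines; the function name is illustrative, and the relations are exactly those stated above (second probability = 1 - background similarity, target probability = first probability x second probability).

```python
def target_probability(first_prob, background_similarity):
    """Target probability of a pixel point: the product of the first
    probability (from the trained image feature recognition model) and the
    second probability, where second probability = 1 - background similarity."""
    second_prob = 1.0 - background_similarity
    return first_prob * second_prob
```

A pixel that resembles its background (high similarity) thus gets a low target probability even when the model assigns it a high first probability.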
5. Obtain the first pixel points whose target probabilities in the current image are greater than the first threshold. Assuming the first threshold is 0.85, then P16, P25, P26, P34, P35, P36, P45 and P46 shown in FIG. 12 are the first pixel points.
6. Obtain the second threshold. The iterative threshold selection method may be used to obtain the second threshold. The iterative threshold selection algorithm first selects an initial threshold T and divides the data into two parts, R1 and R2; the means u1 and u2 of R1 and R2 are calculated, a new threshold T = (u1 + u2)/2 is selected, and the above process is repeated until the new threshold no longer changes compared with the previous threshold, or the change is smaller than a set tolerance. In this embodiment of the present invention, the maximum and minimum target probabilities corresponding to the first pixel points are obtained first, which are 0.96 and 0.86 respectively, so the initial threshold is (0.86 + 0.96)/2 = 0.91. Classifying the first pixel points with 0.91 as the threshold, P26, P36 and P45 are obtained as target object pixel points, and the others are background pixel points. The average target probability of the target object pixel points P26, P36 and P45 is (0.96 + 0.91 + 0.96)/3 ≈ 0.94. The average target probability of the background pixel points is (0.89 + 0.86 + 0.87 + 0.87 + 0.89)/5 = 0.876, so the new threshold is (0.876 + 0.94)/2 = 0.908, and the change between the new threshold and the initial threshold is 0.91 - 0.908 = 0.002. If this change is smaller than the set tolerance, 0.908 is the second threshold; if the change is larger than the set tolerance, the first pixel points are classified again with 0.908 as the threshold, and the above steps are repeated until the new threshold no longer changes compared with the previous threshold or the change is smaller than the set tolerance.
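The iterative threshold selection described in step 6 can be sketched as follows. The `>=` classification of values equal to the threshold into the target class follows the worked example (0.91 is classified as a target object pixel point); the tolerance `eps` is the "set threshold" of the text, whose value is an assumption here.

```python
def iterative_threshold(values, eps=0.005):
    """Iterative threshold selection: start from the midpoint of the minimum
    and maximum values, split the values into two classes, and update the
    threshold to the mean of the two class means until the change between
    successive thresholds falls below eps."""
    t = (min(values) + max(values)) / 2.0
    while True:
        fg = [v for v in values if v >= t]  # tentative target class
        bg = [v for v in values if v < t]   # tentative background class
        if not fg or not bg:
            return t
        new_t = (sum(fg) / len(fg) + sum(bg) / len(bg)) / 2.0
        if abs(new_t - t) < eps:
            return new_t
        t = new_t
```

Run on the eight target probabilities of the worked example, the initial threshold is 0.91 and the first update already lands near 0.91, so the iteration converges immediately.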
7. Obtain the region formed by the first pixel points whose target probabilities in the current image are greater than the second threshold, as the image region where the target object is located in the current image. Assuming the final second threshold is 0.87, then in FIG. 12 the region formed by the five pixel points P16, P26, P36, P45 and P46 is the image region where the target object is located in the current image.
In one embodiment, background suppression was performed on nine infrared video sequences using the feature selection filtering method (CSF), the max-min difference method (DMMF), the multi-scale gradient method (MSG), and the method provided in the embodiments of the present invention, to further illustrate the effect of the present method. Three indexes were used to evaluate the background suppression results: signal-to-noise ratio gain (ISNR), contrast gain (ISCR) and background suppression factor (BSF). A larger background suppression factor indicates a better global background smoothing effect; larger signal-to-noise ratio gain and contrast gain indicate a stronger ability of the method to suppress clutter and enhance weak targets. The specific indexes are shown in Table 1, where seq denotes a video and the number after seq is the label of the video. It can be seen from Table 1 that the feature selection filtering method (CSF) has a higher BSF but lower ISNR and ISCR, indicating good global background smoothing but poor clutter suppression; the max-min difference method (DMMF) has a higher ISCR but lower ISNR and BSF, indicating that it does not raise the target signal-to-noise ratio much and suppresses the global background poorly; all three indexes of the multi-scale gradient method (MSG) are low, indicating mediocre background suppression. The present method has high ISNR, ISCR and BSF indexes simultaneously, smooths the global background and suppresses local clutter well, and can effectively suppress complex background clutter and highlight the target.

Table 1 Image recognition evaluation indexes of the four detection algorithms
In one embodiment, the detection rate (r), the precision (p) and a comprehensive index (F1) may also be used to evaluate the recognition effect on small and weak targets. Here r is the ratio of the number of correctly detected targets to the total number of real targets, p is the ratio of the number of correctly detected targets to the total number of detected targets, and F1 is a composite of the r and p indexes, which may be the weighted harmonic mean of r and p. A higher F1 value implies better recognition performance, with a higher r value achieved while a higher p value is maintained.
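The three indexes above can be computed as follows. Since the text only states that F1 may be a weighted harmonic mean of r and p, the `beta` weight is exposed as a parameter, with `beta = 1` (the standard F1) as an assumed default.

```python
def detection_indexes(num_correct, num_real, num_detected, beta=1.0):
    """Detection rate r, precision p, and their weighted harmonic mean F1.
    beta weights the detection rate relative to the precision; beta = 1
    gives the ordinary F1 score."""
    r = num_correct / num_real       # correctly detected / real targets
    p = num_correct / num_detected   # correctly detected / all detections
    f1 = (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
    return r, p, f1
```

For example, 45 correct detections out of 50 real targets and 48 total detections give r = 0.9, p = 0.9375 and F1 ≈ 0.918.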
The detection evaluation results of the four methods on the nine infrared video sequences are shown in Table 2. It can be seen from Table 2 that the feature selection filtering method (CSF) has a higher detection rate but lower precision, and its overall detection performance is mediocre; the multi-scale gradient method (MSG) and the max-min difference method (DMMF) likewise have higher detection rates but lower precision. The present method has both a high detection rate and high precision, with an F1 index as high as 97.7%, and therefore has good detection stability.

Table 2 Average detection evaluation indexes of the four detection methods

In addition, three images were selected: the background of the first image is cloudless with a dark sky, the background of the second image contains clouds with a long camera distance, and the background of the third image contains clouds with a short camera distance, as shown in FIG. 13. The images in the first row of FIG. 13 are the original images, in which the dot in the box indicates the true position of the target object. The target objects in the images were identified with the four methods above; the images in the subsequent rows of FIG. 13 are the target object detection results of the feature selection filtering method (CSF), the max-min difference method (DMMF), the multi-scale gradient method (MSG), and the method provided in the embodiments of the present invention, respectively. It can be seen from FIG. 13 that the method provided in the embodiments of the present invention can accurately identify the image position where the target object is located.
As shown in FIG. 14, in one embodiment, an image recognition apparatus is provided. The apparatus may be integrated in the computer device 120 described above, and may specifically include a current image acquisition module 1402, a background region determination module 1404, a similarity calculation module 1406, a first probability obtaining module 1408 and a target region identification module 1410.

The current image acquisition module 1402 is configured to obtain the current image of the target to be identified.

The background region determination module 1404 is configured to obtain a current pixel point from the current image, and determine the corresponding reference background region according to the position of the current pixel point.

The similarity calculation module 1406 is configured to calculate the background similarity corresponding to the current pixel point according to the reference background region.

The first probability obtaining module 1408 is configured to process the image feature corresponding to the current pixel point according to the trained image feature recognition model, to obtain the first probability corresponding to the current pixel point, where the first probability is the probability that the current pixel point belongs to the target object pixel points.

The target region identification module 1410 is configured to calculate the background similarity and the first probability corresponding to each pixel point in the current image, and identify the image region where the target object is located in the current image according to the background similarity and the first probability corresponding to each pixel point.

In one of the embodiments, the background region determination module 1404 includes:

a first region acquisition unit, configured to obtain a first region and a second region on the current image according to the position of the current pixel point, where the second region is a sub-region of the first region and the current pixel point is located inside the second region; and

a first region determination unit, configured to take the non-overlapping image region between the first region and the second region as the reference background region.
In one of the embodiments, as shown in FIG. 15, the similarity calculation module 1406 includes:

a gray value acquisition unit 1406A, configured to obtain target pixel points from the reference background region, and obtain the gray values corresponding to the target pixel points and the current pixel point;

a reference vector composition unit 1406B, configured to calculate a reference gray value according to the gray values of the target pixel points, perform a complement operation on the reference gray value to obtain the corresponding complementary reference gray value, and compose the reference gray value and the complementary reference gray value into a reference gray value vector;

a current vector composition unit 1406C, configured to perform a complement operation on the gray value corresponding to the current pixel point to obtain the corresponding complementary current gray value, and compose the gray value corresponding to the current pixel point and the complementary current gray value into a current gray value vector; and

a similarity calculation unit 1406D, configured to calculate the background similarity corresponding to the current pixel point according to the reference gray value vector and the current gray value vector.
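The vector construction performed by units 1406B-1406D can be sketched as follows. The section does not fix how the two vectors are compared, so cosine similarity is used here purely as an illustrative assumption; the `[g, max_gray - g]` complement construction follows the description above.

```python
import math

def background_similarity(ref_gray, cur_gray, max_gray=255):
    """Compose [g, max_gray - g] vectors for the reference gray value (e.g.
    a statistic of the reference background region) and the current pixel
    point, then compare them. Cosine similarity is an assumed choice of
    comparison; it yields 1.0 when the two gray values are identical."""
    ref_vec = (ref_gray, max_gray - ref_gray)
    cur_vec = (cur_gray, max_gray - cur_gray)
    dot = ref_vec[0] * cur_vec[0] + ref_vec[1] * cur_vec[1]
    norm = math.hypot(*ref_vec) * math.hypot(*cur_vec)
    return dot / norm
```

A pixel whose gray value is far from the reference background gray value thus receives a low background similarity, and (via second probability = 1 - background similarity) a high second probability.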
In one of the embodiments, the image feature recognition model is a support vector clustering model, and the first probability obtaining module includes:

a model parameter acquisition unit, configured to obtain the feature mapping function corresponding to the image feature recognition model, and obtain the central value of the feature space corresponding to the feature mapping function;

a mapping value calculation unit, configured to perform a calculation on the image feature according to the feature mapping function, to obtain the mapping value corresponding to the image feature;

a first distance calculation unit, configured to calculate the first distance between the mapping value and the central value; and

a first probability obtaining unit, configured to calculate the first probability corresponding to the current pixel point according to the first distance, where the first distance is negatively correlated with the first probability.

In one of the embodiments, the image recognition apparatus further includes:

a second distance acquisition module, configured to obtain the second distance from the center of the feature space to the boundary of the feature space.

The first probability obtaining unit is configured to calculate the ratio of the first distance to the second distance, and calculate the first probability corresponding to the current pixel point according to the ratio, where the ratio is negatively correlated with the first probability.
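The ratio-based computation of the first probability can be sketched as follows. The text only requires the probability to be negatively correlated with the ratio of the first distance to the second distance; the linear form `1 - ratio` (clamped to [0, 1]) is an illustrative assumption, and any decreasing function of the ratio would match the description.

```python
def first_probability(first_distance, second_distance):
    """First probability from the distance between the mapped image feature
    and the feature-space center (first_distance), normalized by the
    center-to-boundary distance (second_distance). The linear decreasing
    form is an assumed choice satisfying the negative correlation."""
    ratio = first_distance / second_distance
    return max(0.0, 1.0 - ratio)
```

A feature mapped onto the center gives probability 1.0, a feature on the boundary gives 0.0, and features mapped outside the boundary are clamped to 0.0.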
In one of the embodiments, the image recognition apparatus further includes:

a training region acquisition module, configured to obtain a training image and obtain the training region corresponding to the target object in the training image;

a training feature acquisition module, configured to obtain the training image feature corresponding to each pixel point in the training region; and

a training module, configured to perform model training according to the training image features, to obtain the feature mapping function that maps the training image features to a minimum feature space, and the central value of the feature space.

In one of the embodiments, the image recognition apparatus further includes:

a second region acquisition unit, configured to obtain a third region and a fourth region on the current image according to the position of the current pixel point, where the fourth region is a sub-region of the third region and the current pixel point is located inside the fourth region;

a second region determination unit, configured to obtain the non-overlapping image region between the third region and the fourth region;

a first statistics unit, configured to perform statistics on the gray values of the pixel points corresponding to the non-overlapping image region to obtain a first statistical result, and to perform statistics on the gray values of the pixel points corresponding to the fourth region to obtain a second statistical result; and

a contrast feature obtaining unit, configured to obtain the contrast feature according to the first statistical result and the second statistical result.
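The contrast feature computation above can be sketched as follows. The section does not specify which statistic is taken over each region or how the two statistical results are combined, so the mean gray value and the difference of means are illustrative assumptions; a ratio or a variance-based statistic would equally match the description.

```python
def contrast_feature(inner_grays, surround_grays):
    """Contrast feature from the two statistical results: the mean gray
    value of the fourth (inner) region versus the mean gray value of the
    non-overlapping ring between the third and fourth regions. The
    difference of the two means is an assumed choice of combination."""
    mean_inner = sum(inner_grays) / len(inner_grays)
    mean_surround = sum(surround_grays) / len(surround_grays)
    return mean_inner - mean_surround
```

A bright small target on a darker local background then yields a large positive contrast feature, while a flat patch yields a value near zero.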
In one of the embodiments, as shown in FIG. 16, the target region identification module 1410 includes:

a second probability obtaining unit 1410A, configured to obtain the second probability corresponding to the current pixel point according to the background similarity, where the second probability is the probability that the current pixel point belongs to the target object pixel points, and the second probability is negatively correlated with the background similarity;

a target probability obtaining unit 1410B, configured to determine the current target probability corresponding to the current pixel point according to the first probability and the second probability, where the current target probability is the probability that the current pixel point belongs to the target object pixel points; and

a target region identification unit 1410C, configured to identify the image region where the target object is located in the current image according to the target probability corresponding to each pixel point in the current image.

In one of the embodiments, the target region identification unit 1410C is configured to: obtain the first pixel points whose target probabilities in the current image are greater than the first threshold; obtain the distribution characteristics of the target probabilities corresponding to the first pixel points; obtain the second threshold according to the distribution characteristics; and obtain the region formed by the first pixel points whose target probabilities in the current image are greater than the second threshold, as the image region where the target object is located in the current image.
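The two-stage screening performed by unit 1410C can be sketched as follows. The probability map is represented here as a dictionary from pixel coordinates to target probabilities, and the second-threshold derivation is passed in as a function (e.g. the iterative or OTSU method); both representation choices are illustrative, and the sketch assumes at least one pixel survives the first threshold.

```python
def extract_target_pixels(prob_map, first_threshold, second_threshold_fn):
    """Two-stage screening: keep pixel points whose target probability
    exceeds the first threshold, derive a second threshold from the
    distribution of the surviving probabilities, then keep only the pixel
    points above that second threshold. The kept coordinates form the
    image region where the target object is located."""
    first = {pos: p for pos, p in prob_map.items() if p > first_threshold}
    t2 = second_threshold_fn(list(first.values()))
    return {pos for pos, p in first.items() if p > t2}
```

With the mean of the surviving probabilities as a simple stand-in second threshold, the probabilities of the worked example reduce to the three highest-probability pixel points.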
FIG. 17 shows an internal structure diagram of a computer device in one embodiment. The computer device may specifically be the computer device 120 in FIG. 1. As shown in FIG. 17, the computer device includes a processor, a memory, a network interface and an input apparatus connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program; when the computer program is executed by the processor, the processor may be caused to implement the image recognition method. A computer program may also be stored in the internal memory; when that computer program is executed by the processor, the processor may be caused to execute the image recognition method. The input apparatus of the computer device may be a touch layer covering a display screen, may be a button, a trackball or a touchpad arranged on the housing of the computer device, or may be an external keyboard, touchpad, mouse or the like.

Those skilled in the art will understand that the structure shown in FIG. 17 is merely a block diagram of the part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, may combine certain components, or may have a different arrangement of components.

In one embodiment, the image recognition apparatus provided in the present application may be implemented in the form of a computer program, and the computer program may run on the computer device shown in FIG. 17. The program modules that make up the image recognition apparatus, for example the current image acquisition module 1402, the background region determination module 1404, the similarity calculation module 1406, the first probability obtaining module 1408 and the target region identification module 1410 shown in FIG. 14, may be stored in the memory of the computer device. The computer program composed of these program modules causes the processor to execute the steps of the image recognition method of the embodiments of the present application described in this specification.

For example, the computer device shown in FIG. 17 may obtain the current image of the target to be identified through the current image acquisition module 1402 of the image recognition apparatus shown in FIG. 14; obtain a current pixel point from the current image and determine the corresponding reference background region according to the position of the current pixel point through the background region determination module 1404; calculate the background similarity corresponding to the current pixel point according to the reference background region through the similarity calculation module 1406; process the image feature corresponding to the current pixel point according to the trained image feature recognition model through the first probability obtaining module 1408, to obtain the first probability corresponding to the current pixel point, where the first probability is the probability that the current pixel point belongs to the target object pixel points; and calculate the background similarity and the first probability corresponding to each pixel point in the current image through the target region identification module 1410, and identify the image region where the target object is located in the current image according to the background similarity and the first probability corresponding to each pixel point.
In one embodiment, a computer device is provided, which includes a memory, a processor and a computer program stored on the memory and runnable on the processor. When executing the computer program, the processor implements the following steps: obtaining the current image of the target to be identified; obtaining a current pixel point from the current image, and determining the corresponding reference background region according to the position of the current pixel point; calculating the background similarity corresponding to the current pixel point according to the reference background region; processing the image feature corresponding to the current pixel point according to the trained image feature recognition model, to obtain the first probability corresponding to the current pixel point, where the first probability is the probability that the current pixel point belongs to the target object pixel points; calculating the background similarity and the first probability corresponding to each pixel point in the current image; and identifying the image region where the target object is located in the current image according to the background similarity and the first probability corresponding to each pixel point.

In one embodiment, the determining, executed by the processor, of the corresponding reference background region according to the position of the current pixel point includes: obtaining a first region and a second region on the current image according to the position of the current pixel point, where the second region is a sub-region of the first region and the current pixel point is located inside the second region; and taking the non-overlapping image region between the first region and the second region as the reference background region.

In one embodiment, the calculating, executed by the processor, of the background similarity corresponding to the current pixel point according to the reference background region includes: obtaining target pixel points from the reference background region, and obtaining the gray values corresponding to the target pixel points and the current pixel point; calculating a reference gray value according to the gray values of the target pixel points, performing a complement operation on the reference gray value to obtain the corresponding complementary reference gray value, and composing the reference gray value and the complementary reference gray value into a reference gray value vector; performing a complement operation on the gray value corresponding to the current pixel point to obtain the corresponding complementary current gray value, and composing the gray value corresponding to the current pixel point and the complementary current gray value into a current gray value vector; and calculating the background similarity corresponding to the current pixel point according to the reference gray value vector and the current gray value vector.

In one embodiment, the image feature recognition model is a support vector clustering model, and the processing, executed by the processor, of the image feature corresponding to the current pixel point according to the trained image feature recognition model to obtain the first probability corresponding to the current pixel point includes: obtaining the feature mapping function corresponding to the image feature recognition model, and obtaining the central value of the feature space corresponding to the feature mapping function; performing a calculation on the image feature according to the feature mapping function to obtain the mapping value corresponding to the image feature; calculating the first distance between the mapping value and the central value; and calculating the first probability corresponding to the current pixel point according to the first distance, where the first distance is negatively correlated with the first probability.

In one embodiment, the computer program further causes the processor to execute the following step: obtaining the second distance from the center of the feature space to the boundary of the feature space. The calculating of the first probability that the current pixel point is a target pixel point according to the first distance includes: calculating the ratio of the first distance to the second distance; and calculating the first probability corresponding to the current pixel point according to the ratio, where the ratio is negatively correlated with the first probability.

In one embodiment, the step, executed by the processor, of obtaining the image feature recognition model includes: obtaining a training image, and obtaining the training region corresponding to the target object in the training image; obtaining the training image feature corresponding to each pixel point in the training region; and performing model training according to the training image features, to obtain the feature mapping function that maps the training image features to a minimum feature space, and the central value of the feature space.

In one embodiment, the image feature includes a contrast feature, and the step, executed by the processor, of obtaining the contrast feature corresponding to the current pixel point includes: obtaining a third region and a fourth region on the current image according to the position of the current pixel point, where the fourth region is a sub-region of the third region and the current pixel point is located inside the fourth region; obtaining the non-overlapping image region between the third region and the fourth region; performing statistics on the gray values of the pixel points corresponding to the non-overlapping image region to obtain a first statistical result, and performing statistics on the gray values of the pixel points corresponding to the fourth region to obtain a second statistical result; and obtaining the contrast feature according to the first statistical result and the second statistical result.

In one embodiment, the identifying, executed by the processor, of the image region where the target object is located in the current image according to the background similarity and the first probability corresponding to each pixel point includes: obtaining the second probability corresponding to the current pixel point according to the background similarity, where the second probability is the probability that the current pixel point belongs to the target object pixel points and is negatively correlated with the background similarity; determining the current target probability corresponding to the current pixel point according to the first probability and the second probability, where the current target probability is the probability that the current pixel point belongs to the target object pixel points; and identifying the image region where the target object is located in the current image according to the target probability corresponding to each pixel point in the current image.

In one embodiment, the identifying, executed by the processor, of the image region where the target object is located in the current image according to the target probability corresponding to each pixel point in the current image includes: obtaining the first pixel points whose target probabilities in the current image are greater than the first threshold; obtaining the distribution characteristics of the target probabilities corresponding to the first pixel points; obtaining the second threshold according to the distribution characteristics; and obtaining the region formed by the first pixel points whose target probabilities in the current image are greater than the second threshold, as the image region where the target object is located in the current image.
In one embodiment, a kind of computer readable storage medium is provided, is stored on computer readable storage medium Computer program, when computer program is executed by processor so that processor executes following steps:Obtain working as target to be identified Preceding image;Current pixel point is obtained from present image, according to the corresponding reference background region of the location determination of current pixel point; The corresponding context similarity of current pixel point is calculated according to reference background region;According to the images steganalysis model pair trained The corresponding characteristics of image of current pixel point is handled, and obtains corresponding first probability of current pixel point, the first probability is current Pixel belongs to the probability of target object pixel;The corresponding context similarity of each pixel in present image is calculated With the first probability, identify to obtain target object in present image according to the corresponding context similarity of each pixel and the first probability The image-region at place.
In one embodiment, the corresponding reference background region of the location determination according to current pixel point that processor executes Including:First area and second area are obtained on present image according to the position of current pixel point, wherein second area The subregion in one region, current pixel point are located inside second area;By the non-overlapping figure between first area and second area As region is as reference background region.
In one embodiment, what processor executed calculates the corresponding background phase of current pixel point according to reference background region Like degree, including:Target pixel points are obtained from reference background region, obtain target pixel points and the corresponding gray scale of current pixel point Value;Reference gray level value is calculated according to the gray value of target pixel points, carries out negating operation being corresponded to reference gray level value Complementary reference gray value, by reference gray level value and complementary reference gray value composition reference gray level value vector;To current pixel point Corresponding gray value negate operation and obtains corresponding complementary current grayvalue, by the corresponding gray value of current pixel point and mutually Mend current grayvalue composition current grayvalue vector;It is calculated currently according to reference gray level value vector sum current grayvalue vector The corresponding context similarity of pixel.
In one embodiment, images steganalysis model is support vector clustering model, and processor executes basis and instructed Experienced images steganalysis model handles the corresponding characteristics of image of current pixel point, obtains current pixel point corresponding One probability includes:The corresponding Feature Mapping function of images steganalysis model is obtained, spy corresponding with Feature Mapping function is obtained Levy the central value in space;It is calculated according to Feature Mapping function pair characteristics of image, obtains the corresponding mapping value of characteristics of image;Meter Calculate the first distance of mapping value and central value;Corresponding first probability of current pixel point is calculated according to the first distance, wherein First distance and the negatively correlated relationship of the first probability.
In one embodiment, computer program also makes processor execute following steps:Obtain the center of feature space To the second distance on the boundary of feature space;It is the corresponding pixel of target that current pixel point, which is calculated, according to the first distance First probability includes:Calculate the ratio value of the first distance and second distance;Current pixel point is calculated according to ratio value to correspond to The first probability, wherein ratio value and the negatively correlated relationship of the first probability.
In one embodiment, processor execution includes the step of obtaining images steganalysis model:Obtain training figure Picture obtains the corresponding trained region of target object in training image;Obtain the corresponding training of each pixel in training region Characteristics of image;Model training is carried out according to training image feature, is obtained training image Feature Mapping to minimum feature space Feature Mapping function and feature space central value.
In one embodiment, processor executes, and characteristics of image includes contrast metric, obtains current pixel point correspondence Contrast metric the step of include:Third region and the 4th area are obtained on present image according to the position of current pixel point Domain, wherein the fourth region is the subregion in third region, and current pixel point is located inside the fourth region;Obtain third region and Non-overlapping images region between the fourth region;The gray value of the corresponding pixel in non-overlapping images region is counted, is obtained To the first statistical result, the gray value of the corresponding pixel of the fourth region is counted, the second statistical result is obtained;According to One statistical result and the second statistical result obtain contrast metric.
In one embodiment, what processor executed knows according to the corresponding context similarity of each pixel and the first probability Not obtaining the image-region in present image where target object includes:It is corresponding that current pixel point is obtained according to context similarity Second probability, wherein the second probability is the probability that current pixel point belongs to target object pixel, and the second probability is similar to background Spend negatively correlated relationship;According to the first probability and the corresponding current goal probability of the second determine the probability current pixel point, currently Destination probability is the probability that current pixel point belongs to target object pixel;According to the corresponding mesh of each pixel in present image Mark probability identifies to obtain the image-region in present image where target object.
In one embodiment, identifying the image region where the target object is located in the current image according to the target probability corresponding to each pixel point, as performed by the processor, includes: obtaining first pixel points whose target probability in the current image exceeds a first threshold; obtaining a distribution feature of the target probabilities corresponding to the first pixel points; obtaining a second threshold according to the distribution feature; and obtaining the region formed by the first pixel points whose target probability exceeds the second threshold, as the image region where the target object is located in the current image.
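The two-threshold step above can be sketched as follows. The distribution feature used to derive the second threshold is not fixed by the text; "mean minus one standard deviation of the retained probabilities" is an assumed concrete choice.

```python
# Sketch: keep pixels above a fixed first threshold, derive an adaptive
# second threshold from the distribution of their probabilities, and keep
# only the pixels that also clear the second threshold.
import numpy as np

probs = np.array([0.05, 0.10, 0.55, 0.80, 0.85, 0.90, 0.95])
first_threshold = 0.5

first_pixels = probs[probs > first_threshold]
# Distribution feature of the retained probabilities -> second threshold.
second_threshold = first_pixels.mean() - first_pixels.std()
target_mask = probs > second_threshold
print(int(target_mask.sum()))  # 4
```

With this data the adaptive threshold lands near 0.67, so the marginal 0.55 pixel that survived the first threshold is rejected by the second.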
It should be understood that although the steps in the flowcharts of the embodiments of the present invention are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in each embodiment may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is likewise not necessarily sequential, and they may be executed in turn or alternately with other steps or with the sub-steps or stages of other steps.
A person of ordinary skill in the art will understand that all or part of the flows in the methods of the above embodiments may be implemented by instructing the relevant hardware through a computer program. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as such combinations involve no contradiction, they should all be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent claims. It should be noted that, for a person of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. An image recognition method, the method comprising:
obtaining a current image containing a target to be identified;
obtaining a current pixel point from the current image, and determining a corresponding reference background region according to the position of the current pixel point;
calculating a background similarity corresponding to the current pixel point according to the reference background region;
processing an image feature corresponding to the current pixel point according to a trained image target recognition model to obtain a first probability corresponding to the current pixel point, the first probability being the probability that the current pixel point belongs to a target object pixel;
calculating the background similarity and the first probability corresponding to each pixel point in the current image, and identifying the image region where the target object is located in the current image according to the background similarity and the first probability corresponding to each pixel point.
2. The method according to claim 1, wherein determining the corresponding reference background region according to the position of the current pixel point comprises:
obtaining a first region and a second region on the current image according to the position of the current pixel point, wherein the second region is a subregion of the first region and the current pixel point lies inside the second region;
taking the non-overlapping image region between the first region and the second region as the reference background region.
3. The method according to claim 1, wherein calculating the background similarity corresponding to the current pixel point according to the reference background region comprises:
obtaining target pixel points from the reference background region, and obtaining the gray values corresponding to the target pixel points and the current pixel point;
calculating a reference gray value according to the gray values of the target pixel points, performing an inversion operation on the reference gray value to obtain a corresponding complementary reference gray value, and forming a reference gray value vector from the reference gray value and the complementary reference gray value;
performing an inversion operation on the gray value corresponding to the current pixel point to obtain a corresponding complementary current gray value, and forming a current gray value vector from the gray value corresponding to the current pixel point and the complementary current gray value;
calculating the background similarity corresponding to the current pixel point according to the reference gray value vector and the current gray value vector.
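One hypothetical reading of claim 3 can be sketched in code: each gray value g is paired with its complement (255 - g) to form a 2-D vector, and the background similarity is taken here as the cosine of the angle between the current vector and the reference vector. The use of the mean as the reference gray value and of cosine similarity as the measure are assumptions, not stated in the claim.

```python
# Illustrative sketch of the gray-value-vector similarity of claim 3.
import numpy as np

def background_similarity(current_gray, reference_grays):
    reference_gray = float(np.mean(reference_grays))        # reference gray value
    ref_vec = np.array([reference_gray, 255.0 - reference_gray])
    cur_vec = np.array([float(current_gray), 255.0 - float(current_gray)])
    # Cosine similarity between the two (gray, complement) vectors.
    return float(ref_vec @ cur_vec /
                 (np.linalg.norm(ref_vec) * np.linalg.norm(cur_vec)))

same = background_similarity(120, [118, 121, 122])   # pixel matches background
diff = background_similarity(240, [20, 25, 30])      # pixel far from background
print(same > diff)  # True: the matching pixel is more background-like
```

Appending the complement makes the vectors non-degenerate (a plain scalar gray value has no angle), so darker-than-background and brighter-than-background pixels both score low.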
4. The method according to claim 1, wherein the image target recognition model is a support vector clustering model, and processing the image feature corresponding to the current pixel point according to the trained image target recognition model to obtain the first probability corresponding to the current pixel point comprises:
obtaining a feature mapping function corresponding to the image target recognition model, and obtaining a central value of a feature space corresponding to the feature mapping function;
performing a calculation on the image feature according to the feature mapping function to obtain a mapping value corresponding to the image feature;
calculating a first distance between the mapping value and the central value;
calculating the first probability corresponding to the current pixel point according to the first distance, wherein the first distance is negatively correlated with the first probability.
5. The method according to claim 4, wherein the method further comprises:
obtaining a second distance from the center of the feature space to the boundary of the feature space;
and wherein calculating the first probability that the current pixel point is a target pixel according to the first distance comprises:
calculating a ratio of the first distance to the second distance;
calculating the first probability corresponding to the current pixel point according to the ratio, wherein the ratio is negatively correlated with the first probability.
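Claim 5 normalizes the first distance by the center-to-boundary distance before converting it to a probability. A minimal sketch, assuming "1 - ratio, clipped to [0, 1]" as a concrete form of the stated negative correlation (the claim itself fixes only the correlation, not the formula):

```python
# Hypothetical distance-ratio-to-probability conversion for claim 5.
def first_probability(first_distance, second_distance):
    ratio = first_distance / second_distance   # ratio of claim 5
    # Negatively correlated with the ratio; clipped so it stays a probability.
    return max(0.0, min(1.0, 1.0 - ratio))

print(first_probability(0.2, 1.0))  # 0.8: close to the center -> high probability
print(first_probability(1.5, 1.0))  # 0.0: outside the boundary -> zero probability
```

The normalization makes the probability scale-free: doubling all distances in feature space leaves the ratio, and hence the probability, unchanged.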
6. The method according to claim 4, wherein the step of obtaining the image target recognition model comprises:
obtaining a training image, and obtaining a training region corresponding to the target object in the training image;
obtaining the training image feature corresponding to each pixel point in the training region;
performing model training according to the training image features, to obtain a feature mapping function that maps the training image features to a minimal feature space, and the central value of the feature space.
7. The method according to claim 1, wherein the image feature includes a contrast feature, and the step of obtaining the contrast feature corresponding to the current pixel point comprises:
obtaining a third region and a fourth region on the current image according to the position of the current pixel point, wherein the fourth region is a subregion of the third region and the current pixel point lies inside the fourth region;
obtaining the non-overlapping image region between the third region and the fourth region;
performing statistics on the gray values of the pixel points in the non-overlapping image region to obtain a first statistical result, and performing statistics on the gray values of the pixel points in the fourth region to obtain a second statistical result;
obtaining the contrast feature according to the first statistical result and the second statistical result.
8. The method according to claim 1, wherein identifying the image region where the target object is located in the current image according to the background similarity and the first probability corresponding to each pixel point comprises:
obtaining a second probability corresponding to the current pixel point according to the background similarity, wherein the second probability is the probability that the current pixel point belongs to a target object pixel, and the second probability is negatively correlated with the background similarity;
determining a current target probability corresponding to the current pixel point according to the first probability and the second probability, the current target probability being the probability that the current pixel point belongs to a target object pixel;
identifying the image region where the target object is located in the current image according to the target probability corresponding to each pixel point in the current image.
9. The method according to claim 8, wherein identifying the image region where the target object is located in the current image according to the target probability corresponding to each pixel point in the current image comprises:
obtaining first pixel points whose target probability in the current image exceeds a first threshold;
obtaining a distribution feature of the target probabilities corresponding to the first pixel points;
obtaining a second threshold according to the distribution feature;
obtaining the region formed by the first pixel points whose target probability exceeds the second threshold, as the image region where the target object is located in the current image.
10. An image recognition apparatus, the apparatus comprising:
a current image obtaining module, configured to obtain a current image containing a target to be identified;
a background region determining module, configured to obtain a current pixel point from the current image and determine a corresponding reference background region according to the position of the current pixel point;
a similarity calculation module, configured to calculate a background similarity corresponding to the current pixel point according to the reference background region;
a first probability obtaining module, configured to process an image feature corresponding to the current pixel point according to a trained image target recognition model to obtain a first probability corresponding to the current pixel point, the first probability being the probability that the current pixel point belongs to a target object pixel;
a target region identification module, configured to calculate the background similarity and the first probability corresponding to each pixel point in the current image, and identify the image region where the target object is located in the current image according to the background similarity and the first probability corresponding to each pixel point.
11. The apparatus according to claim 10, wherein the similarity calculation module comprises:
a gray value obtaining unit, configured to obtain target pixel points from the reference background region and obtain the gray values corresponding to the target pixel points and the current pixel point;
a reference vector composition unit, configured to calculate a reference gray value according to the gray values of the target pixel points, perform an inversion operation on the reference gray value to obtain a corresponding complementary reference gray value, and form a reference gray value vector from the reference gray value and the complementary reference gray value;
a current vector composition unit, configured to perform an inversion operation on the gray value corresponding to the current pixel point to obtain a corresponding complementary current gray value, and form a current gray value vector from the gray value corresponding to the current pixel point and the complementary current gray value;
a similarity calculation unit, configured to calculate the background similarity corresponding to the current pixel point according to the reference gray value vector and the current gray value vector.
12. The apparatus according to claim 10, wherein the image target recognition model is a support vector clustering model, and the first probability obtaining module comprises:
a model parameter obtaining unit, configured to obtain a feature mapping function corresponding to the image target recognition model and obtain a central value of a feature space corresponding to the feature mapping function;
a mapping value calculation unit, configured to perform a calculation on the image feature according to the feature mapping function to obtain a mapping value corresponding to the image feature;
a first distance calculation unit, configured to calculate a first distance between the mapping value and the central value;
a first probability obtaining unit, configured to calculate the first probability corresponding to the current pixel point according to the first distance, wherein the first distance is negatively correlated with the first probability.
13. The apparatus according to claim 12, wherein the apparatus further comprises:
a second distance obtaining module, configured to obtain a second distance from the center of the feature space to the boundary of the feature space;
and the first probability obtaining unit is configured to:
calculate a ratio of the first distance to the second distance;
calculate the first probability corresponding to the current pixel point according to the ratio, wherein the ratio is negatively correlated with the first probability.
14. A computer device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the image recognition method according to any one of claims 1 to 9.
15. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the image recognition method according to any one of claims 1 to 9.
CN201810502263.0A 2018-05-23 2018-05-23 Image recognition method and device, computer equipment and storage medium Active CN108764325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810502263.0A CN108764325B (en) 2018-05-23 2018-05-23 Image recognition method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN108764325A true CN108764325A (en) 2018-11-06
CN108764325B CN108764325B (en) 2022-07-08

Family

ID=64005379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810502263.0A Active CN108764325B (en) 2018-05-23 2018-05-23 Image recognition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108764325B (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632361A (en) * 2012-08-20 2014-03-12 阿里巴巴集团控股有限公司 An image segmentation method and a system
CN103996189A (en) * 2014-05-05 2014-08-20 小米科技有限责任公司 Image segmentation method and device
CN104392468A (en) * 2014-11-21 2015-03-04 南京理工大学 Improved visual background extraction based movement target detection method
CN104504007A (en) * 2014-12-10 2015-04-08 成都品果科技有限公司 Method and system for acquiring similarity degree of images
US20150156484A1 (en) * 2013-11-29 2015-06-04 Canon Kabushiki Kaisha Image processing apparatus, method, and storage medium
CN104992144A (en) * 2015-06-11 2015-10-21 电子科技大学 Method for distinguishing transmission line from road in remote sensing image
CN105957093A (en) * 2016-06-07 2016-09-21 浙江树人大学 ATM retention detection method of texture discrimination optimization HOG operator
CN106023249A (en) * 2016-05-13 2016-10-12 电子科技大学 Moving object detection method based on local binary similarity pattern
CN106056606A (en) * 2016-05-30 2016-10-26 乐视控股(北京)有限公司 Image processing method and device
US20160328853A1 (en) * 2014-06-17 2016-11-10 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus
CN106878674A (en) * 2017-01-10 2017-06-20 哈尔滨工业大学深圳研究生院 A kind of parking detection method and device based on monitor video
US20170301081A1 (en) * 2015-09-30 2017-10-19 Shanghai United Imaging Healthcare Co., Ltd. System and method for determining a breast region in a medical image
CN107833242A (en) * 2017-10-30 2018-03-23 南京理工大学 One kind is based on marginal information and improves VIBE moving target detecting methods
CN108010034A (en) * 2016-11-02 2018-05-08 广州图普网络科技有限公司 Commodity image dividing method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHU, Songhao et al.: "Face Recognition Based on Background Similarity Matching", Journal of Central South University (Science and Technology) *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697460A (en) * 2018-12-05 2019-04-30 华中科技大学 Object detection model training method, target object detection method
US11640678B2 (en) 2018-12-05 2023-05-02 Tencent Technology (Shenzhen) Company Limited Method for training object detection model and target object detection method
CN109697460B (en) * 2018-12-05 2021-06-29 华中科技大学 Object detection model training method and target object detection method
CN109587248B (en) * 2018-12-06 2023-08-29 腾讯科技(深圳)有限公司 User identification method, device, server and storage medium
CN109587248A (en) * 2018-12-06 2019-04-05 腾讯科技(深圳)有限公司 User identification method, device, server and storage medium
CN109635723A (en) * 2018-12-11 2019-04-16 讯飞智元信息科技有限公司 A kind of occlusion detection method and device
CN109635723B (en) * 2018-12-11 2021-02-09 讯飞智元信息科技有限公司 Shielding detection method and device
CN110490212B (en) * 2019-02-26 2022-11-08 腾讯科技(深圳)有限公司 Molybdenum target image processing equipment, method and device
CN110490212A (en) * 2019-02-26 2019-11-22 腾讯科技(深圳)有限公司 Molybdenum target image processing arrangement, method and apparatus
CN111754538A (en) * 2019-06-29 2020-10-09 浙江大学 Threshold segmentation method for USB surface defect detection
CN110363241A (en) * 2019-07-11 2019-10-22 合肥联宝信息技术有限公司 A kind of picture comparative approach and device
CN113017699A (en) * 2019-10-18 2021-06-25 深圳北芯生命科技有限公司 Image noise reduction method for reducing noise of ultrasonic image
CN111639653B (en) * 2020-05-08 2023-10-10 浙江大华技术股份有限公司 False detection image determining method, device, equipment and medium
CN111639653A (en) * 2020-05-08 2020-09-08 浙江大华技术股份有限公司 False detection image determining method, device, equipment and medium
CN111598088B (en) * 2020-05-15 2023-12-29 京东方科技集团股份有限公司 Target detection method, device, computer equipment and readable storage medium
CN111598088A (en) * 2020-05-15 2020-08-28 京东方科技集团股份有限公司 Target detection method and device, computer equipment and readable storage medium
US12056897B2 (en) 2020-05-15 2024-08-06 Boe Technology Group Co., Ltd. Target detection method, computer device and non-transitory readable storage medium
WO2021227723A1 (en) * 2020-05-15 2021-11-18 京东方科技集团股份有限公司 Target detection method and apparatus, computer device and readable storage medium
CN111932447A (en) * 2020-08-04 2020-11-13 中国建设银行股份有限公司 Picture processing method, device, equipment and storage medium
CN111932447B (en) * 2020-08-04 2024-03-22 中国建设银行股份有限公司 Picture processing method, device, equipment and storage medium
CN112668582A (en) * 2020-12-31 2021-04-16 北京迈格威科技有限公司 Image recognition method, device, equipment and storage medium
CN114151942A (en) * 2021-09-14 2022-03-08 海信家电集团股份有限公司 Air conditioner and human face area detection method
CN114151942B (en) * 2021-09-14 2024-09-06 海信家电集团股份有限公司 Air conditioner and face area detection method
CN114550244A (en) * 2022-02-11 2022-05-27 支付宝(杭州)信息技术有限公司 Living body detection method, device and equipment
CN114820601A (en) * 2022-06-27 2022-07-29 合肥新晶集成电路有限公司 Target image updating method and system, wafer detection method and computer equipment
CN117853932A (en) * 2024-03-05 2024-04-09 华中科技大学 Sea surface target detection method, detection platform and system based on photoelectric pod
CN117853932B (en) * 2024-03-05 2024-05-14 华中科技大学 Sea surface target detection method, detection platform and system based on photoelectric pod
CN117834891A (en) * 2024-03-06 2024-04-05 成都凌亚科技有限公司 Video signal compression processing and sending platform and method
CN117834891B (en) * 2024-03-06 2024-05-07 成都凌亚科技有限公司 Video signal compression processing and transmitting method

Also Published As

Publication number Publication date
CN108764325B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
CN108764325A (en) Image-recognizing method, device, computer equipment and storage medium
Haurum et al. A survey on image-based automation of CCTV and SSET sewer inspections
CN111260055B (en) Model training method based on three-dimensional image recognition, storage medium and device
Lopez-Molina et al. Multiscale edge detection based on Gaussian smoothing and edge tracking
Rieger et al. Irof: a low resource evaluation metric for explanation methods
CN105612554A (en) Method for characterizing images acquired through video medical device
Xu et al. A fuzzy C-means clustering algorithm based on spatial context model for image segmentation
Li et al. Change-detection map learning using matching pursuit
AU2020272936A1 (en) Methods and systems for crack detection using a fully convolutional network
CN114022718A (en) Digestive system pathological image recognition method, system and computer storage medium
CN112418149A (en) Abnormal behavior detection method based on deep convolutional neural network
Gao et al. Sea ice change detection in SAR images based on collaborative representation
US20170053172A1 (en) Image processing apparatus, and image processing method
Tian et al. A novel edge-weight based fuzzy clustering method for change detection in SAR images
Venugopal Sample selection based change detection with dilated network learning in remote sensing images
Krylov et al. False discovery rate approach to unsupervised image change detection
Zhao et al. Change detection in SAR images based on superpixel segmentation and image regression
Xu et al. Extended non-local feature for visual saliency detection in low contrast images
Oga et al. River state classification combining patch-based processing and CNN
Morandeira et al. Assessment of SAR speckle filters in the context of object-based image analysis
CN110399868B (en) Coastal wetland bird detection method
CN117333440A (en) Power transmission and distribution line defect detection method, device, equipment, medium and program product
Askari et al. Automatic determination of number of homogenous regions in SAR images utilizing splitting and merging based on a reversible jump MCMC algorithm
Albalooshi et al. Deep belief active contours (DBAC) with its application to oil spill segmentation from remotely sensed sea surface imagery
Chen et al. Urban damage estimation using statistical processing of satellite images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant