CN102567731A - Extraction method for region of interest - Google Patents

Extraction method for region of interest

Info

Publication number
CN102567731A
Authority
CN
China
Prior art keywords
pixel
image
value
component
interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104005585A
Other languages
Chinese (zh)
Other versions
CN102567731B (en)
Inventor
牛建伟 (Niu Jianwei)
周成玉 (Zhou Chengyu)
童超 (Tong Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Zhongcheng information Polytron Technologies Inc
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201110400558.5A priority Critical patent/CN102567731B/en
Publication of CN102567731A publication Critical patent/CN102567731A/en
Application granted granted Critical
Publication of CN102567731B publication Critical patent/CN102567731B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an extraction method for a region of interest. The method comprises the following steps: 1, preprocessing the original image; 2, computing a saliency map of the original image; 3, segmenting the original image with a watershed segmentation algorithm; 4, computing the degree of interest of each region from the saliency map and the segmentation result; and 5, detecting the region of interest. The method automatically finds and locates the region of interest in an image; it adopts the Itti model, which simulates the human attention mechanism, so the detected region of interest essentially matches human subjective perception. Compared with manual marking of the region of interest, the method is faster and more accurate.

Description

Extraction method for a region of interest
Technical field
The invention belongs to the field of computer image processing and proposes a method for extracting a region of interest.
Background technology
The rapid development of digital images and the Internet has brought explosive growth of digital image resources. Images are more expressive and more intuitive than text, but compared with text they generally occupy far more storage space, so expressing the content an image conveys with less storage has become a vital problem. Since image content is rich and varied, and a user's attention usually falls on a few key areas of a picture, the JPEG2000 standard proposes a region-of-interest coding mechanism: a picture can be divided into several regions, and different regions are encoded at different compression rates. This mechanism alleviates the problem to some extent, but one issue remains unsolved: how to find the region of interest. The JPEG2000 standard requires the user to mark the region of interest manually, which significantly limits the applicability of the method; few users actually use this part of the standard because the operation is cumbersome. If a method could detect the region of interest in an image automatically, or find it with only minimal user interaction, the mechanism would certainly gain far wider acceptance.
To find a region of interest, each region in the picture must first be segmented out, which requires an image segmentation algorithm. Commonly used approaches include similarity-measure algorithms, interest-measure detection techniques, and algorithms that automatically extract regions of interest from shallow depth-of-field images. Similarity-measure algorithms have great difficulty extracting features that reflect high-level semantics from an image, the key obstacle being the wide gap between low-level image features and high-level semantics; and extracting the region of interest of an image using corner points has significant limitations. The drawback of corner detectors is that corners concentrate in textured regions, so the interest points they extract are densely distributed in areas rich in texture and sparse in areas with little texture. Such an uneven distribution with respect to texture makes it impossible to describe the content of every part of the image completely.
Summary of the invention
To overcome the limitation that the region-of-interest coding technique of the JPEG2000 standard requires manual marking of the region of interest, the present invention proposes a method for extracting regions of interest from images, combining the watershed segmentation algorithm with the Itti attention model to detect regions of interest automatically. The watershed segmentation algorithm is a morphology-based segmentation algorithm that can partition the regions of an image efficiently and accurately. The Itti attention model simulates the characteristics of the biological visual attention mechanism: it extracts image features with center-surround difference sampling and fuses features of different dimensions into a saliency map; the focus of attention obtained through competition among salient points serves as the seed point of the watershed segmentation, and merging the saliency map with the watershed regions yields the region of interest.
The region-of-interest extraction method of the present invention specifically comprises the following steps:
Step 1: preprocess the original image.
Step 2: compute the saliency map of the original image.
Step 3: segment the original image with the watershed segmentation algorithm.
Step 4: compute the degree of interest of each region from the saliency map and the segmentation result.
Step 5: detect the region of interest.
The advantages and beneficial effects of the present invention are:
(1) The present invention can automatically find and locate the region of interest in an image. It adopts the Itti model, which simulates the human attention mechanism, so the region of interest it finds essentially matches human subjective perception. Compared with manually marking the region of interest, the invention is faster and more accurate.
(2) The method supports regions of interest of arbitrary shape. JPEG2000 first proposed the notion of a region of interest, but for compression reasons it only supports regular shapes such as circles and rectangles. By contrast, the present method supports any regular or irregular region of interest according to the segmentation result. For example, marking the contour of a human face in an image with a regular circle or rectangle incurs considerable error.
(3) The invention can serve as a preprocessing step for other image processing algorithms. In ROI image coding, since the region of interest in the image can be found, the region of interest and the background can easily be encoded separately. In popular CBIR (Content-Based Image Retrieval) systems, since the region of interest better represents the content of the whole image, its features can be given larger weights, which helps improve overall retrieval accuracy.
Description of drawings
Fig. 1 is the overall flow chart of the region-of-interest extraction method of the present invention;
Fig. 2 compares sample picture one before and after preprocessing;
Fig. 3 compares sample picture two before and after preprocessing;
Fig. 4 compares sample picture three before and after preprocessing;
Fig. 5 compares sample picture four before and after preprocessing;
Fig. 6 is the saliency map of sample picture one;
Fig. 7 is the saliency map of sample picture two;
Fig. 8 is the saliency map of sample picture three;
Fig. 9 is the saliency map of sample picture four;
Figure 10 compares the results of segmenting sample picture one with the watershed algorithm and with other common segmentation algorithms;
Figure 11 compares the results of segmenting sample picture two with the watershed algorithm and with other common segmentation algorithms;
Figure 12 compares the results of segmenting sample picture three with the watershed algorithm and with other common segmentation algorithms;
Figure 13 compares the results of segmenting sample picture four with the watershed algorithm and with other common segmentation algorithms;
Figure 14 is the region-of-interest mask extracted automatically from sample picture one with the present method;
Figure 15 is the region-of-interest mask extracted automatically from sample picture two with the present method;
Figure 16 is the region-of-interest mask extracted automatically from sample picture three with the present method;
Figure 17 is the region-of-interest mask extracted automatically from sample picture four with the present method.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings and examples.
The present invention provides a region-of-interest extraction method based on watershed segmentation (SM-ROI). Following the characteristics of the biological visual attention mechanism, the method extracts image features with center-surround difference sampling and fuses features of different dimensions into a saliency map; the focus of attention obtained through competition among salient points serves as the seed point of the watershed segmentation, and merging the saliency map with the watershed regions yields the region of interest. Inhibition of return and proximity-first rules are then used to select and shift the focus of attention accurately, which gives the significance, i.e. the degree of interest, of each region. Experimental results show that the method conforms to the biological visual attention mechanism: while detecting regions of interest automatically, it effectively reduces over-segmentation and also handles large objects well.
The method of the present invention is based on image segmentation and, strictly speaking, is itself a segmentation method. Compared with other methods, it adds an image analysis step: each subregion is analyzed with the Itti visual attention model, so the method can locate the region of interest in the image well. As shown in Fig. 1, the concrete steps of the method are as follows:
Step 1: preprocess the original image.
Marker points expanded from salient points are used as the basis for judging the region of interest; the preprocessing consists mainly of three parts: pixel attribute analysis, regional mean computation, and pixel expansion according to the degree of deviation. Because each pixel in the image deviates from the image mean to a different degree, a parameter K is determined through simulation experiments: whenever the deviation of a pixel value is less than this parameter, the pixel is automatically enhanced. Together with filtering, color space conversion, threshold setting, region-of-interest extraction and image restoration, the concrete preprocessing steps are as follows:
Step 1.1: compute the pixel value s_i of each pixel (x, y) in the image, where i is the pixel index.
Step 1.2: let the number of pixels in the image be N and compute the mean pixel value m of the whole image:

    m = (1/N) · Σ_{i=1}^{N} s_i    (1)

where N is the total number of pixels in the image and s_i is the gray value of the i-th pixel.
Step 1.3: from the mean m and the pixel value s_i of pixel (x, y), measure the deviation of each pixel from the image mean:

    w_i = ‖s_i − m‖²    (2)

where m is the mean value obtained in step 1.2.
Step 1.4: set the parameter K to 1/10 of the image mean. When the deviation w_i of a pixel value from the mean is less than K, add K to the pixel values of its neighborhood; if the deviation w_i is greater than or equal to K, set the pixel values of its neighborhood points equal to the pixel value of the pixel itself.
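The following is a minimal sketch of steps 1.1 to 1.4 in Python (an illustration, not part of the patent). It assumes a grayscale image in a 2-D numpy array and interprets the neighborhood update of step 1.4 in the simplest way, adding K at the pixel itself when its deviation is below K; the function and variable names are chosen for illustration only.

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Contrast enhancement per steps 1.1-1.4 (one illustrative reading)."""
    s = img.astype(np.float64)
    m = s.mean()                  # equation (1): mean over all N pixels
    w = (s - m) ** 2              # equation (2): deviation of each pixel from m
    K = m / 10.0                  # step 1.4: K is 1/10 of the image mean
    out = s.copy()
    out[w < K] += K               # enhance pixels whose deviation is below K
    return np.clip(out, 0.0, 255.0)
```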
Step 2: compute the saliency map of the original image.
First, the preprocessed image is filtered. Every image contains some noise and distortion; the present invention adopts median filtering, which avoids blurring image detail, protects image edges, and filters out random impulse noise, so the filtered image has clearer contours. Median filtering slides a template containing an odd number of pixels over the image, aligning the template center with each pixel position in turn; the gray values of the pixels covered by the template are read and sorted in ascending order, and the median is assigned to the pixel at the template center.
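As a concrete illustration (not from the patent text), the median filtering described above can be written with scipy's median_filter, where size=3 plays the role of the odd-sized template:

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise(img: np.ndarray, size: int = 3) -> np.ndarray:
    # Slide an odd-sized window over the image; each center pixel is
    # replaced by the median of the sorted window values, which removes
    # impulse noise while preserving edges.
    return median_filter(img, size=size)
```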
Color space conversion is performed next. The HSI color space better matches normal human visual perception: it describes color by hue, saturation and intensity, and is more intuitive than the RGB color space, so the present invention converts the processed image from the RGB color space to the corresponding HSI color space:

    H = (1/360) · [90 − arctan(F/√3) + {0 if G > B; 180 if G ≤ B}]
    S = 1 − min(R, G, B)/I
    I = (R + G + B)/3
    F = (2R − G − B)/(G − B)    (3)

where R, G and B are the values of the red, green and blue components in RGB space; H, S and I are the values of the hue, saturation and intensity components in HSI space; and F is an intermediate value introduced for convenience of calculation.
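A sketch of the conversion of equation (3) is given below; it assumes R, G, B as float arrays, follows the patent's variant of the hue formula (angles in degrees), and adds small epsilons to guard the divisions that (3) leaves undefined when G = B or I = 0:

```python
import numpy as np

def rgb_to_hsi(rgb: np.ndarray, eps: float = 1e-6):
    """rgb: (..., 3) float array. Returns H, S, I per equation (3)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (R + G + B) / 3.0
    S = 1.0 - np.minimum(np.minimum(R, G), B) / (I + eps)
    F = (2.0 * R - G - B) / (G - B + eps)           # intermediate value F
    H = (90.0 - np.degrees(np.arctan(F / np.sqrt(3.0)))
         + np.where(G > B, 0.0, 180.0)) / 360.0     # hue branch of (3)
    return H, S, I
```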
The saliency of a region in an image is not conveyed by the pixel values of that region itself but by the contrast between the region and its surroundings; regions of high contrast easily attract human attention. The saliency of each pixel can therefore be obtained by computing its global contrast over the image. Methods for computing contrast exist in the prior art, but they are all rather complicated to implement, so the present invention adopts a simple and effective one. The global contrast values of pixel x over the hue, saturation and intensity channels are computed as follows:
    S_h(x) = Σ_{v_h = V_h^min}^{V_h^max} |H(x) − v_h| · hist_h(v_h)    (4)

    S_s(x) = Σ_{v_s = V_s^min}^{V_s^max} |S(x) − v_s| · hist_s(v_s)    (5)

    S_i(x) = Σ_{v_i = V_i^min}^{V_i^max} |I(x) − v_i| · hist_i(v_i)    (6)

    S(x) = √(S_h(x)² + S_s(x)² + S_i(x)²)    (7)
where, in formula (4), S_h(x) is the saliency of the h (hue) component of pixel x, V_h^min and V_h^max are the minimum and maximum of the h component in the image, and hist_h is the histogram of the image's h component; in formula (5), S_s(x), V_s^min, V_s^max and hist_s are the corresponding quantities for the s (saturation) component; and in formula (6), S_i(x), V_i^min, V_i^max and hist_i are the corresponding quantities for the i (intensity) component. Formula (7) combines the saliencies of the three components into an overall saliency, so it estimates the saliency of every pixel in the image.
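A sketch of equations (4) to (7) follows, assuming each HSI channel has been scaled to [0, 1] and is quantized to integer bin indices; the histogram turns the per-pixel sum into a table lookup, so the cost is independent of image size once the table is built:

```python
import numpy as np

def channel_saliency(chan: np.ndarray, bins: int = 256) -> np.ndarray:
    """Global-contrast saliency of one channel per equations (4)-(6)."""
    q = np.clip((chan * (bins - 1)).astype(int), 0, bins - 1)
    hist = np.bincount(q.ravel(), minlength=bins)        # hist_c(v)
    vals = np.arange(bins)
    # lookup[v] = sum_u |v - u| * hist[u]
    lookup = np.abs(vals[:, None] - vals[None, :]) @ hist
    return lookup[q].astype(np.float64)

def pixel_saliency(H: np.ndarray, S: np.ndarray, I: np.ndarray) -> np.ndarray:
    Sh, Ss, Si = (channel_saliency(c) for c in (H, S, I))
    return np.sqrt(Sh**2 + Ss**2 + Si**2)                # equation (7)
```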
Step 3: segment the original image with the watershed segmentation algorithm.
The notion of the watershed is based on treating an image as a three-dimensional visualization: two of the coordinates are spatial and the third is gray level. For such a "topographic" interpretation, consider three types of points: (a) points belonging to a regional minimum; (b) points at which a drop of water would certainly fall to a single minimum; and (c) points at which water would be equally likely to flow to more than one such minimum. For a particular regional minimum, the set of points satisfying condition (b) is called the "catchment basin" or "watershed" of that minimum. The points satisfying condition (c) form the crest lines of the topographic surface, termed "divide lines" or "watershed lines".
The main objective of segmentation algorithms based on these concepts is to find the watershed lines. The basic idea is simple: suppose a hole is punched at each regional minimum and the whole topography is flooded from below at a uniform rate. When the rising water in distinct catchment basins is about to merge, a dam is built to prevent it. Flooding eventually reaches a stage where only the tops of the dams rise above the waterline. These dam boundaries correspond to the divide lines of the watersheds; they are the (connected) boundaries extracted by the watershed algorithm.
The detailed process of segmenting the original image with the watershed algorithm is as follows:
Let M_1, M_2, …, M_R denote the coordinate sets of the regional minima of the image g(x, y) (typically a gradient image). Let C(M_i) be the set of coordinates of the points in the catchment basin associated with the regional minimum M_i (recall that the points of any catchment basin form a connected component). Let min and max denote the minimum and maximum of g(x, y). Finally, let T[n] denote the set of coordinates (s, t) for which g(s, t) < n, that is:

    T[n] = {(s, t) | g(s, t) < n}    (8)

Geometrically, T[n] is the set of coordinates of points in g(x, y) lying below the plane g(x, y) = n.
As the water level rises in integer steps from n = min + 1 to n = max + 1, the topography is progressively flooded. At each stage of the flooding, the algorithm needs to know the number of points below the water level. Conceptually, suppose the coordinates in T[n], which lie below the plane g(x, y) = n, are marked black and all other coordinates white; then, looking down on the xy-plane at any increment n of the water level, one sees a binary image in which the black points correspond to points of the surface lying below the plane g(x, y) = n.
Let C_n(M_i) denote the set of coordinates of points in the catchment basin associated with the minimum M_i that are flooded at stage n. C_n(M_i) can be viewed as the binary image given by

    C_n(M_i) = C(M_i) ∩ T[n]    (9)

where C(M_i) and T[n] are the two coordinate sets defined above (see formula (8)) and C_n(M_i) is their intersection. In other words, C_n(M_i) = 1 at position (x, y) if (x, y) ∈ C(M_i) and (x, y) ∈ T[n], and C_n(M_i) = 0 otherwise. The geometric interpretation of this result is straightforward: at stage n of the flooding, the binary image in T[n] is simply ANDed with C(M_i), the set associated with the regional minimum M_i.
Next, let C[n] denote the union of the flooded portions of all catchment basins at stage n:

    C[n] = ∪_{j=1}^{R} C_n(M_j)    (10)

where C_n(M_j) is defined by formula (9) and R is the total number of regional minima. Then let C[max + 1] be the union of all catchment basins:

    C[max + 1] = ∪_{j=1}^{R} C(M_j)    (11)

where max is the maximum of g(x, y), i.e., the largest pixel value occurring in the image. Elements of C_n(M_i) and T[n] are never replaced during execution of the algorithm, and the number of elements in these two sets grows monotonically with n, so C[n − 1] is a subset of C[n]. By formulas (10) and (11), C[n] is a subset of T[n], and therefore C[n − 1] is a subset of T[n]. From this follows an important result: each connected component of C[n − 1] is contained in exactly one connected component of T[n].
The algorithm for finding the watershed lines is initialized with C[min + 1] = T[min + 1]. It then proceeds recursively: assume that C[n − 1] has been constructed at step n. C[n] is obtained from C[n − 1] as follows. Let Q denote the set of connected components of T[n]. Then, for each connected component q ∈ Q[n], there are three possibilities:
(a) q ∩ C[n − 1] is empty.
(b) q ∩ C[n − 1] contains one connected component of C[n − 1].
(c) q ∩ C[n − 1] contains more than one connected component of C[n − 1].
Which of these conditions holds determines how C[n] is built from C[n − 1]. Condition (a) occurs when a new minimum is encountered, in which case q is incorporated into C[n − 1] to form C[n]. Condition (b) occurs when q lies within the catchment basin of some regional minimum, in which case q is likewise merged into C[n − 1] to form C[n]. Condition (c) occurs when a ridge separating two or more catchment basins is wholly or partly submerged; further flooding would cause the water in the different basins to merge and level out, so a dam (or dams, if several basins are involved) must be built within q to prevent overflow between basins.
Algorithm efficiency can be improved by using only the values of n that correspond to gray values actually present in g(x, y); these values, together with the minimum and maximum, can be determined from the histogram of g(x, y).
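The flooding procedure above is what library watershed implementations compute; a compact sketch using scikit-image (an illustration with assumed inputs, not the patent's own code) seeds the basins at the regional minima M_1…M_R of the gradient image and returns a label image whose region boundaries are the watershed lines:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def watershed_regions(gradient: np.ndarray) -> np.ndarray:
    """Label image of the catchment basins of a gradient image g(x, y)."""
    minima = gradient == ndi.minimum_filter(gradient, size=3)  # M_1..M_R
    markers, _ = ndi.label(minima)        # one seed per regional minimum
    return watershed(gradient, markers)   # basins C(M_i); borders = divide lines
```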
Step 4: compute the degree of interest of each region from the saliency map and the segmentation result.
First the saliency map is scaled to the same size as the original image, so that the value of each point in the saliency map corresponds to the degree of interest of a pixel in the original image, with all values lying between 0 and 1. Because the regions differ in size, simply summing the values in each region would always pick the largest region as the region of interest, and the largest region is often the background. Taking the regional mean instead may wrongly select tiny regions produced by over-segmentation. The present invention therefore considers both factors and defines the degree of interest of a region as:
    Interest(R_i) = γ_1 · Num(R_i)/Num(R) + γ_2 · (Σ_{j=1}^{Num(R_i)} r_j)/Num(R_i)    (12)
where γ_1 + γ_2 = 1 are the weight coefficients of region area and average degree of interest, respectively; Num(R_i) and Num(R) are the numbers of pixels contained in region i and in the whole image, respectively; and r_j is the degree of interest of a point within the region.
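A direct sketch of equation (12), assuming labels is the watershed label image, sal is the saliency map already rescaled to the image size with values in [0, 1], and gamma1 + gamma2 = 1; the equal weights are an assumption of this example, the patent only requiring that they sum to 1:

```python
import numpy as np

def region_interest(labels: np.ndarray, sal: np.ndarray,
                    gamma1: float = 0.5, gamma2: float = 0.5) -> dict:
    """Degree of interest Interest(R_i) for every region label."""
    total = labels.size                                # Num(R)
    scores = {}
    for r in np.unique(labels):
        mask = labels == r
        scores[int(r)] = (gamma1 * mask.sum() / total  # area term
                          + gamma2 * sal[mask].mean()) # mean-saliency term
    return scores
```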
Step 5: detect the region of interest.
The region-of-interest detection of the present method comprises two parts: image segmentation and interest measurement. The region detection part segments the image with a gradient watershed transform; the gradient G(x, y) describes the gray-level variation of objects and is expressed as
    G(x, y) = √((I(x, y) * G_x)² + (I(x, y) * G_y)²)    (13)

where I(x, y) is the gray-level image, * denotes convolution, and G_x and G_y are the Sobel edge masks:

    G_x = [−1 0 1; −2 0 2; −1 0 1],   G_y = [−1 −2 −1; 0 0 0; 1 2 1]
The image is first segmented into several regions; the gradient image is then watershed-labeled to produce the label matrix L_rgb; finally, the regions attended by the focus of attention are selected according to the image gradient to produce the mask image.
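As one concrete possibility (an illustration, not the patent's code), the gradient of equation (13) can be computed with scipy's Sobel operator, which applies standard 3×3 masks like those shown above:

```python
import numpy as np
from scipy import ndimage as ndi

def gradient_magnitude(gray: np.ndarray) -> np.ndarray:
    g = gray.astype(np.float64)
    gx = ndi.sobel(g, axis=1)             # I(x, y) * G_x
    gy = ndi.sobel(g, axis=0)             # I(x, y) * G_y
    return np.sqrt(gx**2 + gy**2)         # equation (13)
```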
In the interest measurement part, the visual attention mechanism is used to build the interest degree: the watershed segmentation result is taken, and the Itti attention model computes the interest degree of the region covered by the focus of attention. The saliency of a region depends on the saliency of image features such as brightness, color and orientation; feature saliency is obtained by computing the difference-of-Gaussians between the center and surround of an image region:
    DoG(x, y) = (1/(2πσ_c²)) · exp(−(x² + y²)/(2σ_c²)) − (1/(2πσ_s²)) · exp(−(x² + y²)/(2σ_s²))    (14)
where (x, y) are the pixel coordinates of the image; formula (14) as a whole is a difference-of-two-Gaussians model whose parameters σ_c and σ_s must be determined by experiment and are taken as 0.5 and 1.5, respectively, in this method. The saliency of the local components of the intensity map I, the color feature map C and the orientation feature map O is computed with formula (14); finally, the overall saliency map S is a combination of the intensity map I, the color feature map C and the orientation feature map O:

    S = w_i · N(I) + w_c · N(C) + w_o · N(O)    (15)

where N(·) is a normalization operator that constrains each component's values to the interval [0, 1], and w_i, w_c and w_o are the feature weights of the components, with w_i + w_c + w_o = 1.
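The following sketch illustrates equations (14) and (15); the center-surround response is implemented with two Gaussian blurs, which is equivalent to convolving with the DoG kernel of (14), and the equal weights w_i = w_c = w_o = 1/3 are an assumption of this example, the patent only requiring that they sum to 1:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(feat: np.ndarray, sigma_c: float = 0.5, sigma_s: float = 1.5):
    """Center-surround response of one feature map, equation (14)."""
    return gaussian_filter(feat, sigma_c) - gaussian_filter(feat, sigma_s)

def normalize(x: np.ndarray) -> np.ndarray:
    """N(.): constrain a map to the interval [0, 1]."""
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def overall_saliency(I, C, O, wi=1/3, wc=1/3, wo=1/3):
    assert abs(wi + wc + wo - 1.0) < 1e-9      # weights must sum to 1
    return (wi * normalize(dog(I)) + wc * normalize(dog(C))
            + wo * normalize(dog(O)))          # equation (15)
```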
After the saliency map S is obtained, the next step is to find its focus of attention. The focus of attention is the point of strongest visual saliency, the place in the scene attended to first; it is both selective and capable of shifting. The focus is selected and shifted by a winner-take-all (WTA) neural network, with the dynamics:

    V(t + δt) = [1 − δt/(CR)] · V(t) + (δt/C) · I(t)    (16)

where C is a capacitance, R a resistance and V a membrane voltage; the formula gives the membrane voltage V(t + δt) produced after time δt by the known output voltage V(t) at time t and the input current I(t); over time, the integrated voltage leads to a spike. To choose the focus of attention in an image, the saliency map is regarded as a two-dimensional array of integrate-and-fire neurons whose input currents correspond to the pixel values of the saliency map; the membrane potential of each neuron in the saliency map is then converted through its conductance into the input current of the corresponding neuron in the WTA network. The WTA network is likewise a two-dimensional array of integrate-and-fire neurons, but its neurons have smaller time constants, so their potentials rise faster than those of the saliency-map neurons. Consequently, a WTA neuron always fires before the corresponding saliency-map neuron, and the neuron that fires first corresponds to the neuron of maximum saliency in the map, i.e., the focus of attention.
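A toy sketch of the WTA selection driven by equation (16) follows; it assumes sal is normalized so its maximum is about 1, and the constants C, R, dt and the firing threshold are illustrative assumptions, not values from the patent. Because the charging rate grows with the input current, the first unit to reach threshold is the most salient pixel:

```python
import numpy as np

def select_focus(sal: np.ndarray, C: float = 1.0, R: float = 10.0,
                 dt: float = 0.1, thresh: float = 1.0):
    """Return (row, col) of the first integrate-and-fire unit to spike."""
    V = np.zeros_like(sal, dtype=np.float64)
    for _ in range(100000):                              # guard against flat input
        V = (1.0 - dt / (C * R)) * V + (dt / C) * sal    # equation (16)
        if V.max() >= thresh:
            break
    return np.unravel_index(np.argmax(V), V.shape)       # focus of attention
```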
Before the focus of attention shifts, the next focus must be chosen; that is, every shift of the focus of attention produces a new focus, and the saliency of the already-selected salient region must be suppressed, which is called inhibition of return. To this end an inhibition-of-return map suppresses the current region so that attention turns to the next one. At the k-th shift of the focus of attention, the pixels belonging to the (k − 1)-th salient region are all set to 0 in the k-th inhibition-of-return map IR_k, while the pixel values of the remaining positions are unchanged with respect to the (k − 1)-th map IR_{k−1}:

    IR_0(x, y) = 1
    IR_k(x, y) = 0 if (x, y) ∈ R_{k−1};  IR_k(x, y) = IR_{k−1}(x, y) otherwise

where R_k is the k-th salient region, (x, y) is a coordinate point, and IR_k(x, y) is the value of coordinate point (x, y) in the k-th inhibition-of-return map when the k-th focus of attention is chosen. Finally, the region of interest of the image is determined jointly from the focus of attention found here and the regional degree of interest computed in step 4.
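A sketch of the inhibition-of-return update above; region_mask is assumed to be a boolean mask of the (k − 1)-th salient region, and the suppressed saliency fed to the next WTA pass is simply sal * ir:

```python
import numpy as np

def update_ir(ir_prev: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """IR_k from IR_{k-1}: zero out the already-attended region."""
    ir = ir_prev.copy()
    ir[region_mask] = 0.0
    return ir

# Typical loop (illustrative): ir = np.ones_like(sal); then repeatedly
# pick a focus with select_focus(sal * ir) and suppress its region.
```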
Embodiment:
The present invention extracts the region of interest automatically, providing a good precondition for subsequent low-level coding operations or high-level semantic processing. Figs. 2 to 5 compare the sample pictures before and after preprocessing: in each figure, subfigure (a) is the original image, (b) is the gray histogram of the original image, (c) is the image after smoothing filtering, and (d) is the image after histogram enhancement. The enhanced images are visibly clearer than the originals and have stronger contrast.
Figs. 6 to 9 are the saliency maps of the sample pictures of Figs. 2 to 5, respectively, used by the Itti model in the computation that locates the focus of attention.
Figures 10 to 13 compare, for the sample pictures of Figs. 2 to 5, the segmentation by the watershed algorithm with that of other common segmentation algorithms. Subfigure (a) is the original image; (b) is the segmentation by the FCM (Fuzzy C-Means) algorithm; (c) by the SM_Ath (segmentation method based on adaptive threshold) algorithm; (d) by the SM_Edge (segmentation method based on edge) algorithm; (e) by the SM_Reg (segmentation method based on region) algorithm; and (f) by the method of the present invention. The region contours segmented by the present method are more distinct, making it easy to extract the region of interest.
Figure 14 is the region-of-interest mask extracted automatically by the present method from the sample picture of Fig. 2; the basic contour of the bear is correctly extracted, and the effect is good.
Figure 15 is the region-of-interest mask extracted automatically by the present method from the sample picture of Fig. 3; the contour of the hull is largely covered, but the upper part of the ship is not found, so the effect is less satisfactory.
Figures 16 and 17 are the region-of-interest masks extracted automatically by the present method from the sample pictures of Figs. 4 and 5, respectively; the basic contours of the people are correctly extracted, and the effect is good.

Claims (5)

1. A region-of-interest extraction method, characterized in that it comprises the following steps:
Step 1: preprocess the original image;
The concrete preprocessing steps are as follows:
Step 1.1: compute the pixel value s_i of each pixel (x, y) in the image, where i is the pixel index;
Step 1.2: let the number of pixels in the image be N and compute the mean pixel value m of the whole image:

    m = (1/N) · Σ_{i=1}^{N} s_i    (1)

where N is the total number of pixels in the image and s_i is the gray value of the i-th pixel;
Step 1.3: from the mean m and the pixel value s_i of pixel (x, y), measure the deviation of each pixel from the image mean:

    w_i = ‖s_i − m‖²    (2)

Step 1.4: set a parameter K; when the deviation w_i of a pixel value from the mean is less than K, add K to the pixel values of its neighborhood; if the deviation w_i is greater than or equal to K, set the pixel values of its neighborhood points equal to the pixel value of the pixel itself;
Step 2: compute the saliency map of the original image;
First filter the preprocessed image;
Then perform color space conversion, converting the filtered image from the RGB color space to the corresponding HSI color space;
Compute the saliency map of the original image, where the global contrast values of pixel x over the hue, saturation and intensity channels are computed as follows:

    S_h(x) = Σ_{v_h = V_h^min}^{V_h^max} |H(x) − v_h| · hist_h(v_h)    (4)

    S_s(x) = Σ_{v_s = V_s^min}^{V_s^max} |S(x) − v_s| · hist_s(v_s)    (5)

    S_i(x) = Σ_{v_i = V_i^min}^{V_i^max} |I(x) − v_i| · hist_i(v_i)    (6)

    S(x) = √(S_h(x)² + S_s(x)² + S_i(x)²)    (7)

where, in formula (4), S_h(x) is the saliency of the h component of pixel x, V_h^min and V_h^max are the minimum and maximum of the h component in the image, and hist_h is the histogram of the image's h component; in formula (5), S_s(x), V_s^min, V_s^max and hist_s are the corresponding quantities for the s component; and in formula (6), S_i(x), V_i^min, V_i^max and hist_i are the corresponding quantities for the i component; formula (7) combines the saliencies of the three components into an overall saliency, giving the saliency of every pixel in the image;
Step 3: segment the original image with the watershed segmentation algorithm;
The detailed process is as follows:
M_1, M_2, …, M_R denote the coordinate sets of the regional minima of the image g(x, y); C(M_i) is the set of coordinates of the points in the catchment basin associated with the regional minimum M_i; min and max denote the minimum and maximum of g(x, y); T[n] denotes the set of coordinates (s, t) for which g(s, t) < n, that is:

    T[n] = {(s, t) | g(s, t) < n}    (8)

T[n] is the set of coordinates of points in g(x, y) lying below the plane g(x, y) = n;
C_n(M_i) denotes the coordinate set of the points of the catchment basin flooded at stage n; C_n(M_i) is the binary image given by:

    C_n(M_i) = C(M_i) ∩ T[n]    (9)

where C_n(M_i) is the intersection of C(M_i) and T[n]; that is, C_n(M_i) = 1 at position (x, y) if (x, y) ∈ C(M_i) and (x, y) ∈ T[n], and C_n(M_i) = 0 otherwise;
C[n] denotes the union of the flooded portions of all catchment basins at stage n:

    C[n] = ∪_{j=1}^{R} C_n(M_j)    (10)

where R is the total number of regional minima; then let C[max + 1] be the union of all catchment basins:

    C[max + 1] = ∪_{j=1}^{R} C(M_j)    (11)

where max is the maximum of g(x, y), i.e., the largest pixel value occurring in the image;
The algorithm for finding the watershed lines is initialized with C[min + 1] = T[min + 1]; it then proceeds recursively, assuming that C[n − 1] has been constructed at step n; C[n] is obtained from C[n − 1] as follows: let Q denote the set of connected components of T[n]; then, for each connected component q ∈ Q[n], there are three possibilities:
(a) q ∩ C[n − 1] is empty;
(b) q ∩ C[n − 1] contains one connected component of C[n − 1];
(c) q ∩ C[n − 1] contains more than one connected component of C[n − 1];
Condition (a) occurs when a new minimum is encountered, in which case q is incorporated into C[n − 1] to form C[n]; condition (b) occurs when q lies within the catchment basin of some regional minimum, in which case q is merged into C[n − 1] to form C[n]; condition (c) occurs when a ridge separating two or more catchment basins is wholly or partly submerged;
Step 4: compute the degree of interest of each region from the saliency map and the segmentation result;
First scale the saliency map to the same size as the original image, then compute the degree of interest of each region:

    Interest(R_i) = γ_1 · Num(R_i)/Num(R) + γ_2 · (Σ_{j=1}^{Num(R_i)} r_j)/Num(R_i)    (12)

where γ_1 + γ_2 = 1 are the weight coefficients of region area and average degree of interest, respectively; Num(R_i) and Num(R) are the numbers of pixels contained in region i and in the whole image, respectively; and r_j is the degree of interest of a point within the region;
Step 5: detect the region of interest;
Region-of-interest detection comprises two parts, image segmentation and interest measurement:
The region detection part segments the image with a gradient watershed transform; the gradient G(x, y) describes the gray-level variation of objects and is expressed as

    G(x, y) = √((I(x, y) * G_x)² + (I(x, y) * G_y)²)    (13)

where I(x, y) is the gray-level image and G_x and G_y are the Sobel edge masks; the image is first segmented into several regions, the gradient image is then watershed-labeled to produce the label matrix L_rgb, and finally the regions attended by the focus of attention are selected according to the image gradient to produce the mask image;
In the interest measurement part, the visual attention mechanism is used to build the interest degree; the watershed segmentation result is taken, and the Itti attention model computes the interest degree of the region covered by the focus of attention; feature saliency is obtained by computing the difference-of-Gaussians between the center and surround of an image region:

    DoG(x, y) = (1/(2πσ_c²)) · exp(−(x² + y²)/(2σ_c²)) − (1/(2πσ_s²)) · exp(−(x² + y²)/(2σ_s²))    (14)

where (x, y) are the pixel coordinates of the image; formula (14) as a whole is a difference-of-two-Gaussians model with parameters σ_c and σ_s; the saliency of the local components of the intensity map I, the color feature map C and the orientation feature map O is computed with formula (14); finally, the overall saliency map S is a combination of the intensity map I, the color feature map C and the orientation feature map O:

    S = w_i · N(I) + w_c · N(C) + w_o · N(O)    (15)

where N(·) is a normalization operator that constrains each component's values to the interval [0, 1], and w_i, w_c and w_o are the feature weights of the components, with w_i + w_c + w_o = 1;
After the saliency map S is obtained, the next step is to find its focus of attention; the focus is selected and shifted by a winner-take-all (WTA) neural network, with the dynamics:

    V(t + δt) = [1 − δt/(CR)] · V(t) + (δt/C) · I(t)    (16)

where C is a capacitance, R a resistance and V a membrane voltage; the formula gives the membrane voltage V(t + δt) produced after time δt by the known output voltage V(t) at time t and the input current I(t); over time, the integrated voltage leads to a spike; to choose the focus of attention in an image, the saliency map is regarded as a two-dimensional array of integrate-and-fire neurons whose input currents correspond to the pixel values of the saliency map; the membrane potential of each neuron in the saliency map is then converted through its conductance into the input current of the corresponding neuron in the WTA network; the WTA network is likewise a two-dimensional array of integrate-and-fire neurons, but its neurons have smaller time constants, so their potentials rise faster than those of the saliency-map neurons; consequently, a WTA neuron always fires before the corresponding saliency-map neuron, and the neuron that fires first corresponds to the neuron of maximum saliency in the map, i.e., the focus of attention;
Before the focus of attention shifts, the next focus must be chosen; an inhibition-of-return map suppresses the current region so that attention turns to the next one; at the k-th shift of the focus of attention, the pixels belonging to the (k − 1)-th salient region are all set to 0 in the k-th inhibition-of-return map IR_k, while the pixel values of the remaining positions are unchanged with respect to the (k − 1)-th map IR_{k−1}:

    IR_0(x, y) = 1
    IR_k(x, y) = 0 if (x, y) ∈ R_{k−1};  IR_k(x, y) = IR_{k−1}(x, y) otherwise

where R_k is the k-th salient region, (x, y) is a coordinate point, and IR_k(x, y) is the value of coordinate point (x, y) in the k-th inhibition-of-return map when the k-th focus of attention is chosen; finally, the region of interest of the image is determined jointly from the focus of attention found and the regional degree of interest computed in step 4.
2. The region-of-interest extraction method according to claim 1, characterized in that the parameter K in step 1 is 1/10 of the image mean.
3. The region-of-interest extraction method according to claim 1, characterized in that the filtering in step 2 uses median filtering, specifically: a template containing an odd number of pixels is slid over the image, its center aligned with each pixel position in turn; the gray values of the pixels covered by the template are read and sorted in ascending order, and the median is assigned to the pixel at the template center.
4. The region-of-interest extraction method according to claim 1, characterized in that in step 2 the filtered image is converted from the RGB color space to the corresponding HSI color space, specifically:

    H = (1/360) · [90 − arctan(F/√3) + {0 if G > B; 180 if G ≤ B}]
    S = 1 − min(R, G, B)/I
    I = (R + G + B)/3
    F = (2R − G − B)/(G − B)    (3)

where R, G and B are the values of the red, green and blue components in RGB space; H, S and I are the values of the hue, saturation and intensity components in HSI space; and F is an intermediate value.
5. The region-of-interest extraction method according to claim 1, characterized in that the values of σ_c and σ_s in step 5 are 0.5 and 1.5, respectively.
CN201110400558.5A 2011-12-06 2011-12-06 Extraction method for region of interest Expired - Fee Related CN102567731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110400558.5A CN102567731B (en) 2011-12-06 2011-12-06 Extraction method for region of interest

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110400558.5A CN102567731B (en) 2011-12-06 2011-12-06 Extraction method for region of interest

Publications (2)

Publication Number Publication Date
CN102567731A 2012-07-11
CN102567731B CN102567731B (en) 2014-06-04

Family

ID=46413104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110400558.5A Expired - Fee Related CN102567731B (en) 2011-12-06 2011-12-06 Extraction method for region of interest

Country Status (1)

Country Link
CN (1) CN102567731B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503429B (en) * 2016-10-12 2021-12-28 上海联影医疗科技股份有限公司 Sampling method and radiotherapy plan optimization method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2273424A1 (en) * 2009-07-08 2011-01-12 Honeywell International Inc. Automated target detection and recognition system and method
CN101883291A (en) * 2010-06-29 2010-11-10 上海大学 Method for drawing viewpoints by reinforcing interested region
CN102184557A (en) * 2011-06-17 2011-09-14 电子科技大学 Salient region detection method for complex scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Qiaorong et al.: "Extraction of regions of interest in medical images based on visual attention" (基于视觉注意的医学图像感兴趣区域提取), Application Research of Computers (《计算机应用研究》) *

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020933A (en) * 2012-12-06 2013-04-03 天津师范大学 Multi-source image fusion method based on bionic visual mechanism
CN103106671A (en) * 2013-01-25 2013-05-15 西北工业大学 Method for detecting interested region of image based on visual attention mechanism
CN104977310A (en) * 2014-04-10 2015-10-14 征图新视(江苏)科技有限公司 Detection method and detection system of random bottom shading on cigarette pack
CN104599282A (en) * 2015-02-09 2015-05-06 国家海洋局第二海洋研究所 Sand wave body range detection method based on remote sensing images
CN104599282B (en) * 2015-02-09 2017-04-12 国家海洋局第二海洋研究所 Sand wave body range detection method based on remote sensing images
CN105261148A (en) * 2015-10-14 2016-01-20 广州医科大学 Trample event early warning evacuation method based on skynet monitoring system
CN106611402B (en) * 2015-10-23 2019-06-14 腾讯科技(深圳)有限公司 Image processing method and device
CN106611402A (en) * 2015-10-23 2017-05-03 腾讯科技(深圳)有限公司 Image processing method and device
CN105631456A (en) * 2015-12-15 2016-06-01 安徽工业大学 Particle swarm optimization ITTI model-based white cell region extraction method
CN105631456B (en) * 2015-12-15 2018-11-30 安徽工业大学 A kind of leucocyte method for extracting region based on particle group optimizing ITTI model
CN105657580A (en) * 2015-12-30 2016-06-08 北京工业大学 Capsule endoscopy video summary generation method
CN105657580B (en) * 2015-12-30 2018-11-13 北京工业大学 A kind of capsule endoscope video abstraction generating method
CN106980813A (en) * 2016-01-15 2017-07-25 福特全球技术公司 Generation is watched in machine learning attentively
CN105893999A (en) * 2016-03-31 2016-08-24 北京奇艺世纪科技有限公司 Method and device for extracting a region of interest
CN105913426A (en) * 2016-04-11 2016-08-31 中国科学院南京地理与湖泊研究所 ZY-3 image-based shallow lake seine zone extraction method
CN105913426B (en) * 2016-04-11 2018-07-06 中国科学院南京地理与湖泊研究所 A kind of shallow lake purse seine area extracting method based on ZY-3 images
US11714993B2 (en) 2016-05-20 2023-08-01 Deepmind Technologies Limited Classifying input examples using a comparison set
CN109478248B (en) * 2016-05-20 2022-04-05 渊慧科技有限公司 Method, system, and storage medium for classifying input samples using a comparison set
CN109478248A (en) * 2016-05-20 2019-03-15 渊慧科技有限公司 Classified using collection is compared to input sample
CN106128201A (en) * 2016-06-14 2016-11-16 北京航空航天大学 The attention training system that a kind of immersion vision and discrete force control task combine
CN106128201B (en) * 2016-06-14 2018-12-21 北京航空航天大学 A kind of attention training system of immersion vision and the combination of discrete force control task
CN106093066A (en) * 2016-06-24 2016-11-09 安徽工业大学 A kind of magnetic tile surface defect detection method based on the machine vision attention mechanism improved
CN106093066B (en) * 2016-06-24 2018-11-30 安徽工业大学 A kind of magnetic tile surface defect detection method based on improved machine vision attention mechanism
CN106203432A (en) * 2016-07-14 2016-12-07 杭州健培科技有限公司 A kind of localization method of area-of-interest based on convolutional Neural net significance collection of illustrative plates
CN106203432B (en) * 2016-07-14 2020-01-17 杭州健培科技有限公司 Positioning system of region of interest based on convolutional neural network significance map
CN107247952B (en) * 2016-07-28 2020-11-10 哈尔滨工业大学 Deep supervision-based visual saliency detection method for cyclic convolution neural network
CN106157319A (en) * 2016-07-28 2016-11-23 哈尔滨工业大学 The significance detection method that region based on convolutional neural networks and Pixel-level merge
CN106157319B (en) * 2016-07-28 2018-11-02 哈尔滨工业大学 The conspicuousness detection method in region and Pixel-level fusion based on convolutional neural networks
CN107247952A (en) * 2016-07-28 2017-10-13 哈尔滨工业大学 The vision significance detection method for the cyclic convolution neutral net supervised based on deep layer
CN107016823A (en) * 2017-05-31 2017-08-04 上海耐相智能科技有限公司 A kind of intelligent market early warning system
CN107403183A (en) * 2017-07-21 2017-11-28 桂林电子科技大学 The intelligent scissor method that conformity goal is detected and image segmentation is integrated
CN107609537A (en) * 2017-10-09 2018-01-19 上海海事大学 A kind of waterfront line detecting method based on HSV space Surface Picture feature
CN107609537B (en) * 2017-10-09 2020-12-29 上海海事大学 Water bank line detection method based on HSV space water surface image characteristics
CN108287250A (en) * 2018-02-01 2018-07-17 中国计量大学 Escalator step speed-measuring method based on machine vision
CN109101908A (en) * 2018-07-27 2018-12-28 北京工业大学 Driving procedure area-of-interest detection method and device
CN109886985A (en) * 2019-01-22 2019-06-14 浙江大学 Merge the image Accurate Segmentation method of deep learning network and watershed algorithm
CN109886985B (en) * 2019-01-22 2021-02-12 浙江大学 Image accurate segmentation method fusing deep learning network and watershed algorithm
CN111209802A (en) * 2019-12-24 2020-05-29 浙江大学 Robot visual image scene analysis method for graph focus transfer
CN111144314A (en) * 2019-12-27 2020-05-12 北京中科研究院 Method for detecting tampered face video
CN111144314B (en) * 2019-12-27 2020-09-18 北京中科研究院 Method for detecting tampered face video
CN112219224B (en) * 2019-12-30 2024-04-26 商汤国际私人有限公司 Image processing method and device, electronic equipment and storage medium
CN112219224A (en) * 2019-12-30 2021-01-12 商汤国际私人有限公司 Image processing method and device, electronic equipment and storage medium
CN111257326B (en) * 2020-01-22 2021-02-26 重庆大学 Metal processing area extraction method
CN111257326A (en) * 2020-01-22 2020-06-09 重庆大学 Metal processing area extraction method
CN111784714B (en) * 2020-08-13 2021-08-17 深圳市贝格蓝斯科技有限公司 Image separation method and system
CN111784715B (en) * 2020-08-13 2022-01-04 重庆七腾科技有限公司 Image separation method and system
CN111784715A (en) * 2020-08-13 2020-10-16 北京英迈琪科技有限公司 Image separation method and system
CN111784714A (en) * 2020-08-13 2020-10-16 北京英迈琪科技有限公司 Image separation method and system
CN113159026A (en) * 2021-03-31 2021-07-23 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and medium

Also Published As

Publication number Publication date
CN102567731B (en) 2014-06-04

Similar Documents

Publication Publication Date Title
CN102567731B (en) Extraction method for region of interest
CN103218619A (en) Image aesthetics evaluating method
CN108171701B (en) Significance detection method based on U network and counterstudy
CN104809187B (en) A kind of indoor scene semanteme marking method based on RGB D data
Lu et al. Salient object detection using concavity context
CN103810504B (en) Image processing method and device
CN106462771A (en) 3D image significance detection method
CN110119687A (en) Detection method based on the road surface slight crack defect that image procossing and convolutional neural networks combine
CN103440646A (en) Similarity obtaining method for color distribution and texture distribution image retrieval
CN106780434A (en) Underwater picture visual quality evaluation method
CN101853286B (en) Intelligent selection method of video thumbnails
CN103996195A (en) Image saliency detection method
CN101211356A (en) Image inquiry method based on marking area
CN108345892A (en) A kind of detection method, device, equipment and the storage medium of stereo-picture conspicuousness
CN109948593A (en) Based on the MCNN people counting method for combining global density feature
CN105957124B (en) With the natural image color edit methods and device for repeating situation elements
CN102156888A (en) Image sorting method based on local colors and distribution characteristics of characteristic points
CN108829711A (en) A kind of image search method based on multi-feature fusion
CN101587189B (en) Texture elementary feature extraction method for synthetizing aperture radar images
CN102088597A (en) Method for estimating video visual salience through dynamic and static combination
CN101763440A (en) Method for filtering searched images
Xiao et al. Segmentation of multispectral high-resolution satellite imagery using log Gabor filters
CN103198479A (en) SAR image segmentation method based on semantic information classification
CN101710418A (en) Interactive mode image partitioning method based on geodesic distance
CN105913377A (en) Image splicing method for reserving image correlation information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20161122

Address after: 13th floor, Building 7, Henan Province University Science and Technology Park, North Third Ring Road, Zhengzhou, Henan Province, 450063

Patentee after: Henan Zhongcheng information Polytron Technologies Inc

Address before: No. 37 Xueyuan Road, Haidian District, Beijing, 100191

Patentee before: Beihang University

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140604

Termination date: 20191206