CN109886279A - Image processing method, device, computer equipment and storage medium - Google Patents

Image processing method, device, computer equipment and storage medium

Info

Publication number
CN109886279A
CN109886279A (application CN201910067366.3A)
Authority
CN
China
Prior art keywords
pixel
sampling area
candidate region
feature map
pixel value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910067366.3A
Other languages
Chinese (zh)
Other versions
CN109886279B (en)
Inventor
王义文
张文龙
王健宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910067366.3A priority Critical patent/CN109886279B/en
Priority to PCT/CN2019/089196 priority patent/WO2020151153A1/en
Publication of CN109886279A publication Critical patent/CN109886279A/en
Application granted granted Critical
Publication of CN109886279B publication Critical patent/CN109886279B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, apparatus, computer device and storage medium that make the computed pixel values of the finally obtained fixed-size feature map more accurate. The method includes: obtaining a candidate-region feature map; dividing the candidate-region feature map into N×M equal sub-regions according to preset candidate-region pooling parameters; evenly dividing each sub-region into P sampling areas according to a preset sampling number P; determining the intersecting pixels in the candidate-region feature map that intersect each sampling area; determining the pixel value at the center of each sampling area from those intersecting pixels; determining the pixel value of each sub-region from the pixel values at the centers of its sampling areas; and obtaining the fixed-size candidate-region feature map from the pixel values of the sub-regions.

Description

Image processing method, device, computer equipment and storage medium
Technical field
The present invention relates to the field of image processing, and more particularly to an image processing method, apparatus, computer device and storage medium.
Background art
In the field of image processing, it is often necessary to detect and analyze a particular region of an image; this is known as target detection. In many target-detection frameworks, such as Fast-RCNN, Faster-RCNN and RFCN, the role of ROI Pooling is to pool the region of the feature map corresponding to a region proposal, i.e., the position coordinates of a candidate region, into a fixed-size feature map for the subsequent classification operations. However, the position of a region proposal is regressed by the model and is usually expressed as floating-point numbers, while the size of the feature map expected by the pooling operation is fixed. Pooling the corresponding region into a fixed-size feature map therefore involves two rounding (quantization) operations: (1) the boundary of the region proposal is rounded to integer coordinates; (2) the rounded boundary region is evenly divided into N×N units, and the boundary of each unit is rounded again.
After these two rounding quantizations, however, the resulting candidate box deviates from the position originally regressed, and this deviation affects the accuracy of detection or segmentation; this is known as the region misalignment problem. A conventional scheme avoids the integer quantization above by using bilinear interpolation to obtain the image value at a point whose coordinates are floating-point numbers, so that the whole feature-aggregation process becomes a continuous operation. Specifically, a sampling point (the "red point") is first computed by bilinear interpolation, and the pixel containing it is then located from a fixed side of that point, for example always the lower-left or lower-right pixel of the pixel where the sampling point lies. However, the pixel on that fixed side is not necessarily the pixel closest to the sampling point, so the calculated result may contain a certain error.
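A worked numerical illustration of this deviation, using the figures that appear in the embodiment below (a 665×665 candidate region, a feature-map stride of 32, and a 7×7 pooled output); the pixel-loss figures are approximate:

```latex
% Stage 1: the region boundary is quantized on the feature map
\frac{665}{32} \approx 20.78 \;\longrightarrow\; 20
\qquad (\text{a loss of } 0.78 \text{ cells} \approx 25 \text{ input pixels})
% Stage 2: each of the 7 bins is quantized again
\frac{20}{7} \approx 2.86 \;\longrightarrow\; 2
\qquad (\text{a loss of up to } 0.86 \text{ cells} \approx 27 \text{ input pixels per bin})
```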
Summary of the invention
The present invention provides an image processing method, apparatus, computer device and storage medium that make the computed pixel values of the finally obtained fixed-size feature map more accurate.
An image processing method, comprising:
obtaining a candidate-region feature map, the candidate-region feature map being obtained by mapping a candidate region onto a feature map, the feature map being obtained by performing feature extraction on an input image with a convolutional neural network, and the candidate region being obtained by performing target-area detection on the input image according to a preset target-detection algorithm;
dividing the candidate-region feature map into N×M equal sub-regions according to preset candidate-region pooling parameters, where N and M are positive integers greater than or equal to 1, and the preset candidate-region pooling parameters include a width parameter and a length parameter for the pooling operation;
evenly dividing each sub-region into P sampling areas according to a preset sampling number P, where P is a positive integer greater than or equal to 2;
determining the intersecting pixels in the candidate-region feature map that intersect the sampling area;
determining the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area;
determining the pixel value of each sub-region according to the pixel values at the centers of the sampling areas corresponding to that sub-region;
obtaining the fixed-size candidate-region feature map according to the pixel value of each sub-region.
An image processing apparatus, comprising:
a first obtaining module, configured to obtain a candidate-region feature map, the candidate-region feature map being obtained by mapping a candidate region onto a feature map, the feature map being obtained by performing feature extraction on an input image with a convolutional neural network, and the candidate region being obtained by performing target-area detection on the input image according to a preset target-detection algorithm;
a first division module, configured to divide the candidate-region feature map obtained by the first obtaining module into N×M equal sub-regions according to preset candidate-region pooling parameters, where N and M are positive integers greater than or equal to 1, and the preset candidate-region pooling parameters include a width parameter and a length parameter for the pooling operation;
a second division module, configured to evenly divide each sub-region produced by the first division module into P sampling areas according to a preset sampling number P, where P is a positive integer greater than or equal to 2;
a first determining module, configured to determine the intersecting pixels in the candidate-region feature map obtained by the first obtaining module that intersect the sampling areas divided by the second division module;
a second determining module, configured to determine the pixel value at the center of the sampling area according to the intersecting pixels, determined by the first determining module, that intersect the sampling area in the candidate-region feature map;
a third determining module, configured to determine the pixel value of each sub-region according to the pixel values, determined by the second determining module, at the centers of the sampling areas corresponding to the sub-region;
a second obtaining module, configured to obtain the fixed-size candidate-region feature map according to the pixel value of each sub-region determined by the third determining module.
A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above image processing method when executing the computer program. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the above image processing method.
In the image processing method, apparatus, computer device and storage medium described above, the overlap relationship between the sampling areas and the pixels effectively guarantees that the corresponding candidate region is pooled into a fixed-size feature map, and because the computation is based on the intersecting pixels, the pixel value computed for each sampling area is more accurate, so that the computed pixel values of the finally obtained fixed-size feature map are also more accurate.
Brief description of the drawings
In order to explain the technical solutions of the present invention more clearly, the drawings needed in the description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art could obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system architecture to which the image processing method of the present invention is applied;
Fig. 2 is a flow diagram of an embodiment of the image processing method of the present invention;
Fig. 3 is a schematic diagram of sampling areas of a candidate-region feature map in the present invention;
Fig. 4 is a flow diagram of another embodiment of the image processing method of the present invention;
Fig. 5 is another schematic diagram of sampling areas of a candidate-region feature map in the present invention;
Fig. 6 is another schematic diagram of sampling areas of a candidate-region feature map in the present invention;
Fig. 7 is a flow diagram of another embodiment of the image processing method of the present invention;
Fig. 8 is a flow diagram of another embodiment of the image processing method of the present invention;
Fig. 9 is a flow diagram of another embodiment of the image processing method of the present invention;
Fig. 10 is another schematic diagram of sampling areas of a candidate-region feature map in the present invention;
Fig. 11 is a flow diagram of another embodiment of the image processing method of the present invention;
Fig. 12 is a structural schematic diagram of an embodiment of the image processing apparatus of the present invention;
Fig. 13 is a structural schematic diagram of an embodiment of the computer device of the present invention.
Detailed description of the embodiments
The technical solutions of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
An embodiment of the present invention provides an image processing method that can be applied in the system architecture shown in Fig. 1, where a server executes the processing provided by this image processing method on an input image to obtain a fixed-size feature map after processing. The server can be implemented as an independent server or as a server cluster composed of multiple servers. Embodiments of the present invention are described in detail below.
In one embodiment, as shown in Fig. 2, an image processing method is provided and includes the following steps:
S10: obtain a candidate-region feature map, the candidate-region feature map being obtained by mapping a candidate region onto a feature map, the feature map being obtained by performing feature extraction on an input image with a convolutional neural network, and the candidate region being obtained by performing target-area detection on the input image according to a preset target-detection algorithm.
The embodiment of the present invention is applied to region-based convolutional neural networks (Regions with Convolutional Neural Networks, RCNN), including but not limited to Fast RCNN, Faster RCNN and Region-based Fully Convolutional Networks (RFCN). It will be appreciated that in the region-based convolutional neural networks above, the input image passes through convolutional layers, pooling layers, a region-of-interest pooling layer (ROI pooling) and fully connected layers; the embodiment of the present invention concerns the ROI pooling processing performed after the candidate regions (region proposals) and the feature maps of the input image have been obtained.
A candidate region is a position where a target may appear in the input image, i.e., a region of interest, found in advance by detecting candidate regions of the input image using characteristic information such as texture, edges and color. The candidate region is obtained by performing target-area detection on the input image according to a preset target-detection algorithm. Specifically, in Fast RCNN the preset target-detection algorithm uses selective search to extract candidate regions directly from the input image, while in Faster RCNN the preset target-detection algorithm first performs convolutional feature extraction on the input image to obtain a feature map and then uses a region proposal network (RPN) to extract candidate regions from the feature map. The details are not elaborated here, but it will be understood that the candidate regions corresponding to the input image can be obtained through the processing of the preset target-detection algorithm above.
The candidate-region feature map is the image obtained by mapping a candidate region onto the feature map corresponding to the input image. It will be understood that after the input image is fed into the convolutional layers of the region-based convolutional neural network, the feature map corresponding to the input image is obtained through the feature-extraction processing of the convolutional layers; the specific feature-extraction process of the convolutional layers is likewise not elaborated here. In the embodiment of the present invention, the candidate-region feature map corresponding to the input image can thus be obtained. It should be noted that the candidate-region feature map referred to in this embodiment means each candidate-region feature map corresponding to the input image; for ease of description, the image processing method proposed by the embodiment of the present invention is described below for one candidate-region feature map.
S20: divide the candidate-region feature map into N×M equal sub-regions according to preset candidate-region pooling parameters, where N and M are positive integers greater than or equal to 1, and the preset candidate-region pooling parameters include a width parameter and a length parameter.
The preset candidate-region pooling parameters are parameters preconfigured in the ROI pooling layer; they are the parameters used to pool the candidate-region feature map into a fixed-size feature map, and specifically include a width parameter (pooled-h) and a length parameter (pooled-w).
For example, suppose the size of the input image is 800×800, the convolutional layers of the region-based convolutional neural network use a VGG16 network, and feat-stride = 32 (indicating that after the convolutional layers the input image is reduced to 1/32 of its original size), so the feature map corresponding to the input image obtained after the VGG16 processing is 25×25. Suppose the input image has one candidate region of size 665×665; after this candidate region is mapped onto the feature map, the size of the resulting candidate-region feature map is 665/32 ≈ 20.78, i.e., the candidate-region feature map is 20.78×20.78. It should be noted that in the embodiment of the present invention, during the ROI pooling processing floating-point numbers are, for ease of description, kept to two decimal places. Suppose the width parameter and length parameter of the preset candidate-region pooling parameters are pooled-h = 7 and pooled-w = 7; then the candidate-region feature map is to be fixed, after processing, into a feature map of size 7×7. In other words, the 20.78×20.78 candidate region mapped onto the feature map is divided into 7×7 = 49 equal sub-regions, and the size of each sub-region is 20.78/7 ≈ 2.97, that is, each sub-region is 2.97×2.97.
It should be noted that in the embodiment of the present invention the candidate-region feature map is divided into N×M equal sub-regions according to the preset candidate-region pooling parameters; the specific values of N and M are configured according to practical application requirements and are related to the preset candidate-region pooling parameters, and the embodiment of the present invention does not specifically limit them. For example, N and M may both be 8, in which case the candidate-region feature map is fixed, after processing, into a feature map of size 8×8.
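As an aside, the sizing arithmetic of the worked example above can be reproduced in a few lines of Python; the stride, region size and pooling parameters are the illustrative values assumed in this embodiment:

```python
# Sizing arithmetic for the worked example above (illustrative values only).
feat_stride = 32          # VGG16 backbone reduces the input image to 1/32
region_size = 665         # side length of the candidate region on the 800x800 input
pooled_h = pooled_w = 7   # preset candidate-region pooling parameters

region_on_fmap = region_size / feat_stride   # 665 / 32 ≈ 20.78, kept as a float
sub_region = region_on_fmap / pooled_h       # 20.78 / 7 ≈ 2.97 per sub-region

print(f"candidate-region feature map: {region_on_fmap:.2f} x {region_on_fmap:.2f}")
print(f"{pooled_h * pooled_w} sub-regions of {sub_region:.2f} x {sub_region:.2f} each")
```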
S30: evenly divide each sub-region into P sampling areas according to a preset sampling number P, where P is a positive integer greater than or equal to 2.
In the embodiment of the present invention, after the candidate-region feature map has been divided into N×M equal sub-regions according to the preset candidate-region pooling parameters, each sub-region is processed as follows: each sub-region is evenly divided into P sampling areas according to the preset sampling number P, where P is a positive integer greater than or equal to 2 and the sampling areas have the same shape type as the pixels of the candidate-region feature map. The preset sampling number P is the number of samples set for computing the pixel value of each sub-region; for example, P may be 4, 8, etc., which the embodiment of the present invention does not limit.
As above, for example, after the 20.78×20.78 candidate region mapped onto the feature map has been divided into 49 equal sub-regions, if the preset sampling number P is 4, each of the 49 equal sub-regions is evenly divided into 4 sampling areas.
S40: determine the intersecting pixels in the candidate-region feature map that intersect the sampling areas.
After each sub-region has been evenly divided into P sampling areas according to the preset sampling number P in step S30, the intersecting pixels in the candidate-region feature map that intersect each sampling area are determined. That is, after each sampling area corresponding to each sub-region has been obtained through the processing of step S30, the pixels in the candidate-region feature map that intersect each sampling area are determined. Specifically, the intersecting pixels of each sampling area can be determined from the coordinate position of the sampling area and the coordinate positions of the pixels of the candidate-region feature map. For example, through step S40 the center positions of the 4 sampling areas corresponding to each sub-region, and the intersecting pixels that intersect those 4 sampling areas, can be obtained.
For example, as shown in Fig. 3:
The region shown in Fig. 3 is a part of the candidate-region feature map and includes pixels 1 to 16; the regions enclosed by the thick boxes A, B, C and D are sampling areas, and sampling areas A, B, C and D together constitute one sub-region. Taking sampling area A as an example, the center position of sampling area A can be determined by the bilinear interpolation algorithm, and the intersecting pixels that intersect sampling area A, namely pixels 1, 2, 5 and 6, are determined. For each sampling area corresponding to a sub-region, its center position and the intersecting pixels that intersect it can be determined in this way; for example, for sampling area B the pixels that intersect sampling area B are pixels 2, 3, 6 and 7.
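A minimal sketch of step S40, under the assumption that each sampling area is described by a floating-point bounding box in feature-map coordinates and that feature-map pixels are unit squares; the function name and coordinate convention are hypothetical:

```python
import math

def intersecting_pixels(x0, y0, x1, y1):
    """S40: integer (row, col) indices of the feature-map pixels that overlap
    the sampling area [x0, x1] x [y0, y1] (feature-map coordinates, unit pixels)."""
    cols = range(int(math.floor(x0)), int(math.ceil(x1)))
    rows = range(int(math.floor(y0)), int(math.ceil(y1)))
    return [(r, c) for r in rows for c in cols]

# For a sampling area such as A in Fig. 3, which covers parts of pixels 1, 2, 5
# and 6, the list would contain the row/column indices of those four pixels.
```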
S50: determine the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area.
After the center position of the sampling area has been determined and the intersecting pixels in the candidate-region feature map that intersect the sampling area have been determined, the pixel value at the center of the sampling area is determined according to those intersecting pixels.
As shown in Fig. 3, taking sampling area A as an example, the pixel value at the center of sampling area A can be determined from the intersecting pixels 1, 2, 5 and 6 that intersect sampling area A. It will be appreciated that, based on the same calculation, the pixel value at the center of each sampling area corresponding to each sub-region can be obtained.
S60: determine the pixel value of each sub-region according to the pixel values at the centers of the sampling areas corresponding to that sub-region.
After step S50, the pixel value at the center of each sampling area of each sub-region in the candidate region is obtained. In the embodiment of the present invention, the pixel value of each sub-region is then determined according to the pixel values at the centers of the sampling areas corresponding to that sub-region.
As shown in Fig. 3, the pixel values at the centers of the sampling areas A, B, C and D corresponding to the sub-region can be obtained, and the pixel value of the sub-region formed by sampling areas A, B, C and D in Fig. 3 is determined accordingly. Based on the same calculation, the pixel value corresponding to each sub-region can be obtained; this is not illustrated one by one here.
S70: obtain the fixed-size candidate-region feature map according to the pixel value of each sub-region.
After the pixel value of each sub-region has been determined according to the pixel values at the centers of the sampling areas corresponding to each sub-region, the fixed-size candidate-region feature map is obtained by processing the pixel values of the sub-regions.
As above, for example, after the 20.78×20.78 candidate region mapped onto the feature map has been divided into 49 equal sub-regions of size 2.97×2.97, the pixel value corresponding to each sub-region is obtained through steps S10 to S70, giving the pixel values of the 49 sub-regions, so that a 7×7 candidate-region feature map is output. The fixed-size 7×7 candidate-region feature map can then be used for the subsequent classification and regression processing of the region-based convolutional neural network. It can be seen that the embodiment of the present invention provides an image processing method in which the overlap relationship between the sampling areas and the pixels effectively guarantees that the corresponding candidate region is pooled into a fixed-size feature map; and because the computation is based on the intersecting pixels, the pixel value computed for each sampling area is more accurate, so that the computed pixel values of the finally obtained fixed-size feature map are also more accurate.
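For orientation, a minimal end-to-end sketch of steps S20 to S70 follows. It assumes the feature map is a 2-D array, the candidate region is an axis-aligned box in feature-map coordinates, P is a perfect square (e.g. 4), and the sampling-area values are aggregated into a sub-region value by a simple mean (the embodiment does not fix the aggregation). `center_value` is a hypothetical helper standing in for steps S40 and S50; one possible form is sketched after step S534' below.

```python
import numpy as np

def pool_candidate_region(fmap, box, pooled_h=7, pooled_w=7, p=4):
    """Pool candidate region `box` = (x0, y0, x1, y1) on feature map `fmap`
    into a fixed pooled_h x pooled_w feature map (sketch of S20-S70)."""
    x0, y0, x1, y1 = box
    bin_w = (x1 - x0) / pooled_w                  # S20: N x M equal sub-regions
    bin_h = (y1 - y0) / pooled_h
    grid = int(np.sqrt(p))                        # S30: P sampling areas per sub-region
    out = np.zeros((pooled_h, pooled_w), dtype=float)
    for i in range(pooled_h):
        for j in range(pooled_w):
            samples = []
            for si in range(grid):
                for sj in range(grid):
                    sx0 = x0 + j * bin_w + sj * bin_w / grid
                    sy0 = y0 + i * bin_h + si * bin_h / grid
                    sx1, sy1 = sx0 + bin_w / grid, sy0 + bin_h / grid
                    # S40-S50: value at the sampling-area centre from the
                    # intersecting pixels (hypothetical helper, see later sketch)
                    samples.append(center_value(fmap, sx0, sy0, sx1, sy1))
            out[i, j] = np.mean(samples)          # S60: pixel value of the sub-region
    return out                                    # S70: fixed-size feature map
```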
It should be noted that, in connection with the embodiment above and depending on the size relationship between the sampling areas and the pixels of the candidate-region feature map, the embodiment of the present invention also provides specific implementations for determining the pixel value at the center of the sampling area according to the intersecting pixels that intersect the sampling area in the candidate-region feature map; these are introduced separately below.
In one embodiment, as shown in Fig. 4, before step S50, i.e. before determining the pixel value at the center of the sampling area according to the intersecting pixels that intersect the sampling area in the candidate-region feature map, the method further includes the following step:
S80: judge whether the size of the sampling area is greater than or equal to the size of a pixel of the candidate-region feature map.
In step S30, after each sub-region has been evenly divided into P sampling areas according to the preset sampling number P, it is judged whether the sampling areas of each sub-region are greater than or equal to the size of a pixel in the candidate-region feature map. It will be appreciated that because every pixel of the input image has the same size, every pixel of the mapped candidate-region feature map also has the same size, and because the sampling areas obtained by dividing the candidate region are all equal in size, it suffices to judge whether any one sampling area is greater than or equal to the size of any one pixel of the candidate-region feature map.
S90: if it is judged that the size of the sampling area is greater than or equal to the size of a pixel of the candidate-region feature map, generate an auxiliary box corresponding to the sampling area centered on its center position, where the shape of the auxiliary box is the same as that of a pixel of the candidate-region feature map and the size of the auxiliary box is less than or equal to the size of a pixel of the candidate-region feature map.
When it is judged that the size of the sampling area is greater than or equal to the size of a pixel of the candidate-region feature map, an auxiliary box corresponding to the sampling area is generated centered on the center position; that is, a corresponding auxiliary box is generated for every sampling area of every sub-region. The shape of the auxiliary box is the same as that of a pixel of the candidate-region feature map, and the size of the auxiliary box is less than or equal to the size of a pixel of the candidate-region feature map. It should be noted that in the embodiment of the present invention bilinear interpolation may be used to determine the center position of each sampling area; the specific computation of the center position is not elaborated here.
For example, referring to Fig. 5, the size of the sampling areas is greater than or equal to the size of a pixel of the candidate-region feature map (Fig. 5 is illustrated with the auxiliary box the same size as a pixel). Taking sampling area D as an example, in the embodiment of the present invention an auxiliary box d corresponding to sampling area D can be generated centered on the center position of sampling area D, as shown by the hatched box in Fig. 5, where the shape of the auxiliary box d is the same as that of a pixel of the candidate-region feature map and the size of the auxiliary box d is less than or equal to that of a pixel of the candidate-region feature map. It should be noted that sampling area D is only used as an example here; a corresponding auxiliary box can be generated according to the embodiment of the present invention for every sampling area of every sub-region in the candidate-region feature map.
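A sketch of step S90 under the assumptions that feature-map pixels are unit squares and that the auxiliary box is taken to be exactly pixel-sized, as in Fig. 5; the function name is hypothetical:

```python
def auxiliary_box(cx, cy, pixel_size=1.0):
    """S90: auxiliary box with the same shape as a feature-map pixel (a unit
    square here), centered on the sampling-area centre (cx, cy)."""
    half = pixel_size / 2.0
    return (cx - half, cy - half, cx + half, cy + half)   # (x0, y0, x1, y1)
```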
In step S50, determining the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area then specifically includes the following step:
S50': determine the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area and the auxiliary box corresponding to the sampling area.
After the auxiliary box of each sampling area in the candidate-region feature map has been generated, the pixel value at the center of the sampling area is determined according to the intersecting pixels in the candidate-region feature map that intersect the sampling area and the corresponding auxiliary box.
For ease of understanding, referring again to Fig. 5 and taking sampling area D as an example, the intersecting pixels that intersect sampling area D are pixels 6, 7, 10 and 11, and the auxiliary box corresponding to sampling area D is auxiliary box d; the pixel value at the center of sampling area D is therefore determined from pixels 6, 7, 10 and 11 and auxiliary box d.
It should be noted that the other sampling areas of the candidate-region feature map are handled in the same way as the center-point pixel value calculation for sampling area D above. For example, referring to Fig. 6, for sampling area A the intersecting pixels that intersect sampling area A are pixels 1, 2, 5 and 6 and the auxiliary box corresponding to sampling area A is auxiliary box a, so the pixel value at the center of sampling area A is determined from pixels 1, 2, 5, 6 and auxiliary box a. The calculations for the center points of the other sampling areas are not illustrated one by one here.
In one embodiment, as shown in Fig. 7, step S50', i.e. determining the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area and the auxiliary box corresponding to the sampling area, specifically includes the following steps:
S51': obtain the intersection area of the auxiliary box corresponding to the sampling area with each of the intersecting pixels.
S52': determine the first target pixel value corresponding to the sampling area according to the intersection areas of the auxiliary box corresponding to the sampling area with each pixel and the pixel values of those pixels.
S53': use the first target pixel value corresponding to the sampling area as the pixel value at the center of the sampling area.
For step S51', continuing with Fig. 5 and taking sampling area D as an example, the auxiliary box d corresponding to sampling area D intersects each of the intersecting pixels, namely pixels 6, 7, 10 and 11. In this step, the intersection areas of auxiliary box d with the intersecting pixels 6, 7, 10 and 11 can be determined; in this embodiment they are denoted C6d, C7d, C10d and C11d respectively.
For steps S52' and S53', again taking sampling area D as an example, after C6d, C7d, C10d and C11d have been obtained, the first target pixel value corresponding to sampling area D is determined from C6d, C7d, C10d and C11d and the intersecting pixels 6, 7, 10 and 11, and this first target pixel value is used as the pixel value at the center of sampling area D. It should be noted that for the other sampling areas of the candidate-region feature map, the pixel value at the corresponding center point can likewise be obtained through steps S51' to S53'; this is not repeated here.
In one embodiment, as shown in Fig. 8, determining the first target pixel value corresponding to the sampling area according to the intersection areas of the auxiliary box corresponding to the sampling area with each pixel and the pixel values of those pixels (step S52') specifically includes the following steps:
S531': for each pixel, compute the product of the pixel value of that pixel and its intersection area.
S532': add up the products of the pixel values of the pixels and their intersection areas to obtain a first sum of products.
S533': compute the sum of the intersection areas of the pixels to obtain a first intersection-area sum.
S534': compute the quotient of the first sum of products and the first intersection-area sum to obtain the first target pixel value corresponding to the sampling area.
The embodiment of the present invention is illustrated here using sampling area D in the candidate-region feature map as an example:
For step S531', after C6d, C7d, C10d and C11d have been obtained, the product of the pixel value of each of the intersecting pixels of the sampling area and its intersection area is computed, i.e.: the product of C6d and the pixel value A6 of pixel 6 is computed and denoted A6·C6d; the product of C7d and the pixel value A7 of pixel 7 is computed and denoted A7·C7d; the product of C10d and the pixel value A10 of pixel 10 is computed and denoted A10·C10d; and the product of C11d and the pixel value A11 of pixel 11 is computed and denoted A11·C11d.
For step S532', after A6·C6d, A7·C7d, A10·C10d and A11·C11d have been computed, these products are added to obtain the first sum of products, namely A6·C6d + A7·C7d + A10·C10d + A11·C11d.
For step S533', the sum of the intersection areas of the auxiliary box d corresponding to sampling area D with each pixel, i.e. the first intersection-area sum, is C6d + C7d + C10d + C11d.
For step S534', the quotient UD of the first sum of products corresponding to sampling area D and the first intersection-area sum is computed according to the following formula:
UD = (A6·C6d + A7·C7d + A10·C10d + A11·C11d) / (C6d + C7d + C10d + C11d)
It should be noted that the embodiment of the present invention is illustrated using sampling area D as an example; for the other sampling areas in the candidate-region feature map, the corresponding first target pixel value is calculated in the same way as for sampling area D above, and this is not repeated here.
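A sketch of steps S51' to S534' under the same unit-pixel assumption, using rectangle intersection to obtain the overlap areas; it reuses the hypothetical `auxiliary_box` helper above and omits image-boundary clipping for brevity:

```python
import math

def center_value(fmap, sx0, sy0, sx1, sy1):
    """S51'-S534': pixel value at the centre of sampling area [sx0,sx1] x [sy0,sy1]
    as an area-weighted average over the pixels intersected by the auxiliary box."""
    cx, cy = (sx0 + sx1) / 2.0, (sy0 + sy1) / 2.0
    ax0, ay0, ax1, ay1 = auxiliary_box(cx, cy)                   # S90
    num = den = 0.0
    for r in range(int(math.floor(ay0)), int(math.ceil(ay1))):   # intersecting pixels
        for c in range(int(math.floor(ax0)), int(math.ceil(ax1))):
            # overlap area of the auxiliary box with unit pixel (r, c)
            w = max(0.0, min(ax1, c + 1) - max(ax0, c)) * \
                max(0.0, min(ay1, r + 1) - max(ay0, r))
            num += fmap[r, c] * w                                # S531'/S532': sum of A_i * C_i
            den += w                                             # S533': sum of C_i
    return num / den if den else 0.0                             # S534': first target value
```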
It should be noted that, depending on the size relationship between the sampling areas and the pixels of the candidate-region feature map, the embodiment of the present invention also provides another implementation for determining the pixel value at the center of the sampling area according to the intersecting pixels that intersect the sampling area in the candidate-region feature map:
As shown in Fig. 9, after step S80, i.e. after judging whether the size of the sampling area is greater than or equal to the size of a pixel of the candidate-region feature map, the method further includes the following steps:
S100: if it is judged that the size of the sampling area is less than the size of a pixel of the candidate-region feature map, determine the second target pixel value corresponding to the sampling area according to the pixel values of the intersecting pixels in the candidate-region feature map that intersect the sampling area and the intersection areas of the sampling area with those intersecting pixels.
S110: use the second target pixel value corresponding to the sampling area as the pixel value at the center of the sampling area.
As noted above, the preset sampling number is preconfigured, so the size of the sampling areas obtained by dividing each sub-region of the candidate region according to the preset sampling number P may be smaller than the size of a pixel in the candidate-region feature map. For example, when the candidate region is a small target, the candidate-region feature map obtained by mapping onto the feature map of the input image will be small, and the size of each divided sampling area may be smaller than the size of a pixel of the candidate-region feature map.
For example, as shown in Fig. 10, the sampling areas A, B, C and D into which the sub-region is divided are smaller than a pixel; the second target pixel value corresponding to the sampling area is then determined according to the pixel values of the intersecting pixels in the candidate-region feature map that intersect the sampling area and the intersection areas of the sampling area with those intersecting pixels. Taking sampling area B as an example, the intersecting pixels that intersect sampling area B are pixels 5 and 6; the pixel value of pixel 5 is denoted A5 and the pixel value of pixel 6 is denoted A6, and the intersection areas of sampling area B with pixels 5 and 6 are denoted C5B and C6B respectively. The second target pixel value corresponding to sampling area B is then determined from A5, A6, C5B and C6B, and this second target pixel value is used as the pixel value at the center of sampling area B. For the other sampling areas whose size is smaller than the pixel size of the candidate-region feature map, the pixel value at the center of the sampling area can be determined in the same way, which is not repeated one by one here.
In one embodiment, as shown in Fig. 11, in step S100, if it is judged that the size of the sampling area is less than the size of a pixel of the candidate-region feature map, determining the second target pixel value corresponding to the sampling area according to the pixel values of the intersecting pixels that intersect the sampling area in the candidate-region feature map and the intersection areas of the sampling area with those intersecting pixels specifically includes the following steps:
S101: for each intersecting pixel, compute the product of the pixel value of that intersecting pixel and its intersection area.
S102: add up the products of the pixel values of the intersecting pixels and their intersection areas to obtain a second sum of products.
S103: compute the sum of the intersection areas of the intersecting pixels to obtain a second intersection-area sum.
S104: compute the quotient of the second sum of products and the second intersection-area sum to obtain the second target pixel value corresponding to the sampling area.
For steps S101 to S104, continuing with Fig. 10 and taking sampling area B in the candidate-region feature map as an example, after C5B and C6B have been obtained, the product of the pixel value of each intersecting pixel and its intersection area is computed, i.e.: the product of C5B and the pixel value A5 of pixel 5 is computed and denoted A5·C5B, and the product of C6B and the pixel value A6 of pixel 6 is computed and denoted A6·C6B.
Still taking sampling area B in the candidate-region feature map as an example, after the products A5·C5B and A6·C6B have been computed, they are added to obtain the second sum of products, namely A5·C5B + A6·C6B.
The sum of the intersection areas of the intersecting pixels corresponding to sampling area B, i.e. the second intersection-area sum, is C5B + C6B.
The quotient of the second sum of products and the second intersection-area sum is calculated according to the following formula:
UB = (A5·C5B + A6·C6B) / (C5B + C6B)
It should be noted that the embodiment of the present invention is illustrated using sampling area B as an example; for the other sampling areas in the candidate-region feature map, the corresponding second target pixel value is calculated in the same way as for sampling area B above, and this is not repeated here.
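For the case where the sampling area is smaller than a pixel, a companion sketch of steps S101 to S104 follows; the only change from the previous sketch is that the overlap areas are taken with the sampling area itself rather than with an auxiliary box (names hypothetical, unit pixels assumed):

```python
import math

def small_sample_value(fmap, sx0, sy0, sx1, sy1):
    """S101-S104: second target pixel value for a sampling area smaller than a
    pixel; weights are the overlap areas of the sampling area with its pixels."""
    num = den = 0.0
    for r in range(int(math.floor(sy0)), int(math.ceil(sy1))):
        for c in range(int(math.floor(sx0)), int(math.ceil(sx1))):
            w = max(0.0, min(sx1, c + 1) - max(sx0, c)) * \
                max(0.0, min(sy1, r + 1) - max(sy0, r))
            num += fmap[r, c] * w      # S101/S102: second sum of products
            den += w                   # S103: second intersection-area sum
    return num / den if den else 0.0   # S104: second target pixel value
```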
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and does not constitute any limitation on the implementation of the embodiments of the present invention.
In one embodiment, an image processing apparatus is provided, and the image processing apparatus corresponds one-to-one to the image processing method in the above embodiments. As shown in Fig. 12, the image processing apparatus includes a first obtaining module 101, a first division module 102, a second division module 103, a first determining module 104, a second determining module 105, a third determining module 106 and a second obtaining module 107. The functional modules are described in detail as follows:
a first obtaining module, configured to obtain a candidate-region feature map, the candidate-region feature map being obtained by mapping a candidate region onto a feature map, the feature map being obtained by performing feature extraction on an input image with a convolutional neural network, and the candidate region being obtained by performing target-area detection on the input image according to a preset target-detection algorithm;
a first division module, configured to divide the candidate-region feature map obtained by the first obtaining module into N×M equal sub-regions according to preset candidate-region pooling parameters, where N and M are positive integers greater than or equal to 1, and the preset candidate-region pooling parameters include a width parameter and a length parameter for the pooling operation;
a second division module, configured to evenly divide each sub-region produced by the first division module into P sampling areas according to a preset sampling number P, where P is a positive integer greater than or equal to 2;
a first determining module, configured to determine the intersecting pixels in the candidate-region feature map obtained by the first obtaining module that intersect the sampling areas divided by the second division module;
a second determining module, configured to determine the pixel value at the center of the sampling area according to the intersecting pixels, determined by the first determining module, that intersect the sampling area in the candidate-region feature map;
a third determining module, configured to determine the pixel value of each sub-region according to the pixel values, determined by the second determining module, at the centers of the sampling areas corresponding to the sub-region;
a second obtaining module, configured to obtain the fixed-size candidate-region feature map according to the pixel value of each sub-region determined by the third determining module.
In one embodiment, the image processing apparatus further includes a fourth determining module and a generation module;
the fourth determining module is configured to judge, before the second determining module determines the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area, whether the size of the sampling area is greater than or equal to the size of a pixel of the candidate-region feature map;
the generation module is configured to, if the fourth determining module judges that the size of the sampling area is greater than or equal to the size of a pixel of the candidate-region feature map, generate an auxiliary box corresponding to the sampling area centered on the center position of the sampling area, where the shape of the auxiliary box is the same as that of a pixel of the candidate-region feature map and the size of the auxiliary box is less than or equal to the size of a pixel of the candidate-region feature map;
the second determining module is specifically configured to determine the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area and the auxiliary box corresponding to the sampling area.
In one embodiment, when determining the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area and the auxiliary box corresponding to the sampling area, the second determining module is configured to:
obtain the intersection area of the auxiliary box corresponding to the sampling area with each of the intersecting pixels;
determine the first target pixel value corresponding to the sampling area according to the intersection areas of the auxiliary box corresponding to the sampling area with each pixel and the pixel values of those pixels;
use the first target pixel value corresponding to the sampling area as the pixel value at the center of the sampling area.
In one embodiment, when determining the first target pixel value corresponding to the sampling area according to the intersection areas of the auxiliary box corresponding to the sampling area with each pixel and the pixel values of those pixels, the second determining module is specifically configured to:
for each pixel, compute the product of the pixel value of that pixel and its intersection area;
add up the products of the pixel values of the pixels and their intersection areas to obtain a first sum of products;
compute the sum of the intersection areas of the pixels to obtain a first intersection-area sum;
compute the quotient of the first sum of products and the first intersection-area sum to obtain the first target pixel value corresponding to the sampling area.
In one embodiment, the image processing apparatus further includes a fifth determining module:
the fifth determining module is configured to, after it has been judged whether the size of the sampling area is greater than or equal to the size of a pixel of the candidate-region feature map, and if the size of the sampling area is judged to be less than the size of a pixel of the candidate-region feature map, determine the second target pixel value corresponding to the sampling area according to the pixel values of the intersecting pixels in the candidate-region feature map that intersect the sampling area and the intersection areas of the sampling area with those intersecting pixels; and
use the second target pixel value corresponding to the sampling area as the pixel value at the center of the sampling area.
In one embodiment, when determining the second target pixel value corresponding to the sampling area according to the pixel values of the intersecting pixels in the candidate-region feature map that intersect the sampling area and the intersection areas of the sampling area with those intersecting pixels, the fifth determining module is specifically configured to:
for each intersecting pixel, compute the product of the pixel value of that intersecting pixel and its intersection area;
add up the products of the pixel values of the intersecting pixels and their intersection areas to obtain a second sum of products;
compute the sum of the intersection areas of the intersecting pixels to obtain a second intersection-area sum;
compute the quotient of the second sum of products and the second intersection-area sum to obtain the second target pixel value corresponding to the sampling area.
For specific limitations of the image processing apparatus, reference may be made to the limitations of the image processing method above, which are not repeated here. Each module in the above image processing apparatus may be implemented wholly or partly by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided; the computer device may be a server, and its internal structure may be as shown in Fig. 13. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store feature maps and the like. The network interface of the computer device is used to communicate with external terminals through a network connection. When executed by the processor, the computer program implements an image processing method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the following steps:
obtaining a candidate-region feature map, the candidate-region feature map being obtained by mapping a candidate region onto a feature map, the feature map being obtained by performing feature extraction on an input image with a convolutional neural network, and the candidate region being obtained by performing target-area detection on the input image according to a preset target-detection algorithm;
dividing the candidate-region feature map into N×M equal sub-regions according to preset candidate-region pooling parameters, where N and M are positive integers greater than or equal to 1, and the preset candidate-region pooling parameters include a width parameter and a length parameter for the pooling operation;
evenly dividing each sub-region into P sampling areas according to a preset sampling number P, where P is a positive integer greater than or equal to 2;
determining the intersecting pixels in the candidate-region feature map that intersect the sampling area;
determining the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area;
determining the pixel value of each sub-region according to the pixel values at the centers of the sampling areas corresponding to that sub-region;
obtaining the fixed-size candidate-region feature map according to the pixel value of each sub-region.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the following steps:
obtaining a candidate-region feature map, the candidate-region feature map being obtained by mapping a candidate region onto a feature map, the feature map being obtained by performing feature extraction on an input image with a convolutional neural network, and the candidate region being obtained by performing target-area detection on the input image according to a preset target-detection algorithm;
dividing the candidate-region feature map into N×M equal sub-regions according to preset candidate-region pooling parameters, where N and M are positive integers greater than or equal to 1, and the preset candidate-region pooling parameters include a width parameter and a length parameter for the pooling operation;
evenly dividing each sub-region into P sampling areas according to a preset sampling number P, where P is a positive integer greater than or equal to 2;
determining the intersecting pixels in the candidate-region feature map that intersect the sampling area;
determining the pixel value at the center of the sampling area according to the intersecting pixels in the candidate-region feature map that intersect the sampling area;
determining the pixel value of each sub-region according to the pixel values at the centers of the sampling areas corresponding to that sub-region;
obtaining the fixed-size candidate-region feature map according to the pixel value of each sub-region.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by instructing the relevant hardware through a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It is apparent to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is only given as an example. In practical applications, the above functions may be allocated to different functional units or modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The embodiments described above are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.

Claims (10)

1. An image processing method, characterized by comprising:
obtaining a candidate region feature map, where the candidate region feature map is obtained by mapping a candidate region onto a feature map, the feature map is obtained by performing feature extraction on an input image through a convolutional neural network, and the candidate region is obtained by performing target area detection on the input image according to a preset target detection algorithm;
dividing the candidate region feature map into N×M equally sized sub-regions according to a preset candidate region pooling parameter, where N and M are positive integers greater than or equal to 1, and the preset candidate region pooling parameter includes a width parameter and a length parameter used for the pooling operation;
dividing each sub-region evenly into P sampling areas according to a preset sampling number P, where P is a positive integer greater than or equal to 2;
determining the intersection pixels that intersect with each sampling area in the candidate region feature map;
determining the pixel value of the center point of the sampling area according to the intersection pixels that intersect with the sampling area in the candidate region feature map;
determining the pixel value of each sub-region according to the pixel values of the center points of the sampling areas corresponding to that sub-region;
obtaining the fixed-size candidate region feature map according to the pixel value of each sub-region.
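A minimal sketch of the intersection-pixel step in the claim above: treating each feature-map pixel as a unit cell, the pixels a sampling area intersects are simply the cells its bounds cover. The function and variable names are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def intersection_pixels(feat, y0, x0, y1, x1):
    """Integer (row, col) coordinates of the feature-map pixels whose unit
    cells overlap the sampling area (y0, x0, y1, x1)."""
    H, W = feat.shape
    ys = range(max(0, int(np.floor(y0))), min(H, int(np.ceil(y1))))
    xs = range(max(0, int(np.floor(x0))), min(W, int(np.ceil(x1))))
    return [(py, px) for py in ys for px in xs]

feat = np.zeros((5, 5))
print(intersection_pixels(feat, 1.2, 0.8, 2.6, 2.1))
# [(1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
```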
2. The image processing method according to claim 1, characterized in that before determining the pixel value of the center point of the sampling area according to the intersection pixels that intersect with the sampling area in the candidate region feature map, the method further comprises:
judging whether the size of the sampling area is greater than or equal to the size of a pixel of the candidate region feature map;
if it is judged that the size of the sampling area is greater than or equal to the size of a pixel of the candidate region feature map, generating an auxiliary frame corresponding to the sampling area, centered on the center point of the sampling area, where the shape of the auxiliary frame is the same as that of a pixel of the candidate region feature map, and the size of the auxiliary frame is less than or equal to the size of a pixel of the candidate region feature map;
the determining the pixel value of the center point of the sampling area according to the intersection pixels that intersect with the sampling area in the candidate region feature map comprises:
determining the pixel value of the center point of the sampling area according to the intersection pixels that intersect with the sampling area in the candidate region feature map and the auxiliary frame corresponding to the sampling area.
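One possible reading of the auxiliary-frame step in code, assuming the size comparison is made per dimension and the auxiliary frame is made exactly one pixel wide and high (the claim only requires it to be no larger than a pixel); all names are illustrative:

```python
def auxiliary_frame(y0, x0, y1, x1, pixel_size=1.0):
    """Build a pixel-shaped auxiliary frame centered on the sampling area's
    center point when the sampling area is at least one pixel in size;
    return None otherwise so the caller can use the small-sampling-area branch."""
    if (y1 - y0) >= pixel_size and (x1 - x0) >= pixel_size:
        cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
        half = pixel_size / 2.0
        return (cy - half, cx - half, cy + half, cx + half)
    return None

print(auxiliary_frame(0.0, 0.0, 2.0, 3.0))   # (0.5, 1.0, 1.5, 2.0)
print(auxiliary_frame(0.0, 0.0, 0.6, 0.6))   # None -> small-sampling-area branch
```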
3. The image processing method according to claim 2, characterized in that the determining the pixel value of the center point of the sampling area according to the intersection pixels that intersect with the sampling area in the candidate region feature map and the auxiliary frame corresponding to the sampling area comprises:
obtaining the intersecting area between the auxiliary frame corresponding to the sampling area and each pixel in the intersection pixels;
determining a first target pixel value corresponding to the sampling area according to the intersecting area between the auxiliary frame corresponding to the sampling area and each pixel and the pixel value of each pixel;
taking the first target pixel value corresponding to the sampling area as the pixel value of the center point of the sampling area.
4. The image processing method according to claim 3, characterized in that the determining a first target pixel value corresponding to the sampling area according to the intersecting area between the auxiliary frame corresponding to the sampling area and each pixel and the pixel value of each pixel comprises:
calculating, for each pixel, the product of the pixel value of the pixel and the intersecting area of the pixel;
adding the products of the pixel values of the pixels and the intersecting areas of the pixels to obtain a first sum of products;
calculating the sum of the intersecting areas of the pixels to obtain a first intersection pixel area sum;
calculating the quotient of the first sum of products and the first intersection pixel area sum to obtain the first target pixel value corresponding to the sampling area.
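Claims 3 and 4 describe an area-weighted mean: each pixel value overlapped by the auxiliary frame is multiplied by the overlap area, the products are summed, and the sum is divided by the total overlapped area. Below is a minimal sketch under the same assumptions as earlier (a single-channel NumPy feature map, unit pixel cells); the names are illustrative:

```python
import numpy as np

def first_target_pixel_value(feat, frame):
    """First target pixel value: overlap-area-weighted mean of the pixels
    intersected by the auxiliary frame (fy0, fx0, fy1, fx1)."""
    fy0, fx0, fy1, fx1 = frame
    H, W = feat.shape
    prod_sum, area_sum = 0.0, 0.0   # first sum of products, first intersection-area sum
    for py in range(max(0, int(np.floor(fy0))), min(H, int(np.ceil(fy1)))):
        for px in range(max(0, int(np.floor(fx0))), min(W, int(np.ceil(fx1)))):
            # overlap of the frame with the unit pixel cell [py, py+1) x [px, px+1)
            ov = max(0.0, min(fy1, py + 1) - max(fy0, py)) * \
                 max(0.0, min(fx1, px + 1) - max(fx0, px))
            prod_sum += feat[py, px] * ov
            area_sum += ov
    return prod_sum / area_sum if area_sum > 0.0 else 0.0

feat = np.arange(25, dtype=float).reshape(5, 5)
# A one-pixel-sized frame straddling four pixels with values 6, 7, 11, 12.
print(first_target_pixel_value(feat, (1.3, 1.7, 2.3, 2.7)))   # ~8.2
```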
5. The image processing method according to claim 2, characterized in that after judging whether the size of the sampling area is greater than or equal to the size of a pixel of the candidate region feature map, the method further comprises:
if it is judged that the size of the sampling area is less than the size of a pixel of the candidate region feature map, determining a second target pixel value corresponding to the sampling area according to the pixel value of each intersection pixel that intersects with the sampling area in the candidate region feature map and the intersecting area between the sampling area and each intersection pixel;
taking the second target pixel value corresponding to the sampling area as the pixel value of the center point of the sampling area.
6. The image processing method according to claim 5, characterized in that the determining a second target pixel value corresponding to the sampling area according to the pixel value of each intersection pixel that intersects with the sampling area in the candidate region feature map and the intersecting area between the sampling area and each intersection pixel comprises:
calculating, for each intersection pixel, the product of the pixel value of the intersection pixel and the intersecting area of the intersection pixel;
adding the products of the pixel values of the intersection pixels and the intersecting areas of the intersection pixels to obtain a second sum of products;
calculating the sum of the intersecting areas of the intersection pixels to obtain a second intersection pixel area sum;
calculating the quotient of the second sum of products and the second intersection pixel area sum to obtain the second target pixel value corresponding to the sampling area.
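The second target pixel value of claims 5 and 6 is the same kind of weighted average, only computed directly over the sampling area rather than over an auxiliary frame. A sketch under the same assumptions as before (illustrative names, unit pixel cells):

```python
import numpy as np

def second_target_pixel_value(feat, area):
    """Second target pixel value: overlap-area-weighted mean of the pixels
    intersected by a sampling area (y0, x0, y1, x1) smaller than one pixel."""
    y0, x0, y1, x1 = area
    H, W = feat.shape
    prod_sum, area_sum = 0.0, 0.0   # second sum of products, second intersection-area sum
    for py in range(max(0, int(np.floor(y0))), min(H, int(np.ceil(y1)))):
        for px in range(max(0, int(np.floor(x0))), min(W, int(np.ceil(x1)))):
            ov = max(0.0, min(y1, py + 1) - max(y0, py)) * \
                 max(0.0, min(x1, px + 1) - max(x0, px))
            prod_sum += feat[py, px] * ov
            area_sum += ov
    return prod_sum / area_sum if area_sum > 0.0 else 0.0

feat = np.arange(25, dtype=float).reshape(5, 5)
# A 0.6 x 0.6 sampling area straddling the boundary between pixels valued 6 and 7.
print(second_target_pixel_value(feat, (1.2, 1.7, 1.8, 2.3)))   # ~6.5
```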
7. An image processing apparatus, characterized by comprising:
a first obtaining module, configured to obtain a candidate region feature map, where the candidate region feature map is obtained by mapping a candidate region onto a feature map, the feature map is obtained by performing feature extraction on an input image through a convolutional neural network, and the candidate region is obtained by performing target area detection on the input image according to a preset target detection algorithm;
a first division module, configured to divide the candidate region feature map obtained by the first obtaining module into N×M equally sized sub-regions according to a preset candidate region pooling parameter, where N and M are positive integers greater than or equal to 1, and the preset candidate region pooling parameter includes a width parameter and a length parameter used for the pooling operation;
a second division module, configured to evenly divide each sub-region divided by the first division module into P sampling areas according to a preset sampling number P, where P is a positive integer greater than or equal to 2;
a first determining module, configured to determine the intersection pixels, in the candidate region feature map obtained by the first obtaining module, that intersect with the sampling area divided by the second division module;
a second determining module, configured to determine the pixel value of the center point of the sampling area according to the intersection pixels, determined by the first determining module, that intersect with the sampling area in the candidate region feature map;
a third determining module, configured to determine the pixel value of each sub-region according to the pixel values of the center points of the sampling areas, corresponding to the sub-region, determined by the second determining module;
a second obtaining module, configured to obtain the fixed-size candidate region feature map according to the pixel value of each sub-region determined by the third determining module.
8. The image processing apparatus according to claim 7, characterized in that the image processing apparatus further comprises a fourth determining module and a generation module;
the fourth determining module is configured to judge, before the second determining module determines the pixel value of the center point of the sampling area according to the intersection pixels that intersect with the sampling area in the candidate region feature map, whether the size of the sampling area is greater than or equal to the size of a pixel of the candidate region feature map;
the generation module is configured to, if the fourth determining module judges that the size of the sampling area is greater than or equal to the size of a pixel of the candidate region feature map, generate an auxiliary frame corresponding to the sampling area, centered on the center point of the sampling area, where the shape of the auxiliary frame is the same as that of a pixel of the candidate region feature map, and the size of the auxiliary frame is less than or equal to the size of a pixel of the candidate region feature map;
the second determining module is specifically configured to determine the pixel value of the center point of the sampling area according to the intersection pixels that intersect with the sampling area in the candidate region feature map and the auxiliary frame corresponding to the sampling area.
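The module split of claims 7 and 8 can be mirrored as a thin class whose methods correspond one-to-one with the named modules. This skeleton is purely illustrative of that structure; the method names are translated from the claims, while the signatures and wiring are assumptions (the bodies would reuse helpers like those sketched earlier):

```python
class ImageProcessingDevice:
    """Module layout mirroring claims 7-8 (illustrative, not from the patent)."""

    def __init__(self, pool_params, sampling_number):
        self.pool_params = pool_params   # preset candidate-region pooling parameter (N, M)
        self.P = sampling_number         # preset sampling number P

    def first_obtaining_module(self, input_image):
        """Obtain the candidate-region feature map from the input image."""
        raise NotImplementedError

    def first_division_module(self, region_feat):
        """Divide the candidate-region feature map into N x M sub-regions."""
        raise NotImplementedError

    def second_division_module(self, sub_region):
        """Divide a sub-region evenly into P sampling areas."""
        raise NotImplementedError

    def first_determining_module(self, region_feat, sampling_area):
        """Determine the intersection pixels of a sampling area."""
        raise NotImplementedError

    def second_determining_module(self, region_feat, sampling_area, intersection_pixels):
        """Determine the center-point pixel value of a sampling area."""
        raise NotImplementedError

    def third_determining_module(self, center_values):
        """Determine a sub-region's pixel value from its P center-point values."""
        raise NotImplementedError

    def second_obtaining_module(self, sub_region_values):
        """Assemble the fixed-size candidate-region feature map."""
        raise NotImplementedError

    # The fourth determining module and the generation module of claim 8 would add
    # the pixel-size check and auxiliary-frame generation before the second
    # determining module runs.
```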
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the image processing method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, characterized in that the image processing method according to any one of claims 1 to 6 is implemented when the computer program is executed by a processor.
CN201910067366.3A 2019-01-24 2019-01-24 Image processing method, device, computer equipment and storage medium Active CN109886279B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910067366.3A CN109886279B (en) 2019-01-24 2019-01-24 Image processing method, device, computer equipment and storage medium
PCT/CN2019/089196 WO2020151153A1 (en) 2019-01-24 2019-05-30 Image processing method and apparatus, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910067366.3A CN109886279B (en) 2019-01-24 2019-01-24 Image processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109886279A true CN109886279A (en) 2019-06-14
CN109886279B CN109886279B (en) 2023-09-29

Family

ID=66926823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910067366.3A Active CN109886279B (en) 2019-01-24 2019-01-24 Image processing method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN109886279B (en)
WO (1) WO2020151153A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553685B (en) * 2021-07-27 2024-03-22 久瓴(江苏)数字智能科技有限公司 Lighting device arrangement method and device and electronic equipment


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301383B (en) * 2017-06-07 2020-11-24 华南理工大学 Road traffic sign identification method based on Fast R-CNN
CN108133217B (en) * 2017-11-22 2018-10-30 北京达佳互联信息技术有限公司 Characteristics of image determines method, apparatus and terminal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2141928A1 (en) * 2008-06-30 2010-01-06 Thomson Licensing S.A. Device and method for analysing an encoded image
US20170220904A1 (en) * 2015-04-02 2017-08-03 Tencent Technology (Shenzhen) Company Limited Training method and apparatus for convolutional neural network model
CN106599866A (en) * 2016-12-22 2017-04-26 上海百芝龙网络科技有限公司 Multidimensional user identity identification method
CN107145889A (en) * 2017-04-14 2017-09-08 中国人民解放军国防科学技术大学 Target identification method based on double CNN networks with RoI ponds
CN108876791A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 Image processing method, device and system and storage medium
CN107808141A (en) * 2017-11-08 2018-03-16 国家电网公司 A kind of electric transmission line isolator explosion recognition methods based on deep learning
CN108764143A (en) * 2018-05-29 2018-11-06 北京字节跳动网络技术有限公司 Image processing method, device, computer equipment and storage medium
CN109117876A (en) * 2018-07-26 2019-01-01 成都快眼科技有限公司 A kind of dense small target deteection model building method, model and detection method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415239A (en) * 2019-08-01 2019-11-05 腾讯科技(深圳)有限公司 Image processing method, device, equipment, medical treatment electronic equipment and medium
CN110415239B (en) * 2019-08-01 2022-12-16 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, medical electronic device, and medium
CN110646006A (en) * 2019-09-02 2020-01-03 平安科技(深圳)有限公司 Assembly path planning method and related device
CN111462094A (en) * 2020-04-03 2020-07-28 联觉(深圳)科技有限公司 PCBA component detection method and device and computer readable storage medium
CN112256906A (en) * 2020-10-23 2021-01-22 安徽启新明智科技有限公司 Method, device and storage medium for marking annotation on display screen
CN115393586A (en) * 2022-08-18 2022-11-25 北京爱科农科技有限公司 Farmland breeding region dividing method and device, computer equipment and medium

Also Published As

Publication number Publication date
CN109886279B (en) 2023-09-29
WO2020151153A1 (en) 2020-07-30

Similar Documents

Publication Publication Date Title
CN109886279A (en) Image processing method, device, computer equipment and storage medium
CN110517278B (en) Image segmentation and training method and device of image segmentation network and computer equipment
CN111079632A (en) Training method and device of text detection model, computer equipment and storage medium
US11282271B2 (en) Method in constructing a model of a scenery and device therefor
CN110675440B (en) Confidence evaluation method and device for three-dimensional depth data and computer equipment
CN109711419A (en) Image processing method, device, computer equipment and storage medium
CN113850807B (en) Image sub-pixel matching positioning method, system, device and medium
CN115601774B (en) Table recognition method, apparatus, device, storage medium and program product
CN109102524B (en) Tracking method and tracking device for image feature points
CN108122280A (en) The method for reconstructing and device of a kind of three-dimensional point cloud
CN114254584A (en) Comparison method, modeling method and device of chip products and storage medium
CN112348116A (en) Target detection method and device using spatial context and computer equipment
CN113160330A (en) End-to-end-based camera and laser radar calibration method, system and medium
CN115713487A (en) Defect identification method, device and storage medium for X-ray welding seam image
CN114048845B (en) Point cloud repairing method and device, computer equipment and storage medium
CN111143146A (en) Health state prediction method and system of storage device
US20160267352A1 (en) System and method for constructing a statistical shape model
CN109360215B (en) Method, device and equipment for searching outer contour of three-dimensional model and storage medium
CN115082592A (en) Curve generation method, system, computer equipment and storage medium
CN111311731B (en) Random gray level map generation method and device based on digital projection and computer equipment
CN117193278A (en) Method, apparatus, computer device and storage medium for dynamic edge path generation
CN114677468A (en) Model correction method, device, equipment and storage medium based on reverse modeling
CN110489510B (en) Road data processing method and device, readable storage medium and computer equipment
CN110211230B (en) Space planning model integration method and device, computer equipment and storage medium
CN114612609A (en) Curved surface reflection line calculation method, curved surface reflection line calculation device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant