CN111951189A - Data enhancement method for multi-scale texture randomization

Data enhancement method for multi-scale texture randomization

Info

Publication number: CN111951189A (application CN202010813012.1A)
Authority: CN (China)
Prior art keywords: img, sample, image, frame, samples
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111951189B (en)
Inventors: 井焜, 陈英鹏, 许野平, 刘辰飞
Assignee (current and original): Synthesis Electronic Technology Co Ltd
Priority and filing date: 2020-08-13
Publication dates: 2020-11-17 (CN111951189A); 2022-05-06 (CN111951189B, grant)

Classifications

    • G06T5/90
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T3/4084 Transform-based scaling, e.g. FFT domain scaling
    • G06T7/11 Region-based segmentation
    • G06T7/46 Analysis of texture based on statistical description of texture using random fields
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G06T2207/30204 Marker
    • G06V2201/07 Target detection

Abstract

The invention discloses a data enhancement method for multi-scale texture randomization. 4 random training samples are spliced into one output sample; the output sample retains the features of all 4 training samples, which enriches the sample features and prevents overfitting during training. Texture mask boxes are also added at random: the overlap between each texture mask box and the sample label boxes is checked, and a mask box is retained if the overlap is smaller than a set threshold, so that when the sample label boxes have overlapped regions, processing of the overlapped regions is realized. By applying multi-scale texture-randomization data enhancement to the training samples, the invention improves the preprocessing of training data for target detection tasks and thereby improves recognition and detection performance.

Description

Data enhancement method for multi-scale texture randomization
Technical Field
The invention relates to the field of artificial intelligence, in particular to the training-data preprocessing stage of typical computer-vision target detection tasks, and specifically to a data enhancement method for multi-scale texture randomization.
Background
In real application scenarios, occlusion problems are common. In the training samples of a target detection task, many labeled targets overlap one another, so occluded targets exist and carry the features of parts of other targets during training, which degrades the recognition and detection of the targets.
A data enhancement method is proposed in the paper Improved Regularization of Convolutional Neural Networks with Cutout (https://arxiv.org/abs/1708.04552): a region of a certain size is cropped out at a random location of an image. The method adds occluded samples to training as much as possible, but it cannot handle well the case where a large number of occluded samples already exist in the training set.
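To make the comparison concrete, a minimal sketch of the Cutout idea follows (not the method of the invention); the patch size length, the zero fill and the NumPy formulation are illustrative assumptions:

import numpy as np

def cutout(img, length=16):
    # Erase one square patch at a random location (sketch of Cutout).
    h, w = img.shape[:2]
    cy, cx = np.random.randint(h), np.random.randint(w)        # random patch center
    y1, y2 = max(cy - length // 2, 0), min(cy + length // 2, h)
    x1, x2 = max(cx - length // 2, 0), min(cx + length // 2, w)
    out = img.copy()
    out[y1:y2, x1:x2] = 0                                      # cropped/erased region
    return out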
A data enhancement method is proposed in the paper mixup: Beyond Empirical Risk Minimization (https://arxiv.org/abs/1710.09412): two randomly selected pictures are superposed. In target detection tasks, however, samples are not effectively distinguished by the proportion of their overlapping areas. The method aims to generate more samples by combination and likewise cannot handle well the case where a large number of occluded samples exist in the training set.
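Likewise, a minimal sketch of the mixup superposition for two same-sized images; the Beta(alpha, alpha) draw follows the cited paper, while alpha = 1.5 and the NumPy framing are illustrative assumptions:

import numpy as np

def mixup(img1, img2, alpha=1.5):
    # Blend two equally sized images; lam also weights their labels (sketch of mixup).
    lam = np.random.beta(alpha, alpha)                         # mixing coefficient in (0, 1)
    mixed = lam * img1.astype(np.float32) + (1.0 - lam) * img2.astype(np.float32)
    return mixed.astype(img1.dtype), lam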
Disclosure of Invention
The technical problem to be solved by the invention is to provide a multi-scale texture-randomization data enhancement method that enriches sample features, fills random texture mask boxes into the sample region, and improves the preprocessing of training data for target detection tasks.
To solve this technical problem, the technical solution adopted by the invention is as follows:

A multi-scale texture-randomization data enhancement method comprises the following steps:
S01), selecting N training samples P = {P0, P1, …, PN-1} and the label box information T = {[X0, Y0, W0, H0, L0], [X1, Y1, W1, H1, L1], …, [XN-1, YN-1, WN-1, HN-1, LN-1]} corresponding to the N training samples;
wherein P contains the picture information, e.g. P0 contains (img0, img_w0, img_h0), where img0 is the image data of picture P0, img_w0 is the width of picture P0, and img_h0 is the height of picture P0; P1, …, PN-1 are defined in the same way;
T contains the label information of the images; [X, Y, W, H, L] denotes one set of label box information, where (X, Y) is the upper-left corner of the label box, W is its width, H is its height, and L is the category of the box;
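For concreteness, one sample and its label boxes might be held as follows (a sketch; the tuple/list layout is an illustrative assumption, not the patent's notation):

import numpy as np

img0 = np.zeros((480, 640, 3), dtype=np.uint8)   # image data of picture P0
P0 = (img0, 640, 480)                            # (img0, img_w0, img_h0)
T0 = [[120, 80, 60, 90, 1],                      # one [X, Y, W, H, L] entry per label box
      [300, 200, 45, 45, 2]]                     # (X, Y) = upper-left corner, L = category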
S02), randomly selecting 4 samples and their corresponding label box information from the training sample set; the 4 samples are denoted Ptl, Ptr, Pbl, Pbr, and their label box information is denoted [Xtl, Ytl, Wtl, Htl, Ltl], [Xtr, Ytr, Wtr, Htr, Ltr], [Xbl, Ybl, Wbl, Hbl, Lbl], [Xbr, Ybr, Wbr, Hbr, Lbr];
S03), randomly generating 4 scale factors, i.e. generating S = [s0, s1, s2, s3], where each s ranges over [0.5, 1.0];
S04), denoting the data-enhanced sample as Pout and its image information as (img, img_w, img_h), where img is the image data of the data-enhanced picture Pout, img_w is the width of Pout, and img_h is the height of Pout;
S05), setting the coordinates of the center point of the data-enhanced sample Pout to (xc, yc); then
xc = img_w / 2 + b (1),
yc = img_h / 2 + b (2),
where b ∈ [-(img_w + img_h)/16, (img_w + img_h)/16];
S06), when the input sample is Ptl, its image imgtl is multiplied by the scale factor s0 to change the image scale, and the output image is denoted Ptl0; similarly, the other three input samples are scaled to give output images Ptr1, Pbl2, Pbr3. The output sample Pout is the multi-scale splice of Ptl0, Ptr1, Pbl2, Pbr3, and the correspondingly converted label box information is denoted Tout;
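A brief sketch of steps S03-S06 follows; the 640x640 output size, the uniform draw for b (the patent only gives its range), the nearest-neighbor resize and the random stand-in images are all illustrative assumptions:

import numpy as np

def resize_nn(src, s):
    # Nearest-neighbor resize by scale factor s (stand-in for a library resize).
    h, w = src.shape[:2]
    sh, sw = max(1, int(h * s)), max(1, int(w * s))
    ys = np.arange(sh) * h // sh
    xs = np.arange(sw) * w // sw
    return src[ys][:, xs]

img_w = img_h = 640                                  # assumed size of Pout
S = np.random.uniform(0.5, 1.0, size=4)              # S03: s0..s3 in [0.5, 1.0]

r = (img_w + img_h) / 16                             # S05: formulas (1) and (2)
xc = int(img_w / 2 + np.random.uniform(-r, r))
yc = int(img_h / 2 + np.random.uniform(-r, r))

# S06: scale the four selected sample images (random stand-ins here)
samples = [np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(4)]
scaled = [resize_nn(p, s) for p, s in zip(samples, S)]    # Ptl0, Ptr1, Pbl2, Pbr3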
S07), randomly generating n texture mask boxes of different sizes, shapes and colors, denoted Mask = [m0, m1, …, mn-1];
S08), calculating the overlap of each generated mask box with the label boxes of the output sample, denoted Overlap = [o0, o1, …, on-1];
S09), assuming that a randomly generated mask box mi overlaps a certain label box tj in the output sample with overlap region area areai, then oi = areai / (wj * hj), where the position information of label box tj is [xj, yj, wj, hj];
S10), when the overlap value oi is greater than a threshold ost, the mask box mi is deleted.
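A sketch of the overlap test of S08-S10, assuming axis-aligned [x, y, w, h] boxes and that a mask is dropped as soon as its ratio exceeds the threshold against any label box (the patent states the ratio for a single box tj):

def keep_masks(masks, boxes, o_st=0.5):
    # S08-S10: return the mask boxes whose overlap ratio o_i with every
    # label box t_j stays at or below the threshold o_st.
    kept = []
    for mx, my, mw, mh in masks:
        drop = False
        for bx, by, bw, bh, *_ in boxes:
            iw = max(0, min(mx + mw, bx + bw) - max(mx, bx))   # intersection width
            ih = max(0, min(my + mh, by + bh) - max(my, by))   # intersection height
            o_i = (iw * ih) / (bw * bh)                        # area_i / (w_j * h_j)
            if o_i > o_st:                                     # S10: delete mask m_i
                drop = True
                break
        if not drop:
            kept.append([mx, my, mw, mh])
    return kept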
Further, the process of splicing Ptl0, Ptr1, Pbl2, Pbr3 at different scales to form the output sample Pout comprises the following steps:
S61), placing a partial area of sample Ptl0 at the upper-left corner of Pout; the specific conversion formulas are:
x1a = max(xc - imgtl0_w, 0),
y1a = max(yc - imgtl0_h, 0),
x2a = xc,
y2a = yc,
x1b = imgtl0_w - (x2a - x1a),
y1b = imgtl0_h - (y2a - y1a),
x2b = imgtl0_w,
y2b = imgtl0_h,
img[y1a:y2a, x1a:x2a] = imgtl0[y1b:y2b, x1b:x2b];
S62), placing a partial area of sample Ptr1 at the upper-right corner of Pout; the specific conversion formulas are:
x1a = xc,
y1a = max(yc - imgtr1_h, 0),
x2a = min(xc + imgtr1_w, img_w),
y2a = yc,
x1b = 0,
y1b = imgtr1_h - (y2a - y1a),
x2b = min(imgtr1_w, x2a - x1a),
y2b = imgtr1_h,
img[y1a:y2a, x1a:x2a] = imgtr1[y1b:y2b, x1b:x2b];
S63), placing a partial area of sample Pbl2 at the lower-left corner of Pout; the specific conversion formulas are:
x1a = max(xc – imgbl2_w, 0),
y1a = yc,
x2a = xc,
y2a = min(img_h, yc + imgbl2_h),
x1b = imgbl2_w - (x2a - x1a),
y1b = 0,
x2b = max(xc, imgbl2_w),
y2b = min(y2a - y1a, imgbl2_h),
img[y1a:y2a, x1a:x2a] = imgbl2[y1b:y2b, x1b:x2b];
S64), placing a partial area of sample Pbr3 at the lower-right corner of Pout; the specific conversion formulas are:
x1a = xc,
y1a = yc,
x2a = min(xc + imgbr3_w, img_w),
y2a = min(img_h, yc + imgbr3_h),
x1b = 0,
y1b = 0,
x2b = min(imgbr3_w, x2a - x1a),
y2b = min(y2a - y1a, imgbr3_h),
img[y1a:y2a, x1a:x2a] = imgbr3[y1b:y2b, x1b:x2b].
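Collecting S61-S64 into one routine gives the sketch below; the gray fill value 114 for canvas pixels left uncovered is an assumption (the patent does not specify it), and NumPy's slice clipping absorbs the max(xc, w) bound of S63 when xc exceeds the source width:

import numpy as np

def mosaic4(imgs, img_w, img_h, xc, yc):
    # S61-S64: paste four scaled images into the quadrants of Pout around (xc, yc),
    # using the conversion formulas above. imgs order: tl, tr, bl, br.
    img = np.full((img_h, img_w, 3), 114, dtype=np.uint8)
    for i, src in enumerate(imgs):
        h, w = src.shape[:2]
        if i == 0:    # S61: upper-left quadrant
            x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc
            x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h
        elif i == 1:  # S62: upper-right quadrant
            x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, img_w), yc
            x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
        elif i == 2:  # S63: lower-left quadrant
            x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(img_h, yc + h)
            x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, max(xc, w), min(y2a - y1a, h)
        else:         # S64: lower-right quadrant
            x1a, y1a, x2a, y2a = xc, yc, min(xc + w, img_w), min(img_h, yc + h)
            x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
        img[y1a:y2a, x1a:x2a] = src[y1b:y2b, x1b:x2b]   # map source part onto Pout
    return img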
Further, the value of ost is chosen as 0.5.
The beneficial effects of the invention are as follows: 4 random training samples are spliced into one output sample that retains the features of all 4 training samples, which enriches the sample features and prevents overfitting during training; texture mask boxes are added at random, the overlap between each texture mask box and the sample label boxes is checked, and a mask box is retained if the overlap is smaller than a set threshold, so that when the sample label boxes have overlapped regions, processing of the overlapped regions is realized. By applying multi-scale texture-randomization data enhancement to the training samples, the invention improves the preprocessing of training data for target detection tasks and thereby improves recognition and detection performance.
Drawings
Fig. 1 is a schematic diagram of an output sample formed by splicing 4 training samples in Example 1.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
Example 1
The embodiment discloses a data enhancement method for multi-scale texture randomization, which comprises the following steps:
S01), selecting N training samples P = {P0, P1, …, PN-1} and the label box information T = {[X0, Y0, W0, H0, L0], [X1, Y1, W1, H1, L1], …, [XN-1, YN-1, WN-1, HN-1, LN-1]} corresponding to the N training samples;
wherein P contains the picture information, e.g. P0 contains (img0, img_w0, img_h0), where img0 is the image data of picture P0, img_w0 is the width of picture P0, and img_h0 is the height of picture P0; P1, …, PN-1 have the same meaning as above;
T contains the label information of the images; as shown in Fig. 1, one training sample (i.e. one picture) may contain multiple label boxes, so [X, Y, W, H, L] denotes one set of label box information, where (X, Y) is the upper-left corner of the label box, W is its width, H is its height, and L is the category of the box;
S02), randomly selecting 4 samples and their corresponding label box information from the training sample set; the 4 samples are denoted Ptl, Ptr, Pbl, Pbr, and their label box information is denoted [Xtl, Ytl, Wtl, Htl, Ltl], [Xtr, Ytr, Wtr, Htr, Ltr], [Xbl, Ybl, Wbl, Hbl, Lbl], [Xbr, Ybr, Wbr, Hbr, Lbr];
S03), randomly generating 4 scale factors, i.e. generating S = [s0, s1, s2, s3], where each s ranges over [0.5, 1.0], i.e. s ∈ [0.5, 1.0];
S04), denoting the data-enhanced sample as Pout and its image information as (img, img_w, img_h), where img is the image data of the data-enhanced picture Pout, img_w is the width of Pout, and img_h is the height of Pout;
S05), setting the coordinates of the center point of the data-enhanced sample Pout to (xc, yc); then
xc = img_w / 2 + b (1),
yc = img_h / 2 + b (2),
where b ∈ [-(img_w + img_h)/16, (img_w + img_h)/16];
S06), when the input sample is Ptl, its image imgtl is multiplied by the scale factor s0 to change the image scale, and the output image is denoted Ptl0; similarly, the other three input samples are scaled to give output images Ptr1, Pbl2, Pbr3. The output sample Pout is the multi-scale splice of Ptl0, Ptr1, Pbl2, Pbr3, and the correspondingly converted label box information is denoted Tout;
S07), randomly generating n texture mask boxes of different sizes, shapes and colors, denoted Mask = [m0, m1, …, mn-1];
S08), calculating the overlap of each generated mask box with the label boxes of the output sample, denoted Overlap = [o0, o1, …, on-1];
S09), assuming that a randomly generated mask box mi overlaps a certain label box tj in the output sample with overlap region area areai, then oi = areai / (wj * hj), where the position information of label box tj is [xj, yj, wj, hj];
S10), when the overlap value oi is greater than a threshold ost, the mask box mi is deleted; the value of ost is generally chosen as 0.5.
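As a worked instance of this test, with numbers invented purely for illustration:

# Label box t_j is 100 wide and 50 high; a mask box overlaps it over 3000 px.
o_i = 3000 / (100 * 50)    # = 0.6
o_st = 0.5
delete_mask = o_i > o_st   # True: 0.6 > 0.5, so mask m_i is deleted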
As shown in Fig. 1, the process of splicing Ptl0, Ptr1, Pbl2, Pbr3 at different scales to form the output sample Pout comprises the following steps:
S61), placing a partial area of sample Ptl0 at the upper-left corner of Pout; the specific conversion formulas are:
x1a = max(xc - imgtl0_w, 0),
y1a = max(yc - imgtl0_h, 0),
x2a = xc,
y2a = yc,
x1b = imgtl0_w - (x2a - x1a),
y1b = imgtl0_h - (y2a - y1a),
x2b = imgtl0_w,
y2b = imgtl0_h,
img[y1a:y2a, x1a:x2a] = imgtl0[y1b:y2b, x1b:x2b];
where (x1a, y1a) and (x2a, y2a) are the coordinates of the upper-left and lower-right corners of part A1 of image Pout, and (x1b, y1b) and (x2b, y2b) are the coordinates of the upper-left and lower-right corners of part A2 of image Ptl0; the last formula maps part A2 of image Ptl0 onto part A1 of image Pout.
S62), placing a partial area of sample Ptr1 at the upper-right corner of Pout; the specific conversion formulas are:
x1a = xc,
y1a = max(yc - imgtr1_h, 0),
x2a = min(xc + imgtr1_w, img_w),
y2a = yc,
x1b = 0,
y1b = imgtr1_h - (y2a - y1a),
x2b = min(imgtr1_w, x2a - x1a),
y2b = imgtr1_h,
img[y1a:y2a, x1a:x2a] = imgtr1[y1b:y2b, x1b:x2b];
where (x1a, y1a) and (x2a, y2a) are the coordinates of the upper-left and lower-right corners of part B1 of image Pout, and (x1b, y1b) and (x2b, y2b) are the coordinates of the upper-left and lower-right corners of part B2 of image Ptr1; the last formula maps part B2 of image Ptr1 onto part B1 of image Pout.
S63), placing a partial area of sample Pbl2 at the lower-left corner of Pout; the specific conversion formulas are:
x1a = max(xc – imgbl2_w, 0),
y1a = yc,
x2a = xc,
y2a = min(img_h, yc + imgbl2_h),
x1b = imgbl2_w - (x2a - x1a),
y1b = 0,
x2b = max(xc, imgbl2_w),
y2b = min(y2a - y1a, imgbl2_h),
img[y1a:y2a, x1a:x2a] = imgbl2[y1b:y2b, x1b:x2b];
where (x1a, y1a) and (x2a, y2a) are the coordinates of the upper-left and lower-right corners of part C1 of image Pout, and (x1b, y1b) and (x2b, y2b) are the coordinates of the upper-left and lower-right corners of part C2 of image Pbl2; the last formula maps part C2 of image Pbl2 onto part C1 of image Pout.
S64), placing a partial area of sample Pbr3 at the lower-right corner of Pout; the specific conversion formulas are:
x1a = xc,
y1a = yc,
x2a = min(xc + imgbr3_w, img_w),
y2a = min(img_h, yc + imgbr3_h),
x1b = 0,
y1b = 0,
x2b = min(imgbr3_w, x2a - x1a),
y2b = min(y2a - y1a, imgbr3_h),
img[y1a:y2a, x1a:x2a] = imgbr3[y1b:y2b, x1b:x2b];
where (x1a, y1a) and (x2a, y2a) are the coordinates of the upper-left and lower-right corners of part D1 of image Pout, and (x1b, y1b) and (x2b, y2b) are the coordinates of the upper-left and lower-right corners of part D2 of image Pbr3; the last formula maps part D2 of image Pbr3 onto part D1 of image Pout.
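Putting the sketches above together, a hedged end-to-end rendering of Example 1 (resize_nn, mosaic4 and keep_masks refer to the earlier sketches; the rectangular single-color masks and all sizes are simple stand-ins, since the patent leaves mask shape, color and count open):

import numpy as np

rng = np.random.default_rng()
img_w = img_h = 640

# S02/S03/S06: four random samples, scaled by s in [0.5, 1.0]
samples = [rng.integers(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(4)]
scaled = [resize_nn(p, s) for p, s in zip(samples, rng.uniform(0.5, 1.0, 4))]

# S05: jittered mosaic center; S61-S64: four-quadrant splice
r = (img_w + img_h) / 16
xc = int(img_w / 2 + rng.uniform(-r, r))
yc = int(img_h / 2 + rng.uniform(-r, r))
out = mosaic4(scaled, img_w, img_h, xc, yc)

# S07: n random rectangular texture masks [x, y, w, h]
masks = [[rng.integers(0, img_w - 40), rng.integers(0, img_h - 40),
          rng.integers(10, 40), rng.integers(10, 40)] for _ in range(8)]
boxes = [[50, 60, 100, 50, 0]]                  # stand-in for transformed labels Tout

# S08-S10: keep only masks below the overlap threshold, then paint them in
for mx, my, mw, mh in keep_masks(masks, boxes, o_st=0.5):
    out[my:my + mh, mx:mx + mw] = rng.integers(0, 255, 3, dtype=np.uint8)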
In summary, the invention splices 4 random training samples into one output sample that retains the features of all 4 training samples, which enriches the sample features and prevents overfitting during training; texture mask boxes are added at random, the overlap between each texture mask box and the sample label boxes is checked, and a mask box is retained if the overlap is smaller than a set threshold, so that when the sample label boxes have overlapped regions, processing of the overlapped regions is realized. By applying multi-scale texture-randomization data enhancement to the training samples, the invention improves the preprocessing of training data for target detection tasks and thereby improves recognition and detection performance.
The foregoing description is only for the basic principle and the preferred embodiments of the present invention, and modifications and substitutions by those skilled in the art are included in the scope of the present invention.

Claims (3)

1. A data enhancement method of multi-scale texture randomization, characterized in that the method comprises the following steps:
S01), selecting N training samples P = {P0, P1, …, PN-1} and the label box information T = {[X0, Y0, W0, H0, L0], [X1, Y1, W1, H1, L1], …, [XN-1, YN-1, WN-1, HN-1, LN-1]} corresponding to the N training samples;
wherein P contains the picture information, e.g. P0 contains (img0, img_w0, img_h0), where img0 is the image data of picture P0, img_w0 is the width of picture P0, and img_h0 is the height of picture P0; P1, …, PN-1 are defined in the same way;
T contains the label information of the images; [X, Y, W, H, L] denotes one set of label box information, where (X, Y) is the upper-left corner of the label box, W is its width, H is its height, and L is the category of the box;
S02), randomly selecting 4 samples and their corresponding label box information from the training sample set; the 4 samples are denoted Ptl, Ptr, Pbl, Pbr, and their label box information is denoted [Xtl, Ytl, Wtl, Htl, Ltl], [Xtr, Ytr, Wtr, Htr, Ltr], [Xbl, Ybl, Wbl, Hbl, Lbl], [Xbr, Ybr, Wbr, Hbr, Lbr];
S03), randomly generating 4 scale factors, i.e. generating S = [s0, s1, s2, s3], where each s ranges over [0.5, 1.0];
S04), denoting the data-enhanced sample as Pout and its image information as (img, img_w, img_h), where img is the image data of the data-enhanced picture Pout, img_w is the width of Pout, and img_h is the height of Pout;
S05), setting the coordinates of the center point of the data-enhanced sample Pout to (xc, yc); then
xc = img_w / 2 + b (1),
yc = img_h / 2 + b (2),
where b ∈ [-(img_w + img_h)/16, (img_w + img_h)/16];
S06), when the input sample is Ptl, its image imgtl is multiplied by the scale factor s0 to change the image scale, and the output image is denoted Ptl0; similarly, the other three input samples are scaled to give output images Ptr1, Pbl2, Pbr3. The output sample Pout is the multi-scale splice of Ptl0, Ptr1, Pbl2, Pbr3, and the correspondingly converted label box information is denoted Tout;
S07), randomly generating n texture mask boxes of different sizes, shapes and colors, denoted Mask = [m0, m1, …, mn-1];
S08), calculating the overlap of each generated mask box with the label boxes of the output sample, denoted Overlap = [o0, o1, …, on-1];
S09), assuming that a randomly generated mask box mi overlaps a certain label box tj in the output sample with overlap region area areai, then oi = areai / (wj * hj), where the position information of label box tj is [xj, yj, wj, hj];
S10), when the overlap value oi is greater than a threshold ost, the mask box mi is deleted.
2. The data enhancement method of multi-scale texture randomization as recited in claim 1, wherein the process of splicing Ptl0, Ptr1, Pbl2, Pbr3 at different scales to form the output sample Pout comprises the following steps:
S61), placing a partial area of sample Ptl0 at the upper-left corner of Pout; the specific conversion formulas are:
x1a = max(xc - imgtl0_w, 0),
y1a = max(yc - imgtl0_h, 0),
x2a = xc,
y2a = yc,
x1b = imgtl0_w - (x2a - x1a),
y1b = imgtl0_h - (y2a - y1a),
x2b = imgtl0_w,
y2b = imgtl0_h,
img[y1a:y2a, x1a:x2a] = imgtl0[y1b:y2b, x1b:x2b];
S62), placing a partial area of sample Ptr1 at the upper-right corner of Pout; the specific conversion formulas are:
x1a = xc,
y1a = max(yc - imgtr1_h, 0),
x2a = min(xc + imgtr1_w, img_w),
y2a = yc,
x1b = 0,
y1b = imgtr1_h - (y2a - y1a),
x2b = min(imgtr1_w, x2a - x1a),
y2b = imgtr1_h,
img[y1a:y2a, x1a:x2a] = imgtr1[y1b:y2b, x1b:x2b];
S63), placing a partial area of sample Pbl2 at the lower-left corner of Pout; the specific conversion formulas are:
x1a = max(xc – imgbl2_w, 0),
y1a = yc,
x2a = xc,
y2a = min(img_h, yc + imgbl2_h),
x1b = imgbl2_w - (x2a - x1a),
y1b = 0,
x2b = max(xc, imgbl2_w),
y2b = min(y2a - y1a, imgbl2_h),
img[y1a:y2a, x1a:x2a] = imgbl2[y1b:y2b, x1b:x2b];
S64), placing a partial area of sample Pbr3 at the lower-right corner of Pout; the specific conversion formulas are:
x1a = xc,
y1a = yc,
x2a = min(xc + imgbr3_w, img_w),
y2a = min(img_h, yc + imgbr3_h),
x1b = 0,
y1b = 0,
x2b = min(imgbr3_w, x2a - x1a),
y2b = min(y2a - y1a, imgbr3_h),
img[y1a:y2a, x1a:x2a] = imgbr3[y1b:y2b, x1b:x2b].
3. The multi-scale texture-randomization data enhancement method according to claim 1, characterized in that the value of ost is chosen as 0.5.

