CN112215770B - Image processing method, system, device and medium - Google Patents

Image processing method, system, device and medium

Info

Publication number
CN112215770B
CN112215770B CN202011076677.5A
Authority
CN
China
Prior art keywords
stripe
stripes
sub
main
nth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011076677.5A
Other languages
Chinese (zh)
Other versions
CN112215770A (en)
Inventor
Inventor not announced
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Shuzhilian Technology Co Ltd
Original Assignee
Chengdu Shuzhilian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Shuzhilian Technology Co Ltd filed Critical Chengdu Shuzhilian Technology Co Ltd
Priority to CN202011076677.5A priority Critical patent/CN112215770B/en
Publication of CN112215770A publication Critical patent/CN112215770A/en
Application granted granted Critical
Publication of CN112215770B publication Critical patent/CN112215770B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/70
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method, system, device and medium, relating to the field of image processing, comprising the following steps: slicing an original image containing a plurality of main stripes into a plurality of sub-images; identifying the sub-stripes in each sub-image with a deep neural network model; judging whether the identified sub-stripes belong to the same main stripe, and splicing the sub-stripes that belong to the same main stripe to obtain spliced main stripes; calculating the position of the center line of each spliced main stripe in the original image to obtain main-stripe skeleton position information; and homogenizing the spliced main stripes based on the main-stripe skeleton position information. For VISAR fringe images disturbed by speckle, the invention discloses a novel deep-learning-based speckle-suppression image processing method, system, device and medium that greatly reduce the influence of speckle on the fringes and thereby facilitate subsequent experimental analysis.

Description

Image processing method, system, device and medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, system, apparatus, and medium.
Background
The imaging Velocity Interferometer System for Any Reflector (VISAR) is used to diagnose the velocity of shock waves inside transparent materials and has become a primary instrument for recording the shock-wave velocity history in laser-driven shock experiments. Usually, a multimode optical fiber couples the external active probe laser into the system's optical path to illuminate the target surface. Because the laser has high temporal and spatial coherence, speckle is introduced during illumination, and it has two sources: first, different positions on the rough target surface introduce different phases into the coherent light, so that interference after reflection produces an irregular speckle distribution; second, the optical path differences between the modes of the multimode fiber form speckle at the output end face of the fiber. These speckles, magnified by the imaging of the system, directly affect the recording of the velocity fringe pattern, which is very detrimental to the measurement.
Various speckle-suppression methods exist. Some act on the speckle-formation process, for example time smoothing with a moving diffuse reflector, diversifying the incident wavelength and angle to weaken the temporal and spatial coherence of the incident laser, or inserting a moving phase element into the speckle transmission path. Others come from traditional image processing, such as the homomorphic filtering algorithm, the Lee algorithm and the wavelet threshold algorithm, each with its own range of application, advantages and disadvantages. For an imaging VISAR system, the experimental requirements and the working conditions of the diagnostic equipment make it inappropriate to change the properties of the illuminating laser or the state of the target surface, and a moving phase element in the optical path is also unsuitable because the system records a transient process (of nanosecond order). The fringe pattern actually acquired by the camera is the result of the speckled fringe pattern being dynamically swept across the slit: speckle only affects the shape and distribution of the fringes, no direct speckle pattern exists in the swept, mixed fringe image, and directly applying a traditional image algorithm is therefore inadvisable.
Disclosure of Invention
Because of the shortcomings of conventional image processing methods and the limitations of the real recording system, existing processing methods can hardly reduce the influence of speckle on the fringes effectively. For VISAR fringe images disturbed by speckle, the invention discloses a novel deep-learning-based speckle-suppression image processing method, system, device and medium that greatly reduce the influence of speckle on the fringes and thereby facilitate subsequent experimental analysis.
To achieve the above object, the present invention provides an image processing method, including:
slicing an original image into a plurality of sub-images, wherein the original image is provided with a plurality of main stripes;
identifying a plurality of sub-stripes in the sub-image by using a deep neural network model;
judging whether the plurality of identified sub-stripes belong to the same main stripe, and splicing the plurality of sub-stripes belonging to the same main stripe to obtain a spliced main stripe;
calculating the position of the spliced main stripe central line in the original image to obtain main stripe skeleton position information;
and homogenizing the spliced main stripes based on the position information of the main stripe skeleton.
The principle of the invention is as follows: because of the speckle and the noise introduced by the photosensitive device, the bright and dark regions of a fringe map to different pixel values in the image. To reduce the speckle effect, the region covered by each fringe needs to be smoothed. Since the longitudinal direction of a fringe represents time, homogenization can only be performed along the time direction, so that the information reflected by the experiment is preserved to the greatest extent.
The term "image slice" refers to a process of dividing a high resolution image (e.g., a meta image) into a plurality of low resolution images (e.g., sub-images), and any two adjacent low resolution images have a certain overlap region therebetween.
Preferably, the slicing the original image into a plurality of sub-images includes:
calculating the number of lines and columns of the sub-images obtained after the original image is sliced;
calculating to obtain length and width information of the original image filling part;
filling the original image based on the length and width information of the original image filling part;
segmenting the filled original image based on the line number and the column number of the sub-images to obtain a plurality of sub-images;
and the resolution of the original image is higher than that of the sub-images.
The number of rows and columns after slicing represents the number of rows and columns of sub-images formed after the high-resolution image is sliced, denoted Nrow and Ncol respectively. Suppose the matrix shape of the array represented by the original high-resolution image is [H, W, C], where C is the number of channels: a grayscale pixel is described by a single value (one channel), whereas an RGB pixel is described by three values (three channels). H and W are the height and width of the high-resolution image. Before slicing, the width and height of each sliced low-resolution image are specified as NW, and the overlap width and height of any two adjacent low-resolution images are specified as S.
Preferably, the number of rows and columns of sub-images after slicing the original image is calculated as:
Nrow=⌈(H-S)/(NW-S)⌉
Ncol=⌈(W-S)/(NW-S)⌉
where ⌈ ⌉ denotes rounding up, Nrow is the number of rows of sub-images, Ncol is the number of columns of sub-images, H is the height of the original image, W is the width of the original image, the height and width of each sub-image are both NW, and the height and width of the overlap region between any two adjacent sub-images are both S;
because the image resolution implied by this tiling may differ from the original resolution, the image needs to be padded to some extent so that the original image matches the computed resolution, and the width and height of the padded portion of the original image are calculated as:
PW=Ncol*(NW-S)+S-W
PH=Nrow*(NW-S)+S-H
where PW is the width of the padded portion of the original image and PH is the height of the padded portion.
The edges of the original image are padded with black regions of the calculated width and height, so that the resolution of the padded image is consistent with the computed resolution.
The padded image is then sliced according to the specified sub-image width and height NW, keeping the width and height of the overlap region between any two adjacent sub-images equal to S.
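A minimal sketch of the slicing step is given below, assuming the row/column and padding formulas reconstructed above; the function name, the NumPy-based implementation and the default values are illustrative, not taken from the patent.

```python
import math
import numpy as np

def slice_image(img, nw=256, s=100):
    """Slice an [H, W, C] image into overlapping NW x NW sub-images (illustrative sketch)."""
    h, w = img.shape[:2]
    nrow = math.ceil((h - s) / (nw - s))            # number of sub-image rows
    ncol = math.ceil((w - s) / (nw - s))            # number of sub-image columns
    ph = nrow * (nw - s) + s - h                    # padding height PH
    pw = ncol * (nw - s) + s - w                    # padding width PW
    padded = np.pad(img, ((0, ph), (0, pw), (0, 0)), constant_values=0)  # black border
    tiles = []
    for r in range(nrow):
        for c in range(ncol):
            y, x = r * (nw - s), c * (nw - s)       # stride NW - S gives overlap S
            tiles.append(padded[y:y + nw, x:x + nw])
    return tiles, nrow, ncol

# Example from the preferred embodiment: a 1024 x 1024 x 3 image gives a 6 x 6 grid.
tiles, nrow, ncol = slice_image(np.zeros((1024, 1024, 3), dtype=np.uint8))
assert (nrow, ncol, len(tiles)) == (6, 6, 36)
```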
Preferably, stripe recognition in the present invention refers to the process of recognizing stripes in a low-resolution image using a deep neural network model.
The stripe recognition in the present invention includes:
(1) construction of deep neural network model
The deep neural network model is composed of a feature extraction layer, a feature pyramid layer, a region proposition layer, a region alignment layer and a prediction branch layer.
The feature extraction layer consists of several convolution, pooling and activation operations and is mainly intended to extract features of the targets in the image at different scales; the feature pyramid layer consists of several convolution, pooling, activation and upsampling operations and is intended to fuse the features extracted by the feature extraction layer. The region proposal layer is built from convolutions and performs coarse classification and coarse regression of the box coordinates for the generated candidate boxes. The region alignment layer extracts, for each candidate box, the coordinate region of the target it represents in the original image; the prediction branch layer is built from multiple convolution layers and performs finer classification and box-coordinate regression on the candidate boxes, as well as a pixel-by-pixel decision on the target inside each candidate box.
(2) Neural network model prediction
Neural network model prediction is the process of identifying the stripes in an image. The input low-resolution image passes through the feature extraction layer, which extracts a first, second, third and fourth feature vector. The feature pyramid layer applies upsampling and convolution operations to the first, second, third and fourth feature vectors to obtain a fifth, sixth, seventh and eighth feature vector. The first, second, third and fourth feature vectors are fed into the region proposal layer to obtain coarsely regressed and classified candidate boxes. The coarsely regressed and classified candidate boxes are input to the region alignment layer to obtain the regions of the original image they correspond to; finally, the prediction branch layer performs finer regression and classification on the candidate boxes and predicts the pixels of the targets inside them.
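The layers described above (feature extraction, feature pyramid, region proposal, region alignment and prediction branches) mirror a Mask R-CNN-style instance segmentation network. The patent does not name a specific framework, so the sketch below uses torchvision's off-the-shelf implementation as a stand-in; the class count (background plus "stripe") and the input size are assumptions.

```python
import torch
import torchvision

# Stand-in for the described network: ResNet backbone (feature extraction),
# FPN (feature pyramid), RPN (region proposal), RoIAlign (region alignment)
# and box/class/mask heads (prediction branches).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)
model.eval()

# Predict sub-stripes in one 256 x 256 sub-image (random values as a placeholder).
sub_image = torch.rand(3, 256, 256)
with torch.no_grad():
    output = model([sub_image])[0]

boxes = output["boxes"]    # candidate boxes after fine regression
scores = output["scores"]  # classification confidence per box
masks = output["masks"]    # per-pixel prediction of the target inside each box
```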
Preferably, in the method, sub-stripe splicing is a process of matching and merging the stripes identified in the plurality of low-resolution images, and aims to restore the parts that originally belong to the same main stripe.
In the method, sub-stripe splicing comprises row sub-stripe splicing and column sub-stripe splicing. Because two horizontally adjacent low-resolution images share an overlap region, the identified targets may be duplicated, so the sub-stripes in the overlap region need to be deduplicated. Each row of sub-stripes in the row sub-stripe splicing undergoes the following splicing operation:
step a: take out all N identified sub-stripes in one row, where N is an integer greater than 3;
step b: take out the Ni-th sub-stripe, Ni=1,2,3,…,N;
step c: take out the Nj-th sub-stripe, Nj=1,2,3,…,N and Nj≠Ni;
step d: calculate the intersection K of the pixels of the Ni-th sub-stripe and the Nj-th sub-stripe;
step e: calculate the union P of the pixels of the Ni-th sub-stripe and the Nj-th sub-stripe;
step f: based on the intersection K and the union P, calculate the intersection-over-union ratio L of the pixels of the Ni-th sub-stripe and the Nj-th sub-stripe;
step g: if the ratio L is less than or equal to a threshold t1, the Ni-th sub-stripe and the Nj-th sub-stripe are considered not to belong to the same main stripe, and both are added to the row identification result region; if the ratio L is greater than the threshold t1, the Ni-th sub-stripe and the Nj-th sub-stripe are considered to belong to the same main stripe, and they are merged into one stripe that is added to the row candidate region;
steps b to g are repeated until steps b and c have traversed all the sub-stripes;
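A minimal sketch of the row-splicing deduplication follows, with each sub-stripe represented as a set of (x, y) pixel coordinates; the default threshold of 0.7 is the preferred value given later, the names are illustrative, and the loop is a simplified variant that repeatedly merges any pair whose intersection-over-union exceeds t1.

```python
def merge_row_substripes(substripes, t1=0.7):
    """Deduplicate sub-stripes within one row by pixel intersection-over-union.

    substripes: list of sets of (x, y) pixel coordinates, one set per identified sub-stripe.
    Returns the row candidate list with overlapping sub-stripes merged.
    """
    stripes = [set(s) for s in substripes]
    merged = True
    while merged:                                   # keep merging until no pair exceeds t1
        merged = False
        for i in range(len(stripes)):
            for j in range(i + 1, len(stripes)):
                k = len(stripes[i] & stripes[j])    # intersection K
                p = len(stripes[i] | stripes[j])    # union P
                l = k / p if p else 0.0             # intersection-over-union L
                if l > t1:                          # same main stripe: merge the pixels
                    stripes[i] |= stripes[j]
                    del stripes[j]
                    merged = True
                    break
            if merged:
                break
    return stripes
```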
Because two vertically adjacent low-resolution images also share an overlap region, the same stripe appearing in two rows needs to be matched and the overlapping portion removed. Each column of sub-stripes in the column sub-stripe splicing undergoes the following splicing operation:
step I: take all sub-stripes of the r-th row from the row candidate region and add them to the column candidate region;
step II: take all sub-stripes of the t-th row from the row candidate region and add them to the region to be compared, where the r-th row and the t-th row are adjacent and the number of sub-stripes in the region to be compared is M;
step III: take the q-th sub-stripe from the region to be compared, q=1,2,3,…,M;
step IV: calculate the overlapping pixels between the q-th sub-stripe and all sub-stripes of the column candidate region;
step V: take the sub-stripe of the column candidate region that has the largest pixel overlap with the q-th sub-stripe and denote it the p-th sub-stripe; if the overlap between the q-th sub-stripe and the p-th sub-stripe is greater than or equal to a threshold t2, the q-th sub-stripe and the p-th sub-stripe are considered to belong to the same main stripe, the p-th sub-stripe and the q-th sub-stripe are deleted from the column candidate region and the region to be compared respectively, and the two are merged into one stripe that is added to the column candidate region; if the overlap between the q-th sub-stripe and the p-th sub-stripe is less than the threshold t2, the two are considered not to belong to the same main stripe and no operation is performed;
step VI: repeat steps III to V until the region to be compared has been fully traversed;
step VII: repeat steps II to VI until all row candidate regions have been traversed.
Through the above steps, the stripes in the original image are identified and spliced; the stripes in the column candidate region are all the identified stripes in the image.
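A minimal sketch of the column splicing is shown below, again with stripes as sets of pixel coordinates; the function name, the default t2 = 10000 (taken from the preferred embodiment) and the decision to leave unmatched stripes untouched are assumptions.

```python
def splice_columns(row_candidates, t2=10000):
    """Merge stripes that span vertically adjacent rows of sub-images.

    row_candidates: list of rows, each a list of sets of (x, y) pixel coordinates.
    Returns the column candidate list of spliced main stripes.
    """
    column_candidates = [set(s) for s in row_candidates[0]]          # step I: first row
    for row in row_candidates[1:]:                                   # step VII: remaining rows
        to_compare = [set(s) for s in row]                           # step II
        for q_stripe in to_compare:                                  # steps III to VI
            # step IV: overlap with every stripe already in the column candidates
            overlaps = [len(q_stripe & c) for c in column_candidates]
            if not overlaps:
                continue
            p = max(range(len(overlaps)), key=overlaps.__getitem__)  # step V: best match
            if overlaps[p] >= t2:
                # same main stripe: replace the matched candidate with the merged stripe
                column_candidates.append(column_candidates.pop(p) | q_stripe)
            # otherwise: not the same main stripe, no operation (as in the description)
    return column_candidates
```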
Preferably, skeleton calculation is the process of calculating the image position of the center line of each stripe. Calculating the position of the center line of each spliced main stripe in the original image to obtain the main-stripe skeleton position information specifically comprises the following steps, where the number of spliced main stripes is Q:
step one: take any main stripe k from the column candidate region, k=1,2,3,…,Q;
step two: collect the position coordinates xi, yi of every pixel of the main stripe k in the image;
step three: find the minimum value y1 and the maximum value y2 of the vertical pixel coordinates of the main stripe k;
step four: find the minimum value x1 and the maximum value x2 of the horizontal pixel coordinates of the main stripe k;
step five: take any Yi in the preset range y1≤Yi≤y2;
step six: calculate the mean X of the horizontal coordinates of all pixels with yi=Yi, and add X and Yi to the skeleton candidate region;
step seven: repeat steps five and six until all integers in the preset range have been traversed;
step eight: repeat steps one to seven until all main stripes have been traversed, obtaining the skeleton position information of all main stripes.
Preferably, stripe homogenization is the process of homogenizing the identified stripes to suppress speckle. Homogenizing the spliced main stripes based on the main-stripe skeleton position information specifically comprises:
step S1: take out any main stripe k, k=1,2,3,…,Q;
step S2: take out the pixel values of all pixel points on the central skeleton of the main stripe k, the central skeleton being the center line of the main stripe k;
step S3: perform polynomial fitting on all the taken pixel values to obtain a group of new pixel values;
step S4: compare the coordinates of the new pixel values with the region coordinates of the main stripe k; if a new pixel value's coordinate lies within the region coordinates of the main stripe k, assign the new pixel value to the corresponding position of the original image; if the new pixel value's coordinate is not within the region coordinates of the main stripe k, do nothing;
step S5: translate the central skeleton and repeat steps S2 to S4 until all integers between the abscissas x1 and x2 of the main stripe k have been traversed;
step S6: repeat steps S1 to S5 until all main stripes have been traversed.
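A minimal sketch of the homogenization step follows: the pixel values along the skeleton (and its translated copies) are smoothed with a polynomial fit over the time direction and written back only inside the stripe region. The polynomial degree, the NumPy implementation and the clipping to 8-bit range are assumptions.

```python
import numpy as np

def homogenize_stripe(image, stripe_pixels, skeleton, degree=3):
    """Smooth one main stripe along the time (vertical) direction (illustrative sketch).

    image: 2-D grayscale numpy array (the original image), modified in place.
    stripe_pixels: set of (x, y) coordinates belonging to the main stripe.
    skeleton: list of (x, y) center-line points, e.g. from stripe_skeleton().
    """
    xs = [x for x, _ in stripe_pixels]
    x1, x2 = min(xs), max(xs)                               # horizontal extent of the stripe
    sk_x = np.array([p[0] for p in skeleton])
    sk_y = np.array([p[1] for p in skeleton])
    for dx in range(int(x1 - sk_x.min()), int(x2 - sk_x.max()) + 1):  # translate the skeleton
        cols = np.round(sk_x + dx).astype(int)
        vals = image[sk_y, cols].astype(float)               # pixel values along this copy
        coeffs = np.polyfit(sk_y, vals, degree)               # polynomial fit over time
        new_vals = np.polyval(coeffs, sk_y)                   # smoothed profile
        for x, y, v in zip(cols, sk_y, new_vals):
            if (x, y) in stripe_pixels:                       # only overwrite inside the stripe
                image[y, x] = np.clip(v, 0, 255)
    return image
```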
The present invention also provides an image processing system, the system comprising:
the image slicing unit is used for slicing an original image into a plurality of sub-images, and the original image is provided with a plurality of main stripes;
the identification unit is used for identifying a plurality of sub-stripes in the sub-image by using the deep neural network model;
the splicing unit is used for judging whether the plurality of identified sub-stripes belong to the same main stripe or not, splicing the plurality of sub-stripes belonging to the same main stripe, and obtaining a spliced main stripe;
the computing unit is used for computing the position of the spliced main stripe center line in the original image to obtain main stripe skeleton position information;
and the homogenization unit is used for homogenizing the spliced main stripes based on the position information of the main stripe skeleton.
The invention also provides an image processing apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the image processing method when executing the computer program.
The invention also provides a computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method.
One or more technical schemes provided by the invention at least have the following technical effects or advantages:
the deep neural network technology is adopted for identification and the image processing technology of self creation is adopted for splicing and homogenizing. The invention can greatly reduce the influence of speckles on the stripes so as to facilitate the subsequent experimental analysis.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention;
FIG. 1 is a schematic flow chart of an image processing method according to the present invention;
FIG. 2 is a schematic diagram of the composition of the image processing system of the present invention;
FIG. 3 is a diagram illustrating an original drawing;
FIG. 4 is a schematic diagram of a first sub-image of the original image after being split;
FIG. 5 is a schematic diagram of a second sub-image of the original image after being split;
FIG. 6 is a schematic diagram of a third sub-image after the original image is sliced;
FIG. 7 is a schematic diagram of a fourth sub-image after the original image is sliced;
FIG. 8 is a schematic diagram of the positions of the stripe skeleton.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflicting with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described and thus the scope of the present invention is not limited by the specific embodiments disclosed below.
It should be understood that the terms "a" and "an" indicate only that at least one element is present in a given embodiment and should not be interpreted as limiting the number of elements.
Example one
Referring to fig. 1, fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the invention. The method mainly comprises the following steps:
step 1: slicing an image;
Image slicing divides a high-resolution image (i.e. the original image) into a plurality of low-resolution images (i.e. sub-images), where any two adjacent low-resolution images share a certain overlap region.
Step 1.1, calculating the number of rows and columns of the sliced image;
The number of rows and columns after slicing represents the number of rows and columns of sub-images formed after the high-resolution image is sliced, denoted Nrow and Ncol respectively. Suppose the matrix shape of the array represented by the original high-resolution image is [H, W, C], where C is the number of channels: a grayscale pixel is described by a single value (one channel), whereas an RGB pixel is described by three values (three channels). H and W are the height and width of the high-resolution image. Before slicing, the width and height of each sliced low-resolution image are specified as NW, and the overlap width and height of any two adjacent low-resolution images are specified as S.
The number of rows and columns is calculated as follows:
Nrow=⌈(H-S)/(NW-S)⌉
Ncol=⌈(W-S)/(NW-S)⌉
where ⌈ ⌉ denotes rounding up. In a preferred embodiment, the array represented by the original high-resolution image has shape [1024, 1024, 3]; if the width and height NW of the low-resolution images is 256 and the overlap width and height S is 100, the numbers of rows and columns after slicing, Nrow and Ncol, are both 6.
Step 1.2, image filling;
Because the image resolution implied by this tiling may differ from the original resolution, the image needs to be padded to some extent so that the original image matches the computed resolution. The width PW and height PH of the padded portion are calculated as:
PW=Ncol*(NW-S)+S-W
PH=Nrow*(NW-S)+S-H
The edges of the original image are padded with black regions of the calculated width and height, so that the resolution of the padded image is consistent with the computed resolution. In the preferred embodiment, PW and PH are both 12, and the array represented by the padded image has shape [1036, 1036, 3].
Step 1.3, image segmentation;
the image after the padding is sliced in accordance with the designated length and width NW of the low-resolution image, and the length and width of the overlapping area between any two adjacent low-resolution images are kept to be S.
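As a quick check of these numbers, the short snippet below (using the formulas reconstructed in the disclosure) reproduces the 6 x 6 grid and the 12-pixel padding of the preferred embodiment; the variable names are illustrative.

```python
import math

H = W = 1024       # original image size in the preferred embodiment
NW, S = 256, 100   # sub-image size and overlap

Nrow = math.ceil((H - S) / (NW - S))
Ncol = math.ceil((W - S) / (NW - S))
PH = Nrow * (NW - S) + S - H
PW = Ncol * (NW - S) + S - W

print(Nrow, Ncol, PH, PW)   # 6 6 12 12 -> padded image shape [1036, 1036, 3]
```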
Step 2, stripe recognition;
the stripe recognition refers to a process of recognizing stripes in a low-resolution image using a deep neural network model.
Step 2.1, constructing a deep neural network model;
the deep neural network model is composed of a feature extraction layer, a feature pyramid layer, a region proposition layer, a region alignment layer and a prediction branch layer.
The feature extraction layer consists of several convolution, pooling and activation operations and is mainly intended to extract features of the targets in the image at different scales; the feature pyramid layer consists of several convolution, pooling, activation and upsampling operations and is intended to fuse the features extracted by the feature extraction layer. The region proposal layer is built from convolutions and performs coarse classification and coarse regression of the box coordinates for the generated candidate boxes. The region alignment layer extracts, for each candidate box, the coordinate region of the target it represents in the original image; the prediction branch layer is built from multiple convolution layers and performs finer classification and box-coordinate regression on the candidate boxes, as well as a pixel-by-pixel decision on the target inside each candidate box.
Step 2.3, neural network model prediction;
neural network model prediction is the process of identifying the stripes in an image. The input low-resolution image is extracted by a feature extraction layer to respectively extract a first feature vector, a second feature vector, a third feature vector and a fourth feature vector. Inputting an image representing a low-resolution array, wherein the shapes of the extracted first feature vector, the extracted second feature vector, the extracted third feature vector and the extracted fourth feature vector are respectively as follows: [64, 256], [32, 512], [16, 1024] and [8, 1024 ].
And performing up-sampling and convolution operation on the first feature vector, the second feature vector, the third feature vector and the fourth feature vector through the feature pyramid layer to obtain a fifth feature vector, a sixth feature vector, a seventh feature vector and an eighth feature vector. In an embodiment, the fifth, sixth, seventh and eighth feature vectors have shapes of [64, 256], [32, 256], [16, 256] and [8, 256], respectively.
The fifth, sixth, seventh and eighth feature vectors are sent into the region proposal layer to obtain coarsely regressed and classified candidate boxes. The coarsely regressed and classified candidate boxes are input to the region alignment layer to obtain the regions of the original image they correspond to; finally, the prediction branch layer performs finer regression and classification on the candidate boxes and predicts the pixels of the targets inside them.
Step 3 stripe splicing
Stripe stitching is a process of matching and merging stripes in a plurality of low-resolution images, and aims to restore the original part belonging to the same stripe.
Step 3.1 line stripe splicing
Since there is an overlapping area in two adjacent low-resolution images in a line, that is, there may be a situation where the identified target is duplicated, it is necessary to deduplicate the stripes in the overlapping area.
Step 3.1.1: taking out all the identified N stripes in one row;
step 3.1.2: take out the Nth i Stripe, N i =1,2,3,…N;
Step 3.1.3: take out the Nth j Stripe, N i 1,2,3, … N and N j ≠N i
Step 3.1.4: calculate the Nth i Stripe and Nth stripe j Intersection of stripe pixels;
step 3.1.5: calculate the Nth i Stripe and Nth stripe j A union of stripe pixels;
step 3.1.6: calculate the Nth i Stripe and Nth stripe j The intersection ratio of the stripe pixels;
step 3.1.7: if the intersection ratio is less than the threshold t 1 If the two stripes are not the same stripe, adding the two stripes into the line identification result area; if the intersection ratio is larger than the threshold value t 1 If the two stripes are considered to be the same stripe, the two stripe pixels are merged to be considered as one stripe and added into the line candidate area. In a preferred embodiment the threshold t 1 0.7. The threshold value in the present invention may take other values, and the present invention does not specifically limit the threshold value.
Step 3.1.8: repeat steps 3.1.2 through 3.1.7 until steps 3.1.2 and 3.1.3 have traversed all stripes.
Step 3.2, column stripe splicing;
because two vertically adjacent low-resolution images also share an overlap region, the same stripe appearing in two rows needs to be matched and the overlapping portion removed.
Step 3.2.1: take all sub-stripes of the r-th row from the row candidate region and add them to the column candidate region;
Step 3.2.2: take all sub-stripes of the t-th row from the row candidate region and add them to the region to be compared, where the r-th row and the t-th row are adjacent and the number of sub-stripes in the region to be compared is M;
Step 3.2.3: take the q-th stripe from the region to be compared, q=1,2,3,…,M;
Step 3.2.4: calculate the overlapping pixels between the q-th stripe and all stripes of the column candidate region;
Step 3.2.5: take the sub-stripe of the column candidate region that has the largest pixel overlap with the q-th stripe and denote it the p-th stripe; if the overlap between the q-th stripe and the p-th stripe is greater than or equal to the threshold t2, the q-th stripe and the p-th stripe are considered to belong to the same main stripe, the p-th stripe and the q-th stripe are deleted from the column candidate region and the region to be compared respectively, and the two are merged into one stripe that is added to the column candidate region; if the overlap between the q-th stripe and the p-th stripe is less than the threshold t2, the two are considered not to belong to the same main stripe and no operation is performed. In a preferred embodiment, t2 is 10000; the threshold in the present invention may take other values, and the invention does not specifically limit it.
Step 3.2.6: repeat Steps 3.2.3 to 3.2.5 until the region to be compared has been fully traversed;
Step 3.2.7: repeat Steps 3.2.2 to 3.2.6 until all row candidate regions have been traversed.
Through the above steps, the stripes in the original image are identified and spliced; the stripes in the column candidate region are all the identified stripes in the image.
The splicing in the present invention is illustrated below by way of example:
referring to fig. 3, fig. 3 is an original image, and fig. 4-7 are sub-images sliced in fig. 3;
assuming that the original image is divided into 4 sub-images, which are numbered 1,2,3, 4, and each sub-image has an overlap region, fig. 4 is a first sub-image, fig. 5 is a second sub-image, fig. 6 is a third sub-image, and fig. 7 is a fourth sub-image.
When in splicing, line splicing is firstly carried out, namely, the first sub-image and the second sub-image are spliced to form a first line of candidate areas, and then the third sub-image and the fourth sub-image are spliced to form a second line of candidate areas.
Next, column splicing is performed. Here the 4 sub-images form only two rows after slicing; the case of more rows proceeds as described below.
The stripe of the first row candidate area is taken out and added to the column candidate area (because it is the first row), and then the stripe of the second row is taken and added to the area to be compared. (for clearer presentation, the first row of candidate regions and the second row of candidate regions are used below.)
A stripe in the first candidate region is selected to calculate the overlapping pixels with all stripes in the second row of candidate regions. There are multiple results, and if the largest overlapping pixel in the multiple results is greater than the threshold, the two stripes are considered to be one stripe, i.e. the two stripes are the upper and lower parts of one stripe.
A second stripe in the first row of candidate regions is then selected, and overlapping pixels are calculated with all the stripes in the second row of candidate regions. There are multiple results, and if the largest overlapping pixel in the multiple results is greater than the threshold, the two stripes are considered to be one stripe, i.e. the two stripes are the upper and lower parts of one stripe. And then a third stripe … until all stripes have been matched. And if the third row exists, matching the stripe formed by the matching result of the first row with the stripe of the third row.
Step 4, skeleton calculation;
skeleton calculation is the process of calculating the image position of the center line of each stripe. Suppose the total number of identified stripes is Q.
Step 4.1: take any stripe k from the column candidate region, k=1,2,3,…,Q;
Step 4.2: collect the image position coordinates xi, yi of each pixel of the stripe;
Step 4.3: find the minimum value y1 and the maximum value y2 of the vertical coordinates of the stripe's pixels;
Step 4.4: find the minimum value x1 and the maximum value x2 of the horizontal coordinates of the stripe's pixels;
Step 4.5: take any point Yi in the range y1 to y2;
Step 4.6: calculate the mean X of the horizontal coordinates of all pixels with yi=Yi, and add X and Yi to the skeleton candidate region;
Step 4.7: repeat Steps 4.5 and 4.6 until the range y1 to y2 has been traversed;
Step 4.8: repeat Steps 4.1 to 4.7 until all stripes have been traversed;
in this way, the skeleton position information of all stripes is obtained.
Referring to FIG. 8, which is a schematic diagram of the skeleton positions: the filled region in the figure is a stripe, and the center line is the central skeleton of the stripe.
Step 5, stripe homogenization;
stripe homogenization is the process of homogenizing the identified stripes to suppress speckle.
Step 5.1: take out any stripe k, k=1,2,3,…,Q;
Step 5.2: take out the pixel values of all pixel points on the central skeleton of the stripe;
Step 5.3: perform polynomial fitting on all the pixel values to obtain a new group of pixel values;
Step 5.4: compare the coordinates of the new pixel values with the stripe region coordinates; if a new pixel value's coordinate lies within the stripe region coordinates, assign the new pixel value to the corresponding position of the original image; if the new pixel value's coordinate is not within the stripe region coordinates, do nothing;
Step 5.5: translate the central skeleton, the translation range being the horizontal coordinate range calculated in Step 4.4, and repeat Steps 5.2 to 5.4 until all integers between the stripe's abscissas x1 and x2 have been traversed;
Step 5.6: repeat Steps 5.1 to 5.5 until all stripes have been traversed.
Example two
Referring to fig. 2, fig. 2 is a schematic composition diagram of an image processing system, and a second embodiment of the present invention provides an image processing system, including:
the image slicing unit is used for slicing an original image into a plurality of sub-images, and the original image is provided with a plurality of main stripes;
the identification unit is used for identifying a plurality of sub-stripes in the sub-image by using the deep neural network model;
the splicing unit is used for judging whether the plurality of identified sub-stripes belong to the same main stripe or not, splicing the plurality of sub-stripes belonging to the same main stripe, and obtaining a spliced main stripe;
the computing unit is used for computing the position of the spliced main stripe center line in the original image to obtain main stripe skeleton position information;
and the homogenization unit is used for homogenizing the spliced main stripes based on the position information of the main stripe skeleton.
EXAMPLE III
The third embodiment of the present invention provides an image processing apparatus, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the image processing method when executing the computer program.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be used to store the computer programs and/or modules, and the processor may implement various functions of the image processing apparatus of the present invention by operating or executing data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart memory card, a secure digital card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or other volatile solid state storage device.
Example four
A fourth embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the image processing method are implemented.
The image processing apparatus, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method of the above embodiments may also be implemented by a computer program stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the above method embodiments can be implemented. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. An image processing method, characterized in that the method comprises:
slicing an original image into a plurality of sub-images, wherein the original image is provided with a plurality of main stripes;
identifying a plurality of sub-stripes in the sub-image by using a deep neural network model;
judging whether the plurality of identified sub-stripes belong to the same main stripe, and splicing the plurality of sub-stripes belonging to the same main stripe to obtain a spliced main stripe;
calculating the position of the spliced main stripe central line in the original image to obtain main stripe skeleton position information;
homogenizing the spliced main stripes based on the position information of the main stripe skeleton;
the method comprises the following steps of splicing the sub-stripes, wherein the splicing operation of each row of sub-stripes in the splicing of the sub-stripes is executed:
step a: taking out all the identified N sub-stripes in one row, wherein N is an integer greater than 3;
step b: take out the Nth i Stripe of sliver, N i =1,2,3,…N;
Step c: take out the Nth j Stripe of sliver, N i 1,2,3, … N and N j ≠N i
Step d: calculate the Nth i Stripe and Nth stripe j Intersection K of stripe pixels;
step e: calculate the Nth i Stripe and Nth stripe j A union P of striped pixels;
step d: based on the intersection K and the union P, the Nth set is calculated i Stripe and Nth stripe j The intersection ratio L of the stripe pixels;
step e: if the intersection ratio L is less than or equal to the threshold value t 1 Then, consider as the Nth i Stripe and Nth stripe j The stripe does not belong to the same main stripe, and the Nth stripe i Stripe and Nth stripe j Adding the stripes into a line identification result area; if the intersection ratio L is larger than the threshold value t 1 Then, consider as the Nth i Stripe and Nth stripe j The stripe belongs to the same main stripe, and the Nth stripe i Stripe and Nth stripe j Combining the stripes into a stripe and adding the stripe into the line candidate area;
repeating the steps b to e until the steps b and c traverse all the sub-stripes;
and each column of sub-stripes in the column sub-stripe splicing undergoes the following splicing operation:
step I: take all sub-stripes of the r-th row from the row candidate region and add them to the column candidate region;
step II: take all sub-stripes of the t-th row from the row candidate region and add them to the region to be compared, where the r-th row and the t-th row are adjacent and the number of sub-stripes in the region to be compared is M;
step III: take the q-th sub-stripe from the region to be compared, q=1,2,3,…,M;
step IV: calculate the overlapping pixels between the q-th sub-stripe and all sub-stripes of the column candidate region;
step V: take the sub-stripe of the column candidate region that has the largest pixel overlap with the q-th sub-stripe and denote it the p-th sub-stripe; if the overlap between the q-th sub-stripe and the p-th sub-stripe is greater than or equal to a threshold t2, the q-th sub-stripe and the p-th sub-stripe are considered to belong to the same main stripe, the p-th sub-stripe and the q-th sub-stripe are deleted from the column candidate region and the region to be compared respectively, and the two are merged into one stripe that is added to the column candidate region; if the overlap between the q-th sub-stripe and the p-th sub-stripe is less than the threshold t2, the two are considered not to belong to the same main stripe and no operation is performed;
step VI: repeat steps III to V until the region to be compared has been fully traversed;
step VII: repeat steps II to VI until all row candidate regions have been traversed.
2. The method of claim 1, wherein slicing the original image into a plurality of sub-images comprises:
calculating the number of lines and columns of the sub-images obtained after the original image is sliced;
calculating to obtain length and width information of the original image filling part;
filling the original image based on the length and width information of the original image filling part;
segmenting the filled original image based on the line number and the column number of the sub-images to obtain a plurality of sub-images;
and the resolution of the original image is higher than that of the sub-images.
3. The image processing method of claim 2, wherein the number of rows and columns of sub-images after slicing the original image is calculated as:
Nrow=⌈(H-S)/(NW-S)⌉
Ncol=⌈(W-S)/(NW-S)⌉
where ⌈ ⌉ denotes rounding up, Nrow is the number of rows of sub-images, Ncol is the number of columns of sub-images, H is the height of the original image, W is the width of the original image, the height and width of each sub-image are both NW, and the height and width of the overlap region between any two adjacent sub-images are both S;
the width and height of the padded portion of the original image are calculated as:
PW=Ncol*(NW-S)+S-W
PH=Nrow*(NW-S)+S-H
where PW is the width of the padded portion of the original image and PH is the height of the padded portion.
4. The image processing method according to claim 1, wherein calculating the position of the center line of the spliced main stripes in the original image to obtain the main-stripe skeleton position information specifically comprises the following steps, where the number of spliced main stripes is Q:
step one: take any main stripe k from the column candidate region, k=1,2,3,…,Q;
step two: collect the position coordinates xi, yi of every pixel of the main stripe k in the image;
step three: find the minimum value y1 and the maximum value y2 of the vertical pixel coordinates of the main stripe k;
step four: find the minimum value x1 and the maximum value x2 of the horizontal pixel coordinates of the main stripe k;
step five: take any Yi in the preset range y1≤Yi≤y2;
step six: calculate the mean X of the horizontal coordinates of all pixels with yi=Yi, and add X and Yi to the skeleton candidate region;
step seven: repeat steps five and six until all integers in the preset range have been traversed;
step eight: repeat steps one to seven until all main stripes have been traversed, obtaining the skeleton position information of all main stripes.
5. The image processing method according to claim 1, wherein homogenizing the spliced main stripes based on the main-stripe skeleton position information specifically comprises:
step S1: take out any main stripe k, k=1,2,3,…,Q;
step S2: take out the pixel values of all pixel points on the central skeleton of the main stripe k, the central skeleton being the center line of the main stripe k;
step S3: perform polynomial fitting on all the taken pixel values to obtain a group of new pixel values;
step S4: compare the coordinates of the new pixel values with the region coordinates of the main stripe k; if a new pixel value's coordinate lies within the region coordinates of the main stripe k, assign the new pixel value to the corresponding position of the original image; if the new pixel value's coordinate is not within the region coordinates of the main stripe k, do nothing;
step S5: translate the central skeleton and repeat steps S2 to S4 until all integers between the abscissas x1 and x2 of the main stripe k have been traversed;
step S6: repeat steps S1 to S5 until all main stripes have been traversed.
6. The method of claim 1, wherein identifying the sub-stripes in the sub-image using the deep neural network model comprises:
constructing a deep neural network model, wherein the deep neural network model comprises the following steps: the device comprises a feature extraction layer, a feature pyramid layer, a region proposing layer, a region alignment layer and a prediction branch layer; the feature extraction layer is used for extracting features of each target in the image in different dimensions; the characteristic pyramid layer is used for fusing the characteristics extracted by the characteristic extraction layer; the region proposing layer is used for carrying out rough classification and rough regression of the frame coordinates on the generated candidate frames; the area alignment layer is used for extracting a coordinate area of the target represented by the candidate frame in the original image; the prediction branch layer is used for classifying the candidate frames and performing frame coordinate regression, and judging the targets in the candidate frames pixel by pixel;
identifying sub-stripes in the image based on the deep neural network model; respectively extracting a first feature vector, a second feature vector, a third feature vector and a fourth feature vector from the input subimage through a feature extraction layer; performing up-sampling and convolution operation on the first feature vector, the second feature vector, the third feature vector and the fourth feature vector through the feature pyramid layer to obtain a fifth feature vector, a sixth feature vector, a seventh feature vector and an eighth feature vector; sending the first feature vector, the second feature vector, the third feature vector and the fourth feature vector into a region proposing layer to obtain a candidate frame of coarse regression and classification; inputting the coarsely regressed and classified candidate frames into a region alignment layer to obtain regions of the candidate frames corresponding to the original image, regressing and classifying the candidate frames through a prediction branch layer, and predicting pixels of targets in the candidate frames.
7. An image processing system, characterized in that the system comprises:
the image slicing unit is used for slicing an original image into a plurality of sub-images, and the original image is provided with a plurality of main stripes;
the identification unit is used for identifying a plurality of sub-stripes in the sub-image by using the deep neural network model;
the splicing unit is used for judging whether the plurality of identified sub-stripes belong to the same main stripe or not, splicing the plurality of sub-stripes belonging to the same main stripe, and obtaining a spliced main stripe;
the computing unit is used for computing the position of the center line of the spliced main stripe in the original image to obtain main stripe skeleton position information;
the homogenization processing unit is used for homogenizing the spliced main stripes based on the position information of the main stripe skeleton;
the sub-stripe splicing in the system comprises row sub-stripe splicing and column sub-stripe splicing, wherein each row of sub-stripes in the row sub-stripe splicing is subjected to the following splicing operation:
step a: taking out all the identified N sub-stripes in one row, wherein N is an integer greater than 3;
step b: taking out the Ni-th sub-stripe, Ni = 1, 2, 3, ..., N;
step c: taking out the Nj-th sub-stripe, Nj = 1, 2, 3, ..., N and Nj ≠ Ni;
step d: calculating the intersection K of the pixels of the Ni-th sub-stripe and the Nj-th sub-stripe;
step e: calculating the union P of the pixels of the Ni-th sub-stripe and the Nj-th sub-stripe;
step f: calculating the intersection-over-union ratio L of the pixels of the Ni-th sub-stripe and the Nj-th sub-stripe based on the intersection K and the union P;
step g: if the intersection-over-union ratio L is less than or equal to a threshold t1, the Ni-th sub-stripe and the Nj-th sub-stripe are considered not to belong to the same main stripe, and both are added to the row identification result area; if the intersection-over-union ratio L is greater than the threshold t1, the Ni-th sub-stripe and the Nj-th sub-stripe are considered to belong to the same main stripe, and are combined into one sub-stripe that is added to the row candidate area;
repeating steps b to g until steps b and c have traversed all the sub-stripes (a code sketch of this intersection-over-union merge appears after this claim);
and each column of sub-stripes in the column sub-stripe splicing is subjected to the following splicing operation:
step I: taking out all the sub-stripes of the r-th row in the row candidate area and adding them to the column candidate area;
step II: taking out all the sub-stripes of the t-th row in the row candidate area and adding them to the area to be compared, the r-th row and the t-th row being adjacent;
step III: taking out one sub-stripe from the area to be compared;
step IV: calculating the overlapping pixels between the taken-out sub-stripe and all the sub-stripes of the column candidate area;
step V: taking the sub-stripe of the column candidate area that has the largest pixel overlap with the taken-out sub-stripe; if the overlapping pixels are greater than or equal to a threshold t2, the two sub-stripes are considered to belong to the same main stripe, they are deleted from the column candidate area and the area to be compared respectively, and they are combined into one sub-stripe that is added to the column candidate area; if the overlapping pixels are less than the threshold t2, the two sub-stripes are considered not to belong to the same main stripe and no operation is performed;
step VI: repeating steps III to V until the traversal of the area to be compared is finished;
step VII: repeating steps II to VI until all the rows of the row candidate area are traversed.
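Steps b to g of the row sub-stripe splicing above reduce to an intersection-over-union test on the pixel sets of two sub-stripes, merging them when the ratio L exceeds the threshold t1; the column splicing in steps I to VII applies the same idea with a raw overlapping-pixel count and the threshold t2. A minimal sketch of the row merge, assuming each sub-stripe is given as a set of (row, col) pixel coordinates; the function names and the default threshold are illustrative assumptions:

    def pixel_iou(stripe_a, stripe_b):
        # K = |intersection|, P = |union|, L = K / P in the notation of the claims.
        k = len(stripe_a & stripe_b)
        p = len(stripe_a | stripe_b)
        return k / p if p else 0.0

    def splice_row(sub_stripes, t1=0.5):       # t1 is an assumed threshold value
        # Greedy pairwise merge of the sub-stripes of one row (steps b-g sketch):
        # any pair with IoU above t1 is treated as the same main stripe and combined.
        stripes = [set(s) for s in sub_stripes]
        merged = True
        while merged:
            merged = False
            for i in range(len(stripes)):
                for j in range(i + 1, len(stripes)):
                    if pixel_iou(stripes[i], stripes[j]) > t1:
                        stripes[i] |= stripes[j]        # same main stripe: combine pixel sets
                        del stripes[j]
                        merged = True
                        break
                if merged:
                    break
        return stripes                          # surviving stripes form the row candidate area

The column-splicing test would instead compare len(stripe_a & stripe_b) directly against t2 and merge the pair with the largest overlap.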
8. An image processing apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the image processing method according to any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 6.
CN202011076677.5A 2020-10-10 2020-10-10 Image processing method, system, device and medium Active CN112215770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011076677.5A CN112215770B (en) 2020-10-10 2020-10-10 Image processing method, system, device and medium

Publications (2)

Publication Number Publication Date
CN112215770A CN112215770A (en) 2021-01-12
CN112215770B (en) 2022-08-02

Family

ID=74052968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011076677.5A Active CN112215770B (en) 2020-10-10 2020-10-10 Image processing method, system, device and medium

Country Status (1)

Country Link
CN (1) CN112215770B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113915871A (en) * 2021-03-05 2022-01-11 海信(山东)冰箱有限公司 Refrigerator and control method thereof
CN114862672B (en) * 2022-04-02 2024-04-02 华南理工大学 Image rapid splicing method based on vector shape preserving transformation


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102778256B (en) * 2012-07-17 2015-07-08 中国科学院力学研究所 Multi-physical field measurement system aiming at strong laser driven impact effect test
US10161924B2 (en) * 2014-03-24 2018-12-25 National Technology & Engineering Solutions Of Sandia, Llc Sensor system that uses embedded optical fibers
CN108036863B (en) * 2017-12-19 2023-08-25 中国工程物理研究院激光聚变研究中心 Wide-range shock wave speed diagnosis device and measurement method
CN109712075B (en) * 2018-12-24 2022-10-14 广东理致技术有限公司 Method and device for identifying original image of digital image data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012056913A1 (en) * 2010-10-26 2012-05-03 Sintokogio, Ltd. Evaluation method and evaluation system for impact force of laser irradiation during laser peening and laser peening method and laser peening system
CN107179132A (en) * 2017-05-09 2017-09-19 中国工程物理研究院激光聚变研究中心 Optical fiber image transmission beam velocity interferometer and shock velocity computational methods
CN107144361A (en) * 2017-06-12 2017-09-08 中国科学院西安光学精密机械研究所 The consistent any reflecting surface velocity interferometer of many sensitivity of branch target
CN108871595A (en) * 2018-07-27 2018-11-23 中国工程物理研究院激光聚变研究中心 Super time resolution shock velocity calculation method
CN109493281A (en) * 2018-11-05 2019-03-19 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN111536868A (en) * 2020-04-07 2020-08-14 华东师范大学 Imaging type arbitrary reflecting surface speed interferometer with ultra-fast compression

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Simultaneous Detection and Tracking of Moving-Target Shadows in ViSAR Imagery; Xiaoqing Tian et al.; IEEE Transactions on Geoscience and Remote Sensing; 20200616; 1-18 *
Laser speckle suppression method in imaging VISAR systems; Ji Teng et al.; Science & Technology Information; 20170123; 243-246 *
Speckle suppression in imaging velocity interferometer system for any reflector; Xu Tao et al.; High Power Laser and Particle Beams; 20140515; Vol. 26, No. 5; 052001-1-4 *
Study on the causes of speckle in imaging velocity interferometers; Li Yulong et al.; High Power Laser and Particle Beams; 20140915; Vol. 26, No. 9; 092003-1-4 *


Similar Documents

Publication Publication Date Title
CN110738207B (en) Character detection method for fusing character area edge information in character image
CN107784301B (en) Method and device for recognizing character area in image
JP6871314B2 (en) Object detection method, device and storage medium
CN105144239B (en) Image processing apparatus, image processing method
AU2019227478B2 (en) An improved content aware fill that leverages the boundaries of underlying objects
JP5854802B2 (en) Image processing apparatus, image processing method, and computer program
CN112215770B (en) Image processing method, system, device and medium
US8743136B2 (en) Generating object representation from bitmap image
JP7246104B2 (en) License plate identification method based on text line identification
CN105574524B (en) Based on dialogue and divide the mirror cartoon image template recognition method and system that joint identifies
DE60020038T2 (en) Method for processing a numerical image
DE102016103854A1 (en) Graphics processing with directional representations of illumination at probe positions within a scene
CN101802842A (en) System and method for identifying complex tokens in an image
DE602004003111T2 (en) Deep-based antialiasing
EP3688725B1 (en) Method and device for creating a 3d reconstruction of an object
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
Bebeselea-Sterp et al. A comparative study of stereovision algorithms
Nguyen et al. High-definition texture reconstruction for 3D image-based modeling
CN110751732A (en) Method for converting 2D image into 3D image
CN116310688A (en) Target detection model based on cascade fusion, and construction method, device and application thereof
Rest et al. Illumination-based augmentation for cuneiform deep neural sign classification
RU2470368C2 (en) Image processing method
CN114863108A (en) Method, system, electronic device and computer readable storage medium for point cloud processing
CN115115535A (en) Depth map denoising method, device, medium and equipment
JP2775122B2 (en) Automatic contour extraction vectorization processing method of illustration data and processing device used for the method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 610042 No. 270, floor 2, No. 8, Jinxiu street, Wuhou District, Chengdu, Sichuan

Applicant after: Chengdu shuzhilian Technology Co.,Ltd.

Address before: No.2, floor 4, building 1, Jule road crossing, Section 1, West 1st ring road, Wuhou District, Chengdu City, Sichuan Province 610041

Applicant before: CHENGDU SHUZHILIAN TECHNOLOGY Co.,Ltd.

GR01 Patent grant