CN113936015A - Method and device for extracting effective region of image - Google Patents


Info

Publication number
CN113936015A
CN113936015A
Authority
CN
China
Prior art keywords
image
pixel point
point
matrix
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111545467.0A
Other languages
Chinese (zh)
Other versions
CN113936015B (en)
Inventor
冯健
常培佳
邵学军
杨延成
Current Assignee
Qingdao Medcare Digital Engineering Co ltd
Original Assignee
Qingdao Medcare Digital Engineering Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Medcare Digital Engineering Co ltd filed Critical Qingdao Medcare Digital Engineering Co ltd
Priority to CN202111545467.0A priority Critical patent/CN113936015B/en
Publication of CN113936015A publication Critical patent/CN113936015A/en
Application granted granted Critical
Publication of CN113936015B publication Critical patent/CN113936015B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing and provides a method and a device for extracting the effective region of an image. The method comprises the following steps: preprocessing the acquired video images to be processed and converting the processed video images into YUV format; extracting the Y component of each format-converted video image and generating a corresponding two-dimensional matrix for each; superposing all the generated two-dimensional matrices; solving the interpolation matrix of corresponding elements of two adjacent rows or two adjacent columns of the superposed two-dimensional matrix; binarizing the solved interpolation matrix with a preset threshold to obtain a binary image matrix; scanning the binary image matrix and removing small interfering regions around the periphery to obtain a preliminary effective region; and calculating the circumscribed rectangle of the preliminary effective region to obtain the complete effective region. According to the invention, a better contrast between the field of view and the non-field-of-view can be obtained by superposing a plurality of images, and noise is then filtered out by image processing, so that an accurate field-of-view region can be obtained.

Description

Method and device for extracting effective region of image
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for extracting an effective region of an image.
Background
In digestive endoscopy, the field of view is fixed within a single examination, so its position can be calculated before the formal examination begins. However, before the formal examination the endoscope lens is still outside the body and the picture is dim; in addition, the unprocessed raw image carries a large amount of invalid information around its periphery. Both factors degrade subsequent image processing and training, so the raw image must be processed first.
In existing methods for extracting the effective region of digestive-endoscopy images, the effective region must lie at the center of the image, effective regions away from the center cannot be identified correctly, the non-effective region must be pure black or pure white, and the results on color images do not meet requirements. Moreover, existing methods are ill-suited to computing the effective region of a video stream: processing every frame wastes a large amount of computing resources, while processing too few frames degrades the result. A new extraction method is therefore needed to improve the recognition of detail in the image.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method and a device for extracting the effective region of an image, addressing the problems that existing digestive-endoscopy images are unclear and that existing image-processing means cannot obtain a clear effective region.
In a first aspect, the present invention provides a method for extracting an effective region of an image, the method comprising:
s1, preprocessing the acquired video image to be processed, and converting the format of the processed video image into a YUV format;
s2, extracting Y components of each video image after format conversion, and respectively generating corresponding two-dimensional matrixes;
s3, superposing all the generated two-dimensional matrixes;
s4, solving an interpolation matrix of corresponding elements of two adjacent rows or two columns of the superposed two-dimensional matrix;
s5, carrying out binarization processing on the obtained interpolation matrix by using a preset threshold value to obtain a binary image matrix;
s6, scanning the binary image matrix, and removing small area areas of surrounding interference to obtain a preliminary effective area;
and S7, calculating a circumscribed rectangular area of the preliminary effective area to obtain a complete effective area.
Optionally, the method further comprises:
and S8, converting the complete effective region into the coordinate system of the original image.
Optionally, in the step S1, the preprocessing of the video image specifically includes: scaling the video image proportionally.
Optionally, the step S2 specifically includes:
extracting the Y component of each video image after the format conversion by the following formula:
Y=0.257R+0.504G+0.098B+16,
wherein Y is in the range of [16,235],
and obtaining a two-dimensional matrix of each video image after format conversion.
Optionally, in the step S4, the interpolation matrix of two adjacent rows of the superposed two-dimensional matrix is calculated with the following formula:

C_(h-1)×128 = B_(2:h) − B_(1:h-1),

where B_(2:h) is the new matrix formed by rows 2 to h of the superposed two-dimensional matrix B_(h×128), B_(1:h-1) is the new matrix formed by rows 1 to h−1 of B_(h×128), h is the number of rows of B_(h×128), and 128 is the number of columns.
Optionally, the step S5 specifically includes:
setting a preset threshold K, wherein K is 10 times 30%–50% of the Y-component value;

and setting values of the interpolation matrix greater than the threshold K to 1 and all others to 0, obtaining the binary image matrix.
Optionally, the step S6 specifically includes:
scanning the two-dimensional image matrix row by row from the upper left corner, and taking the first pixel of a run of more than 32 consecutive 1-valued pixels as the uppermost point of the first row-scan region; scanning the matrix column by column from the upper left corner, and taking the first pixel of a run of consecutive 1-valued pixels taller than 1/4 of the matrix height as the leftmost point of the first column-scan region; and then determining the upper-left corner point of the preliminary effective region from the abscissa of the uppermost point of the first row-scan region and the ordinate of the leftmost point of the first column-scan region;

scanning the two-dimensional image matrix row by row from the lower right corner, and taking the first pixel of a run of more than 32 consecutive 1-valued pixels as the lowermost point of the second row-scan region; scanning the matrix column by column from the lower right corner, and taking the first pixel of a run of consecutive 1-valued pixels taller than 1/4 of the matrix height as the rightmost point of the second column-scan region; and then determining the lower-right corner point of the preliminary effective region from the abscissa of the lowermost point of the second row-scan region and the ordinate of the rightmost point of the second column-scan region;

and forming a rectangular region from the upper-left corner point and the lower-right corner point, and determining the rectangular region as the preliminary effective region.
Optionally, the step S6 specifically includes:

S61, starting from the first row of the two-dimensional image matrix, judging whether the value of the 32nd pixel of the row is 1; if so, taking that pixel as the current pixel and executing the next step; otherwise, judging whether the value of the 64th pixel of the row is 1; if so, taking that pixel as the current pixel and executing the next step, and if not, repeating this step on the next row;

S62, searching leftward and rightward from the current pixel until a pixel whose value is not 1 is reached on each side; if the total length of the 1-valued pixels found is greater than or equal to the length of the pre-estimated effective region, determining the leftmost point and the rightmost point of the uppermost row of the preliminary effective region;

S63, starting from the first column (counting from left to right) of the two-dimensional image matrix, judging whether the value of the 32nd pixel of the column is 1; if so, taking that pixel as the second current pixel and executing the next step; otherwise, judging whether the value of the 64th pixel of the column is 1; if so, taking that pixel as the second current pixel and executing the next step, and if not, repeating this step on the next column;

S64, searching upward and downward from the second current pixel until a pixel whose value is not 1 is reached on each side; if the total height of the 1-valued pixels found is greater than or equal to the width of the pre-estimated effective region, determining the uppermost point and the lowermost point of the leftmost column of the preliminary effective region;

S65, determining the upper-left corner point of the preliminary effective region from the leftmost point of the uppermost row and the uppermost point of the leftmost column;

S66, searching from the lower right corner of the two-dimensional image matrix by the method of steps S61–S65 to obtain the lower-right corner point of the preliminary effective region;

and S67, forming a rectangular region from the upper-left corner point and the lower-right corner point, and determining the rectangular region as the preliminary effective region.
Optionally, the step S7 specifically includes:

S71, determining the central point of the preliminary effective region;

S72, scanning row by row upward and downward from the central point until a row whose pixel values are all 0 is reached in each direction, thereby determining the uppermost row and the lowermost row of the complete effective region;

S73, scanning column by column leftward and rightward from the central point until a column whose pixel values are all 0 is reached in each direction, thereby determining the leftmost column and the rightmost column of the complete effective region;

and S74, obtaining the complete effective region from the uppermost row, the lowermost row, the leftmost column and the rightmost column.
In a second aspect, the present invention further provides an apparatus for extracting an image effective region, including: a memory, a processor, and a computer program stored on the memory and executable on the processor; the computer program, when executed by the processor, implements the steps of the method for extracting an image active area described above.
According to the method and the device for extracting the effective region of an image, a better contrast between the field of view and the non-field-of-view can be obtained by superposing a plurality of images, and noise is then filtered out by image processing, so that an accurate field-of-view region can be obtained.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart of an image effective area extraction method according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following drawings and specific embodiments, it being understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention.
In the following description, suffixes such as "module", "component" or "unit" are used to denote elements only to facilitate the description of the invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
Example one
The embodiment of the invention provides a method for extracting an effective region of an image, which comprises the following steps:
s1, preprocessing the acquired video image to be processed, and converting the format of the processed video image into a YUV format.
In this embodiment, to balance the accuracy of the image coordinate system against the amount of calculation, the original image is scaled proportionally during preprocessing, that is, the image is scaled to W × H (where H is the height after scaling with the aspect ratio maintained and W is the width). Other methods capable of achieving the same function are also applicable to the embodiments of the invention.
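The proportional scaling above can be sketched as follows (illustrative only: the patent does not name the resampling method, so nearest-neighbour index sampling stands in here, and `target_w=128` is an assumption matching the h × 128 matrices used later):

```python
import numpy as np

def scale_to_width(img, target_w=128):
    # Proportionally scale img to a fixed width, keeping the aspect ratio.
    # Nearest-neighbour sampling via index arrays (a stand-in choice).
    h, w = img.shape[:2]
    target_h = max(1, round(h * target_w / w))
    ys = (np.arange(target_h) * h / target_h).astype(int)
    xs = (np.arange(target_w) * w / target_w).astype(int)
    return img[ys][:, xs]
```

A 640-wide frame scaled to width 128 keeps its aspect ratio, so a 480 × 640 frame becomes 96 × 128.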
And S2, extracting the Y component of each video image after format conversion, and respectively generating corresponding two-dimensional matrixes.
After the scaled image is converted into YUV format, only the Y component is of concern for the problem solved by the invention, and the UV components are not processed. The Y component of the image is extracted with the following formula:

Y = 0.257R + 0.504G + 0.098B + 16,

where Y is in the range [16, 235],

thereby obtaining a two-dimensional matrix A_(h×128) for each format-converted video image, where h is the number of rows of A_(h×128) and 128 is the number of columns.
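The Y-plane extraction above can be sketched in NumPy as follows (the function name is illustrative; the formula and the [16, 235] range are taken from the text):

```python
import numpy as np

def y_component(rgb):
    # Y = 0.257R + 0.504G + 0.098B + 16 (BT.601 studio-swing luma),
    # clipped to the stated range [16, 235]; rgb is an H x W x 3 array.
    rgb = rgb.astype(np.float64)
    y = 0.257 * rgb[..., 0] + 0.504 * rgb[..., 1] + 0.098 * rgb[..., 2] + 16
    return np.clip(y, 16, 235)
```

Pure black maps to 16 and pure white clips to 235, the two ends of the stated range.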
And S3, overlapping all the generated two-dimensional matrixes.
In one embodiment, the matrices of 10 images are added to obtain the superposed two-dimensional matrix B_(h×128):

B_(h×128) = A_1 + A_2 + … + A_10,

where A_i denotes the two-dimensional matrix of the i-th image.
And S4, solving an interpolation matrix of corresponding elements of two adjacent rows or two columns of the superposed two-dimensional matrix.
In this embodiment, the interpolation matrix of two adjacent rows of the superposed two-dimensional matrix is calculated with the following formula:

C_(h-1)×128 = B_(2:h) − B_(1:h-1),

where B_(2:h) is the new matrix formed by rows 2 to h of the superposed two-dimensional matrix B_(h×128), and B_(1:h-1) is the new matrix formed by rows 1 to h−1 of B_(h×128).

Similarly, the interpolation matrix C_(h×127) of two adjacent columns can be calculated.
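Steps S3 and S4 above can be sketched with NumPy as follows (function names are illustrative; `superpose` forms B from the per-frame matrices A_i, and the slicing mirrors the rows-2-to-h minus rows-1-to-h−1 formula):

```python
import numpy as np

def superpose(y_planes):
    # S3: element-wise sum of the per-frame Y matrices A_i -> B (h x 128).
    return np.sum(np.stack(y_planes), axis=0)

def row_interpolation_matrix(B):
    # S4: rows 2..h of B minus rows 1..h-1 of B, shape (h-1) x 128.
    return B[1:, :] - B[:-1, :]

def col_interpolation_matrix(B):
    # Analogous adjacent-column version, shape h x 127.
    return B[:, 1:] - B[:, :-1]
```

On a matrix whose rows increase by a constant step, every entry of the row interpolation matrix equals that step, which is the behaviour the thresholding of S5 relies on.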
And S5, carrying out binarization processing on the obtained interpolation matrix by using a preset threshold value to obtain a binary image matrix.
In this embodiment, the binarization processing is as follows: a preset threshold K is set, where K is 10 times 30%–50% of the Y-component value (the factor of 10 matching the number of superposed images); values of the interpolation matrix greater than K are set to 1 and all others to 0, giving the binary image matrix M_((h-1)×128) or M_(h×127).

In the image processing of this embodiment, either the row interpolation matrix C_((h-1)×128) or the column interpolation matrix C_(h×127) can be used for the subsequent steps; both achieve the same purpose.

If the interpolation step were skipped and the scanning calculation performed directly, a different threshold would have to be set for each image, because the overall brightness differs from image to image, and an inaccurately chosen threshold lowers the accuracy of boundary determination. Solving the interpolation matrix therefore improves the accuracy of the image processing.
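The binarization of step S5 can be sketched as follows (a minimal illustration; the threshold `k` is chosen by the caller following the 10 × 30%–50% rule described above):

```python
import numpy as np

def binarize(C, k):
    # S5: entries of the interpolation matrix above the preset threshold K
    # become 1, all others 0, giving the binary image matrix M.
    return (C > k).astype(np.uint8)
```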
In this embodiment, the contour of the effective region can be preliminarily determined through step S5.

S6, scanning the binary image matrix, and removing small interfering regions around the periphery to obtain a preliminary effective region.

In this step, removing the small interfering regions around the periphery allows the effective region to be determined accurately.
In the image processing of step S6, two methods are provided in the embodiments of the invention.

The first method uses progressive scanning and proceeds as follows:

Since the effective region occupies at least 1/4 of the image, scanning starts from the upper left corner.
First, the two-dimensional image matrix is scanned row by row from the upper left corner, and the first pixel of a run of more than 32 consecutive 1-valued pixels is taken as the uppermost point (x, y) of the first row-scan region.

The matrix is then scanned column by column from the upper left corner, and the first pixel of a run of consecutive 1-valued pixels taller than h/4 is taken as the leftmost point (i, j) of the first column-scan region.

The upper-left corner point (x, j) of the preliminary effective region is then determined from the abscissa of the uppermost point of the first row-scan region and the ordinate of the leftmost point of the first column-scan region.

If fewer than 32 consecutive 1-valued pixels are found during a scan, the scan result is discarded, i.e., treated as noise, and scanning resumes from the next 1-valued pixel until the row and column scans are complete.
Similarly, the two-dimensional image matrix is scanned row by row from the lower right corner, and the first pixel of a run of more than 32 consecutive 1-valued pixels is taken as the lowermost point of the second row-scan region; the matrix is scanned column by column from the lower right corner, and the first pixel of a run of consecutive 1-valued pixels taller than 1/4 of the matrix height is taken as the rightmost point of the second column-scan region; the lower-right corner point (u, n) of the preliminary effective region is then determined from the abscissa of the lowermost point of the second row-scan region and the ordinate of the rightmost point of the second column-scan region.

A rectangular region is then formed from the upper-left corner point and the lower-right corner point and determined as the preliminary effective region.
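The core of the progressive-scanning method can be sketched as follows (illustrative: the helper finds the first row containing a run of at least `min_len` consecutive 1s, discarding shorter runs as noise, which corresponds to the row scan above; the column scan is analogous on the transposed matrix):

```python
import numpy as np

def first_row_with_run(M, min_len=32):
    # Scan rows top to bottom; return (row, start_col) of the first run of
    # at least min_len consecutive 1s. Shorter runs are skipped as noise.
    for r in range(M.shape[0]):
        run = 0
        for c in range(M.shape[1]):
            if M[r, c] == 1:
                run += 1
                if run >= min_len:
                    return r, c - run + 1
            else:
                run = 0
    return None  # no qualifying run found
```

Scanning from the lower right corner can reuse the same helper on `M[::-1, ::-1]`, with the returned indices mirrored back.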
The second method uses keypoint scanning and proceeds as follows:

S61, starting from the first row of the two-dimensional image matrix, judging whether the value of the 32nd pixel of the row is 1; if so, taking that pixel as the current pixel and executing the next step; otherwise, judging whether the value of the 64th pixel of the row is 1; if so, taking that pixel as the current pixel and executing the next step, and if not, repeating this step on the next row;

S62, searching leftward and rightward from the current pixel until a pixel whose value is not 1 is reached on each side; if the total length of the 1-valued pixels found is greater than or equal to the length of the pre-estimated effective region, determining the leftmost point and the rightmost point of the uppermost row of the preliminary effective region; in this embodiment, the pre-estimated effective region refers to the effective region estimated from experience after preliminary processing of the image;

S63, starting from the first column (counting from left to right) of the two-dimensional image matrix, judging whether the value of the 32nd pixel of the column is 1; if so, taking that pixel as the second current pixel and executing the next step; otherwise, judging whether the value of the 64th pixel of the column is 1; if so, taking that pixel as the second current pixel and executing the next step, and if not, repeating this step on the next column;

S64, searching upward and downward from the second current pixel until a pixel whose value is not 1 is reached on each side; if the total height of the 1-valued pixels found is greater than or equal to the width of the pre-estimated effective region, determining the uppermost point and the lowermost point of the leftmost column of the preliminary effective region;

S65, determining the upper-left corner point of the preliminary effective region from the leftmost point of the uppermost row and the uppermost point of the leftmost column;

S66, searching from the lower right corner of the two-dimensional image matrix by the method of steps S61–S65 to obtain the lower-right corner point of the preliminary effective region;

and S67, forming a rectangular region from the upper-left corner point and the lower-right corner point, and determining the rectangular region as the preliminary effective region.
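The probe-and-expand steps S61–S62 of the keypoint method can be sketched as follows (illustrative: columns 32 and 64 are probed as 0-based indices 31 and 63, and `min_width` stands in for the length of the pre-estimated effective region, which the patent leaves to experience):

```python
import numpy as np

def top_edge_keypoint(M, min_width, probes=(31, 63)):
    # S61: probe the 32nd and 64th pixel of each row; on a hit,
    # S62: expand left and right to measure the run of 1s around it.
    for r in range(M.shape[0]):
        for p in probes:
            if M[r, p] == 1:
                left = p
                while left > 0 and M[r, left - 1] == 1:
                    left -= 1
                right = p
                while right < M.shape[1] - 1 and M[r, right + 1] == 1:
                    right += 1
                if right - left + 1 >= min_width:
                    return r, left, right  # uppermost row and its extremes
    return None
```

Compared with the progressive scan, only two pixels per row are probed before expanding, which is why the text calls this the keypoint method.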
After the preceding image processing, a circumscribed rectangle of the field of view is obtained for a diamond-shaped field of view, but only an inscribed rectangle is obtained for a central field of view, so further image processing is required.
And S7, calculating a circumscribed rectangular area of the preliminary effective area to obtain a complete effective area.
In this embodiment, step S7 specifically includes:

S71, determining the central point P(c1, c2) of the preliminary effective region Rect(x, j, u, n), where c1 = (u − x)/2 and c2 = (n − j)/2;

S72, scanning row by row upward from the central point P(c1, c2) until a row whose pixel values are all 0 is reached, determining the uppermost row of the complete effective region as a = c1 − i, where i is the number of rows scanned; and scanning row by row downward until such a row is reached, determining the lowermost row as c = c1 + i;

S73, scanning column by column leftward from the central point P(c1, c2) until a column whose pixel values are all 0 is reached, determining the leftmost column of the complete effective region as b = c2 − i; and scanning column by column rightward until such a column is reached, determining the rightmost column as d = c2 + i;
and S74, obtaining the complete effective region from the uppermost row, the lowermost row, the leftmost column and the rightmost column.
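Steps S71–S74 can be sketched as follows (illustrative: a row or column is treated as reached when all of its pixels are 0, and the walk stops just before the first such row or column in each direction):

```python
import numpy as np

def expand_from_center(M, c1, c2):
    # Walk outward from the central point (c1, c2) of the preliminary
    # region until an all-zero row/column is met in each direction;
    # returns (top, bottom, left, right) of the complete effective region.
    top = c1
    while top > 0 and M[top - 1].any():
        top -= 1
    bottom = c1
    while bottom < M.shape[0] - 1 and M[bottom + 1].any():
        bottom += 1
    left = c2
    while left > 0 and M[:, left - 1].any():
        left -= 1
    right = c2
    while right < M.shape[1] - 1 and M[:, right + 1].any():
        right += 1
    return top, bottom, left, right
```

For a diamond-shaped field this expansion recovers the circumscribed rectangle that the corner scans of S6 alone would miss.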
In a further embodiment, the method further comprises:

S8, converting the complete effective region into the coordinate system of the original image, specifically:

The rectangular region is first converted into relative coordinates in the range 0–1:

T = a / h, T in [0, 1];
L = b / 128, L in [0, 1];
B = c / h, B in [0, 1];
R = d / 128, R in [0, 1];

where h is the matrix height.
The relative coordinates are then multiplied by the original image size to obtain the true coordinates of the rectangular area.
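The conversion of step S8 can be sketched as follows (illustrative: `w_mat` is the matrix width, 128 in the text, and the result is rounded to integer pixel coordinates of the original frame):

```python
def to_original_coords(a, b, c, d, h, w_mat, orig_w, orig_h):
    # Normalize the rectangle (a=top row, b=left col, c=bottom row,
    # d=right col) against the matrix size, then scale to the frame size.
    t, l = a / h, b / w_mat
    bot, r = c / h, d / w_mat
    return (round(l * orig_w), round(t * orig_h),
            round(r * orig_w), round(bot * orig_h))
```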
According to the embodiments of the invention, a better contrast between the field of view and the non-field-of-view can be obtained by superposing a plurality of images, and noise is then filtered out by image processing, so that an accurate field-of-view region can be obtained.
Example two
The present embodiment provides an apparatus for extracting an image effective region, including: a memory, a processor, and a computer program stored on the memory and executable on the processor; the computer program, when executed by the processor, implements the steps of the method for extracting an image effective area as described in any one of the above embodiments.
For the specific implementation of the second embodiment, reference may be made to the first embodiment, which achieves the corresponding technical effects.
It is to be understood that the present invention has been described with reference to certain embodiments, and that various changes in the features and embodiments, or equivalent substitutions may be made by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (10)

1. A method for extracting an effective region of an image, characterized by comprising the following steps:
s1, preprocessing the acquired video image to be processed, and converting the format of the processed video image into a YUV format;
s2, extracting Y components of each video image after format conversion, and respectively generating corresponding two-dimensional matrixes;
s3, superposing all the generated two-dimensional matrixes;
s4, solving an interpolation matrix of corresponding elements of two adjacent rows or two columns of the superposed two-dimensional matrix;
s5, carrying out binarization processing on the obtained interpolation matrix by using a preset threshold value to obtain a binary image matrix;
s6, scanning the binary image matrix, and removing small area areas of surrounding interference to obtain a preliminary effective area;
and S7, calculating a circumscribed rectangular area of the preliminary effective area to obtain a complete effective area.
2. The method for extracting an image effective region according to claim 1, further comprising:
and S8, converting the complete effective region into the coordinate system of the original image.
3. The method for extracting an image effective region according to claim 1, wherein in the step S1, the preprocessing of the video image specifically includes: scaling the video image proportionally.
4. The method for extracting an image effective region according to claim 1, wherein the step S2 specifically includes:
extracting the Y component of each video image after the format conversion by the following formula:
Y=0.257R+0.504G+0.098B+16,
wherein Y is in the range of [16,235],
and obtaining a two-dimensional matrix of each video image after format conversion.
5. The method for extracting an image effective region according to claim 1, wherein in the step S4, the interpolation matrix of two adjacent rows of the superposed two-dimensional matrix is calculated with the following formula:

C_(h-1)×128 = B_(2:h) − B_(1:h-1),

wherein B_(2:h) is the new matrix formed by rows 2 to h of the superposed two-dimensional matrix B_(h×128), B_(1:h-1) is the new matrix formed by rows 1 to h−1 of B_(h×128), h is the number of rows of B_(h×128), and 128 is the number of columns.
6. The method for extracting an image effective region according to claim 1, wherein the step S5 specifically includes:
setting a preset threshold K, wherein K is 10 times 30%–50% of the Y-component value;

and setting values of the interpolation matrix greater than the threshold K to 1 and all others to 0, obtaining the binary image matrix.
7. The method for extracting an image effective region according to claim 6, wherein the step S6 specifically includes:
scanning the two-dimensional image matrix row by row from the upper left corner, and taking the first pixel point of a run of consecutive 1-valued pixels whose width is greater than 32 pixels as the uppermost point of a first row-scan region; scanning the two-dimensional image matrix column by column from the upper left corner, and taking the first pixel point of a run of consecutive 1-valued pixels whose height is greater than 1/4 of the height of the two-dimensional image matrix as the leftmost point of a first column-scan region; and then determining the upper left corner point of the preliminary effective region according to the abscissa of the uppermost point of the first row-scan region and the ordinate of the leftmost point of the first column-scan region;
scanning the two-dimensional image matrix row by row from the lower right corner, and taking the first pixel point of a run of consecutive 1-valued pixels whose width is greater than 32 pixels as the lowermost point of a second row-scan region; scanning the two-dimensional image matrix column by column from the lower right corner, and taking the first pixel point of a run of consecutive 1-valued pixels whose height is greater than 1/4 of the height of the two-dimensional image matrix as the rightmost point of a second column-scan region; and then determining the lower right corner point of the preliminary effective region according to the abscissa of the lowermost point of the second row-scan region and the ordinate of the rightmost point of the second column-scan region;
and forming a rectangular region from the upper left corner point and the lower right corner point of the preliminary effective region, and determining this rectangle as the preliminary effective region.
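The run detection underlying the scan above can be sketched as follows (an illustrative helper, not the claimed implementation; the function name and return convention are assumptions):

```python
import numpy as np

def first_run_start(line: np.ndarray, min_len: int) -> int:
    """Return the index of the first pixel of the first run of 1s
    strictly longer than min_len, or -1 if no such run exists."""
    run_start, run_len = -1, 0
    for i, v in enumerate(line):
        if v == 1:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > min_len:
                return run_start
        else:
            run_len = 0
    return -1
```

Scanning rows from the top until `first_run_start(row, 32)` returns a non-negative index would locate the uppermost point; columns would be scanned the same way with `min_len` set to a quarter of the matrix height.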
8. The method for extracting an image effective region according to claim 6, wherein the step S6 specifically includes:
S61, starting from the first row of the two-dimensional image matrix, judging whether the value of the 32nd pixel point is 1; if so, taking that pixel point as the current pixel point and executing the next step; otherwise, judging whether the value of the 64th pixel point of the row is 1; if so, taking that pixel point as the current pixel point and executing the next step; if not, repeating this step on the next row;
S62, searching leftward and rightward from the current pixel point until a pixel point whose value is not 1 is found on each side; if the total length of the 1-valued pixel points found is greater than or equal to the length of the estimated effective region, determining the leftmost point and the rightmost point of the uppermost row of the preliminary effective region;
S63, starting from the first column of the two-dimensional image matrix, from left to right, judging whether the value of the 32nd pixel point is 1; if so, taking that pixel point as the second current pixel point and executing the next step; otherwise, judging whether the value of the 64th pixel point of the column is 1; if so, taking that pixel point as the second current pixel point and executing the next step; if not, repeating this step on the next column;
S64, searching upward and downward from the second current pixel point until a pixel point whose value is not 1 is found on each side; if the total height of the 1-valued pixel points found is greater than or equal to the width of the estimated effective region, determining the uppermost point and the lowermost point of the leftmost column of the preliminary effective region;
S65, determining the upper left corner point of the preliminary effective region from the leftmost point of the uppermost row and the uppermost point of the leftmost column;
S66, searching from the lower right corner of the two-dimensional image matrix by the method of steps S61-S65 to obtain the lower right corner point of the preliminary effective region;
and S67, forming a rectangular region from the upper left corner point and the lower right corner point of the preliminary effective region, and determining this rectangle as the preliminary effective region.
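The bidirectional search of steps S62 and S64 can be sketched as follows (an illustrative helper under the assumption that the scanned row or column is a 1-D binary array and the seed pixel has value 1; the name `expand_run` is hypothetical):

```python
import numpy as np

def expand_run(line: np.ndarray, seed: int):
    """From a seed index whose value is 1, search left and right until a
    non-1 pixel is met on each side; return (left, right) run bounds."""
    left = seed
    while left > 0 and line[left - 1] == 1:
        left -= 1
    right = seed
    while right < len(line) - 1 and line[right + 1] == 1:
        right += 1
    return left, right
```

If `right - left + 1` meets the estimated region length, `left` and `right` would give the leftmost and rightmost points of the uppermost row; the same routine applied to a column gives the vertical bounds.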
9. The method for extracting an image effective region according to claim 6, wherein the step S7 specifically includes:
S71, determining the central point of the preliminary effective region;
S72, scanning row by row upward and downward from the central point until a row whose pixel values are 0 is reached, thereby determining the uppermost row and the lowermost row of the complete effective region respectively;
S73, scanning column by column leftward and rightward from the central point until a column whose pixel values are 0 is reached, thereby determining the leftmost column and the rightmost column of the complete effective region respectively;
and S74, obtaining the complete effective region from the uppermost row, the lowermost row, the leftmost column and the rightmost column.
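The outward scan of steps S71-S74 can be sketched as follows (a minimal illustration interpreting "a row whose pixel values are 0" as an all-zero row; the function name and return order are assumptions):

```python
import numpy as np

def grow_from_center(binary: np.ndarray):
    """Scan outward from the central row/column of a binary matrix until
    an all-zero row or column is met; return (top, bottom, left, right)."""
    h, w = binary.shape
    cy, cx = h // 2, w // 2
    top = cy
    while top > 0 and binary[top - 1].any():
        top -= 1
    bottom = cy
    while bottom < h - 1 and binary[bottom + 1].any():
        bottom += 1
    left = cx
    while left > 0 and binary[:, left - 1].any():
        left -= 1
    right = cx
    while right < w - 1 and binary[:, right + 1].any():
        right += 1
    return top, bottom, left, right
```

The four returned bounds correspond to the uppermost row, lowermost row, leftmost column, and rightmost column that together delimit the complete effective region.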
10. An image effective region extraction device, characterized in that the device comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor; the computer program, when executed by the processor, implements the steps of the method for extracting an image effective region according to any one of claims 1 to 9.
CN202111545467.0A 2021-12-17 2021-12-17 Method and device for extracting effective region of image Active CN113936015B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111545467.0A CN113936015B (en) 2021-12-17 2021-12-17 Method and device for extracting effective region of image


Publications (2)

Publication Number Publication Date
CN113936015A true CN113936015A (en) 2022-01-14
CN113936015B CN113936015B (en) 2022-03-25

Family

ID=79289108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111545467.0A Active CN113936015B (en) 2021-12-17 2021-12-17 Method and device for extracting effective region of image

Country Status (1)

Country Link
CN (1) CN113936015B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000311180A (en) * 1999-03-11 2000-11-07 Fuji Xerox Co Ltd Method for feature set selection, method for generating video image class stastic model, method for classifying and segmenting video frame, method for determining similarity of video frame, computer-readable medium, and computer system
US20060013485A1 (en) * 2004-07-16 2006-01-19 Sony Corporation Data processing method, data processing apparatus, semiconductor device, and electronic apparatus
WO2008063615A2 (en) * 2006-11-20 2008-05-29 Rexee, Inc. Apparatus for and method of performing a weight-based search
JP2008210063A (en) * 2007-02-23 2008-09-11 Hiroshima Univ Image feature extraction apparatus, image retrieval system, video feature extraction apparatus, and query image retrieval system, and their methods, program, and computer readable recording medium
CN101651772A (en) * 2009-09-11 2010-02-17 宁波大学 Method for extracting video interested region based on visual attention
US20100053351A1 (en) * 2008-08-27 2010-03-04 Rastislav Lukac Image processing apparatus, image processing method, and program for attaining image processing
CN101778303A (en) * 2010-01-28 2010-07-14 南京航空航天大学 Global property difference-based CCD array video positioning method and system
WO2016029555A1 (en) * 2014-08-25 2016-03-03 京东方科技集团股份有限公司 Image interpolation method and device
CN106447606A (en) * 2016-10-31 2017-02-22 南京维睛视空信息科技有限公司 Rapid real-time video beautifying method
WO2017057021A1 (en) * 2015-09-28 2017-04-06 オリンパス株式会社 Image analysis device, image analysis system, and method for operating image analysis device
CN110276769A (en) * 2018-03-13 2019-09-24 上海狮吼网络科技有限公司 Live content localization method in a kind of video picture-in-pictures framework


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Z. WANG et al.: "Research on motion detection of Video Surveillance System", 2010 3rd International Congress on Image and Signal Processing *
Zhang Song et al.: "A steganalysis method based on adjacent pixel differences", Computer Systems & Applications *


Similar Documents

Publication Publication Date Title
EP0831418B1 (en) Method and apparatus for character recognition
US20110211233A1 (en) Image processing device, image processing method and computer program
JP6688277B2 (en) Program, learning processing method, learning model, data structure, learning device, and object recognition device
US20060029276A1 (en) Object image detecting apparatus, face image detecting program and face image detecting method
US20050139782A1 (en) Face image detecting method, face image detecting system and face image detecting program
JP5777367B2 (en) Pattern identification device, pattern identification method and program
JPH0256708B2 (en)
JPH04315272A (en) Graphic recognizing device
JP2010102584A (en) Image processor and image processing method
CN113936015B (en) Method and device for extracting effective region of image
WO2005041128A1 (en) Face image candidate area search method, face image candidate area search system, and face image candidate area search program
JP5201184B2 (en) Image processing apparatus and program
US8254693B2 (en) Image processing apparatus, image processing method and program
CN111767752B (en) Two-dimensional code identification method and device
JPH0256707B2 (en)
CN116129496A (en) Image shielding method and device, computer equipment and storage medium
US20040037475A1 (en) Method and apparatus for processing annotated screen capture images by automated selection of image regions
JP7210380B2 (en) Image learning program, image learning method, and image recognition device
JP4238323B2 (en) Image processing method and image processing apparatus
Zhu et al. DANet: dynamic salient object detection networks leveraging auxiliary information
CN112907708B (en) Face cartoon method, equipment and computer storage medium
CN117372437B (en) Intelligent detection and quantification method and system for facial paralysis
TWI757025B (en) System and method for counting aquatic creatures
US11281911B2 (en) 2-D graphical symbols for representing semantic meaning of a video clip
JP2789622B2 (en) Character / graphic area determination device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant