CN102377895B - Image cropping method - Google Patents

Image cropping method

Info

Publication number: CN102377895B
Application number: CN201010260918.1A
Authority: CN (China)
Prior art keywords: image, pixel, coordinates, main, main pixel
Legal status: Expired - Fee Related (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Other versions: CN102377895A
Inventors: 王炯昇, 林松辉
Assignee: Primax Electronics Ltd (original and current; the listed assignees may be inaccurate, as Google has not performed a legal analysis)
Application filed by Primax Electronics Ltd; priority to CN201010260918.1A; published as CN102377895A; granted and published as CN102377895B
Landscapes: Image Processing (AREA)

Abstract

The invention discloses an image cropping method for a multi-function peripheral. The method comprises the following steps: searching the strip images of an original image for the upper edge endpoint of the main image; reading each strip image of the original image and finding the main endpoint coordinates used to generate the main image area to be printed; and outputting the main image area for printing. The method also includes a rule for detecting dirty spots, which improves the accuracy of locating the main image.

Description

Image cropping method
Technical Field
The invention relates to an image cropping method, and in particular to an image cropping method applied to the image processing flow of a multi-function peripheral.
Background
Combining the functions of a scanner, a copier, and a printer, the multi-function peripheral has become a device frequently used by enterprise and personal users alike. Referring to fig. 1, the flow of the copy function of a multi-function peripheral is shown. In the conventional copying process, taking a photo as an example, the user places the photo 1 on the scanning window 2; a scanning carriage inside the multi-function peripheral then scans the whole scanning window along the Y direction, and the scanned image data are stored in the machine's internal dynamic memory in units of line images 3, where each line image 3 comprises a plurality of pixels 4. After a number of line images have accumulated in the dynamic memory, the multi-function peripheral performs the subsequent image processing flow in units of single strip images 5, and finally an original image 6, formed by combining the processed strip images, is transmitted strip by strip to the printer of the multi-function peripheral for printing.
When the document to be copied contains an image over only a small area, the user wants to scan only the region where the image is present, in order to save job time on the multi-function peripheral. As shown in fig. 1, the original image 6 includes a subject image 7, and the region outside the subject image 7 is a blank region with no image. The conventional multi-function peripheral therefore provides an image cropping method in which a pre-scanning procedure is performed before the main scanning procedure: a fast, low-resolution scan of the document identifies the size and position of the main image on the original, and the scanning range is then reset to the actual extent of the main image before the main scanning procedure is performed.
However, the conventional image cropping method must go through the pre-scanning process, and even when the main image to be scanned is small, the time required for pre-scanning is still consumed. For example, when a 3 × 5 inch photo is copied on a multi-function peripheral whose scanning window is A3 size, the conventional method requires the scanner to first scan the entire A3-sized area and locate the photo by an algorithm before the scanning range can be reset for the final scan. Furthermore, the dynamic memory of a typical multi-function peripheral has limited capacity; the pre-scanning procedure requires it to store all of the original image data, covering both the image area and the blank area, so that most of the memory capacity is occupied, which slows down the image processing of the machine.
As can be seen from the above description, the conventional image cropping method is time-consuming, since it always requires at least one pre-scan regardless of the size of the main image on the original, and it occupies a large amount of dynamic memory.
Disclosure of Invention
The main object of the present invention is to provide an image cropping method with a higher processing speed.
Another object of the present invention is to provide an image cropping method that occupies less dynamic memory within a multi-function peripheral.
The present invention provides an image cropping method applied to a multi-function peripheral, where the multi-function peripheral scans a document to obtain an original image and prints out the original image, the original image has a main image and is divided into a plurality of strip images, each strip image comprising a plurality of line images; the method comprises the following steps:
(A) reading a band image of the original image;
(B) judging whether the read strip-shaped image has an upper edge endpoint coordinate of the main body image, comprising:
(B1) searching a first linear image with a main body image in the read strip-shaped image, and calculating two main body endpoint coordinates containing the main body image in the linear image;
(B2) respectively calculating whether a plurality of subsequent linear images of the first linear image have two main body endpoint coordinates;
(B3) judging whether, among the subsequent line images having the main image, the width between the two main endpoint coordinates of at least one line image is greater than a preset width value; wherein, when the judgment results of steps (B1), (B2) and (B3) are all yes, the two main endpoint coordinates of the first line image are determined to be the upper edge endpoint coordinates, and when any of steps (B1), (B2) and (B3) is no, steps (A)-(B) are repeated until the upper edge endpoint is determined;
(C) calculating the coordinates of the main body end points of all the subsequent linear images in the strip-shaped image;
(D) outputting the endpoint coordinate with the minimum X-axis coordinate value and the endpoint coordinate with the maximum X-axis coordinate value in all the endpoint coordinates of the main body contained in the strip-shaped image;
(E) receiving the coordinates of the end points of the main body output in the step (D) to perform printout processing;
(F) reading a next strip-shaped image, and searching the coordinates of the main body endpoint of each linear image of the next strip-shaped image;
(G) outputting, among all main endpoint coordinates contained in the next strip image, the main endpoint coordinate with the minimum X-axis coordinate value and the main endpoint coordinate with the maximum X-axis coordinate value;
(H) receiving the coordinates of the end point of the main body output in the step (G) to perform printout processing; and
(I) repeating the steps (F), (G) and (H).
In a preferred embodiment, the step (B1) comprises the following steps:
(B1-1) reading a line image of the band image and Gamma-adjusting the line image, wherein the line image includes a plurality of pixels;
(B1-2) determining whether or not the pixels of the line image include the subject image, including:
performing horizontal reduction on the linear image according to a magnification to obtain a reduced linear image;
comparing the gray level value of each pixel point of the reduced linear image with a gray level threshold value, regarding a pixel with the gray level value smaller than the gray level threshold value as a main pixel, and recording the coordinates of the leftmost main pixel and the rightmost main pixel of the reduced linear image; and
(B1-3) converting the pixel coordinates, including:
converting the coordinates of the leftmost and rightmost main pixels of the reduced line image into the coordinates of the leftmost and rightmost main pixels of the line image, and recording the coordinates of the leftmost and rightmost main pixels of the line image as the two main endpoint coordinates.
In a preferred embodiment, the gray level threshold value W(n+1) is calculated by the following formula:
W(n+1) = W(n) + (W(n+1)max - W(n))/T; wherein
n = 0, 1, 2, 3, ..., (A-1), W(0) is an initial gray level value,
W(n) is the accumulated gray level reference value of the nth line image, W(n+1)max is the maximum gray level value among the gray level values of all pixels of the (n+1)th line image after it has been horizontally reduced by a magnification, and T is a positive integer.
In a preferred embodiment, T = Td when W(n+1)max is greater than W(n), and T = Tu when W(n+1)max is less than W(n), where Tu and Td are different positive integers.
In a preferred embodiment, the step (B1) comprises the following steps:
(B1-1) reading a line image of the band image and Gamma-adjusting the line image, wherein the line image includes a plurality of pixels;
(B1-2) determining whether or not the pixels of the line image include the subject image, including:
performing horizontal reduction on the linear image according to a first magnification to obtain a first reduced linear image;
comparing the gray level value of each pixel of the first reduced line image with a gray level threshold value, regarding pixels with gray level values smaller than the gray level threshold value as quasi-main pixels, and recording the coordinates of the leftmost and rightmost quasi-main pixels of the first reduced line image;
subtracting a preset value from the X-axis coordinate value of the leftmost quasi-main pixel coordinate of the first reduced line image to obtain a left reference coordinate, and adding the preset value to the X-axis coordinate value of the rightmost quasi-main pixel coordinate to obtain a right reference coordinate;
converting the left reference coordinate and the right reference coordinate into a leftmost reference coordinate and a rightmost reference coordinate of the linear image, horizontally reducing the linear image between the leftmost reference coordinate and the rightmost reference coordinate of the linear image according to a second magnification to obtain a second reduced linear image, and comparing a gray level value of each pixel point of the second reduced linear image with the gray level threshold value, wherein a pixel with a gray level value smaller than the gray level threshold value is taken as a main pixel, and coordinates of a leftmost main pixel and a rightmost main pixel of the second reduced linear image are recorded, wherein the second magnification is larger than the first magnification; and
(B1-3) converting the pixel coordinates, including:
converting the coordinates of the leftmost and rightmost main pixels of the second reduced line image into the coordinates of the leftmost and rightmost main pixels of the line image, and recording the coordinates of the leftmost and rightmost main pixels of the line image as the two main endpoint coordinates.
In a preferred embodiment, the gray level threshold value W(n+1) is calculated by the following formula:
W(n+1) = W(n) + (W(n+1)max - W(n))/T; wherein
n = 0, 1, 2, 3, ..., (A-1), W(0) is an initial gray level value,
W(n) is the accumulated gray level reference value of the nth line image, W(n+1)max is the maximum gray level value among the gray level values of all pixels of the (n+1)th line image after it has been horizontally reduced by a magnification, and T is a positive integer.
In a preferred embodiment, T = Td when W(n+1)max is greater than W(n), and T = Tu when W(n+1)max is less than W(n), where Tu and Td are different positive integers.
In a preferred embodiment, the step (F) comprises the following steps:
(F-1) reading a line image of the next strip image and performing Gamma adjustment on the line image, wherein the line image includes a plurality of pixels;
(F-2) determining whether the pixels of the line image include the subject image, which includes:
performing horizontal reduction on the linear image according to a magnification to obtain a reduced linear image;
comparing the gray level value of each pixel point of the reduced linear image with a gray level threshold value, regarding a pixel with the gray level value smaller than the gray level threshold value as a main pixel, and recording the coordinates of the leftmost main pixel and the rightmost main pixel of the reduced linear image; and
(F-3) converting the pixel coordinates, including:
converting the coordinates of the leftmost and rightmost main pixels of the reduced line image into the coordinates of the leftmost and rightmost main pixels of the line image, and recording the coordinates of the leftmost and rightmost main pixels of the line image as the two main endpoint coordinates.
In a preferred embodiment, the gray level threshold value W(n+1) is calculated by the following formula:
W(n+1) = W(n) + (W(n+1)max - W(n))/T; wherein
n = 0, 1, 2, 3, ..., (A-1), W(0) is an initial gray level value,
W(n) is the accumulated gray level reference value of the nth line image, W(n+1)max is the maximum gray level value among the gray level values of all pixels of the (n+1)th line image after it has been horizontally reduced by a magnification, and T is a positive integer.
In a preferred embodiment, T = Td when W(n+1)max is greater than W(n), and T = Tu when W(n+1)max is less than W(n), where Tu and Td are different positive integers.
In a preferred embodiment, the step (F) comprises the following steps:
(F-1) reading a line image of the next strip image and performing Gamma adjustment on the line image, wherein the line image includes a plurality of pixels;
(F-2) determining whether the pixels of the line image include the subject image, which includes:
performing horizontal reduction on the linear image according to a first magnification to obtain a first reduced linear image;
comparing the gray level value of each pixel of the first reduced line image with a gray level threshold value, regarding pixels with gray level values smaller than the gray level threshold value as quasi-main pixels, and recording the coordinates of the leftmost and rightmost quasi-main pixels of the first reduced line image;
subtracting a preset value from the X-axis coordinate value of the leftmost quasi-main pixel coordinate of the first reduced line image to obtain a left reference coordinate, and adding the preset value to the X-axis coordinate value of the rightmost quasi-main pixel coordinate to obtain a right reference coordinate;
converting the left reference coordinate and the right reference coordinate into a leftmost reference coordinate and a rightmost reference coordinate of the linear image, horizontally reducing the linear image between the leftmost reference coordinate and the rightmost reference coordinate of the linear image according to a second magnification to obtain a second reduced linear image, and comparing a gray level value of each pixel point of the second reduced linear image with the gray level threshold value, wherein a pixel with a gray level value smaller than the gray level threshold value is taken as a main pixel, and coordinates of a leftmost main pixel and a rightmost main pixel of the second reduced linear image are recorded, wherein the second magnification is larger than the first magnification; and
(F-3) converting the pixel coordinates, including:
converting the coordinates of the leftmost and rightmost main pixels of the second reduced line image into the coordinates of the leftmost and rightmost main pixels of the line image, and recording the coordinates of the leftmost and rightmost main pixels of the line image as the two main endpoint coordinates.
In a preferred embodiment, the gray level threshold value W(n+1) is calculated by the following formula:
W(n+1) = W(n) + (W(n+1)max - W(n))/T; wherein
n = 0, 1, 2, 3, ..., (A-1), W(0) is an initial gray level value,
W(n) is the accumulated gray level reference value of the nth line image, W(n+1)max is the maximum gray level value among the gray level values of all pixels of the (n+1)th line image after it has been horizontally reduced by a magnification, and T is a positive integer.
In a preferred embodiment, T = Td when W(n+1)max is greater than W(n), and T = Tu when W(n+1)max is less than W(n), where Tu and Td are different positive integers.
Drawings
FIG. 1 is a diagram illustrating a scanning process of a multi-function peripheral.
FIG. 2 is a diagram illustrating image cropping added to the image processing flow in the MFP of the present invention.
FIG. 3 is a flowchart illustrating an image cropping method according to a preferred embodiment of the present invention.
FIG. 4 is a diagram illustrating an original image in the image cropping method of the present invention.
Fig. 5A is a partial schematic view of a first band image according to the present invention.
Fig. 5B is a schematic diagram illustrating a method for determining whether there is a subject pixel in a plurality of line images in a first band image according to the present invention.
FIG. 6A is a partial schematic view of a second band image in accordance with the present invention.
Fig. 6B and 6C are schematic diagrams illustrating the determination of whether there is a main pixel in the plurality of line images in the second band image according to the present invention.
FIG. 7A is a partial schematic view of a third band image in accordance with the present invention.
FIG. 7B is a schematic diagram of cropping a subject image area in a third band-like image in accordance with the present invention.
Fig. 8A is a partial schematic view of a fourth band image according to the present invention.
Fig. 8B is a schematic diagram illustrating the determination of whether there are main pixels in the plurality of line images in the fourth strip image according to the present invention.
Fig. 8C is a schematic diagram of trimming a subject image area in the fourth band image in the present invention.
FIG. 9 is a schematic diagram of all the subject image areas after being cropped in the present invention.
FIG. 10 is a diagram illustrating another embodiment of determining whether a plurality of line images in a band image have a main pixel according to the present invention.
The reference numbers in the above figures are as follows:
1 photo; 2 scanning window
3 line image; 4 pixels
5 strip image; 6 original image
7 subject image; 100 image processing flow
S101-S110 steps; 201 original image
202 subject image; 203 first strip image
204 second strip image; 205 third strip image
206 fourth strip image; 207 dirty-point image
2030-2032 line images; 20301 reduced line image
2040-2044 line images; 20401 reduced line image
20411 reduced line image; 2051-2055 line images
W1 width; 205' subject image area
2060-2065 line images; 20601 reduced line image
206' subject image area; 20602 first reduced line image
20603 second reduced line image
I1, I3, I5, I7, I9, I11, I13, I15, I21, I23: leftmost subject pixel coordinates
I2, I4, I6, I8, I10, I12, I14, I16, I22, I24: rightmost subject pixel coordinates
I17, I18: quasi-subject pixel coordinates
I19, I20, I19', I20', I19'', I20'': reference coordinates
Detailed Description
To overcome the inconveniences of the prior art, the present invention provides an image cropping method applied to the image processing flow inside a multi-function peripheral. In this method, when the multi-function peripheral performs copying, the main image in the original image is cropped in real time within the image processing flow, so as to speed up copying.
Please refer to fig. 2, which is a flowchart of an image processing flow 100 of a multi-function peripheral including the image cropping method of the present invention, comprising the steps of:
S101: inputting a strip image of an original image;
S102: correcting the image brightness;
S103: converting color coordinates;
S104: background elimination;
S105: enhancing image sharpness and smoothness;
S106: image cropping;
S107: color coordinate conversion;
S108: adjusting the image size;
S109: halftone processing; and
S110: printing.
The present invention concerns cropping out the main image area of the original image within the image processing flow. Apart from the image cropping in step S106, the image processing flow 100 is well known to those skilled in the art and is therefore not described further. Step S106 is explained below as an embodiment of the image cropping method of the present application.
FIG. 3 is a flowchart illustrating an image cropping method according to a preferred embodiment of the present invention. Fig. 3 shows the following steps:
S10: reading in a strip image;
S20: judging whether the read-in strip image contains an upper edge endpoint of the main image; if yes, performing step S30; otherwise, returning to step S10 until the upper edge endpoint of the main image is found;
S30: calculating the main endpoint coordinates of all line images in the strip image containing the upper edge endpoint;
S40: outputting, among all main endpoint coordinates contained in the strip image with the upper edge endpoint, the main endpoint coordinates with the minimum and maximum X-axis coordinates, for subsequent printing processing;
S50: reading in the next strip image;
S60: calculating the main endpoint coordinates of each line image of the read-in strip image;
S70: determining whether a plurality of consecutive line images in the read-in strip image lack a subject image; if so, performing step S90; otherwise, performing step S80;
S80: outputting, among all main endpoint coordinates contained in the strip image, the main endpoint coordinates with the minimum and maximum X-axis coordinates, for subsequent printing processing;
S90: outputting, among all main endpoint coordinates contained in the strip image, the main endpoint coordinates with the minimum and maximum X-axis coordinates for subsequent printing processing, and then ending the flow.
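Taken together, the flow of fig. 3 can be sketched as the following top-level loop in Python. This is a simplified sketch, not the patent's firmware: `detect_line_endpoints`, `has_upper_edge`, and `crop_strip` are hypothetical helpers standing in for the steps above (per-strip variants of these helpers are sketched later in this description).

```python
def crop_document(strips):
    """Simplified sketch of the flow S10-S90 in fig. 3: skip strips until
    the upper edge of the subject image is found, then crop each strip and
    stop once a strip contains no subject pixels at all."""
    results = []
    found_edge = False
    for strip in strips:                               # S10/S50: read a strip
        # One (left, right) pair per line image, or None for background lines.
        endpoints = [detect_line_endpoints(line) for line in strip]
        if not found_edge:
            if not has_upper_edge(endpoints):          # S20: no upper edge yet
                continue
            found_edge = True
        hits = [ep for ep in endpoints if ep is not None]
        if not hits:                                   # S70/S90: subject done
            break
        results.append(crop_strip(strip, hits))        # S30-S40 / S60-S80
    return results
```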
The details of each step of the flow of fig. 3 are described below.
Please refer to fig. 4, which illustrates the original image, the main image, and the strip images in the method of the present invention. The original image 201 corresponds to the paper size to be printed out, for example A4 size, and the main image 202 is the document image actually being copied, for example an A6-size photograph. The original image 201 is divided into a plurality of strip images of equal size; for simplicity, fig. 4 shows only the first strip image 203, the second strip image 204, the third strip image 205, and the fourth strip image 206.
Referring to fig. 5A, the first band image 203 of the original image 201 is shown in more detail. The first band image 203 includes a plurality of line images, such as 2030, 2031, 2032. Each line image includes a plurality of pixels, wherein the pixels represented by the open circles represent background pixels, i.e., pixels of the subject-free image, and the pixels represented by the filled circles represent pixels having an image, e.g., a subject image or a dirty image, as shown in fig. 6A.
In determining whether the first band image 203 has an upper edge end point of the main body image, it is necessary to first determine whether a plurality of line images in the first band image 203 have main body pixels. By subject pixel is meant that the pixel contains a subject image, while pixels that do not contain a subject image are referred to as background pixels.
In the process of determining whether a pixel is a main pixel, a "preset gray level value" and a "gray level threshold value" are required. The "preset gray level value" is set by the system designer, and the "gray level threshold value" is obtained in the following manner.
The gray level threshold value is calculated from gray level values of pixels of a plurality of linear images after gamma adjustment in the strip-shaped image. The gray level threshold represents a gray level reference value of a background pixel, pixels with gray level values greater than the gray level threshold are considered as background pixels, and pixels with gray level values less than the gray level threshold are considered as main pixels. Wherein the number of line images used for calculating the gray level threshold is predetermined by the designer.
In this example, the calculation of the gray level threshold is illustrated with the 3 line images 2030, 2031, 2032 of the first strip image 203. First, the line images 2030, 2031, 2032 of the first strip image 203 are read in sequentially and gamma-adjusted before the gray level threshold calculation is performed. Since gamma adjustment is well known to those skilled in the art, it is not described in detail.
Referring to fig. 5B, the gamma-adjusted first line image 2030 is horizontally reduced by a magnification, for example 1/64 times. Horizontal reduction by 1/64 means that one gray level value is obtained as the arithmetic mean of the gray level values of 64 pixels of the first line image 2030, yielding the reduced first line image 20301. For example, if the first line image 2030 originally has 6400 pixels, the reduced first line image 20301 obtained by horizontal reduction by 1/64 times has 100 gray level values. The maximum gray level value among all pixels of the horizontally reduced first line image 20301 is recorded. The gray level threshold W(n+1) is calculated as follows:
W(n+1) = W(n) + (W(n+1)max - W(n))/T
where W(n) is the accumulated gray level reference value of the nth line image, W(n+1)max is the maximum gray level value among all pixels of the (n+1)th reduced line image after the (n+1)th line image has been horizontally reduced by the magnification, and T is a positive integer: T = Td when W(n+1)max is greater than W(n), and T = Tu when W(n+1)max is less than W(n), where Tu and Td are different preset positive integers. Here n+1 runs over the line images used to calculate the gray level threshold; 3 line images are used in this example, so n = 0, 1, 2. When n = 0, W(n) = W(0), a preset initial gray level value.
Therefore, the accumulated gray level reference value W(1) of the 1st line image is:
W(1) = W(0) + (W(1)max - W(0))/T
Since W(0), W(1)max and T are known, the accumulated gray level reference value W(1) of the 1st line image can be calculated from the formula.
Then, the gamma-adjusted second line image 2031 is horizontally reduced by 1/64 times, and the maximum gray level value among all pixels of the horizontally reduced second line image (not shown) is recorded. As described above, the accumulated gray level reference value W(2) of the second line image 2031 is obtained by substituting the accumulated gray level reference value W(1) of the first line image 2030 and this maximum gray level value into the formula above. Finally, the gamma-adjusted third line image 2032 is horizontally reduced by 1/64 times, and the maximum gray level value among all pixels of the horizontally reduced third line image (not shown) is recorded. Likewise, the accumulated gray level reference value W(3) of the third line image 2032 is obtained from the formula above, and this W(3) is the gray level threshold determined from the 3 line images chosen by the system designer. In particular, the gray level threshold is recalculated for the image to be copied every time a copying process is performed.
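As a concrete illustration, the following Python sketch implements the running threshold update described above. The function names and the sample values for W(0), Tu, and Td are illustrative assumptions, not values given in the patent.

```python
def update_threshold(w_prev: float, w_max: float, tu: int, td: int) -> float:
    """One update step: W(n+1) = W(n) + (W(n+1)max - W(n)) / T,
    where T = Td if the new maximum exceeds the running value, else T = Tu."""
    t = td if w_max > w_prev else tu
    return w_prev + (w_max - w_prev) / t

def gray_level_threshold(reduced_lines, w0=240.0, tu=4, td=2):
    """Accumulate the gray level threshold over the first few reduced line
    images; `reduced_lines` is an iterable of lists of gray levels (0-255)."""
    w = w0  # W(0): preset initial gray level value (assumed here)
    for line in reduced_lines:
        w = update_threshold(w, max(line), tu, td)
    return w

# Example with three reduced line images, as in the walkthrough above.
print(gray_level_threshold([[250, 248, 252], [251, 249, 250], [247, 250, 251]]))
```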
After obtaining the grayscale threshold, it is determined whether the first linear image 2030 of the first strip image 203 has a main pixel. The line image 2030 is first read. It should be noted that, since the gamma adjustment is already performed on the linear images 2030, 2031, 2032 when determining the gray level threshold value, the gamma adjustment is not required to be repeated on the linear images 2030, 2031, 2032 in the process of determining the subject pixel, but the gamma adjustment is required to be performed on the other linear images that have not been subjected to the gamma adjustment in the subsequent processing.
Referring to fig. 5B again, after the gamma-adjusted linear image 2030 is read, the linear image 2030 is also horizontally reduced by a magnification, such as 1/64 times, to obtain a reduced first linear image 20301. As described above, assuming that the linear image 2030 includes 6400 pixels, the reduced linear image 20301 obtained by horizontally reducing the linear image at the magnification 1/64 includes 100 grayscale values. The gray-scale values of 100 pixels in the reduced line image 20301 are compared with the gray-scale threshold value one by one. Pixels with gray scale values greater than the gray scale threshold value are judged as background pixels, and pixels with gray scale values less than the gray scale threshold value are judged as main pixels.
Because a pixel is classified as subject or background by comparing gray level values, the line image is first horizontally reduced into a reduced line image with fewer pixels; this reduces the number of gray level values that must be compared, and hence the processing time of the multi-function peripheral.
Next, the coordinates of the leftmost and rightmost main pixels among the main pixels of the reduced line image 20301 are recorded, and these coordinates are then converted back to the leftmost and rightmost main pixel coordinates of the corresponding line image 2030. Since the line images 2030, 2031, 2032 and the other line images of the strip image 203 in fig. 5A contain only pixels without image content, i.e. background pixels, the gray level value of every pixel of each reduced line image, e.g. 20301, is greater than the gray level threshold. The manner of converting the main pixel coordinates of a reduced line image back to the original line image coordinates is described below.
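The reduction and comparison just described can be sketched as follows; the 1/64 magnification matches the example in the text, while the helper names are hypothetical.

```python
def horizontal_reduce(line, factor=64):
    """Horizontally reduce a line image: each group of `factor` pixels is
    replaced by the arithmetic mean of its gray level values."""
    return [sum(line[i:i + factor]) / factor
            for i in range(0, len(line) - factor + 1, factor)]

def subject_endpoints(reduced_line, threshold):
    """Return (leftmost, rightmost) indices of subject pixels (gray level
    below the threshold) in a reduced line image, or None if none exist."""
    subject = [x for x, g in enumerate(reduced_line) if g < threshold]
    return (subject[0], subject[-1]) if subject else None
```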
Since none of the line images of the first strip image 203 contains a main pixel, the first strip image 203 does not include the upper edge endpoint of the main image, and the next strip image 204 is therefore read in order to search for the upper edge endpoint.
Please refer to figs. 6A, 6B, and 6C, which are schematic diagrams for determining whether the strip image 204 has main pixels. As shown in fig. 6A, among the line images 2040-2044 of the second strip image 204, the line images 2041, 2042, 2043 have pixels marked as solid dots, indicating that these 3 line images contain pixels with image information.
First, it is determined whether the second strip image 204 has subject pixels; the method of finding the subject pixels of all line images of the second strip image 204 is the same as that used for the first strip image 203. As shown in fig. 6B, the first line image 2040 of the second strip image 204 is horizontally reduced by 1/64 times to produce the reduced line image 20401. Next, the gray level values of the 100 pixels of the reduced line image 20401 are compared with the gray level threshold; pixels with gray level values smaller than the threshold would be regarded as subject pixels, and since the reduced line image 20401 has no such pixel, all pixels of the line image 2040 are determined to be background pixels.
Referring to fig. 6C, the next line image 2041 is read and horizontally reduced to obtain the reduced line image 20411, and the gray level value of each pixel of the reduced line image 20411 is compared with the gray level threshold. Fig. 6C shows that the reduced line image 20411 has a plurality of subject pixels, marked as solid dots, i.e. pixels whose gray level values are smaller than the gray level threshold. After all the subject pixels of the reduced line image 20411 have been found, the coordinates (X1, Y) of the leftmost subject pixel and (X2, Y) of the rightmost subject pixel are recorded.
The leftmost subject pixel coordinate (X1, Y) and the rightmost subject pixel coordinate (X2, Y) of the reduced line image 20411 are then converted into the leftmost subject pixel coordinate I1 and the rightmost subject pixel coordinate I2 of the line image 2041, where:
I1 = (64 × (X1 + 1/2), Y), I2 = (64 × (X2 + 1/2), Y)
and the leftmost subject pixel coordinate I1 and the rightmost subject pixel coordinate I2 of the line image 2041 are recorded.
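A minimal sketch of this coordinate restoration, under the same 1/64 reduction assumption: a reduced index X maps back to 64 × (X + 1/2), the center of the averaged 64-pixel group.

```python
def restore_coordinate(x_reduced: int, factor: int = 64) -> int:
    """Map an index in the reduced line image back to the corresponding
    coordinate in the original line image (center of the pixel group)."""
    return int(factor * (x_reduced + 0.5))

# E.g. reduced endpoints X1 = 3 and X2 = 40 restore to 224 and 2592.
print(restore_coordinate(3), restore_coordinate(40))
```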
It is then determined whether the subsequent line images of the first line image 2041 (e.g. the next 3 line images 2042, 2043, and 2044) have subject pixels; if so, the leftmost and rightmost subject pixel coordinates of each such line image are recorded.
In the step of searching for the upper edge endpoint of the main image, the method adds a dirty-point detection procedure. As shown in fig. 6A, the line images 2041, 2042, 2043 of the strip image 204 have subject pixels, but the line image 2044 does not. In the present invention, after the first line image having subject pixels is detected, it must be determined whether a plurality of following line images, for example the subsequent 3 line images, all have subject pixels. In the case of the strip image 204 in fig. 6A, only 2 of the subsequent 3 line images 2042, 2043, and 2044 of the line image 2041 have subject pixels, and the line image 2044 does not; it is therefore determined that the line image 2041 does not contain the upper edge endpoint of the subject, and the subject pixels in the line images 2041, 2042, and 2043 are regarded as dirty-point images and ignored.
Referring to fig. 7A, a schematic diagram of the next strip image 205 is shown. Since neither of the strip images 203, 204 includes the top edge of the main image, the next strip image 205 is read to determine whether it has the top edge of the main image.
As shown in fig. 7A, the line image 2051 in the strip image 205 has subject pixels, and the two endpoint subject pixel coordinates I3, I4 of the line image 2051 are obtained by the method for finding the leftmost and rightmost subject pixels described above. To determine whether the subject pixels of the line image 2051 are dirty points, it must be determined whether the next 3 line images after the line image 2051 all have subject pixels. In the example of fig. 7A, the next 3 line images 2052, 2053, and 2054 of the line image 2051 each have subject pixels, so the two endpoint subject pixel coordinates I5, I6 of the line image 2052, I7, I8 of the line image 2053, and I9, I10 of the line image 2054 are recorded.
To judge more accurately whether a detected subject image is a dirty point, in addition to checking whether the several line images following the first line image with subject pixels all have subject pixels, the method of the present invention sets another criterion: whether the width between the two endpoint subject pixel coordinates of at least one of those subsequent line images is greater than a preset value, for example a width of 6 pixels. In fig. 7A, the width W1 between the two endpoint subject pixel coordinates I9, I10 of the line image 2054 is greater than the preset value of 6 pixels in this example, so it is determined that the subject pixels contained in the line images 2051, 2052, 2053, and 2054 are not dirty-point images. Thus, the two endpoint subject pixel coordinates I3, I4 of the line image 2051 can be determined to be the upper edge endpoint coordinates of the subject.
The judgment criterion used for dirty-point detection is based on the following observation: a subject image typically carries continuous and extensive image information, while dirty points usually contain only small and discontinuous image information. If the subject pixels of the first detected line image really lie on the upper edge of the subject image, then several consecutive line images after that edge should also contain subject pixels. Moreover, the width between the two endpoint subject pixels reflects the extent of the subject image, so after confirming that the subsequent line images all have subject pixels, checking whether the endpoint width of at least one of them exceeds a preset width accurately distinguishes the upper edge of the subject from a dirty-point image.
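The two dirty-point rules can be combined into a single check, as in the following sketch; the look-ahead of 3 line images and the minimum width of 6 pixels follow the example values in the text.

```python
def is_upper_edge(line_endpoints, min_width=6, lookahead=3):
    """Decide whether entry 0 of `line_endpoints` (the first line image
    with subject pixels) is the upper edge of the subject image rather
    than a dirty point. Each entry is (left, right) or None."""
    following = line_endpoints[1:1 + lookahead]
    # Rule 1: the next `lookahead` line images must all have subject pixels.
    if len(following) < lookahead or any(ep is None for ep in following):
        return False
    # Rule 2: at least one of them must be wider than the preset width.
    return any(right - left > min_width for left, right in following)

# Fig. 6A case: the third following line has no subject pixels -> dirty point.
print(is_upper_edge([(10, 14), (9, 15), (10, 14), None]))    # False
# Fig. 7A case: all following lines qualify and one is wide -> upper edge.
print(is_upper_edge([(10, 14), (9, 15), (8, 16), (2, 30)]))  # True
```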
Once the line image 2051 is determined to contain the upper edge of the subject image, the two endpoint subject pixel coordinates of all line images following the line image 2054 in the third strip image 205 are calculated in sequence, and the two endpoint subject pixel coordinates of each line image are recorded, up to the last line image 2055 of the third strip image 205, whose two endpoint subject pixel coordinates in fig. 7A are I11, I12.
In this step, among all subject pixel endpoint coordinates of the third strip image 205, the endpoint coordinate with the minimum X-axis coordinate value and the endpoint coordinate with the maximum X-axis coordinate value are output. As shown in fig. 7A, the subject pixel endpoint coordinates of the third strip image 205 include: the two endpoint coordinates I3, I4 of the line image 2051, I5, I6 of the line image 2052, I7, I8 of the line image 2053, I9, I10 of the line image 2054, ..., and I11, I12 of the last line image 2055. Here, the endpoint coordinate with the minimum X-axis coordinate value and the one with the maximum X-axis coordinate value are the two subject image endpoint coordinates I11, I12 of the last line image 2055 of the third strip image 205. Referring to fig. 7B, according to the endpoint coordinate I11 with the minimum X-axis coordinate value and the endpoint coordinate I12 with the maximum X-axis coordinate value, the third strip image 205 is cut vertically along the Y direction to obtain the subject image area 205'. The subject image area 205' is sent to the subsequent image processing flow, including color coordinate conversion S107, image size adjustment S108, halftone processing S109, and printing S110.
After the third strip image 205 has been processed, the fourth strip image 206 is read in. Please refer to figs. 8A, 8B, and 8C, which are schematic diagrams of the fourth strip image 206. The fourth strip image 206 includes the line images 2060, 2061, 2062, 2063 and a plurality of subsequent line images. Once the upper edge endpoints of the subject have been obtained, upper edge detection is no longer needed; for the subsequent strip images, only the left and right subject pixel endpoint coordinates of each line image need to be calculated.
In this step, the two endpoint subject pixel coordinates of each line image of the fourth strip image 206 are calculated. As described above, taking the line image 2060 as an example, the line image 2060 is horizontally reduced to obtain the reduced line image 20601, and the two endpoint subject pixel coordinates (X3, Y), (X4, Y) of the reduced line image 20601 are calculated. Then (X3, Y), (X4, Y) are converted back to the coordinate system of the line image 2060 to obtain its two endpoint subject pixel coordinates I13, I14. The same operation is performed on the remaining line images of the strip image 206, until the two endpoint coordinates I15, I16 of the last line image 2065 are obtained.
Then, among all subject pixel endpoint coordinates of the fourth strip image 206, the endpoint coordinate with the minimum X-axis coordinate value and the one with the maximum X-axis coordinate value are output. As shown in fig. 8A, the subject pixel endpoint coordinate with the minimum X-axis coordinate value in the fourth strip image 206 is I15, and the one with the maximum X-axis coordinate value is I16. According to the endpoint coordinates I15 and I16, the strip image 206 is cut vertically along the Y direction to obtain the subject image area 206', which is sent to the subsequent image processing flow.
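A sketch of this per-strip cropping step under the assumptions above: collect the endpoint coordinates of every line image in the strip, take the extreme X-axis values, and cut the strip vertically between them.

```python
def crop_strip(strip, endpoints):
    """Crop a strip image to its subject area. `strip` is a list of line
    images (lists of pixels); `endpoints` holds one (left, right) pair of
    X coordinates per line image that contains subject pixels."""
    x_min = min(left for left, _ in endpoints)    # e.g. I15 in fig. 8A
    x_max = max(right for _, right in endpoints)  # e.g. I16 in fig. 8A
    # Cut every line of the strip between the extreme X coordinates.
    return [line[x_min:x_max + 1] for line in strip]
```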
The processing method of the subsequent strip images of the fourth strip image 206 is the same as that for processing the fourth strip image 206, and thus is not repeated herein.
Please refer to fig. 9, which shows the subject image areas, such as 205' and 206', obtained by the image cropping method of the present application relative to the whole original image 201. As can be seen from fig. 9, the subject image areas 205', 206' consist mostly of pixels of the subject image, while the background pixels of the non-subject areas are not sent to the subsequent printing process, which speeds up copying.
Specifically, after the strip images containing the subject image have been processed, detecting a line image without any subject pixel again means that the subject image may have been read completely and the subsequent strip images may contain no subject image. Therefore, when a line image without subject pixels is detected again and none of the following line images has subject pixels, the reading of the subject image is complete; the remaining unread strip images are background, the flow is terminated, and no further strip images are processed. This further speeds up copying.
It should be noted that the subject pixel detection method of the present invention includes another embodiment. Referring to fig. 10, another method for detecting subject pixels is shown, taking the first line image 2060 of the strip image 206 as an example. In this method, two horizontal reductions are used to determine whether the line image has subject pixels. The line image 2060 is first read and gamma-adjusted, and then horizontally reduced at a first magnification (e.g. 1/6) to obtain a first reduced line image 20602. The gray level value of each pixel of the first reduced line image 20602 is compared with the gray level threshold, pixels with gray level values below the threshold are regarded as quasi-subject pixels, and the coordinate I17 = (X5, Y) of the leftmost quasi-subject pixel and the coordinate I18 = (X6, Y) of the rightmost quasi-subject pixel of the first reduced line image 20602 are recorded; this is the first horizontal reduction. Next, a preset value (e.g. 3 pixels) is subtracted from the X-axis coordinate of the leftmost quasi-subject pixel coordinate I17 to obtain the left reference coordinate I19 = (X5 - 3, Y), and the preset value is added to the X-axis coordinate of the rightmost quasi-subject pixel I18 to obtain the right reference coordinate I20 = (X6 + 3, Y). The left reference coordinate I19 and the right reference coordinate I20 are converted back to the leftmost reference coordinate I19' and the rightmost reference coordinate I20' on the line image 2060, and the portion of the line image between I19' and I20' is horizontally reduced at a second magnification (e.g. 1/3) to obtain a second reduced line image 20603; this is the second horizontal reduction, and the left and right coordinates of the second reduced line image 20603 are I19'' = (6/3 × (X5 - 3), Y) and I20'' = ((6/3 × (X6 + 3) + (6/3 - 1)), Y). The gray level value of each pixel of the second reduced line image 20603 is then compared with the gray level threshold, pixels with gray level values below the threshold are regarded as subject pixels, and the leftmost subject pixel coordinate I21 = (X7, Y) and the rightmost subject pixel coordinate I22 = (X8, Y) of the second reduced line image 20603 are recorded. Finally, I21 = (X7, Y) and I22 = (X8, Y) are converted into the leftmost subject pixel coordinate I23 = (3 × (X7 + 1/2), Y) and the rightmost subject pixel coordinate I24 = (3 × (X8 + 1/2), Y) of the line image 2060, and the leftmost subject pixel coordinate I23 and the rightmost subject pixel coordinate I24 of the line image 2060 are recorded as the two endpoint subject coordinates.
It should be noted that two horizontal reductions are performed in this preferred embodiment to avoid the following problem of a single reduction: because the gray level values are computed as arithmetic means, a single large reduction can erode the leftmost and rightmost subject pixel coordinates of the line image, or even miss the actual subject endpoints entirely. For example, if the magnification is set to 1/64, 64 gray level values are converted into one at a time; when those 64 pixels contain a few subject pixels but are mostly background pixels, the averaged gray level value will very likely exceed the gray level threshold, and all 64 pixels will be classified as background. Therefore, to obtain more accurate leftmost and rightmost subject pixel coordinates, the present invention expands the quasi-subject endpoint coordinates obtained by the first reduction outward by a preset value on each side, so that pixels at the periphery that were wrongly classified as background are not omitted, and then performs the second horizontal reduction, whose magnification is larger than the first. In this way the subject pixels can be located more accurately while still reducing the computation load of the multi-function peripheral.
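A sketch of this coarse-to-fine variant, reusing `horizontal_reduce` and `subject_endpoints` from the sketch above; the 1/6 and 1/3 magnifications and the 3-pixel margin mirror the example values, and the function name is again hypothetical.

```python
def two_pass_endpoints(line, threshold, first=6, second=3, margin=3):
    """Coarse-to-fine endpoint search on one line image: pass 1 reduces by
    `first` to find quasi-subject endpoints, the range is widened by
    `margin` (in reduced units) and mapped back to the line, and pass 2
    reduces that window by the finer factor `second`."""
    coarse = horizontal_reduce(line, first)
    hit = subject_endpoints(coarse, threshold)
    if hit is None:
        return None  # no quasi-subject pixel on this line image
    left, right = hit
    # Widen by the margin and convert the reference coordinates to the line.
    lo = max(0, (left - margin) * first)
    hi = min(len(line), (right + margin + 1) * first)
    fine = horizontal_reduce(line[lo:hi], second)
    hit = subject_endpoints(fine, threshold)
    if hit is None:
        return None
    l2, r2 = hit
    # Restore fine-pass indices to line coordinates (offset + group center).
    return lo + int(second * (l2 + 0.5)), lo + int(second * (r2 + 0.5))
```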
In practical applications, the present invention may be implemented as firmware in the multi-function peripheral. Because the invention adds a real-time image cropping method to the image processing flow of the multi-function peripheral, the position of the subject image area in each strip image can be reliably determined, dirty-point images can be eliminated, and the subject image area is actually cropped out within the image processing flow, which shortens the time required to copy a document. Moreover, processing only the region containing the subject image in units of strip images, rather than the background region, saves the dynamic memory capacity required during copying. For a multi-function peripheral with limited system resources and memory, the method of the present invention therefore provides a significant performance improvement.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, so that other equivalent changes and modifications can be made without departing from the spirit of the present invention.

Claims (13)

1. An image cropping method applied to a multi-function peripheral (MFP) for scanning a document to obtain an original image and printing out the original image, wherein the original image has a main image and is divided into a plurality of strip-shaped images, and each strip-shaped image comprises a plurality of line-shaped images, the method comprising:
(A) reading a band image of the original image;
(B) judging whether the read strip-shaped image has an upper edge endpoint coordinate of the main body image, comprising:
(B1) searching a first linear image with a main body image in the read strip-shaped image, and calculating coordinates of two main body end points including the main body image in the linear image;
(B2) respectively calculating whether a plurality of subsequent linear images of the first linear image have two main body endpoint coordinates;
(B3) judging whether the width of the two main body end point coordinates is larger than a preset width value or not, wherein the two main body end point coordinates are the two main body end point coordinates of at least one linear image in a plurality of subsequent linear images with main body images; wherein, when the judgment results of steps (B1), (B2) and (B3) are all yes, the coordinates of the two body end points of the first line image are determined as the coordinates of the upper edge end point, and when one of steps (B1), (B2) and (B3) is no, steps (a) - (B) are repeated until the upper edge end point is determined;
(C) calculating the coordinates of the main body end points of all the subsequent linear images in the strip-shaped image;
(D) outputting the endpoint coordinate with the minimum X-axis coordinate value and the endpoint coordinate with the maximum X-axis coordinate value in all the endpoint coordinates of the main body contained in the strip-shaped image;
(E) receiving the coordinates of the end points of the main body output in the step (D) to perform printout processing;
(F) reading a next strip-shaped image, and searching the coordinates of the main body endpoint of each linear image of the next strip-shaped image;
(G) outputting the main body endpoint coordinate with the minimum X-axis coordinate value and the main body endpoint coordinate with the maximum X-axis coordinate value in all main body endpoint coordinates contained in the next belt-shaped image;
(H) receiving the coordinates of the end point of the main body output in the step (G) to perform printout processing; and
(I) repeating the steps (F), (G) and (H).
2. The image cropping method of claim 1, wherein step (B1) comprises the steps of:
(B1-1) reading a line image of the band image and Gamma-adjusting the line image, wherein the line image includes a plurality of pixels;
(B1-2) determining whether or not the pixels of the line image include the subject image, including:
performing horizontal reduction on the linear image according to a magnification to obtain a reduced linear image;
comparing the gray level value of each pixel point of the reduced linear image with a gray level threshold value, regarding a pixel with the gray level value smaller than the gray level threshold value as a main pixel, and recording the coordinates of the leftmost main pixel and the rightmost main pixel of the reduced linear image; and
(B1-3) converting the pixel coordinates, including:
converting the coordinates of the leftmost and rightmost main pixels of the reduced line image into the coordinates of the leftmost and rightmost main pixels of the line image, and recording the coordinates of the leftmost and rightmost main pixels of the line image as the two main endpoint coordinates.
3. The image cropping method of claim 2, wherein the gray level threshold value W(n+1) is calculated by the following formula:
W(n+1) = W(n) + (W(n+1)max - W(n))/T; wherein
n = 0, 1, 2, 3, ..., (A-1), W(0) is an initial gray level value,
W(n) is the accumulated gray level reference value of the nth line image, W(n+1)max is the maximum gray level value among the gray level values of all pixels of the (n+1)th line image after it has been horizontally reduced by a magnification, and T is a positive integer.
4. The image cropping method of claim 3, wherein T = Td when W(n+1)max is greater than W(n), and T = Tu when W(n+1)max is less than W(n), and Tu and Td are different positive integers.
5. The image cropping method of claim 1, wherein step (B1) comprises the steps of:
(B1-1) reading a line image of the band image and Gamma-adjusting the line image, wherein the line image includes a plurality of pixels;
(B1-2) determining whether or not the pixels of the line image include the subject image, including:
performing horizontal reduction on the linear image according to a first magnification to obtain a first reduced linear image;
comparing the gray level value of each pixel of the first reduced line image with a gray level threshold value, regarding pixels with gray level values smaller than the gray level threshold value as quasi-main pixels, and recording the coordinates of the leftmost and rightmost quasi-main pixels of the first reduced line image;
subtracting a preset value from the X-axis coordinate value of the leftmost quasi-main pixel coordinate of the first reduced line image to obtain a left reference coordinate, and adding the preset value to the X-axis coordinate value of the rightmost quasi-main pixel coordinate to obtain a right reference coordinate;
converting the left reference coordinate and the right reference coordinate into a leftmost reference coordinate and a rightmost reference coordinate of the linear image, horizontally reducing the linear image between the leftmost reference coordinate and the rightmost reference coordinate of the linear image according to a second magnification to obtain a second reduced linear image, and comparing a gray level value of each pixel point of the second reduced linear image with the gray level threshold value, wherein a pixel with a gray level value smaller than the gray level threshold value is taken as a main pixel, and coordinates of a leftmost main pixel and a rightmost main pixel of the second reduced linear image are recorded, wherein the second magnification is larger than the first magnification; and
(B1-3) converting the pixel coordinates, including:
and converting the coordinates of the leftmost main pixel and the rightmost main pixel of the second reduced linear image into the coordinates of the leftmost main pixel and the rightmost main pixel of the main pixels of the linear image, and recording the coordinates of the leftmost main pixel and the rightmost main pixel of the linear image as the coordinates of the two main end points.
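Claim 5 replaces the single pass with a coarse-to-fine search. The sketch below reuses detect_main_endpoints() from above; the margin and the two reduction divisors are illustrative assumptions, and a "larger second magnification" corresponds here to a smaller reduction divisor, since the fine pass discards fewer pixels.

```python
MARGIN = 16  # stand-in for the patent's unspecified "preset value"

def detect_two_pass(line, threshold, coarse_div=8, fine_div=2):
    """Coarse pass locates quasi-main pixels; a fine pass refines the
    endpoints inside the coarse window widened by MARGIN on each side."""
    coarse = detect_main_endpoints(line, coarse_div, threshold)
    if coarse is None:
        return None
    # Widen the quasi-main span by the preset value to obtain the
    # left and right reference coordinates.
    left_ref = max(0, coarse[0] - MARGIN)
    right_ref = min(len(line) - 1, coarse[1] + MARGIN)
    # The fine pass runs only on the slice between the reference coordinates.
    window = line[left_ref:right_ref + 1]
    fine = detect_main_endpoints(window, fine_div, threshold)
    if fine is None:
        return None
    # Shift window-relative coordinates back to full-line coordinates.
    return left_ref + fine[0], left_ref + fine[1]
```

The coarse pass keeps the per-line cost low across the full scan width; the fine pass restores endpoint precision only inside the window where quasi-main pixels were actually found.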
6. The image cropping method of claim 5, wherein the gray level threshold value W(n+1) is calculated by the following equation:
W(n+1) = W(n) + (W(n+1)max - W(n))/T, wherein
n = 0, 1, 2, 3, ..., (A-1), W(0) is an initial gray level value,
W(n) is the accumulated gray level reference value of the n-th line image, W(n+1)max is the maximum gray level value among all pixels of the (n+1)-th line image after the (n+1)-th line image has been horizontally reduced by the magnification, and T is a positive integer.
7. The image cropping method of claim 6, wherein T = Td when W(n+1)max is greater than W(n), T = Tu when W(n+1)max is less than W(n), and Tu and Td are different positive integers.
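Taken together, claims 2 through 4 (and their coarse-to-fine variants) describe a per-line loop in which the running reference feeds the endpoint detection. A hypothetical driver built from the sketches above; the initial W(0) value and the order of the update relative to the detection are assumptions:

```python
def scan_strip(lines, w0=200, magnification=4):
    """Process one strip image line by line, tracking the accumulated
    gray level reference and collecting main endpoints per line.
    Assumes each line is at least `magnification` pixels wide."""
    w = w0
    endpoints = []
    for line in lines:
        # Maximum gray value of the reduced line drives the update (claim 3).
        reduced_max = max(
            sum(line[i:i + magnification]) // magnification
            for i in range(0, len(line) - magnification + 1, magnification)
        )
        w = update_threshold(w, reduced_max)
        # None marks a line with no main pixels (pure background).
        endpoints.append(detect_main_endpoints(line, magnification, w))
    return endpoints
```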
8. The image cropping method of claim 1, wherein step (F) comprises the steps of:
(F-1) reading a line image of the next strip image and Gamma-adjusting the line image, wherein the line image includes a plurality of pixels;
(F-2) determining whether the pixels of the line image include the main image, including:
performing horizontal reduction on the line image according to a magnification to obtain a reduced line image;
comparing the gray level value of each pixel of the reduced line image with a gray level threshold value, regarding any pixel whose gray level value is smaller than the gray level threshold value as a main pixel, and recording the coordinates of the leftmost main pixel and the rightmost main pixel of the reduced line image; and
(F-3) converting the pixel coordinates, including:
converting the coordinates of the leftmost main pixel and the rightmost main pixel of the reduced line image into the coordinates of the leftmost main pixel and the rightmost main pixel of the line image, and recording the coordinates of the leftmost main pixel and the rightmost main pixel of the line image as the two main endpoint coordinates.
9. The image cropping method of claim 8, wherein the gray level threshold value W(n+1) is calculated by the following equation:
W(n+1) = W(n) + (W(n+1)max - W(n))/T, wherein
n = 0, 1, 2, 3, ..., (A-1), W(0) is an initial gray level value,
W(n) is the accumulated gray level reference value of the n-th line image, W(n+1)max is the maximum gray level value among all pixels of the (n+1)-th line image after the (n+1)-th line image has been horizontally reduced by the magnification, and T is a positive integer.
10. The image cropping method of claim 9, wherein T = Td when W(n+1)max is greater than W(n), T = Tu when W(n+1)max is less than W(n), and Tu and Td are different positive integers.
11. The image cropping method of claim 1, wherein step (F) comprises the steps of:
(F-1) reading a line image of the next strip image and Gamma-adjusting the line image, wherein the line image includes a plurality of pixels;
(F-2) determining whether the pixels of the line image include the main image, including:
performing horizontal reduction on the line image according to a first magnification to obtain a first reduced line image;
comparing the gray level value of each pixel of the first reduced line image with a gray level threshold value, regarding any pixel whose gray level value is smaller than the gray level threshold value as a quasi-main pixel, and recording the coordinates of the leftmost quasi-main pixel and the rightmost quasi-main pixel of the first reduced line image;
subtracting a preset value from the X-axis coordinate value of the leftmost quasi-main pixel of the first reduced line image to obtain a left reference coordinate, and adding the preset value to the X-axis coordinate value of the rightmost quasi-main pixel to obtain a right reference coordinate;
converting the left reference coordinate and the right reference coordinate into a leftmost reference coordinate and a rightmost reference coordinate of the line image; horizontally reducing the portion of the line image between the leftmost reference coordinate and the rightmost reference coordinate according to a second magnification to obtain a second reduced line image, wherein the second magnification is larger than the first magnification; comparing the gray level value of each pixel of the second reduced line image with the gray level threshold value, regarding any pixel whose gray level value is smaller than the gray level threshold value as a main pixel; and recording the coordinates of the leftmost main pixel and the rightmost main pixel of the second reduced line image; and
(F-3) converting the pixel coordinates, including:
converting the coordinates of the leftmost main pixel and the rightmost main pixel of the second reduced line image into the coordinates of the leftmost main pixel and the rightmost main pixel of the line image, and recording the coordinates of the leftmost main pixel and the rightmost main pixel of the line image as the two main endpoint coordinates.
12. The image cropping method of claim 11, wherein the gray level threshold value W(n+1) is calculated by the following equation:
W(n+1) = W(n) + (W(n+1)max - W(n))/T, wherein
n = 0, 1, 2, 3, ..., (A-1), W(0) is an initial gray level value,
W(n) is the accumulated gray level reference value of the n-th line image, W(n+1)max is the maximum gray level value among all pixels of the (n+1)-th line image after the (n+1)-th line image has been horizontally reduced by the magnification, and T is a positive integer.
13. The image cropping method of claim 12, wherein T = Td when W(n+1)max is greater than W(n), T = Tu when W(n+1)max is less than W(n), and Tu and Td are different positive integers.
CN201010260918.1A 2010-08-20 2010-08-20 Image cropping method Expired - Fee Related CN102377895B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010260918.1A CN102377895B (en) 2010-08-20 2010-08-20 Image cropping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010260918.1A CN102377895B (en) 2010-08-20 2010-08-20 Image cropping method

Publications (2)

Publication Number Publication Date
CN102377895A CN102377895A (en) 2012-03-14
CN102377895B true CN102377895B (en) 2014-10-08

Family

ID=45795828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010260918.1A Expired - Fee Related CN102377895B (en) 2010-08-20 2010-08-20 Image cropping method

Country Status (1)

Country Link
CN (1) CN102377895B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109348084B * 2018-11-26 2020-01-31 Zhuhai Pantum Electronics Co., Ltd. Image forming method, image forming apparatus, electronic device, and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1515110A (en) * 2001-06-06 2004-07-21 Sharp Corporation Image encoding method and image device
CN1845014A (en) * 2005-04-08 2006-10-11 Canon Inc. Color image forming apparatus
EP1713248A1 (en) * 2004-02-02 2006-10-18 Nippon Telegraph and Telephone Corporation Electronic watermark embedding device, electronic watermark detection device, method thereof, and program
CN1882036A (en) * 2005-06-14 2006-12-20 Canon Inc. Image processing apparatus and method

Also Published As

Publication number Publication date
CN102377895A (en) 2012-03-14

Similar Documents

Publication Publication Date Title
JP6353271B2 (en) Image processing apparatus and method
JP5477081B2 (en) Image processing apparatus, image processing method, and program
JP6797716B2 (en) Image processing device and image processing method
JP2016009941A (en) Image processing apparatus and image processing method
JP5407627B2 (en) Image processing apparatus, image processing method, and program
JP4502001B2 (en) Image processing apparatus and image processing method
US7064865B2 (en) Image processing method and apparatus for detecting characteristic points
TWI395466B (en) Method for auto-cropping image
CN102377895B (en) Image cropping method
US20130003083A1 (en) Image Processing Apparatus and Image Forming Apparatus
US20090091809A1 (en) Image Processing Device, Method and Program Product
JP5515552B2 (en) Pixel interpolation device, pixel interpolation method, and image reading device
US8009912B2 (en) Image-processing apparatus which has an image region distinction processing capability, and an image region distinction processing method
JP6681033B2 (en) Image processing device
JP5453215B2 (en) Image processing apparatus, image forming apparatus, and image processing method
JP2007128342A (en) Image decision device, image decision method and image decision program
JP3989687B2 (en) Color image processing method, color image processing apparatus, color image processing program, and recording medium
JP5413297B2 (en) Image processing apparatus, image processing method, program, and recording medium
JP5321034B2 (en) Image processing apparatus, image processing method, and computer-executable program
JP3911459B2 (en) Image reading device
JP6632303B2 (en) Image processing apparatus and image processing method
JP2023112514A (en) Image processing apparatus, method for controlling the same, and program
JP5047126B2 (en) Image processing apparatus and background removal method
JP2008199367A (en) Original read image processor and computer program
JP2006174285A (en) Image processing apparatus, image processing method, and program thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141008

Termination date: 20160820