US20070076965A1 - Image encoding apparatus - Google Patents

Image encoding apparatus

Info

Publication number
US20070076965A1
US20070076965A1 (application US 11/359,588)
Authority
US
United States
Prior art keywords
edge
blocks
image
processing
edge block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/359,588
Other versions
US7873226B2 (en)
Inventor
Miwa Shimada
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Assigned to FUJITSU LIMITED reassignment FUJITSU LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHIMADA, MIWA
Publication of US20070076965A1 publication Critical patent/US20070076965A1/en
Application granted granted Critical
Publication of US7873226B2 publication Critical patent/US7873226B2/en
Expired - Fee Related legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85: … using pre-processing or post-processing specially adapted for video compression
    • H04N 19/10: … using adaptive coding
    • H04N 19/102: … characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/115: Selection of the code volume for a coding unit prior to coding
    • H04N 19/134: … characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136: Incoming video signal characteristics or properties
    • H04N 19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N 19/169: … characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: … the unit being an image region, e.g. an object
    • H04N 19/176: … the region being a block, e.g. a macroblock
    • G: Physics
    • G06: Computing; calculating or counting
    • G06T: Image data processing or generation, in general
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; edge detection
    • G06T 7/13: Edge detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; image sequence

Definitions

  • the present invention relates to a field of image encoding.
  • as edge detecting methods, there exist techniques applied in the image recognition field, other than the techniques disclosed in the above-mentioned Patent documents 1 to 5.
  • An object of the present invention is to provide a technique that is capable of improving the efficiency of an image encoding processing that requires a real-time operation.
  • Another object of the present invention is to provide a technique that is capable of executing an appropriate image quality improving processing.
  • the present invention applies the following structure.
  • the present invention relates to an image encoding apparatus, including:
  • a detection unit dividing an input image into blocks composed of plural pixels and detecting edge block candidates as blocks including an edge in the image
  • an extraction unit excluding, from the detected edge block candidates, an edge block candidate that is recognized as requiring no image quality improving processing executed in the image encoding processing, and determining the remaining edge block candidates as edge blocks to be targeted by the image quality improving processing.
  • an edge block candidate that does not require the image quality improving processing is excluded from the edge block candidates, and the remaining edge block candidates are extracted (determined) as edge blocks.
  • an image encoding apparatus including:
  • a detection unit detecting edge blocks as blocks including an edge in an image composed of plural blocks
  • the image quality improving processing according to the priority is executed in the image encoding processing.
  • the image quality improving processing of each edge block for obtaining the image quality corresponding to the priority is executed. That is, a level can be given to the image quality improving processing under limited conditions. As a result, the appropriate image quality improving processing can be conducted.
  • the present invention can also be specified as an image encoding method and as a recording medium storing a computer program, each having the same features as the image encoding apparatus.
  • the efficiency of the image encoding processing that requires real-time operation can be improved.
  • the appropriate image quality improving processing can be executed.
  • FIG. 1 is a diagram exemplifying a problem in a conventional art
  • FIG. 2 is a schematic explanatory diagram showing an image encoding apparatus according to the present invention.
  • FIG. 3 is a flowchart showing the detail of a second edge detecting process in the process shown in FIG. 2 ;
  • FIG. 4 is a diagram showing a structural example of an information processing apparatus that can function as the image encoding apparatus (a structural example of an image encoding apparatus);
  • FIG. 5A is a diagram for explaining a target pixel that is defined in a first edge detecting process, and close pixels of the target pixel;
  • FIG. 5B is a flowchart showing an example of the first edge detecting process
  • FIG. 6 is a flowchart showing an example of an edge block candidate determining process
  • FIG. 7 is a diagram showing an example of a flat portion and a complicated portion in an image
  • FIG. 8A is a diagram showing a definitional example of the target block and the close blocks in the second edge detecting process
  • FIG. 8B is a flowchart showing an example of the second edge detecting process which is executed according to the definition shown in FIG. 8A ;
  • FIG. 9A is a diagram showing an example of an image targeted by the second edge detecting process according to a first specific example;
  • FIG. 10A is a diagram showing an example of an image targeted by the second edge detecting process according to a second specific example;
  • FIGS. 11A and 11B are explanatory diagrams showing a flat portion that is in contact with the edge in the image
  • FIG. 12 is a flowchart showing an example of a priority giving process
  • FIG. 13A is a diagram showing an image example to be targeted to the priority giving process
  • FIG. 13B is an explanatory diagram showing the result of the priority giving process with respect to the image shown in FIG. 13A ;
  • FIG. 14 is a diagram showing an original image to be encoded;
  • FIG. 15 is a diagram showing the detection result (edge block candidate map) of the edge blocks by the first edge block detecting process that is executed with respect to the original image shown in FIG. 14 ;
  • FIG. 16 is a diagram showing the detection result (second edge detection result map) of the edge blocks by the second edge block detecting process that is executed with respect to the original image shown in FIG. 14 .
  • a threshold value for determining whether or not the edge is included is adjusted to reduce the number of detected edge blocks. With such threshold adjustment, however, there is a case in which edge blocks significant for the image encoding do not remain.
  • FIG. 1 is a diagram exemplifying a problem in the conventional art.
  • an object of a trapezoid having patterns is included in an image.
  • when the edge detecting methods according to the conventional art are applied to this image, for example, both of the blocks 1 and 2 are detected as edge blocks due to the pattern (mesh pattern) of the object.
  • the edge block becomes a target of the image quality improving processing. Since the total data (information) amount that can be used in the image encoding is limited, it is preferable that the number of edge blocks targeted by the image quality improving processing be small. Also, since a minute edge such as the pattern is hard to distinguish even when its image quality is deteriorated, the influence of the minute edge on a viewer is small. That is, even if the image quality improving processing is conducted on the block 2 , its visual effect is low. On the other hand, the deterioration of the contour line that intersects the block 1 is recognized by a viewer as a distinguishable image quality deterioration. Therefore, it is desirable that only the block 1 be detected as the edge block.
  • the patterns included in the respective blocks 1 and 2 weigh heavily in the determination element which leads to the determination result that the blocks 1 and 2 are edge blocks. That is, the block 1 is similar to the block 2 in the determination element. In this case, when the threshold value is adjusted, neither of the blocks 1 and 2 is detected as an edge block, with the result that neither of those blocks is improved by the image quality improving processing.
  • in order to detect the edge blocks recognized to be significant in image encoding (to be targeted by the image quality improving processing), edge blocks are first detected by means of the edge detecting methods disclosed in Patent documents 1 to 5 (first edge detecting processing), and a second edge detecting processing is conducted by using the result of the first edge detecting processing. That is, the edge blocks that have been detected by the first edge detecting processing are defined as edge block candidates, and the edge blocks significant in image encoding are extracted from the edge block candidates.
  • the edge blocks that are recognized to be edge blocks requiring no improvement in the image quality are excluded from the edge blocks that are detected in the first edge detecting processing.
  • only the appropriate edge blocks are extracted by conducting the edge determinations at multiple stages.
  • FIG. 2 is a schematic explanatory diagram showing an image encoding processing according to the present invention.
  • a narrowly defined first edge detecting processing (edge pixel detecting processing) is first executed with respect to an original image to be encoded (ST 1 ).
  • the conventional arts disclosed in Patent documents 1 to 5 can be applied as the first edge detecting processing as described above.
  • edge block candidate determining processing is conducted according to the result of the first edge detecting processing (ST 2 ). That is, with respect to a predetermined number of blocks into which the original image is divided, it is determined according to the result of the first edge detecting processing whether or not each block is an edge block candidate.
  • the first edge detecting processing (ST 1 ) is conducted on a pixel basis (unit), and blocks including a number of edge pixels equal to or higher than a predetermined threshold value are determined as the edge block candidates.
  • a generating and storing processing of an edge block candidate map is executed (ST 3 ). That is, a map (edge block candidate map) in which a flag allocated to each block corresponding to the original image is turned on (is a candidate) or turned off (is not a candidate) is generated according to the result of the edge block candidate determining processing, and then stored in a predetermined storage area.
  • the first edge detecting processing (edge pixel detecting processing: ST 1 ), the edge block candidate determining processing (ST 2 ), and the edge block candidate map generating and storing processing (ST 3 ) as described above constitute a broadly defined first edge detecting processing (first edge block detecting processing).
  • the second edge detecting processing is executed (ST 4 ). That is, it is determined whether or not the respective blocks of the edge block candidates include the significant edges (to be improved in the image quality), and the edge block candidates including the significant edges are determined as the edge blocks. In this situation, the edge blocks including no significant edge (requiring no improvement in the image quality) are excluded.
  • the second edge detecting processing (ST 4 ) and the second edge block detection result map generating and storing processing (ST 5 ) as described above constitute a broadly defined second edge detecting processing (second edge block detecting processing).
  • FIG. 3 is a flowchart showing the detail of the second edge detecting processing in the processing shown in FIG. 2 .
  • first, the first edge detecting processing (edge pixel detecting processing) and the edge block candidate determining processing are executed as the broadly defined first edge detecting processing (S 01 : ST 1 , ST 2 ).
  • in Step S 02 , the generating and storing processing of the edge block candidate map is executed (S 02 : ST 3 ). Then, processing of the following Steps S 03 to S 08 is executed as the second edge detecting processing (second edge block detecting processing).
  • the edge block candidate map is read, one (target block to be processed) of the blocks in the edge block candidate map is specified, and it is determined whether or not the target block is an edge block candidate (S 03 ).
  • the edge block determining processing is executed (S 04 ). That is, it is determined whether or not the target block includes a significant edge (S 05 ).
  • when the target block includes the significant edge, the target block is determined as the edge block (S 05 ; YES), and the edge block flag in the second edge detection result map corresponding to the target block is turned on (S 06 ). Thereafter, the processing advances to Step S 08 .
  • in Step S 08 , it is determined whether or not the target block is a final block.
  • the processing is returned to Step S 03 in order to execute the processing of steps S 03 to S 07 with respect to the remaining blocks.
  • when the target block is the final block (S 08 ; YES), the second edge detection result map is stored in a predetermined storage area, and the second edge detecting processing is completed.
  • the image encoding processing is executed by using the edge information based on the second edge detection result map.
  • the number of edge blocks is reduced by removing the edge block candidates that do not require the image quality improvement in the second edge detecting processing. Accordingly, the encoding processing can be made more efficient (saved in labor) by the reduction in the number of edge blocks.
  • the reduction in the number of edge blocks means an increase in the information (image information) amount that can be supplied to each of the remaining edge blocks in order to improve the image quality.
  • the appropriate image quality improving processing can be conducted.
  • the priority of each of the edge blocks may be set with respect to the image quality improving process so that the information amount (image information amount) that is allocated to the respective edge blocks can be controlled according to the priority.
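  • one conceivable way to control the allocated information amount is to map each edge block's priority to a quantization parameter, where a lower parameter means a lower compression ratio and hence more bits for the block. The following sketch is purely illustrative; the linear mapping, the function name, and the default values are assumptions, not details from this document.

```python
def allocate_qp(priorities, base_qp=30, step=2, min_qp=10):
    """Map each edge block's priority to a quantization parameter:
    the higher the priority, the lower the parameter, and the more
    information (bits) the block receives.  Illustrative only."""
    return {block: max(min_qp, base_qp - step * p)
            for block, p in priorities.items()}
```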
  • FIG. 4 is a diagram showing a structural example of an information processing apparatus that can function as an image encoding apparatus (a structural example of the image encoding apparatus).
  • an information processing apparatus 10 includes a CPU (central processing unit: processor) 1 , a main memory (MM: for example, RAM (random access memory)) 2 , an external storage device (for example, hard disk) 3 , an input/output interface (I/F) 4 , and a communication interface (communication I/F) 5 that are mutually connected through a bus B.
  • the external storage device 3 stores a program, executed by the CPU 1 , for allowing the information processing apparatus 10 to function as the image encoding apparatus, and data that is used at the time of executing the program.
  • the external storage device 3 includes a storage area 31 to store the edge block candidate map (first map) and a storage area 32 to store the second edge detection result map (second map) as shown in FIG. 2 .
  • the CPU 1 loads the program that is stored in the external storage device 3 to the MM 2 and then executes the program. This realizes the first edge detecting processing (ST 1 ), the edge block candidate determining processing (ST 2 ), the edge block candidate map generating and storing processing (ST 3 ), the second edge detecting processing (ST 4 ), the second edge detection result map generating and storing processing (ST 5 ), and the image encoding processing, as shown in FIG. 2 .
  • a compression encoding processing is executed in conformity to the encoding standard of a moving picture such as MPEG-1, MPEG-2, H.264, or MPEG-4, to thereby generate a moving picture file including moving image data into which an original image has been compressed and encoded.
  • the moving image (image) file thus generated is stored in the external storage device 3 .
  • the I/F 4 is connected to an input device of the image or the moving image such as a camera.
  • the image data or the moving image data which is inputted through the I/F 4 is stored in the external storage device 3 , and dealt with as data (original image data) to be encoded by the CPU 1 .
  • the I/F 4 is connected to an output device of the image or the moving image such as a display, and the moving image based on the moving image file which has been generated by the CPU 1 can be displayed on the output device.
  • the communication I/F 5 is connected to a communication device (for example, a server or a terminal device) through a network such as the Internet.
  • the moving image file download file or stream distribution format file which is generated by the CPU 1 is transmitted to a transmission destination from the communication I/F 5 via the network.
  • the information processing apparatus 10 can distribute the moving image that has been photographed by the camera via the network.
  • the structure shown in FIG. 4 may be replaced by a structure in which a part or all of the processing shown in FIG. 2 are realized by hardware logic or the combination of the hardware logic with a software processing.
  • the CPU 1 can function as a detection unit (a detecting section), an extraction (determination) unit (an extracting (determining) section), a giving unit (a giving section), and a control unit (a control section, an image encoding section, and an image quality improving section) according to the present invention.
  • the image encoding processing includes an edge block detecting processing as shown in FIG. 2 , and the edge block detecting processing includes the above-mentioned first and second edge block detecting processing. Also, the image encoding processing includes an image quality improving processing for the edge block.
  • FIG. 5A is an explanatory diagram for explaining a target pixel that is defined in the first edge detecting processing, and peripheral pixels (close pixels) of the target pixel.
  • FIG. 5B is a flow chart showing an example of the first edge detecting processing that is executed by the CPU 1 . In this example, an edge detecting processing using a pixel difference is applied as the first edge detecting processing.
  • FIG. 5A shows 9 pixels that constitute a picture to be encoded (original picture).
  • the first edge detecting processing it is determined on a pixel basis (unit) whether or not the pixel is an edge pixel that constitutes the edge.
  • the pixel 4 is a pixel to be determined (target pixel), and the pixels 0 to 3 and the pixels 5 to 8 , which adjoin the pixel 4 , are defined as the peripheral pixels of the target pixel.
  • FIG. 5B shows the processing in the case where the pixel 4 is the target pixel.
  • the CPU 1 sets a value “i” indicative of the peripheral pixels to zero, and also sets a sum value “dsum”, which is a sum of absolute values “d” of the differences between the pixel values (pixel value of the pixel “i” − pixel value of the target pixel (pixel 4 )), to zero (S 001 ).
  • the CPU 1 obtains the absolute value “d” of the difference between the pixel “i” and the pixel 4 (S 002 ), and adds the absolute value “d” thus obtained to the current value of “dsum” (S 003 ). Then, the CPU 1 determines whether or not the current value of “i” is 8 (S 004 ). When the current value of “i” is not 8 (S 004 ; NO), the CPU 1 adds 1 to the current value of “i” (S 005 ) and the processing is returned to Step S 002 .
  • the sum value “dsum” of the differences between the pixel 4 and each of the peripheral pixels is obtained through the loop processing of Steps S 002 to S 005 . Thereafter, when it is determined that “i” is 8 in Step S 004 (S 004 ; YES), the CPU 1 determines whether or not the value of “dsum” exceeds a predetermined threshold value (threshold 1 ) (S 006 ).
  • a map (edge pixel map) for indicating whether or not each of the pixels of the original image is the edge pixel is prepared on a storage area (for example, the external storage device 3 ) that can be accessed by the CPU 1 .
  • the edge pixel flags (for example, on (an edge pixel) or off (not an edge pixel)) indicative of whether or not each of the pixels is the edge pixel are prepared in each field corresponding to each of the pixels in the map.
  • the CPU 1 turns on/off the edge pixel flags on the basis of the determination result of Step S 006 in Steps S 007 and S 008 .
  • the CPU 1 executes the above-described first edge detecting processing (S 001 to S 008 ) with all of the pixels that constitute the original image as the target pixels. As a result, the edge pixel map indicative of the distribution of the edge pixels in the original image is generated.
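  • the first edge detecting processing of FIG. 5B (Steps S 001 to S 008 , applied with every pixel as the target pixel) can be sketched in Python as follows. This is a minimal illustrative rendering, not the patented implementation: the function name is hypothetical, and skipping out-of-range neighbors at the image border is an assumption the text does not spell out.

```python
def edge_pixel_map(image, threshold1):
    """Mark each pixel as an edge pixel when the sum of absolute
    differences ("dsum") against its up-to-eight adjoining pixels
    exceeds threshold1 (Steps S001 to S008)."""
    h, w = len(image), len(image[0])
    edge = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dsum = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == dx == 0:
                        continue  # skip the target pixel itself
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        dsum += abs(image[ny][nx] - image[y][x])
            edge[y][x] = dsum > threshold1  # edge pixel flag on/off
    return edge
```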
  • FIG. 6 is a flowchart showing an example of the edge block candidate determining processing which is executed by the CPU 1 of the information processing apparatus 10 .
  • the edge block candidate determining processing shown in FIG. 6 is executed in each of the blocks into which the original image has been divided.
  • the block can be defined as an aggregate of one or more pixels.
  • the number of pixels that constitute the block can be selected from an arbitrary number of 1 or more.
  • the CPU 1 sets a value “i” that specifies one of the pixels included in the target block, and a value “dcnt” indicative of the number of edge pixels to zero, respectively (S 101 ).
  • the CPU 1 determines whether or not the edge pixel flag of the pixel corresponding to the current value “i” is “on”, with reference to the edge pixel map (S 102 ). In this situation, when the edge pixel is “off” (NO in Step 102 ), the CPU 1 advances the processing to Step S 104 . On the other hand, when the edge pixel flag is “on” (S 102 ; YES), the CPU 1 adds 1 to the current value of “dcnt” (S 103 ), and advances the processing to Step S 104 .
  • in Step S 104 , the CPU 1 determines whether or not the current value of “i” indicates a final pixel within the target block. In the case where the value of “i” does not indicate the final pixel (S 104 ; NO), the CPU 1 adds 1 to the value of “i” (S 105 ), and the processing is returned to Step S 102 . The number of edge pixels included within the target block is counted through the loop processing of Steps S 102 to S 105 .
  • the CPU 1 determines whether or not the value of “dcnt” exceeds a predetermined threshold value (threshold 2 ) (S 106 ). In this situation, in the case where the value of “dcnt” exceeds the threshold 2 (S 106 ; YES), the CPU 1 turns on the edge block candidate flag with respect to the target block (S 107 ). On the other hand, in the case where the value of “dcnt” does not exceed the threshold 2 (S 106 ; NO), the CPU 1 turns off the edge block candidate flag (S 108 ). Thereafter, the CPU 1 finishes the edge block candidate determining processing with respect to the target block.
  • the edge block candidate map is stored in the storage area 31 of the external storage device 3 .
  • the edge block candidate map has edge block candidate flags corresponding to each of the blocks.
  • the CPU 1 reads the edge block candidate map, and turns on/off the edge block candidate flag of the target block according to the determination result of Step S 106 .
  • the CPU 1 executes the above-mentioned edge block candidate determining processing (S 101 to S 108 ) with all of the blocks corresponding to the original image as the target blocks.
  • the edge block candidate map becomes in a state where only the edge block candidate flags corresponding to the blocks that are determined as the edge block candidates are turned on.
  • This edge block candidate map is stored in the storage area 31 .
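  • the edge block candidate determining processing of FIG. 6 (Steps S 101 to S 108 , applied to every block) can be sketched as follows; partitioning the image into square blocks and keying the candidate map by block coordinates are illustrative assumptions, since the document leaves the block shape open.

```python
def edge_block_candidate_map(edge_pixels, block_size, threshold2):
    """Count the edge pixels ("dcnt") inside each block; the block's
    candidate flag is turned on when dcnt exceeds threshold2
    (Steps S101 to S108)."""
    h, w = len(edge_pixels), len(edge_pixels[0])
    candidates = {}
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            dcnt = sum(
                edge_pixels[y][x]
                for y in range(by, min(by + block_size, h))
                for x in range(bx, min(bx + block_size, w))
            )
            candidates[(by // block_size, bx // block_size)] = dcnt > threshold2
    return candidates
```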
  • the edge blocks significant to encoding are extracted from the edge block candidates by using the edge block candidate map that has been prepared on the basis of the first edge detecting processing.
  • the edge to be improved in the image quality is an edge that is in contact with a flat portion in the image.
  • FIG. 7 is an example showing a flat portion and a complicated portion in the image, and shows a scene where a person marches on a track of an athletic field.
  • An image shown in FIG. 7 is a test chart of the institute of image information and television engineers.
  • the part of the spectators' seats is complicated, so even if an edge in that part deteriorates slightly, the deterioration is hardly distinguished by the human eye. On the other hand, deterioration of the edge of the line portion that separates the field from the track in the athletic field is readily recognized by the human eye as image quality deterioration, because the field is flat.
  • it is determined whether or not the target block is a block including an edge that is in contact with a flat portion, by checking the distribution state of the edge block candidates among the peripheral blocks (close blocks: for example, blocks of the field portion in FIG. 7 ) of the target block.
  • FIG. 8A is a diagram showing a definitional example of the target block and the peripheral blocks in the second edge detecting processing
  • FIG. 8B is a flowchart showing an example of the second edge detecting processing which is executed according to the definition shown in FIG. 8A .
  • the blocks 1 to 8 that are adjacent to the target block X are defined as the peripheral blocks.
  • the peripheral blocks can include blocks adjacent to the adjacent blocks. Also, one or more blocks that are arbitrarily selected from the adjacent blocks can be defined as the peripheral blocks.
  • FIG. 8A shows an example in which the blocks having the adjacent blocks in eight directions are the target blocks.
  • for a block that forms a corner of the image, such as the block 1 , the adjacent blocks are the three blocks 2 , X, and 4 . The adjacent blocks of a block that forms an end portion of the image, such as the block 4 , are the five blocks 1 , 2 , X, 7 , and 6 .
  • FIG. 8B shows the second edge detecting processing in the case where the block X is the target block.
  • the second edge detecting processing is executed by the CPU 1 ( FIG. 4 ).
  • the CPU 1 reads the edge block candidate map from the storage area 31 , and reads a second edge detection result map (initial state) from the storage area 32 .
  • the CPU 1 first sets a value “i” indicative of the peripheral block to “1” (indicative of the block 1 ), and sets a value “cnt” indicative of the number of peripheral blocks which are the edge block candidates to zero (S 201 ).
  • the CPU 1 determines whether or not the block “i” is an edge block candidate with reference to the edge block candidate map (S 202 ). In this situation, when the block “i” is the edge block candidate (S 202 ; YES), the CPU 1 advances the processing to Step S 204 . On the other hand, when the block “i” is not the edge block candidate (S 202 ; NO), the CPU 1 adds 1 to the value of “cnt” (S 203 ), and the processing is advanced to Step S 204 .
  • in Step S 204 , the CPU 1 determines whether or not the current value of “i” indicates a final block (block 8 in this example). When the current value of “i” is not 8 (S 204 ; NO), the CPU 1 adds 1 to the current value of “i” (S 205 ), and the processing is returned to Step S 202 .
  • the CPU 1 determines whether or not each of the peripheral blocks is the edge block candidate through the loop processing of Steps S 202 to S 205 , and counts the number of peripheral blocks which are not the edge block candidates.
  • when the current value of “i” indicates the final block (S 204 ; YES), the CPU 1 determines whether or not the value of “cnt” (the number of peripheral blocks that are not edge block candidates) is lower than a threshold value (S 206 ).
  • the CPU 1 determines that the target block is not the edge block, and turns off the edge block flag of the corresponding block of the second edge detection result map (S 207 ). Thereafter, the CPU 1 finishes the second edge detecting processing with respect to the target block.
  • the CPU 1 determines that the target block is the edge block, and turns on the corresponding edge block flag (S 208 ). Thereafter, the CPU 1 finishes the second edge detecting processing with respect to the target block.
  • the CPU 1 executes the second edge detecting processing (S 201 to S 208 ) shown in FIG. 8B with respect to all of the blocks in the edge block candidate map.
  • In this way, the target blocks whose number of peripheral blocks that are edge block candidates is sufficiently low, that is, blocks in contact with a sufficient number of flat portions, are extracted from the edge block candidates as the edge blocks.
  • Then, the second edge detection result map indicative of the distribution of the edge blocks in the image is stored in the storage area 32.
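Read as pseudocode, the loop of Steps S 201 to S 208 amounts to counting, for each edge block candidate, the peripheral blocks that are not candidates, and comparing the count with the threshold. The following Python sketch is an illustrative reading of the flowchart; the map representation is an assumption, and blocks outside the image are treated as flat portions, as in the later examples:

```python
def second_edge_detection(candidate_map, threshold):
    """Sketch of the second edge detecting processing (S201-S208).

    candidate_map: 2-D list of booleans (True = edge block candidate).
    A candidate remains an edge block only when the number of its
    peripheral blocks that are NOT candidates ("cnt") is equal to or
    higher than `threshold`.
    """
    rows, cols = len(candidate_map), len(candidate_map[0])
    result = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not candidate_map[r][c]:
                continue  # only edge block candidates are examined
            cnt = 0  # peripheral blocks that are not candidates
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    # out-of-image neighbors are treated as flat portions
                    if not (0 <= rr < rows and 0 <= cc < cols) or not candidate_map[rr][cc]:
                        cnt += 1
            # cnt < threshold -> flag off (not an edge block); else flag on
            result[r][c] = cnt >= threshold
    return result
```

With a threshold of 2, for instance, a candidate surrounded entirely by other candidates is dropped, while a candidate bordering a flat region is kept.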
  • the CPU 1 conducts the encoding processing according to the edge information based on the map, and generates the moving image file including the image encoding result (encoding image).
  • the CPU 1 conducts the compression encoding on the original image on the basis of the map information that is obtained from the second edge detection result map in the image encoding processing.
  • the data size of the encoded image which is obtained by compression encoding of the image is predetermined.
  • the data amount (distribution amount of the data size) of the respective blocks that constitute the encoded image can be arbitrarily determined.
  • the information amount of the respective blocks in the encoded image is determined by the compression ratio of the original image data corresponding to the respective blocks.
  • The amount of information lost from the original image becomes smaller as the compression ratio becomes lower.
  • Accordingly, when the compression ratio of a certain block is lower than that of another block, the amount of information lost by compressing the certain block is smaller than that lost by compressing the other block.
  • As a result, the certain block has a larger information amount than the other block in the compression encoding result.
  • the image in the block has an image quality closer to that of the original image as the information amount of the block becomes larger.
  • In the compression encoding processing of the original image, the CPU 1 determines the information amounts of the respective blocks after compression encoding (the distribution of the maximum information amount that the encoded image can have), and executes the compression encoding processing with respect to the respective blocks at a compression ratio at which the information amount obtained as the compression result becomes the determined distribution amount.
  • the processing associated with the determination of the information amount (compression ratio) corresponds to the image quality improving processing to be conducted on the edge blocks.
  • the CPU 1 determines the distribution of the information amount in such a manner that the information amounts of the edge blocks are larger than those of the blocks (non-edge blocks) which are not the edge blocks. That is, the CPU 1 conducts the compression encoding control so that the compression encoding is conducted on the edge blocks at the compression ratio lower than that of the non-edge blocks.
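As a rough sketch of this compression encoding control, a fixed information budget can be divided among the blocks with a larger weight given to the edge blocks, so that edge blocks are effectively compressed at a lower ratio. The function and the weight values below are illustrative assumptions, not taken from the patent:

```python
def distribute_information(edge_flags, total_bits, edge_weight=3.0, flat_weight=1.0):
    """Split a fixed bit budget among blocks.

    edge_flags: list of booleans, one per block (True = edge block).
    Edge blocks receive a larger weight, hence a larger share of the
    total information amount (i.e., a lower compression ratio).
    Returns the bits allocated to each block.
    """
    weights = [edge_weight if is_edge else flat_weight for is_edge in edge_flags]
    total_weight = sum(weights)
    return [total_bits * w / total_weight for w in weights]
```

For example, with one edge block among four blocks and the default weights, the edge block receives half of the budget (3/6) and each non-edge block receives one sixth.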
  • The edge block candidates for which the image quality improvement has little effect are excluded through the second edge detecting processing. This means that the information amount that would have been allocated to the excluded edge block candidates can be reduced. The reduced amount can be allocated to the edge blocks that need to be improved in image quality. Accordingly, the image quality improvement of the edge blocks is made efficient.
  • In the second edge detecting processing (FIG. 8), it is determined that a target block is not an edge block when the number of blocks which are peripheral blocks of the target block but are not edge block candidates is lower than the threshold, as described above. As a result, it is possible to optimize the detected edge blocks for image encoding.
  • In a first specific example, the threshold used in the processing shown in FIG. 8B is adjusted (the threshold is set to 2) so that a target block is judged not to be an edge block when the number of its peripheral blocks that are not edge block candidates is equal to or lower than 1.
  • FIG. 9A is a diagram showing an example of an image (edge block candidate map) to be targeted in the second edge detecting processing according to the first specific example.
  • FIG. 9A shows a part of an image (map) composed of 4 ⁇ 4 blocks, which includes an object Y.
  • FIG. 9A shows that the blocks marked with a character “e” are edge block candidates.
  • the adjacent blocks (not shown) of the respective blocks that are positioned at end portions in FIG. 9A (blocks that belong to first or fourth row, and blocks that belong to A or D column) are flat portions (blocks that are not the edge block candidates).
  • In FIG. 9B, the blocks marked with the character “E” indicate blocks that have been extracted as the edge blocks.
  • The edge block candidates whose number of peripheral blocks that are not edge block candidates is equal to or lower than 1 are excluded from the edge blocks. That is, the edge block candidates other than the exclusion target blocks are extracted as the edge blocks.
  • Specifically, the blocks 2-C, 3-B, and 3-C corresponding to the exclusion target blocks are excluded, and the remaining edge block candidates are detected as the edge blocks. In this way, the edge block candidates that are recognized to have a low image quality improvement effect are excluded.
  • Accordingly, the CPU 1 can allocate the information amount for the image quality improvement which would have been allocated to the exclusion target blocks to the edge blocks other than the exclusion target blocks. In this way, the CPU 1 can allocate a large information amount to the edge blocks including edges that are in contact with a flat portion large in area, and conduct preferable image quality improving processing.
  • In some images, a spray of water or a gathering of small objects is dealt with as edge portions in the image encoding.
  • However, the deterioration of the image quality of the water spray or the gathered small objects is not conspicuous in evaluation of the image quality. From this viewpoint, it is desirable that blocks including a part where small edges are gathered are not detected as edge blocks.
  • In a second specific example, the threshold is adjusted (the threshold is set to 1) so that a target block is determined not to be an edge block in the case where all of the peripheral blocks of the target block are edge block candidates.
  • FIG. 10A is a diagram showing an example of an image (edge block candidate map) to be targeted to the second edge detecting processing according to the second specific example.
  • The images of FIGS. 10A and 10B are parts of a test chart of the Institute of Image Information and Television Engineers.
  • FIG. 10A shows an image composed of 4 ⁇ 4 blocks, which represents a spray of water.
  • the blocks marked with the character of “e” are representative of the edge block candidates.
  • the blocks marked with the character of “E” are representative of the edge blocks.
  • With the threshold set to 1, the edge block candidates that have no peripheral blocks that are not edge block candidates (that is, candidates all of whose peripheral blocks are edge block candidates) are excluded, and the edge block candidates other than the exclusion target blocks are extracted as the edge blocks as shown in FIG. 10B.
  • the blocks 2 -C, 3 -B, and 3 -C corresponding to the exclusion target blocks are excluded.
  • the blocks including the gathering of the minute edges are suppressed from being determined to be the edge blocks.
  • the information amount for improving the image quality which is allocated to the edge blocks to be improved in the image quality can be increased, and the appropriate image quality improvement and encoding can be conducted.
  • As shown in FIGS. 11A and 11B, there is a tendency that the deterioration of the image quality of an edge is more visible as the area of the flat portion that is in contact with the edge is larger.
  • Assume that a certain original image has the portions shown in FIGS. 11A and 11B, respectively.
  • a blank portion indicates the flat portion
  • a portion having a pattern indicates an object having the edge.
  • the area of the flat portion that is in contact with the edge shown in FIG. 11A is larger than the area of the flat portion shown in FIG. 11B . Accordingly, in the case where the image quality improving processing is executed with respect to only one of FIGS. 11A and 11B , it is preferable to select FIG. 11A from the viewpoint of obtaining the high image quality improving effect.
  • The CPU 1 of the information processing apparatus 10 sets the image quality improvement priority with respect to the respective edge block candidates that are indicated in the edge block candidate map. That is, a higher image quality improvement priority than that of the respective edge blocks in the portion shown in FIG. 11B can be given to the respective edge blocks shown in FIG. 11A.
  • the image quality improvement priority can be determined in accordance with the number of peripheral blocks which are not the edge block candidates of the respective edge blocks.
  • FIG. 12 is a flowchart showing an example of the priority giving processing which is executed by the CPU 1 of the information processing apparatus 10 .
  • a relation of the threshold value 1 and the threshold value 2 satisfies threshold value 1 >threshold value 2
  • a relation of the priorities 1 , 2 , and 3 satisfies priority 1 >priority 2 >priority 3 .
  • the processing shown in FIG. 12 starts at the time of completing the preparation of the edge block candidate map.
  • the CPU 1 specifies one of the edge block candidates that is a target edge block candidate from the edge block candidate map.
  • the CPU 1 conducts the same processing as the processing of S 201 to S 205 shown in FIG. 8B with respect to the target edge block candidate, and obtains the number of peripheral blocks (value of “cnt”) which are not the edge block candidates. Subsequently, the CPU 1 determines whether or not the value of “cnt” is equal to or higher than the predetermined threshold value 1 (S 301 ).
  • In the case where the value of “cnt” is equal to or higher than the threshold value 1 (S 301; YES), the CPU 1 sets (gives) the priority 1 to the edge block candidate (S 302), and completes the priority giving processing with respect to the target edge block candidate.
  • On the other hand, in the case where the value of “cnt” is lower than the threshold value 1 (S 301; NO), the CPU 1 determines whether or not the cnt value of the target edge block candidate is equal to or higher than the given threshold value 2 (S 303). In this situation, in the case where the cnt value is equal to or higher than the threshold value 2 (S 303; YES), the CPU 1 sets (gives) the priority 2 to the edge block candidate (S 304), and completes the priority giving processing with respect to the target edge block candidate.
  • In the case where the cnt value is lower than the threshold value 2 (S 303; NO), the CPU 1 sets (gives) the priority 3 to the edge block candidate (S 305), and completes the priority giving processing with respect to the target edge block candidate.
  • the CPU 1 executes the processing of S 301 to S 305 with respect to all of the edge block candidates represented in the edge block candidate map. In this way, the image quality improvement priority according to the detection status of the edge block candidates related to the peripheral blocks is given to the respective edge block candidates.
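The branch structure of Steps S 301 to S 305 reduces to a simple threshold cascade over the value of “cnt”. A minimal sketch (the function and parameter names are illustrative):

```python
def give_priority(cnt, threshold1, threshold2):
    """Map cnt (the number of peripheral blocks that are not edge block
    candidates) to a priority, following S301-S305.

    Assumes threshold1 > threshold2 and priority 1 > priority 2 > priority 3.
    """
    if cnt >= threshold1:   # S301; YES -> S302
        return 1
    if cnt >= threshold2:   # S303; YES -> S304
        return 2
    return 3                # S303; NO  -> S305
```

With threshold value 1 = 3 and threshold value 2 = 1, as in the FIG. 13 example, cnt values of 3 or more yield priority 1, values of 1 or 2 yield priority 2, and 0 yields priority 3.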
  • the image quality improvement priority map is prepared instead of the second edge detection result map on the basis of the edge block candidate map.
  • the image quality improvement priority map represents the edge block candidates and the priorities that are given to the edge block candidates.
  • the image quality improvement priority map is stored in the external storage device 3 as required.
  • the CPU 1 conducts the image encoding processing by using the image quality improvement priority map.
  • the respective edge block candidates are dealt with as the edge blocks in the image encoding processing.
  • In addition, the CPU 1 can execute the image quality improving processing depending on the image quality improvement priority given to each of the edge block candidates.
  • FIG. 13A is a diagram showing an example of an image to be targeted to the priority giving processing (an edge block candidate map), and FIG. 13B is an explanatory diagram showing the result of the priority giving processing with respect to the image shown in FIG. 13A .
  • FIG. 13A shows a part of an image (map) consisting of 4 ⁇ 4 blocks which includes an object Y as an example of the image to be targeted to the priority giving processing (edge block candidate map).
  • the adjacent blocks not shown of the respective blocks (blocks belonging to a first or fourth row, and blocks belonging to an A or D column) which are positioned at the end portions of FIG. 13A are the flat portions (blocks that are not the edge block candidates).
  • all of the blocks except for the block 1 -A and the block 1 -B are detected as the edge block candidates as a result of the first edge block detecting processing with respect to a part of the image (edge block candidates marked with the character of “e”).
  • numbers indicative of the priorities that are given according to the priority giving processing shown in FIG. 12 are expressed on the respective blocks that have been detected as the edge block candidates.
  • the number ( 1 ) indicated on the block 2 -A represents the priority 1
  • the number ( 2 ) indicated on the block 2 -B represents the priority 2
  • the number ( 3 ) indicated on the block 3 -B represents the priority 3 .
  • FIG. 13B shows the priority giving result in the case where “3” is set as the threshold value 1 and “1” is set as the threshold value 2 in the processing shown in FIG. 12.
  • Since blocks such as the block 2-A have three or more peripheral blocks which are not the edge block candidates, the priority 1 is set to those blocks.
  • Since the blocks 2-B and 2-C have one or two peripheral blocks which are not the edge block candidates, the priority 2 is set to those blocks.
  • Since the blocks 3-B and 3-C do not have peripheral blocks which are not the edge block candidates, the priority 3 is set to those blocks.
  • The CPU 1 conducts the image quality improving processing according to the above priorities. That is, the CPU 1 gives levels to the image quality improvement according to the priorities: differences are given to the information amounts (compression ratios) which are allocated to the respective edge blocks.
  • the image quality improving processing is executed so that the image quality becomes higher as the priority is higher.
  • the CPU 1 conducts the image quality improvement so that the edges become clear at the maximum degree with respect to the priority 1 .
  • the CPU 1 conducts the image quality improvement at a degree lower than the priority 1 with respect to the priorities 2 and 3 (priority 2 >priority 3 ).
  • The number of image quality improvement priorities is not limited to three as shown in the example of FIG. 12, and an arbitrary number of two or more can be selected.
  • the edge block candidates having a predetermined priority can be excluded from the edge blocks to be improved in the image quality.
  • In this case, the description of the corresponding edge block candidates in the image quality improvement priority map is changed to the same description as that of the blocks that are not the edge block candidates.
  • The edge block candidates whose description has been changed in this way are not dealt with as edge blocks in the image encoding processing. Accordingly, the same effects as those of the second edge detecting processing can be obtained.
  • Further, the processing shown in FIG. 12 may be executed with respect to the edge blocks extracted through the second edge detecting processing. That is, the priority giving processing may be executed with respect to the edge blocks.
  • FIG. 14 is a diagram showing an original image to be targeted in the image encoding
  • FIG. 15 is a diagram showing the detection result (edge block candidate map) of the edge blocks by the first edge block detecting processing.
  • FIG. 16 is a diagram showing the detection result (second edge detection result map) by the second edge block detecting processing.
  • The original image shown in FIG. 14 is a test chart of the Institute of Image Information and Television Engineers.
  • Dots shown in FIGS. 15 and 16 indicate edge blocks.
  • the detection result shown in FIG. 15 is obtained through the processing shown in FIGS. 5 and 6 as the first edge block detecting processing.
  • The edge blocks that are recognized as requiring no image quality improvement are remarkably reduced (excluded) by the second edge block detecting processing.
  • Specifically, the number of edge blocks detected by the first edge block detecting processing is 3613, whereas the number of edge blocks detected by the second edge block detecting processing is 686.
  • Reducing the number of detected edge blocks in this way makes it possible to execute the image encoding processing including the appropriate image quality improving processing of the edges.
  • As described above, the edge blocks significant in encoding are extracted from the edge block candidates that have been detected in the first edge block detecting processing. That is, the blocks that are recognized to have a low image quality improvement effect are excluded from the detection result of the edge blocks.
  • Since the number of edge blocks to be targeted in the encoding (image quality improvement) is reduced, the encoding (image quality improvement) processing can be effectively executed.
  • the information amounts that should be allocated to the blocks of the edge block candidates which have not been detected as the edge blocks can be allocated to the edge blocks.
  • the appropriate image quality improving processing of the edge blocks can be conducted.
  • the image quality improvement priority is given to each of the edge blocks through the priority giving processing, and the image quality improving processing is executed according to the priorities.
  • A load (CPU time, information amount) of the image quality improving processing on the respective edge blocks can be appropriately distributed, and the effective image quality improving processing can be executed.
  • The edge portions of an object in which the deterioration of the image quality due to the image encoding is conspicuous are properly detected, from the viewpoint of the encoding efficiency, through simple processing that is mountable even on a real-time image encoding apparatus (for communication). That is, the second edge block detecting processing and the priority giving processing described in this embodiment are simple processing using the result of the first edge block detecting processing, and the processing time required for them is short. Accordingly, there is little influence from introducing those processing steps.


Abstract

An image encoding apparatus includes a detection unit detecting edge block candidates as blocks including an edge in an image composed of plural blocks, and a determination unit excluding, from the detected edge block candidates, an edge block candidate that is recognized as not requiring the image quality improving processing executed in image encoding processing, and determining the remaining edge block candidates as edge blocks to be targeted in the image quality improving processing.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to a field of image encoding.
  • 2. Description of the Related Art
  • In a field of image encoding, there have been already proposed several techniques by which an edge (edge portion) of an object in an image is detected, and an image encoding control is executed on the basis of the detection result (edge information) (refer to Patent documents 1 to 5, for example).
  • There arises such a problem that, between an edge that forms a boundary of a complicated portion (large in change) and an edge that forms a boundary of a flat portion (small in change) in an image, the deterioration of the image quality at the edge that forms the boundary of the flat portion is more visible than at the boundary of the complicated portion. It is desired to prevent the deterioration of the image quality by detecting such edges and conducting image quality improving processing with high precision.
  • For example, in a method of detecting an edge by determining a difference of pixel values as disclosed in Patent documents 1 to 5, minute edges in pixel units (for example, patterns of an object) are detected, and the method is not suitable for conducting the image quality improving processing.
  • That is, because the minute edges have little influence on human eyes (vision), even if the image quality improving processing is executed, the improvement effect is low. On the other hand, in the image encoding, the data (information) size of an image after being encoded is predetermined. Therefore, if a large number of edges are detected, the data (information) amount for image quality improvement which can be allocated to each edge becomes smaller, and there is a possibility that the image quality improving processing cannot be conducted sufficiently.
  • As edge detecting methods, there exist techniques that are applied in an image recognizing field other than the techniques disclosed in the above-mentioned Patent documents 1 to 5. However, it is difficult to apply the edge detecting methods in the image recognizing field to the image encoding field that requires real-time operations because the edge detecting methods in the image recognizing field are complicated and voluminous as a whole.
  • [Patent document 1] JP 9-167240 A
  • [Patent document 2] JP 2001-14476 A
  • [Patent document 3] JP 9-120458 A
  • [Patent document 4] JP 2003-230147 A
  • [Patent document 5] JP 8-194824 A
  • SUMMARY OF THE INVENTION
  • An object of the present invention is to provide a technique that is capable of improving the efficiency of an image encoding processing that requires a real-time operation.
  • Also, another object of the present invention is to provide a technique that is capable of executing an appropriate image quality improving processing.
  • In order to solve the above-mentioned problem, the present invention applies the following structure.
  • That is, the present invention relates to an image encoding apparatus, including:
  • a detection unit dividing an input image into blocks composed of plural pixels and detecting edge block candidates as blocks including an edge in the image; and
  • an extraction unit excluding, from the detected edge block candidates, an edge block candidate that is recognized as not requiring the image quality improving processing executed in image encoding processing, and determining the remaining edge block candidates as edge blocks to be targeted in the image quality improving processing.
  • According to the present invention, an edge block candidate that does not require the image quality improving processing is excluded from the edge block candidates, and the remaining edge block candidates are extracted (determined) as edge blocks. As a result, since the number of edge blocks to be targeted in the image quality improving processing can be reduced, quantity of the image quality improving processing can be reduced. In addition, since edge optimizing processing in block unit is executed, efficient image encoding processing can be executed. Also, since the number of edge blocks is reduced, sufficient information amount for improving the image quality can be allocated to the remaining edge block candidates. As a result, the appropriate image quality improving processing can be executed.
  • Also, the present invention provides an image encoding apparatus, including:
  • a detection unit detecting edge blocks as blocks including an edge in an image composed of plural blocks; and
  • a giving unit giving a priority of image quality improving processing executed in encoding processing of the image to the detected edge blocks.
  • According to the present invention, the image quality improving processing according to the priority is executed in the image encoding processing. In this situation, the image quality improving processing of each edge block is executed so as to obtain the image quality corresponding to the priority. That is, a level can be given to the image quality improving processing under limited conditions. As a result, the appropriate image quality improving processing can be conducted.
  • Also, the present invention can be specified as an image encoding method and a recording medium storing a computer program, each having the same features as the image encoding apparatus.
  • According to the present invention, the efficiency of the image encoding can be improved in the image encoding processing that requires the real-time operations.
  • Also, according to the present invention, the appropriate image quality improving processing can be executed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram exemplifying a problem in a conventional art;
  • FIG. 2 is a schematic explanatory diagram showing an image encoding apparatus according to the present invention;
  • FIG. 3 is a flowchart showing the detail of a second edge detecting process in the process shown in FIG. 2;
  • FIG. 4 is a diagram showing a structural example of an information processing apparatus that can function as the image encoding apparatus (a structural example of an image encoding apparatus);
  • FIG. 5A is a diagram for explaining a target pixel that is defined in a first edge detecting process, and close pixels of the target pixel;
  • FIG. 5B is a flowchart showing an example of the first edge detecting process;
  • FIG. 6 is a flowchart showing an example of an edge block candidate determining process;
  • FIG. 7 is a diagram showing an example of a flat portion and a complicated portion in an image;
  • FIG. 8A is a diagram showing a definitional example of the target block and the close blocks in the second edge detecting process;
  • FIG. 8B is a flowchart showing an example of the second edge detecting process which is executed according to the definition shown in FIG. 8A;
  • FIG. 9A is a diagram showing an example of an image to be targeted to the second edge detecting process according to a first specific example;
  • FIG. 9B is a diagram showing the result of executing the second edge detecting process (threshold=2) shown in FIG. 8 with respect to the image shown in FIG. 9A;
  • FIG. 10A is a diagram showing an example of an image to be targeted to the second edge detecting process according to a second specific example;
  • FIG. 10B is a diagram showing the result of executing the second edge detecting process (threshold=1) shown in FIG. 8B with respect to the image shown in FIG. 10A;
  • FIGS. 11A and 11B are explanatory diagrams showing a flat portion that is in contact with the edge in the image;
  • FIG. 12 is a flowchart showing an example of a priority giving process;
  • FIG. 13A is a diagram showing an image example to be targeted to the priority giving process;
  • FIG. 13B is an explanatory diagram showing the result of the priority giving process with respect to the image shown in FIG. 13A;
  • FIG. 14 is a diagram showing an original image to be targeted in the image encoding;
  • FIG. 15 is a diagram showing the detection result (edge block candidate map) of the edge blocks by the first edge block detecting process that is executed with respect to the original image shown in FIG. 14; and
  • FIG. 16 is a diagram showing the detection result (second edge detection result map) of the edge blocks by the second edge block detecting process that is executed with respect to the original image shown in FIG. 14.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Hereinafter, a description will be given of embodiments according to the present invention with reference to the accompanying drawings. The structures of the embodiments are examples, and the present invention is not limited to the structures of the embodiments.
  • [Outline of the Invention]
  • First, an outline of the present invention will be described. There is an image encoding apparatus that conducts encoding processing on an image, which has been divided into plural blocks, on a block basis. In edge detecting methods (for example, Patent documents 1 to 5) which have been conventionally applied to the image encoding apparatus, there is a tendency to excessively detect blocks (edge blocks) including edges. This is because, for example, patterns of an object in the image are determined (detected) as minute edges.
  • To cope with the above problem, it has been proposed that a threshold value for determining whether or not an edge is included is adjusted to reduce the number of detected edge blocks. However, with such threshold adjustment, there is a case in which edge blocks significant for the image encoding do not remain.
  • FIG. 1 is a diagram exemplifying a problem in the conventional art. In FIG. 1, an object of a trapezoid having patterns is included in an image. When an edge detecting method according to the conventional art is applied to this image, for example, both of blocks 1 and 2 are detected as edge blocks due to the pattern (mesh pattern) of the object.
  • In this case, it is desirable that only the block 1, through which a contour of the object passes, is detected as a significant edge block, and the block 2 is not detected as an edge block. As usual, an edge block becomes a target of the image quality improving processing. Since the total data (information) amount that can be used in the image encoding is limited, it is preferable that the number of edge blocks to be improved in the image quality improving processing is smaller. Also, since a minute edge such as the pattern is not conspicuous even if its image quality is deteriorated, the influence of the minute edge on a person is small. That is, even if the image quality improving processing is conducted on the block 2, its visual effect is low. On the other hand, the deterioration of the contour line that intersects with the block 1 is recognized by a person as a conspicuous image quality deterioration. Therefore, it is desirable that only the block 1 is detected as the edge block.
  • However, as shown in FIG. 1, the patterns included in the respective blocks 1 and 2 weigh heavily as a determination element which leads to the determination result that the blocks 1 and 2 are edge blocks. That is, the block 1 is similar to the block 2 in the determination element. In this case, when the threshold value is adjusted, neither of the blocks 1 and 2 is detected as an edge block, with the result that neither of those blocks is to be improved in the image quality improving processing.
  • Under the above circumstances, according to the present invention, in order to detect the edge block which is recognized to be significant in image encoding (to be targeted on the image quality improving processing), the edge block is detected by means of the edge detecting methods disclosed in Patent documents 1 to 5 (first edge detecting processing), and a second edge detecting processing is conducted by using the result of the first edge detecting processing. That is, the edge blocks that have been detected by the first edge detecting processing are defined as edge block candidates, and significant edge blocks in image encoding are extracted from the edge block candidates.
  • In other words, according to the present invention, the edge blocks that are recognized to be edge blocks requiring no improvement in the image quality (low effect of the image quality improvement) are excluded from the edge blocks that are detected in the first edge detecting processing. In this way, according to the present invention, only the appropriate edge blocks are extracted by conducting the edge determinations at multiple stages.
  • FIG. 2 is a schematic explanatory diagram showing an image encoding processing according to the present invention. In FIG. 2, a narrowly defined first edge detecting processing (edge pixel detecting processing) is first executed with respect to an original image to be encoded (ST1). The conventional arts disclosed in Patent documents 1 to 5 can be applied as the first edge detecting processing as described above.
  • Subsequently, edge block candidate determining processing is conducted according to the result of the first edge detecting processing (ST2). That is, with respect to a predetermined number of blocks into which the original image is divided, it is determined, according to the result of the first edge detecting processing, whether or not each block is an edge block candidate. The first edge detecting processing (ST1) is conducted on a pixel basis (unit), and blocks in which the number of edge pixels is equal to or higher than a predetermined threshold value are determined as the edge block candidates.
  • Subsequently, a generating and storing processing of an edge block candidate map is executed (ST3). That is, a map (edge block candidate map) in which a flag allocated to each block corresponding to the original image is turned on (is a candidate) or turned off (is not a candidate) is generated according to the result of the edge block candidate determining processing, and then stored in a predetermined storage area.
  • The first edge detecting processing (edge pixel detecting processing: ST1), the edge block candidate determining processing (ST2), and the edge block candidate map generating and storing processing (ST3) as described above constitute a broadly defined first edge detecting processing (first edge block detecting processing).
  • Subsequently, the second edge detecting processing is executed (ST4). That is, it is determined whether or not the respective blocks of the edge block candidates include the significant edges (to be improved in the image quality), and the edge block candidates including the significant edges are determined as the edge blocks. In this situation, the edge blocks including no significant edge (requiring no improvement in the image quality) are excluded.
  • Subsequently, the generating and storing processing of a second edge detection result map is executed (ST5). That is, a map (second edge detection result map) in which a flag allocated to each block corresponding to the original image is turned on (is an edge block) or turned off (is not an edge block) is generated according to the result of the second edge detecting processing, and then stored in a predetermined storage area.
  • The second edge detecting processing (ST4) and the second edge block detection result map generating and storing processing (ST5) as described above constitute a broadly defined second edge detecting processing (second edge block detecting processing).
  • Finally, the image encoding processing using information (edge information) represented in the second edge detection result map is executed (ST6).
  • FIG. 3 is a flowchart showing the detail of the second edge detecting processing in the processing shown in FIG. 2. In FIG. 3, the first edge detecting processing (edge pixel detecting processing) is first executed on the original image, and the edge block candidate determining processing is executed as the broadly defined first edge detecting processing (S01: ST1, ST2).
  • Subsequently, the generating and storing processing of the edge block candidate map is executed (S02: ST3). Then, processing of the following Steps S03 to S08 is executed as the second edge detecting processing (second edge block detecting processing).
  • When the second edge detecting processing starts, the edge block candidate map is read, one (target block to be processed) of the blocks in the edge block candidate map is specified, and it is determined whether or not the target block is an edge block candidate (S03).
  • In this situation, when the target block is not the edge block candidate (S03; NO), a flag (edge block flag) corresponding to the target block in the second edge detection result map is turned off (S07), and the processing is advanced to Step S08.
  • On the other hand, when the target block is the edge block candidate (S03; YES), the edge block determining processing is executed (S04). That is, it is determined whether or not the target block includes a significant edge (S05).
  • In this situation, when the target block includes the significant edge, the target block is determined as the edge block (S05; YES), and the edge block flag in the second edge detection result map corresponding to the target block is turned on (S06). Thereafter, the processing is advanced to Step S08.
  • On the other hand, when the target block does not include the significant edge, it is determined that the target block is not the edge block (S05; NO), and the corresponding edge block flag is turned off (S07). Thereafter, the processing is advanced to Step S08.
  • In Step S08, it is determined whether or not the target block is a final block. When the target block is not a final block (S08; NO), the processing is returned to Step S03 in order to execute the processing of steps S03 to S07 with respect to the remaining blocks. On the other hand, when the target block is the final block (S08; YES), the second edge detection result map is stored in a predetermined storage area. The second edge detecting processing is completed.
  • Thereafter, the image encoding processing is executed by using the edge information based on the second edge detection result map. In this situation, the number of edge blocks is reduced by removing the edge block candidates that do not require the image quality improvement in the second edge detecting processing. Accordingly, the encoding processing can be saved in labor (made efficient) by a reduction in the number of edge blocks.
  • Also, the reduction in the number of edge blocks means an increase in the information (image information) amount which is supplied to the respective edge blocks in order to improve the image quality. As a result, the appropriate image quality improving processing can be conducted.
  • In this situation, the priority of each of the edge blocks may be set with respect to the image quality improving process so that the information amount (image information amount) that is allocated to the respective edge blocks can be controlled according to the priority. As a result, it is possible to execute the efficient image quality improving processing. Alternatively, it is possible to execute the image quality improving processing in the higher order of priority.
  • EMBODIMENTS
  • Embodiments based on the above-described outline according to the present invention will be described.
  • <Apparatus Structure>
  • FIG. 4 is a diagram showing a structural example of an information processing apparatus that can function as an image encoding apparatus (a structural example of the image encoding apparatus). In FIG. 4, an information processing apparatus 10 includes a CPU (central processing unit: processor) 1, a main memory (MM: for example, RAM (random access memory)) 2, an external storage device (for example, hard disk) 3, an input/output interface (I/F) 4, and a communication interface (communication I/F) 5 that are mutually connected through a bus B.
  • The external storage device 3 stores a program, executed by the CPU 1, for allowing the information processing apparatus 10 to function as the image encoding apparatus, and data that is used at the time of executing the program. In particular, the external storage device 3 includes a storage area 31 to store the edge block candidate map (first map) and a storage area 32 to store the second edge detection result map (second map) as shown in FIG. 2.
  • The CPU 1 loads the program that is stored in the external storage device 3 to the MM 2 and then executes the program. This realizes the first edge detecting processing (ST1), the edge block candidate determining processing (ST2), the edge block candidate map generating and storing processing (ST3), the second edge detecting processing (ST4), the second edge detection result map generating and storing processing (ST5), and the image encoding processing, as shown in FIG. 2.
  • As the image encoding processing, a compression encoding processing is executed in conformity to the encoding standard of a moving picture such as MPEG-1, MPEG-2, H.264, or MPEG-4, to thereby generate a moving picture file including moving image data into which an original image has been compressed and encoded. The moving image (image) file thus generated is stored in the external storage device 3.
  • The I/F 4 is connected to an input device of the image or the moving image such as a camera. The image data or the moving image data which is inputted through the I/F 4 is stored in the external storage device 3, and dealt with as data (original image data) to be encoded by the CPU 1. Also, the I/F 4 is connected to an output device of the image or the moving image such as a display, and the moving image based on the moving image file which has been generated by the CPU 1 can be displayed on the output device.
  • The communication I/F 5 is connected to a communication device (for example, a server or a terminal device) through a network such as the Internet. The moving image file (download file or stream distribution format file) which is generated by the CPU 1 is transmitted to a transmission destination from the communication I/F 5 via the network.
  • With the above structure, the information processing apparatus 10 can distribute the moving image that has been photographed by the camera via the network.
  • The structure shown in FIG. 4 may be replaced by a structure in which a part or all of the processing shown in FIG. 2 are realized by hardware logic or the combination of the hardware logic with a software processing.
  • In the structure shown in FIG. 4, the CPU 1 can function as a detection unit (a detecting section), an extraction (determination) unit (an extracting (determining) section), a giving unit (a giving section), and a control unit (a control section, an image encoding section, and an image quality improving section) according to the present invention.
  • <Image Encoding Processing>
  • Subsequently, a description will be given of an image encoding processing (method) in the information processing apparatus 10 (image encoding apparatus). The image encoding processing includes an edge block detecting processing as shown in FIG. 2, and the edge block detecting processing includes the above-mentioned first and second edge block detecting processing. Also, the image encoding processing includes an image quality improving processing for the edge block.
  • <<First Edge Block Detecting Processing>>
  • [First Edge Detecting Processing (Edge Pixel Detecting Processing)]
  • Subsequently, a description will be given of a first edge detecting processing (edge pixel detecting processing) which is executed by the information processing apparatus 10. FIG. 5A is an explanatory diagram for explaining a target pixel that is defined in the first edge detecting processing, and peripheral pixels (close pixels) of the target pixel. FIG. 5B is a flow chart showing an example of the first edge detecting processing that is executed by the CPU 1. In this example, an edge detecting processing using a pixel difference is applied as the first edge detecting processing.
  • FIG. 5A shows 9 pixels that constitute a picture to be encoded (original picture). In the first edge detecting processing, it is determined on a pixel basis (unit) whether or not the pixel is an edge pixel that constitutes the edge. In an example shown in FIG. 5A, in the case where the pixel 4 is a pixel to be determined (target pixel), pixels 0 to 3 and pixels 5 to 8 corresponding to adjoining pixels of the pixel 4 are defined as the peripheral pixels of the target pixel. FIG. 5B shows the processing in the case where the pixel 4 is the target pixel.
  • In FIG. 5B, when the processing starts, the CPU 1 sets a value “i” indicative of the peripheral pixels to zero, and also sets a sum value “dsum”, which is a sum of the absolute values “d” of the differences between the pixel values (the pixel value of the pixel “i” minus the pixel value of the target pixel (pixel 4)), to zero (S001).
  • Subsequently, the CPU 1 obtains the absolute value “d” of the difference between the pixel “i” (initially the pixel 0) and the pixel 4 (S002), and adds the absolute value “d” thus obtained to the current value of “dsum” (S003). Then, the CPU 1 determines whether or not the current value of “i” is 8 (S004). When the current value of “i” is not 8 (S004; NO), the CPU 1 adds 1 to the current value of “i” (S005), and the processing is returned to Step S002.
  • The sum value of “dsum” of the differences between the pixel 4 and each of the peripheral pixels is obtained through the loop processing of Steps S002 to S005. Thereafter, when it is determined that “i” is 8 in Step S004 (S004; YES), the CPU 1 determines whether or not the value of “dsum” exceeds a predetermined threshold value (threshold 1) (S006).
  • In this situation, when the value of “dsum” exceeds the threshold 1 (S006; YES), the CPU 1 turns on an edge pixel flag of the target pixel (pixel 4) (S007), whereas when the value of “dsum” does not exceed the threshold 1 (S006; NO), the CPU 1 turns off the edge pixel flag (S008). Thereafter, the CPU 1 finishes the first edge detecting processing with respect to the target pixel.
  • In this example, a map (edge pixel map) for indicating whether or not each of the pixels of the original image is the edge pixel is prepared on a storage area (for example, external memory device 3) that can be accessed by the CPU 1. The edge pixel flags (for example, on (an edge pixel) or off (not an edge pixel)) indicative of whether or not each of the pixels is the edge pixel are prepared in each field corresponding to each of the pixels in the map. The CPU 1 turns on/off the edge pixel flags on the basis of the determination result of Step S006 in Steps S007 and S008.
  • The CPU 1 executes the above-described first edge detecting processing (S001 to S008) with all of the pixels that constitute the original image as the target pixels. As a result, the edge pixel map indicative of the distribution of the edge pixels in the original image is generated.
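  • The loop of Steps S001 to S008 described above can be sketched as follows. This is a minimal illustration in Python, assuming a grayscale image given as a 2-D list and an arbitrarily chosen threshold value; the function and variable names are illustrative, not from the patent, and border pixels (which lack some of the eight peripheral pixels) are simply skipped here for brevity.

```python
THRESHOLD_1 = 100  # assumed value; the patent does not fix a concrete number

def edge_pixel_map(image, threshold=THRESHOLD_1):
    """image: 2-D list of pixel values; returns a same-sized map of edge pixel flags."""
    h, w = len(image), len(image[0])
    flags = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):              # only interior pixels have all 8 neighbours
        for x in range(1, w - 1):
            dsum = 0
            for dy in (-1, 0, 1):          # loop over the peripheral pixels (S002-S005)
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue           # skip the target pixel itself
                    dsum += abs(image[y + dy][x + dx] - image[y][x])
            flags[y][x] = dsum > threshold  # S006-S008: turn the edge pixel flag on/off
    return flags
```

A sharp luminance step produces a large “dsum” along the step and a small one in flat regions, so the flag map traces the contours of the original image.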
  • [Edge Block Candidate Determining Processing]
  • Subsequently, a description will be given of the edge block candidate determining processing that is executed by the information processing apparatus 10. FIG. 6 is a flowchart showing an example of the edge block candidate determining processing which is executed by the CPU 1 of the information processing apparatus 10.
  • The edge block candidate determining processing shown in FIG. 6 is executed in each of the blocks into which the original image has been divided. In the present invention, the block can be defined as an aggregate of one or more pixels. The number of pixels that constitute the block can be selected from an arbitrary number of 1 or more.
  • In FIG. 6, when the processing starts, the CPU 1 sets a value “i” that specifies one of the pixels included in the target block, and a value “dcnt” indicative of the number of edge pixels to zero, respectively (S101).
  • Subsequently, the CPU 1 determines whether or not the edge pixel flag of the pixel corresponding to the current value “i” is “on”, with reference to the edge pixel map (S102). In this situation, when the edge pixel flag is “off” (S102; NO), the CPU 1 advances the processing to Step S104. On the other hand, when the edge pixel flag is “on” (S102; YES), the CPU 1 adds 1 to the current value of “dcnt” (S103), and advances the processing to Step S104.
  • In Step S104, the CPU 1 determines whether or not the current value of “i” is indicative of a final pixel within the target block. In this situation, in the case where the value of “i” is not indicative of the final pixel (S104; NO), the CPU 1 adds 1 to the value of “i” (S105), and the processing is returned to Step S102. The number of edge pixels included within the target block is counted through the loop processing of Steps S102 to S105.
  • Thereafter, in the case where the value of “i” indicative of the final pixel is detected in Step S104 (S104; YES), the CPU 1 determines whether or not the value of “dcnt” exceeds a predetermined threshold value (threshold 2) (S106). In this situation, in the case where the value of “dcnt” exceeds the threshold 2 (S106; YES), the CPU 1 turns on the edge block candidate flag with respect to the target block (S107). On the other hand, in the case where the value of “dcnt” does not exceed the threshold 2 (S106; NO), the CPU 1 turns off the edge block candidate flag (S108). Thereafter, the CPU 1 finishes the edge block candidate determining processing with respect to the target block.
  • In this example, the edge block candidate map is stored in the storage area 31 of the external storage device 3. The edge block candidate map has edge block candidate flags corresponding to each of the blocks. The CPU 1 reads the edge block candidate map, and turns on/off the edge block candidate flag of the target block according to the determination result of Step S106.
  • The CPU 1 executes the above-mentioned edge block candidate determining processing (S101 to S108) with all of the blocks corresponding to the original image as the target blocks. As a result, the edge block candidate map becomes in a state where only the edge block candidate flags corresponding to the blocks that are determined as the edge block candidates are turned on. This edge block candidate map is stored in the storage area 31.
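  • The counting of Steps S101 to S108 can be sketched as follows, assuming square blocks of an illustrative size and an arbitrarily chosen threshold 2; the names and the values are illustrative only. The input is an edge pixel map such as the one produced by the first edge detecting processing.

```python
THRESHOLD_2 = 4  # assumed value of the threshold 2
BLOCK = 8        # assumed block edge length in pixels

def edge_block_candidate_map(pixel_flags, block=BLOCK, threshold=THRESHOLD_2):
    """pixel_flags: 2-D boolean edge pixel map; returns a per-block candidate flag map."""
    h, w = len(pixel_flags), len(pixel_flags[0])
    rows, cols = h // block, w // block
    cand = [[False] * cols for _ in range(rows)]
    for by in range(rows):
        for bx in range(cols):
            # S102-S105: count the edge pixels ("dcnt") inside the target block
            dcnt = sum(pixel_flags[by * block + y][bx * block + x]
                       for y in range(block) for x in range(block))
            cand[by][bx] = dcnt > threshold  # S106-S108: set the candidate flag
    return cand
```

The resulting map corresponds to the edge block candidate map stored in the storage area 31.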
  • <<Second Edge Block Detecting Processing>>
  • [Second Edge Detecting Processing]
  • Subsequently, a description will be given of the second edge detecting processing that is executed by the information processing apparatus 10. In the second edge detecting processing, the edge blocks significant to the encoding (image quality improvement) are extracted from the edge block candidates by using the edge block candidate map that has been prepared on the basis of the first edge detecting processing.
  • The edge to be improved in the image quality is an edge that is in contact with a flat portion in the image. For example, FIG. 7 shows a flat portion and a complicated portion in an image, namely a scene where a person marches on a track of an athletic field. The image shown in FIG. 7 is a test chart of the Institute of Image Information and Television Engineers.
  • In this image, the part of the spectators' seats is complicated, so even if the edge of that part is slightly deteriorated, the deterioration is not distinguished by human eyes. On the other hand, the deterioration of the edge of the line portion that separates the field from the track in the athletic field is recognized by human eyes as an image quality deterioration, because the field is flat.
  • This means that most of the peripheral blocks (close block: for example, blocks of the field portion in FIG. 7) of a block that is required to be detected as the edge block to be improved in the image quality are not determined to be the edge block candidate in the edge block candidate determining processing (ST2: FIG. 6). Accordingly, it can be determined whether or not the target block is a block including the edge that is in contact with the flat portion, by checking the distribution state of the edge block candidates with respect to the peripheral blocks of the target block.
  • For that reason, in the second edge detecting processing, the edge determination results (edge block candidate determining processing results) of the peripheral blocks of the target block are used for determining whether or not the target block is the edge block. FIG. 8A is a diagram showing a definitional example of the target block and the peripheral blocks in the second edge detecting processing, and FIG. 8B is a flowchart showing an example of the second edge detecting processing which is executed according to the definition shown in FIG. 8A.
  • In the example shown in FIG. 8A, when a block having hatching is determined as the target block X, adjacent blocks 1 to 8 that are adjacent to the target block X are defined as the peripheral blocks. The peripheral blocks can include blocks adjacent to the adjacent blocks. Also, one or more blocks that are arbitrarily selected from the adjacent blocks can be defined as the peripheral blocks.
  • Note that, FIG. 8A shows an example in which the blocks having the adjacent blocks in eight directions are the target blocks. In the case where the block 1 is a block that constitutes a corner of the image, the adjacent blocks are three blocks of 2, X, and 4. Also, in the case where the image is divided into nine blocks as shown in FIG. 8A, the adjacent blocks of a block that forms an end portion of the image such as the block 4 are five blocks of 1, 2, X, 7, and 6.
  • FIG. 8B shows the second edge detecting processing in the case where the block X is the target block. The second edge detecting processing is executed by the CPU 1 (FIG. 4). When the processing starts, the CPU 1 reads the edge block candidate map from the storage area 31, and reads a second edge detection result map (initial state) from the storage area 32.
  • The CPU 1 first sets a value “i” indicative of the peripheral block to “1” (indicative of the block 1), and sets a value “cnt” indicative of the number of peripheral blocks which are not the edge block candidates to zero (S201).
  • Subsequently, the CPU 1 determines whether or not the block “i” is an edge block candidate with reference to the edge block candidate map (S202). In this situation, when the block “i” is the edge block candidate (S202; YES), the CPU 1 advances the processing to Step S204. On the other hand, when the block “i” is not the edge block candidate (S202; NO), the CPU 1 adds 1 to the value of “cnt” (S203), and the processing is advanced to Step S204.
  • In Step S204, the CPU 1 determines whether or not the current value of “i” indicates a final block (block 8 in this example). In this situation, when the current value of “i” is not 8 (S204; NO), the CPU 1 adds 1 to the current value of “i” (S205), and the processing is returned to Step S202. The CPU 1 determines whether or not each of the peripheral blocks is the edge block candidate through the loop processing of Steps S202 to S205, and counts the number of peripheral blocks which are not the edge block candidates.
  • Thereafter, when it is determined that the value of “i” is a value indicative of the final block (i=8) in Step S204 (S204; YES), the CPU 1 determines whether or not the value of “cnt” (the number of peripheral blocks that are not the edge block candidates) is lower than the threshold value (S206).
  • In the case where the value of “cnt” is lower than the threshold (S206; YES), the CPU 1 determines that the target block is not the edge block, and turns off the edge block flag of the corresponding block of the second edge detection result map (S207). Thereafter, the CPU 1 finishes the second edge detecting processing with respect to the target block.
  • On the contrary, when the value of “cnt” is equal to or higher than the threshold (S206; NO), the CPU 1 determines that the target block is the edge block, and turns on the corresponding edge block flag (S208). Thereafter, the CPU 1 finishes the second edge detecting processing with respect to the target block.
  • The CPU 1 executes the second edge detecting processing (S201 to S208) shown in FIG. 8B with respect to all of the blocks in the edge block candidate map. As a result, the target blocks in which the number of peripheral blocks that are not the edge block candidates is equal to or higher than the threshold are extracted from the edge block candidates as the edge blocks. Then, the second edge detection result map indicative of the distribution of the edge blocks in the image is stored in the storage area 32. Thereafter, the CPU 1 conducts the encoding processing according to the edge information based on the map, and generates the moving image file including the image encoding result (encoded image).
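  • The per-block loop of Steps S201 to S208 can be sketched as follows. The names are illustrative only; blocks outside the map are counted here as non-candidates (flat portions), consistent with the assumption stated for the example of FIG. 9A, whereas the patent alternatively restricts the peripheral blocks at image borders to those actually present.

```python
def second_edge_detection(cand, threshold):
    """cand: 2-D boolean edge block candidate map; returns the edge block map."""
    rows, cols = len(cand), len(cand[0])
    result = [[False] * cols for _ in range(rows)]
    for by in range(rows):
        for bx in range(cols):
            if not cand[by][bx]:
                continue                        # S03; NO: the edge block flag stays off
            cnt = 0
            for dy in (-1, 0, 1):               # eight peripheral blocks (S202-S205)
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = by + dy, bx + dx
                    outside = not (0 <= ny < rows and 0 <= nx < cols)
                    if outside or not cand[ny][nx]:
                        cnt += 1                # S203: count the non-candidate neighbours
            result[by][bx] = cnt >= threshold   # S206-S208: keep only significant edges
    return result
```

A candidate surrounded entirely by other candidates (a complicated area) yields cnt=0 and is excluded, while a candidate bordered by flat blocks is kept.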
  • [Image Encoding Processing]
  • The CPU 1 conducts the compression encoding on the original image on the basis of the map information that is obtained from the second edge detection result map in the image encoding processing. Herein, the data size of the encoded image which is obtained by compression encoding of the image is predetermined. On the other hand, the data amount (distribution amount of the data size) of the respective blocks that constitute the encoded image can be arbitrarily determined.
  • The information amount of each block in the encoded image is determined by the compression ratio applied to the original image data of that block. The lower the compression ratio, the smaller the information amount that is removed from the original image. For example, in the case where the compression ratio of a certain block is lower than that of another block, the information amount removed by the compression of the certain block is smaller than that of the other block. Accordingly, the certain block has a larger information amount than the other block in the result of the compression encoding. The larger the information amount of a block, the closer the image quality of the image in the block is to that of the original image.
  • Accordingly, the CPU 1 determines the information amounts of the respective blocks after being compressed and encoded (the distribution amount of the maximum information amount that the encoded image can have), and executes the compression encoding processing with respect to the respective blocks at the compression ratio at which the information amount obtained as the compression result becomes the distribution amount in the compression encoding processing of the original image. The processing associated with the determination of the information amount (compression ratio) corresponds to the image quality improving processing to be conducted on the edge blocks.
  • In this situation, the CPU 1 determines the distribution of the information amount in such a manner that the information amounts of the edge blocks are larger than those of the blocks (non-edge blocks) which are not the edge blocks. That is, the CPU 1 conducts the compression encoding control so that the compression encoding is conducted on the edge blocks at the compression ratio lower than that of the non-edge blocks.
  • Herein, the edge block candidates that are low in the effect of the image quality improvement are excluded from the results of the second edge detecting processing. This means that the information amount that is allocated to the excluded edge block candidates may be reduced. The reduced amount can be allocated to the edge blocks that need to be improved in the image quality. Accordingly, the image quality improvement of the edge blocks is made efficient.
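  • The idea of distributing a fixed total information amount in favor of the edge blocks can be sketched as follows. The proportional weighting and the weight value are assumptions for illustration only; an actual encoder would control the compression ratio (for example, the quantization step) of each block rather than assign bit counts directly.

```python
def allocate_bits(edge_map, total_bits, edge_weight=2.0):
    """edge_map: 2-D booleans from the second edge detection result map.
    Splits total_bits so that edge blocks receive edge_weight times the
    share of non-edge blocks (i.e. are compressed at a lower ratio)."""
    weights = [[edge_weight if flag else 1.0 for flag in row] for row in edge_map]
    wsum = sum(sum(row) for row in weights)            # total weight over all blocks
    return [[total_bits * w / wsum for w in row] for row in weights]
```

Because excluded candidates count as non-edge blocks, their share automatically flows to the remaining edge blocks, which is the efficiency gain described above.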
  • <SPECIFIC EXAMPLE>
  • Subsequently, a description will be given of a specific example using an image encoding method (an edge block detecting method) by means of the above-mentioned information processing apparatus 10.
  • <<First Specific Example>>
  • In execution of the image quality improvement in the image encoding processing, there are limited conditions (for example, the maximum data size of the encoded image). For that reason, it may be desirable that only the edge blocks that are recognized to exhibit the high image quality improvement effect are to be improved in the image quality.
  • Under the above circumstances, in the second edge detecting processing (FIG. 8B), it is determined that a target block is not the edge block when the number of peripheral blocks of the target block that are not the edge block candidates is lower than the threshold, as described above. As a result, it is possible to optimize the detected edge blocks for the image encoding.
  • For example, the threshold that is used in the processing shown in FIG. 8B is adjusted (the threshold is set to 2) so as to determine that the target block is not the edge block when the number of peripheral blocks that are not the edge block candidates is equal to or lower than 1.
  • FIG. 9A is a diagram showing an example of an image to be targeted to the second edge detecting processing (edge block candidate map) according to the first specific example, and FIG. 9B is a diagram showing the results of executing the second edge detecting processing (threshold=2) shown in FIG. 8 with respect to the image shown in FIG. 9A.
  • FIG. 9A shows a part of an image (map) composed of 4×4 blocks, which includes an object Y. FIG. 9A shows that the blocks marked with a character “e” are edge block candidates. The adjacent blocks (not shown) of the respective blocks that are positioned at end portions in FIG. 9A (blocks that belong to first or fourth row, and blocks that belong to A or D column) are flat portions (blocks that are not the edge block candidates). In FIG. 9B, the blocks marked with the character of “E” are indicative of blocks that have been extracted as the edge blocks.
  • When the second edge detecting processing is executed with respect to the image (edge block candidate map) shown in FIG. 9A, the blocks in which the number of peripheral blocks that are not the edge block candidates is equal to or lower than 1 (referred to as “exclusion target blocks”) are excluded from the edge blocks. That is, the edge block candidates other than the exclusion target blocks are extracted as the edge blocks.
  • In the example of FIG. 9B, blocks 2-C, 3-B, and 3-C corresponding to the exclusion target blocks are excluded, and the remaining edge block candidates are detected as the edge blocks. In this way, the edge block candidates that are recognized to exhibit the low effect of the image quality improvement are excluded.
  • As a result, the CPU 1 can allocate the information amount for the image quality improvement, which would otherwise be allocated to the exclusion target blocks, to the edge blocks other than the exclusion target blocks. Accordingly, the CPU 1 can allocate a large information amount to the edge blocks including the edges that are in contact with the flat portion having a large area, and conduct the preferred image quality improving processing.
  • <<Second Specific Example>>
  • A spray of water or a gathering of small objects is dealt with as an edge portion in the image encoding. However, deterioration of the image quality of the water spray or the gathered small objects is hardly distinguished in evaluation of the image quality. From this viewpoint, it is desirable that the blocks including a part where the small edges are gathered are not detected as the edge blocks.
  • Under the above circumstances, in the second edge detecting processing shown in FIG. 8B, the threshold is adjusted (the threshold is set to 1) so that it is determined that the target block is not the edge block in the case where all of the peripheral blocks of the target block are the edge block candidates.
  • FIG. 10A is a diagram showing an example of an image (edge block candidate map) to be targeted to the second edge detecting processing according to the second specific example. FIG. 10B is a diagram showing the result of executing the second edge detecting processing (threshold=1) shown in FIG. 8B with respect to the image shown in FIG. 10A. The images of FIGS. 10A and 10B are parts of the test chart of the institute of image information and television engineers.
  • FIG. 10A shows an image composed of 4×4 blocks, which represents a spray of water. In FIG. 10A, the blocks marked with the character of “e” are representative of the edge block candidates. In FIG. 10B, the blocks marked with the character of “E” are representative of the edge blocks.
  • When the second edge detecting processing (FIG. 8B) is executed with respect to the image (edge block candidate map) shown in FIG. 10A, the edge block candidates whose number of peripheral blocks that are not edge block candidates is lower than 1 (exclusion target blocks) are excluded, and the edge block candidates other than the exclusion target blocks are extracted as the edge blocks as shown in FIG. 10B.
  • In the example of FIG. 10B, the blocks 2-C, 3-B, and 3-C corresponding to the exclusion target blocks are excluded. Through the above processing, the blocks including a gathering of minute edges are prevented from being determined to be the edge blocks. As a result, the information amount for improving the image quality which is allocated to the edge blocks to be improved in the image quality can be increased, and the appropriate image quality improvement and encoding can be conducted.
  • <Priority Giving Processing>
  • Subsequently, a description will be given of a priority giving processing, which is a modified example of the above-mentioned second edge detecting processing. After the generation of the edge block candidate map has been completed, the priority giving processing is executed instead of the second edge detecting processing.
  • There is a tendency that the deterioration of the image quality of the edges is more visible as an area of the flat portion that is in contact with the edge is larger. For example, it is assumed that a certain original image has portions shown in FIGS. 11A and 11B, respectively. In FIGS. 11A and 11B, a blank portion indicates the flat portion, and a portion having a pattern indicates an object having the edge. In this case, the area of the flat portion that is in contact with the edge shown in FIG. 11A is larger than the area of the flat portion shown in FIG. 11B. Accordingly, in the case where the image quality improving processing is executed with respect to only one of FIGS. 11A and 11B, it is preferable to select FIG. 11A from the viewpoint of obtaining the high image quality improving effect.
  • The CPU 1 of the information processing apparatus 10 sets an image quality improvement priority with respect to each of the edge block candidates indicated in the edge block candidate map. That is, a higher image quality improvement priority than that of the respective edge blocks in the portion shown in FIG. 11B can be given to the respective edge blocks shown in FIG. 11A. The image quality improvement priority can be determined in accordance with the number of peripheral blocks of each edge block which are not edge block candidates.
  • FIG. 12 is a flowchart showing an example of the priority giving processing which is executed by the CPU 1 of the information processing apparatus 10. In FIG. 12, the threshold value 1 and the threshold value 2 satisfy the relation threshold value 1 > threshold value 2, and the priorities 1, 2, and 3 satisfy the relation priority 1 > priority 2 > priority 3. Also, the processing shown in FIG. 12 starts at the time of completing the preparation of the edge block candidate map.
  • When the processing starts, the CPU 1 specifies one of the edge block candidates that is a target edge block candidate from the edge block candidate map.
  • The CPU 1 conducts the same processing as the processing of S201 to S205 shown in FIG. 8B with respect to the target edge block candidate, and obtains the number of peripheral blocks (value of “cnt”) which are not the edge block candidates. Subsequently, the CPU 1 determines whether or not the value of “cnt” is equal to or higher than the predetermined threshold value 1 (S301).
  • In this situation, in the case where the value of “cnt” is equal to or higher than the threshold value 1 (S301; YES), the CPU 1 sets (gives) the priority 1 to the edge block candidate (S302), and completes the priority giving processing with respect to the target edge block candidate.
  • On the other hand, in the case where the value of “cnt” is lower than the threshold value 1 (S301; NO), the CPU 1 determines whether or not the cnt value of the target edge block candidate is equal to or higher than the given threshold value 2 (S303). In this situation, in the case where the cnt value is equal to or higher than the threshold value 2 (S303; YES), the CPU 1 sets (gives) the priority 2 to the edge block candidate (S304), and completes the priority giving processing with respect to the target edge block candidate.
  • In contrast, in the case where the value of “cnt” is lower than the threshold value 2 (S303; NO), the CPU 1 sets (gives) the priority 3 to the edge block candidate (S305), and completes the priority giving processing with respect to the target edge block candidate.
  • The CPU 1 executes the processing of S301 to S305 with respect to all of the edge block candidates represented in the edge block candidate map. In this way, an image quality improvement priority according to the detection status of the edge block candidates among the peripheral blocks is given to each of the edge block candidates.
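  • The branch structure of S301 to S305 above can be sketched as follows (an illustrative rendering of the FIG. 12 flowchart; the function name and the way "cnt" is supplied are assumptions, not from the patent):

```python
def assign_priority(cnt, threshold1, threshold2):
    """Give an image quality improvement priority to one target edge
    block candidate. `cnt` is the number of peripheral blocks that are
    not edge block candidates (obtained as in S201-S205 of FIG. 8B);
    threshold1 > threshold2 is assumed, as stated for FIG. 12."""
    if cnt >= threshold1:   # S301: YES
        return 1            # S302: give priority 1
    if cnt >= threshold2:   # S303: YES
        return 2            # S304: give priority 2
    return 3                # S305: give priority 3
```

  • With threshold value 1 = 3 and threshold value 2 = 1, as in the specific example of FIG. 13B, a candidate with three or more flat peripheral blocks receives priority 1, one with one or two receives priority 2, and one with none receives priority 3.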
  • In the above processing, the image quality improvement priority map is prepared instead of the second edge detection result map on the basis of the edge block candidate map. The image quality improvement priority map represents the edge block candidates and the priorities that are given to the edge block candidates. The image quality improvement priority map is stored in the external storage device 3 as required.
  • The CPU 1 conducts the image encoding processing by using the image quality improvement priority map. The respective edge block candidates are dealt with as the edge blocks in the image encoding processing. In this situation, the CPU 1 can execute the image quality improving processing depending on the image quality improvement priority given to each of the edge block candidates.
  • <<SPECIFIC EXAMPLE>>
  • FIG. 13A is a diagram showing an example of an image (an edge block candidate map) to be targeted by the priority giving processing, and FIG. 13B is an explanatory diagram showing the result of the priority giving processing with respect to the image shown in FIG. 13A.
  • FIG. 13A shows a part of an image (map) consisting of 4×4 blocks which includes an object Y, as an example of the image to be targeted by the priority giving processing (edge block candidate map). The adjacent blocks (not shown) of the blocks positioned at the end portions of FIG. 13A (blocks belonging to the first or fourth row, and blocks belonging to the A or D column) are flat portions (blocks that are not edge block candidates).
  • Referring to FIG. 13A, all of the blocks except for the block 1-A and the block 1-B are detected as the edge block candidates as a result of the first edge block detecting processing with respect to a part of the image (edge block candidates marked with the character of “e”).
  • On the other hand, in FIG. 13B, numbers indicative of the priorities given by the priority giving processing shown in FIG. 12 are expressed on the respective blocks that have been detected as the edge block candidates. The number (1) indicated on the block 2-A represents the priority 1, the number (2) indicated on the block 2-B represents the priority 2, and the number (3) indicated on the block 3-B represents the priority 3. The priority giving result assumes that “3” is set as the threshold value 1 and “1” is set as the threshold value 2 in the processing shown in FIG. 12.
  • That is, in FIG. 13B, since the respective blocks positioned at the outermost side (the respective blocks belonging to the first or fourth row, and the respective blocks belonging to the A or D column) have three or more peripheral blocks which are not the edge block candidates, the priority 1 is set to those blocks. On the other hand, since the blocks 2-B and 2-C have one or two peripheral blocks which are not the edge block candidates, the priority 2 is set to those blocks. Also, since the blocks 3-B and 3-C do not have the peripheral blocks which are not the edge block candidates, the priority 3 is set to those blocks.
  • The CPU 1 conducts the image quality improving processing according to the above priorities. That is, the CPU 1 gives levels to the image quality improvement according to the priorities, so that differences are given to the information amounts (compression ratios) allocated to the respective edge blocks. In the examples shown in FIGS. 12 and 13, the image quality improving processing is executed so that the image quality becomes higher as the priority is higher. For example, the CPU 1 conducts the image quality improvement so that the edges become clear to the maximum degree with respect to the priority 1. Also, the CPU 1 conducts the image quality improvement to a degree lower than that of the priority 1 with respect to the priorities 2 and 3 (priority 2 > priority 3).
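  • The patent does not specify how the per-priority information amounts are realized in the encoder. As one hypothetical realization (entirely illustrative: the table values, names, and the use of a quantization scale are assumptions), the priority could select the quantization scale applied when encoding each block, where a smaller scale means finer quantization and therefore more bits:

```python
# Hypothetical priority-to-quantizer table: finer quantization (more
# information) for higher-priority edge blocks. The numbers are
# illustrative only and do not appear in the patent.
PRIORITY_TO_QSCALE = {1: 4, 2: 8, 3: 12}

def qscale_for_block(priority, default_qscale=16):
    """Return the quantization scale for a block. Blocks without an
    image quality improvement priority (non-edge blocks) fall back to
    the coarser default quantizer."""
    return PRIORITY_TO_QSCALE.get(priority, default_qscale)
```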
  • <<Modified Example>>
  • The number of image quality improvement priorities is not limited to the three shown in the example of FIG. 12, and may be set to any number of 2 or more.
  • Also, as a result of the priority giving processing, the edge block candidates having a predetermined priority (for example, the lowest priority, i.e., the priority 3 in FIG. 12) can be excluded from the edge blocks to be improved in the image quality. For example, the description of the corresponding edge block candidates in the image quality improvement priority map is changed to the same description as the blocks that are not the edge block candidates. As a result, the edge block candidates whose description has been changed are not dealt with as the edge blocks in the image encoding processing. Accordingly, the same effects as those in the second edge detecting processing can be obtained.
  • Also, after the second edge detection result map has been prepared through the processing shown in FIG. 8B, the processing shown in FIG. 12 may be executed. That is, the priority giving processing may be executed with respect to the edge blocks.
  • [Applied Example]
  • Subsequently, a description will be given of an applied example using the structure of the above-mentioned embodiment. FIG. 14 is a diagram showing an original image to be targeted in the image encoding, and FIG. 15 is a diagram showing the detection result (edge block candidate map) of the edge blocks by the first edge block detecting processing. FIG. 16 is a diagram showing the detection result (second edge detection result map) by the second edge block detecting processing. The original image shown in FIG. 14 is a test chart of the Institute of Image Information and Television Engineers.
  • Dots shown in FIGS. 15 and 16 indicate edge blocks. The detection result shown in FIG. 15 is obtained through the processing shown in FIGS. 5 and 6 as the first edge block detecting processing.
  • As understood from the comparison of FIG. 15 with FIG. 16, the edge blocks that are recognized as not requiring the image quality improvement are remarkably reduced (excluded) by the second edge block detecting processing. The number of edge blocks detected by the first edge block detecting processing is 3613, whereas the number of edge blocks detected by the second edge block detecting processing is 686.
  • As described above, according to the present invention, the number of detected edge blocks is reduced, making it possible to execute the image encoding processing including the appropriate image quality improving processing of the edges.
  • [Operation and Effects of Embodiments]
  • According to this embodiment, in the second edge block detecting processing subsequent to the first edge block detecting processing, the edge blocks significant in encoding (to be improved in the image quality) are extracted from the edge block candidates that have been detected in the first edge block detecting processing. That is, the blocks that are recognized as having a low image quality improvement effect are excluded from the detection result of the edge blocks. As a result, the number of edge blocks to be encoded (improved in the image quality) can be reduced, and the encoding (image quality improvement) processing can be effectively executed.
  • In this situation, the information amounts that should be allocated to the blocks of the edge block candidates which have not been detected as the edge blocks can be allocated to the edge blocks. As a result, the appropriate image quality improving processing of the edge blocks can be conducted.
  • Also, according to this embodiment, the image quality improvement priority is given to each of the edge blocks through the priority giving processing, and the image quality improving processing is executed according to the priorities. As a result, a load (CPU time, information amount) of the image quality improving processing on the respective edge blocks can be appropriately distributed, and the effective image quality improving processing can be executed.
  • As described above, according to this embodiment, there is realized a technique by which the edge portions of an object, in which the deterioration of the image quality caused by the image encoding is noticeable, are properly detected, from the viewpoint of the encoding efficiency, through simple processing that can also be mounted on a real-time image encoding apparatus (for communication). That is, the second edge block detecting processing and the priority giving processing described in this embodiment are simple processings using the result of the first edge block detecting processing, and the processing time required for them is short. Accordingly, there is little influence from introducing these processings.
  • [Others]
  • The disclosures of Japanese patent application No. JP2005-289051 filed on Sep. 30, 2005 including the specification, drawings and abstract are incorporated herein by reference.

Claims (20)

1. An image encoding apparatus, comprising:
a detection unit detecting edge block candidates as blocks including an edge in an image composed of plural blocks; and
a determination unit excluding an edge block candidate that is recognized as not requiring image quality improving processing executed in image encoding processing from the detected edge block candidates and determining a remaining edge block candidate as an edge block to be targeted in the image quality improving processing.
2. The image encoding apparatus according to claim 1, wherein the determination unit determines whether the edge block candidate is excluded by using a detection result of the edge block candidates with respect to peripheral blocks of the edge block candidate.
3. The image encoding apparatus according to claim 1, wherein the determination unit excludes the edge block candidate when, with respect to peripheral blocks of the edge block candidate, the number of the peripheral blocks that have not been detected as the edge block candidates is lower than a predetermined threshold value.
4. The image encoding apparatus according to claim 1, wherein the determination unit gives a priority of the image quality improving processing to each of the edge blocks.
5. An image encoding apparatus, comprising:
a detection unit detecting edge blocks as blocks including an edge in an image composed of plural blocks; and
a giving unit giving a priority of image quality improving processing which is executed with respect to the edge blocks in encoding processing of the image to each of the detected edge blocks.
6. The image encoding apparatus according to claim 5, wherein the giving unit gives the priority to each of the detected edge blocks according to a detection result of the edge blocks with respect to peripheral blocks of each of the edge blocks.
7. The image encoding apparatus according to claim 6, wherein the giving unit gives the priority corresponding to the number of peripheral blocks which have not been detected as the edge blocks by the detection unit to each of the edge blocks.
8. The image encoding apparatus according to claim 5, wherein the image quality improving processing is executed for obtaining an image quality according to the priority.
9. The image encoding apparatus according to claim 5, wherein the edge block to which a predetermined priority is given is excluded from the edge blocks to be targeted in the image quality improving processing.
10. An image encoding method, comprising:
detecting edge block candidates as blocks including an edge in an image composed of plural blocks;
excluding an edge block candidate that is recognized as not requiring image quality improving processing executed in image encoding processing from the detected edge block candidates; and
determining a remaining edge block candidate as an edge block to be targeted in the image quality improving processing.
11. The image encoding method according to claim 10, wherein the edge block candidate is excluded when, with respect to peripheral blocks of the edge block candidate, the number of the peripheral blocks that have not been detected as the edge block candidates is lower than a predetermined threshold value.
12. An image encoding method, comprising:
detecting edge blocks as blocks including an edge in an image composed of plural blocks; and
giving a priority of image quality improving processing which is executed with respect to the edge blocks in encoding processing of the image to each of the detected edge blocks.
13. The image encoding method according to claim 12, wherein the priority corresponding to the number of peripheral blocks which have not been detected as the edge blocks is given to each of the edge blocks.
14. The image encoding method according to claim 12, wherein the image quality improving processing is executed for obtaining an image quality according to the priority.
15. The image encoding method according to claim 12, wherein the edge block to which a predetermined priority is given is excluded from the edge blocks to be targeted in the image quality improving processing.
16. A recording medium on which a program executed by an information processor is recorded, the program including the steps of:
detecting edge block candidates as blocks including an edge in an image composed of plural blocks;
excluding an edge block candidate that is recognized as not requiring image quality improving processing executed in image encoding processing from the detected edge block candidates; and
determining a remaining edge block candidate as an edge block to be targeted in the image quality improving processing.
17. The recording medium according to claim 16, wherein in the excluding step, the edge block candidate is excluded when, with respect to peripheral blocks of the edge block candidate, the number of the peripheral blocks that have not been detected as the edge block candidates is lower than a predetermined threshold value.
18. A recording medium on which a program executed by an information processor is recorded, the program including the steps of:
detecting edge blocks as blocks including an edge in an image composed of plural blocks; and
giving a priority of image quality improving processing which is executed with respect to the edge blocks in encoding processing of the image to each of the detected edge blocks.
19. The recording medium according to claim 18, wherein the priority corresponding to the number of the peripheral blocks which have not been detected as the edge blocks is given to each of the edge blocks.
20. The recording medium according to claim 18, wherein the edge block to which a predetermined priority is given is excluded from the edge blocks to be targeted in the image quality improving processing.
US11/359,588 2005-09-30 2006-02-23 Image encoding apparatus Expired - Fee Related US7873226B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2005-289051 2005-09-30
JP2005289051A JP4774265B2 (en) 2005-09-30 2005-09-30 Image encoding device
JPJP2005-289051 2005-09-30

Publications (2)

Publication Number Publication Date
US20070076965A1 true US20070076965A1 (en) 2007-04-05
US7873226B2 US7873226B2 (en) 2011-01-18

Family

ID=37902009

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/359,588 Expired - Fee Related US7873226B2 (en) 2005-09-30 2006-02-23 Image encoding apparatus

Country Status (2)

Country Link
US (1) US7873226B2 (en)
JP (1) JP4774265B2 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010021039A1 (en) * 2008-08-21 2010-02-25 パイオニア株式会社 Image processing device, image processing method, and image processing program
JP5215951B2 (en) * 2009-07-01 2013-06-19 キヤノン株式会社 Encoding apparatus, control method therefor, and computer program
EP2348487A3 (en) * 2010-01-22 2017-09-13 Samsung Electronics Co., Ltd. Method and apparatus for creating animation message
JP5518224B2 (en) * 2013-03-04 2014-06-11 キヤノン株式会社 Encoding apparatus, encoding method, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5157743A (en) * 1987-10-28 1992-10-20 Canon Kabushiki Kaisha Image information coding apparatus
US5926212A (en) * 1995-08-30 1999-07-20 Sony Corporation Image signal processing apparatus and recording/reproducing apparatus
US20050265624A1 (en) * 2004-05-27 2005-12-01 Konica Minolta Business Technologies, Inc. Image processing apparatus and image processing method
US20080123998A1 (en) * 2004-05-19 2008-05-29 Sony Corporation Image Processing Apparatus, Image Processing Method, Program of Image Processing Method, and Recording Medium in Which Program of Image Processing Method Has Been Recorded

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2646921B2 (en) 1991-11-15 1997-08-27 日本ビクター株式会社 Adaptive quantizer
JPH0638189A (en) * 1992-07-15 1994-02-10 Matsushita Electric Ind Co Ltd Picture coding method
JP3066278B2 (en) 1995-01-18 2000-07-17 三洋電機株式会社 Image encoding device and image decoding device
JPH09120458A (en) 1995-10-26 1997-05-06 Sanyo Electric Co Ltd Image encoding method
JPH09167240A (en) 1995-12-15 1997-06-24 Sony Corp Digital image signal processing device and method therefor
JP3903360B2 (en) 1999-06-28 2007-04-11 パイオニア株式会社 Edge detection method, edge detection apparatus, and image encoding apparatus
JP2003230147A (en) 2002-02-01 2003-08-15 Matsushita Electric Ind Co Ltd Apparatus and method for coding image signal


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110052071A1 (en) * 2009-09-03 2011-03-03 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
US8891879B2 (en) * 2009-09-03 2014-11-18 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
US20150098658A1 (en) * 2013-10-04 2015-04-09 Samsung Electronics Co., Ltd. Method and apparatus for processing image data, and recording medium
GB2533700A (en) * 2014-12-23 2016-06-29 Intel Corp Method and apparatus for a high throughput rasterizer
US10410081B2 (en) 2014-12-23 2019-09-10 Intel Corporation Method and apparatus for a high throughput rasterizer

Also Published As

Publication number Publication date
US7873226B2 (en) 2011-01-18
JP4774265B2 (en) 2011-09-14
JP2007104133A (en) 2007-04-19

Similar Documents

Publication Publication Date Title
US7873226B2 (en) Image encoding apparatus
CN100514367C (en) Color segmentation-based stereo 3D reconstruction system and process
US8194978B2 (en) Method of and apparatus for detecting and adjusting colour values of skin tone pixels
US8244054B2 (en) Method, apparatus and integrated circuit capable of reducing image ringing noise
JP5478047B2 (en) Video data compression pre-processing method, video data compression method and video data compression system using the same
US8300101B2 (en) Image processing method, image processing system, image pickup device, image processing device and computer program for manipulating a plurality of images
JP2009111978A (en) Method and system of estimating background color
KR20020064220A (en) Method and system for classifying image elements
CN114022790A (en) Cloud layer detection and image compression method and device in remote sensing image and storage medium
US20080205786A1 (en) Method and system for filtering images in video coding
CN110378860A (en) Method, apparatus, computer equipment and the storage medium of restored video
CN115880181A (en) Method, device and terminal for enhancing image contrast
US7853069B2 (en) Stereoscopic image regenerating apparatus, stereoscopic image regenerating method, and stereoscopic image regenerating program
US7646892B2 (en) Image inspecting apparatus, image inspecting method, control program and computer-readable storage medium
US20190259168A1 (en) Image processing apparatus, image processing method, and storage medium
JPH10320566A (en) Picture processor, picture processing method, and storage medium storing the same method
JP2005275854A (en) Image processor, image processing method, image processing program and recording medium with this program stored thereon
JP2001186336A (en) Line art image processing method and recording medium
JP2000306104A (en) Method and device for picture area division
US8351729B2 (en) Apparatus, method, and program for image correction
KR101512297B1 (en) Method for Determining Ground Line
JP4771087B2 (en) Image processing apparatus and image processing program
US7770098B2 (en) Signal processing apparatus and method therefor
CN113191210A (en) Image processing method, device and equipment
JP4113801B2 (en) Image data compression processing method and compression system

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUJITSU LIMITED, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHIMADA, MIWA;REEL/FRAME:017590/0290

Effective date: 20051219

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20190118