CN114519661A - Image processing method, device, terminal and readable storage medium - Google Patents


Info

Publication number
CN114519661A
CN114519661A
Authority
CN
China
Prior art keywords
filtering
processed
image
image block
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210126024.6A
Other languages
Chinese (zh)
Inventor
李勇华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210126024.6A
Publication of CN114519661A
Priority to PCT/CN2022/139725 (WO2023151385A1)
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0007 Image acquisition
    • G06T1/60 Memory management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method, comprising: determining, according to the filtering radius of the current filtering layer, a filtering region for each image block to be processed in an image to be processed; obtaining an overlapping region between a first filtering region of the current image block to be processed and a second filtering region of an adjacent image block to be processed; and obtaining first image data and second image data, where the first image data comprises the image data of the part of the first filtering region outside the overlapping region, the second image data comprises the image data of the overlapping region, and the second image data is used when the adjacent image block to be processed is filtered. With the image processing method, the image processing apparatus, the terminal and the non-volatile computer-readable storage medium of the application, the second image data can be reused by the adjacent image blocks to be processed that share the same overlapping region, which reduces the amount of data that must be fetched for filtering and improves filtering efficiency.

Description

Image processing method, device, terminal and readable storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, a terminal, and a non-volatile computer-readable storage medium.
Background
At present, an image is usually filtered block by block. When an image block is processed, the image data of a processing region corresponding to that block must be fetched, and the processing region is generally larger than the region occupied by the block itself, so the amount of image data that has to be fetched during image processing is large.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, a terminal, and a non-volatile computer-readable storage medium.
The image processing method of the embodiments of the present application comprises: determining, according to the filtering radius of the current filtering layer, a filtering region for each image block to be processed in an image to be processed; obtaining an overlapping region between a first filtering region of the current image block to be processed and a second filtering region of an image block to be processed adjacent to the current image block to be processed; and obtaining first image data and second image data, where the first image data comprises the image data of the part of the first filtering region outside the overlapping region, the second image data comprises the image data of the overlapping region, and the second image data is used when the adjacent image block to be processed is filtered.
The image processing apparatus of the embodiments of the present application comprises a first determining module, a first obtaining module and a second obtaining module. The first determining module is configured to determine, according to the filtering radius of the current filtering layer, a filtering region for each image block to be processed in the image to be processed. The first obtaining module is configured to obtain an overlapping region between a first filtering region of the current image block to be processed and a second filtering region of an image block to be processed adjacent to the current image block to be processed. The second obtaining module is configured to obtain first image data and second image data, where the first image data comprises the image data of the part of the first filtering region outside the overlapping region, the second image data comprises the image data of the overlapping region, and the second image data is used when the adjacent image block to be processed is filtered.
The terminal of the embodiments of the present application comprises a processor, a first storage unit and a second storage unit. The processor is configured to determine, according to the filtering radius of the current filtering layer, a filtering region for each image block to be processed in an image to be processed; obtain an overlapping region between a first filtering region of the current image block to be processed and a second filtering region of an image block to be processed adjacent to the current image block to be processed; and obtain first image data and second image data, where the first image data comprises the image data of the part of the first filtering region outside the overlapping region, the second image data comprises the image data of the overlapping region, and the second image data is used when the adjacent image block to be processed is filtered.
The non-volatile computer-readable storage medium of the embodiments of the present application contains a computer program which, when executed by one or more processors, causes the processors to perform the image processing method of the present application, i.e. to determine, according to the filtering radius of the current filtering layer, a filtering region for each image block to be processed in an image to be processed; obtain an overlapping region between a first filtering region of the current image block to be processed and a second filtering region of an adjacent image block to be processed; and obtain first image data and second image data, where the first image data comprises the image data of the part of the first filtering region outside the overlapping region, the second image data comprises the image data of the overlapping region, and the second image data is used when the adjacent image block to be processed is filtered.
With the image processing method, the image processing apparatus, the terminal and the non-volatile computer-readable storage medium of the embodiments of the present application, the filtering region of each pre-divided image block to be processed is first determined according to the filtering radius of the current filtering layer. The filtering region is larger than the image block to be processed, so the filtering regions of adjacent image blocks to be processed overlap. When the current image block to be processed is filtered, the image data of the part of its first filtering region outside the overlapping region (the first image data) and the image data of the overlapping region (the second image data) are obtained. Because the second image data is the image data of the region where the first filtering region overlaps the second filtering region of the adjacent image block to be processed, it can be reused by that adjacent block. If every image block to be processed fetched the image data of its whole filtering region independently, the second image data would have to be read twice; by reusing the second image data of the current image block to be processed, it only needs to be read once. This reduces the amount of data that must be read and written for filtering and improves filtering efficiency.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 2 is a block schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic plan view of a terminal according to some embodiments of the present application;
FIGS. 4-7 are schematic illustrations of certain embodiments of the present application;
FIGS. 8 and 9 are schematic flow diagrams of image processing methods according to certain embodiments of the present application;
FIG. 10 is a schematic illustration of certain embodiments of the present application;
FIGS. 11 and 12 are schematic flow diagrams of image processing methods according to certain embodiments of the present application;
FIG. 13 is a schematic diagram of a connection between a processor and a computer readable storage medium according to some embodiments of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
Referring to fig. 1 to 3, an image processing method according to an embodiment of the present disclosure includes the following steps:
011: determining a filtering area of each image block to be processed in the image to be processed according to the filtering radius of the current filtering layer;
012: acquiring an overlapping region between a first filtering region of a current image block to be processed and a second filtering region of an image block to be processed adjacent to the current image block to be processed; and
013: and acquiring first image data and second image data, wherein the first image data comprises image data of an area outside the overlapping area in the first filtering area, the second image data comprises image data of the overlapping area, and the second image data is used for filtering adjacent to-be-processed image blocks.
The image processing apparatus 10 of the present embodiment includes a first determination module 11, a first acquisition module 12, and a second acquisition module 13. The first determining module 11, the first acquiring module 12 and the second acquiring module 13 are configured to perform step 011, step 012 and step 013, respectively. That is, the first determining module 11 is configured to determine, according to the filtering radius of the current filtering layer, a filtering area of each to-be-processed image block in the to-be-processed image; the first obtaining module 12 is configured to obtain an overlapping area between a first filtering area of the current to-be-processed image block and a second filtering area of a to-be-processed image block adjacent to the current to-be-processed image block; and the second obtaining module 13 is configured to obtain first image data and second image data, where the first image data includes the image data of the area outside the overlapping area in the first filtering area, the second image data includes the image data of the overlapping area, and the second image data is used when filtering is performed on the adjacent to-be-processed image block.
The terminal 100 of the present embodiment includes a processor 30. The processor 30 is configured to determine, according to the filtering radius of the current filtering layer, a filtering area of each to-be-processed image block in the to-be-processed image; obtain an overlapping area between a first filtering area of the current to-be-processed image block and a second filtering area of a to-be-processed image block adjacent to the current to-be-processed image block; and obtain first image data and second image data, where the first image data includes the image data of the area outside the overlapping area in the first filtering area, the second image data includes the image data of the overlapping area, and the second image data is used when filtering is performed on the adjacent to-be-processed image block. That is, step 011, step 012, and step 013 can be implemented by the processor 30.
Specifically, the terminal 100 further includes a housing 40. The terminal 100 may be a mobile phone, a tablet computer, a display device, a notebook computer, a teller machine, a gate, a smart watch, a head-up display device, a game console, etc. As shown in fig. 3, the embodiments of the present application are described by taking the terminal 100 being a mobile phone as an example, and it is understood that the specific form of the terminal 100 is not limited to a mobile phone. The housing 40 may be used to mount functional modules of the terminal 100, such as a display device, an imaging device, a power supply device, and a communication device, so that the housing 40 protects these functional modules against dust, falling, water, and the like.
The image to be processed may be an image captured by the camera 20 of the terminal 100, or may be an image downloaded from the internet, without limitation. The image to be processed may also be part of an image taken by the camera 20. The image to be processed may also be a depth image.
When an image to be processed is filtered, the filtering is generally performed based on a preset filtering algorithm. The preset filtering algorithm may determine a preset size for the image to be processed (used when dividing it into image blocks), a preset number of filtering layers, and the filtering radius of each filtering layer.
The image to be processed comprises a first edge, a second edge, a third edge and a fourth edge, wherein the first edge is opposite to the third edge, and the second edge is opposite to the fourth edge. The first edge may be an upper edge and the second edge may be a left edge; alternatively, the first edge may be the lower edge and the second edge may be the right edge; alternatively, the first edge may be the upper edge, the second edge may be the right edge, and so on. The following description will be given by taking an example in which the first edge may be the upper edge and the second edge may be the left edge.
During filtering, if the size of the image to be processed is larger than the preset size, the image to be processed may be divided into a plurality of image blocks to be processed of the preset size. The processor 30 can then directly obtain the position information of each image block to be processed, such as its vertex coordinates (taking a rectangular image block as an example) and its position within the image to be processed (e.g., on the left edge, on the upper edge, or on both the left edge and the upper edge, i.e., the upper left corner). Meanwhile, the processor 30 can also directly obtain the filtering radius of each filtering layer.
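As an illustration only (not the patent's own implementation), the partitioning step and the position information it produces can be sketched as follows; the function and field names are hypothetical:

```python
def split_into_blocks(image_w, image_h, block_w, block_h):
    """Divide an image into to-be-processed blocks of a preset size and record,
    for each block, its vertex coordinates and which image edges it touches."""
    blocks = []
    for y in range(0, image_h, block_h):
        for x in range(0, image_w, block_w):
            w = min(block_w, image_w - x)  # blocks at the right/bottom edge may be smaller
            h = min(block_h, image_h - y)
            blocks.append({
                "rect": (x, y, x + w, y + h),  # (left, top, right, bottom)
                "at_top": y == 0,
                "at_left": x == 0,
            })
    return blocks
```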
Then, the processor 30 may determine the filtering area of each image block to be processed according to the filtering radius of the current filtering layer. Here, the filtering radius is the distance by which the edges of the image block to be processed are extended: when the filtering area of an image block is determined, its edges are expanded outward by the filtering radius. Of course, in order to locate the filtering area within the image to be processed, the filtering area may be determined from the position information of the image block to be processed (such as its vertex coordinates) together with the filtering radius.
In an example, referring to fig. 4, an image coordinate system is established with the upper left corner of the image block to be processed a1 as the origin. The width of a1 is 8 pixels (in the W direction) and its height is 8 pixels (in the H direction), so the vertex coordinates of a1 are (0,0), (8,0), (0,8) and (8,8). If the filtering radius is 2 pixels, two rows of pixels are added to the upper edge and to the lower edge of a1, and two columns of pixels are added to the left edge and to the right edge, so the vertex coordinates of the filtering area S1 are (-2,-2), (10,-2), (-2,10) and (10,10). In this way, the filtering area S1 of each filtering layer can be determined quickly from the position information of the image block to be processed a1 and the filtering radius of that layer.
In other embodiments, the filtering radii include a first filtering radius (e.g., corresponding to the top of the image), a second filtering radius (e.g., corresponding to the right of the image), a third filtering radius (e.g., corresponding to the bottom of the image), and a fourth filtering radius (e.g., corresponding to the left of the image), where the first filtering radius, the second filtering radius, the third filtering radius, and the fourth filtering radius are the same (as in the example shown in fig. 4); or the first filtering radius is the same as the third filtering radius, and the second filtering radius is the same as the fourth filtering radius; or the first filtering radius, the second filtering radius, the third filtering radius and the fourth filtering radius are different from each other.
In another example, referring to fig. 5, the image coordinate system is again established with the upper left corner of the image block to be processed a1 as the origin, and the vertex coordinates of a1 are (0,0), (8,0), (0,8) and (8,8). If the first and third filtering radii are both 2 pixels, and the second and fourth filtering radii are both 1 pixel, two rows of pixels are added to the upper edge and to the lower edge of a1, and one column of pixels is added to the left edge and to the right edge, so the vertex coordinates of the filtering area S1 are (-1,-2), (9,-2), (-1,10) and (9,10). In this way, the filtering area S1 of each filtering layer can be determined quickly from the position information of a1 and the filtering radius of that layer.
In yet another example, referring to fig. 6, the image coordinate system is established with the upper left corner of the image block to be processed a1 as the origin, and the vertex coordinates of a1 are (0,0), (8,0), (0,8) and (8,8). If the first, second, third and fourth filtering radii are 4 pixels, 3 pixels, 2 pixels and 1 pixel respectively, 4 rows and 2 rows of pixels are added to the upper edge and the lower edge of a1 respectively, and 1 column and 3 columns of pixels are added to the left edge and the right edge respectively, so the vertex coordinates of the filtering area S1 are (-1,-4), (11,-4), (-1,10) and (11,10). In this way, the filtering area S1 of each filtering layer can be determined quickly from the position information of a1 and the filtering radius of that layer.
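The expansion described above can be written down compactly; the following sketch (illustrative only, with hypothetical names) expands a block rectangle by four possibly different per-edge filtering radii and reproduces the fig. 6 numbers:

```python
def filter_region(block_rect, r_top, r_right, r_bottom, r_left):
    """Expand a block rectangle (left, top, right, bottom) by per-edge radii in pixels."""
    left, top, right, bottom = block_rect
    return (left - r_left, top - r_top, right + r_right, bottom + r_bottom)

# Example of fig. 6: block a1 = (0, 0, 8, 8), radii top=4, right=3, bottom=2, left=1
assert filter_region((0, 0, 8, 8), 4, 3, 2, 1) == (-1, -4, 11, 10)
```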
It can be understood that, when filtering is performed, all the image data in the filtering area of each image block to be processed needs to be acquired. Because the filtering area is larger than the area occupied by the image block itself, the overlapping area between the filtering areas of adjacent image blocks to be processed can be determined once the filtering area of each image block has been determined. Referring to fig. 6, for two adjacent image blocks to be processed a1, their filtering areas S1 have an overlapping area C1. It should be noted that the squares in fig. 6 only illustrate pixels; a square is not limited to representing exactly one pixel.
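As a minimal sketch (hypothetical names, rectangles given as (left, top, right, bottom)), the overlapping area C1 is simply the intersection of two filtering regions:

```python
def overlap_region(region_a, region_b):
    """Intersection of two filtering regions; returns None if they do not overlap."""
    left = max(region_a[0], region_b[0])
    top = max(region_a[1], region_b[1])
    right = min(region_a[2], region_b[2])
    bottom = min(region_a[3], region_b[3])
    if left >= right or top >= bottom:
        return None
    return (left, top, right, bottom)
```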
In the prior art, when two adjacent image blocks to be processed are filtered, each of them needs to read the image data outside the overlapping area (i.e., its first image data) from the memory 50 (e.g., a dynamic random access memory, DRAM), and each of them also needs to read the second image data from the memory 50 into the memory 60 (e.g., a Tightly Coupled Memory (TCM) or a memory of a Visual Processing Unit (VPU) on a processing chip of the terminal 100). That is, to filter the two image blocks, the second image data has to be read from the memory 50 twice, so the total amount of data to be read is 2 × (first image data + second image data). In the present embodiment, after the second image data has been read to filter one image block to be processed, it can be stored in the level-one cache of the VPU, so that it is available when the adjacent, not-yet-filtered image block is filtered. The second image data can thus be read and written more quickly when it is reused, which improves filtering efficiency and lowers power consumption.
It should be noted that the first image data of different image blocks to be processed may differ; the amounts of data given here are only illustrative and do not imply that the first image data of two adjacent image blocks to be processed are the same.
After the overlapping area of two adjacent image blocks to be processed has been determined, and after the first image data and the second image data of the current image block to be processed have been acquired, the second image data is reused for the adjacent image block that has not yet been filtered. That is, the second image data is read from the memory 50 into the memory 60 only once while both adjacent image blocks are filtered, so the total amount of data to be read is 2 × first image data + second image data. In other words, with the technical solution of the present application, one read of the second image data is saved for every two adjacent image blocks to be processed, which reduces the amount of data read for filtering and improves filtering efficiency.
After the filtering of one image block to be processed is completed in the current filtering layer according to a preset filtering order (e.g., the image blocks are filtered row by row from top to bottom, and from left to right within each row), the processor 30 may store the second image data of that image block in the memory 60 for use when filtering the adjacent image block to be processed that has not yet been filtered.
Specifically, a transfer memory may be set aside in the memory 60 (a part of the storage space of the level-one cache of the VPU is reserved as the transfer memory). After the current image block to be processed has acquired the image data of its first filtering area, the second image data of the overlapping area within the first filtering area is stored in the transfer memory for the filtering of the adjacent image block to be processed. In this way, the adjacent image block only needs to read, from the memory 50, the first image data of its second filtering area outside the overlapping area.
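A minimal sketch of this reuse, assuming a caller-supplied fetch_from_dram callback that stands in for a read from the memory 50 (all names here are hypothetical, not the patent's implementation):

```python
transfer_cache = {}  # stands in for the transfer memory set aside in the memory 60

def load_filter_region(first_rect, overlap_rect, fetch_from_dram):
    """Gather the image data of one filtering region.

    first_rect   : part of the filtering region outside the overlapping region
    overlap_rect : overlapping region shared with the adjacent filtering region
    """
    first_data = fetch_from_dram(first_rect)            # always read from DRAM
    if overlap_rect in transfer_cache:
        second_data = transfer_cache.pop(overlap_rect)  # reused: no second DRAM read
    else:
        second_data = fetch_from_dram(overlap_rect)     # first of the two neighbours
        transfer_cache[overlap_rect] = second_data      # keep it for the other one
    return first_data, second_data
```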
The image data that exists within the filtering area can be obtained directly (i.e., for the part of the filtering area that coincides with the image to be processed), while the part of the filtering area that has no image data (the part that does not coincide with the image to be processed) needs to be filled in from the existing image data of the filtering area, so that every pixel of the filtering area has image data.
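The text only states that the missing pixels are filled from the existing image data; as one possible choice (an assumption, not required by the text), edge replication can be used:

```python
import numpy as np

def pad_filter_region(region, pad_top, pad_bottom, pad_left, pad_right):
    """Fill the part of a filtering region lying outside the image by replicating
    the nearest existing pixels (one possible filling strategy)."""
    return np.pad(region, ((pad_top, pad_bottom), (pad_left, pad_right)), mode="edge")
```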
With the image processing method, the image processing apparatus 10 and the terminal 100 of the embodiments of the present application, the filtering area of each pre-divided image block to be processed is first determined according to the filtering radius of the current filtering layer. The filtering area is larger than the image block to be processed, so the filtering areas of adjacent image blocks to be processed overlap. When the current image block to be processed is filtered, the image data of the part of its first filtering area outside the overlapping area (the first image data) and the image data of the overlapping area (the second image data) are obtained. Because the second image data is the image data of the area where the first filtering area overlaps the second filtering area of the adjacent image block to be processed, it can be reused by that adjacent block. If every image block fetched the image data of its whole filtering area independently, the second image data would have to be read twice; by reusing the second image data of the current image block to be processed, it only needs to be read once. This reduces the amount of data that must be read and written for filtering and improves filtering efficiency.
Referring to fig. 2, fig. 3 and fig. 7, in some embodiments, the filter layers include multiple layers, and the image to be processed is sequentially filtered by the multiple filter layers, and the image processing method further includes:
014: and determining the filtering radius of the current filtering layer according to the preset sub-filtering radius of the current filtering layer and the preset sub-filtering radius of the filtering layer behind the current filtering layer.
In some embodiments, the image processing apparatus 10 further comprises a second determination module 14. The second determining module 14 is configured to perform step 014. That is, the second determining module 14 is configured to determine the filtering radius of the current filtering layer according to the preset sub-filtering radius of the current filtering layer and the preset sub-filtering radius of the filtering layer after the current filtering layer.
In some embodiments, the processor 30 is further configured to determine the filtering radius of the current filtering layer according to the preset sub-filtering radius of the current filtering layer and the preset sub-filtering radii of the filtering layers after the current filtering layer. That is, step 014 may be implemented by the processor 30.
Specifically, the filtering layers include multiple layers, e.g., layers 1, 2, 3, 4, 5, and so on, which process the image to be processed in sequence. Taking 2 filtering layers as an example, after the image data of the filtering area of an image block to be processed has been processed by the 1st layer and then by the 2nd layer, the filtering is complete and a filtered image is generated.
Each filtering layer has a preset sub-filtering radius, and when determining the filtering radius of the current filtering layer, the processor 30 may determine the filtering radius of the current filtering layer according to the preset sub-filtering radius of the current filtering layer and the sub-filtering radius of the filtering layer after the current filtering layer.
For example, if the filtering layers include 3 layers, the sub-filtering radius of the first layer is r1, that of the second layer is r2, and that of the third layer is r3, the processor 30 may take the sum of the preset sub-filtering radius of the current filtering layer and the sub-filtering radii of all filtering layers after it as the filtering radius of the current filtering layer; for the first layer this filtering radius is r1 + r2 + r3.
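A one-function sketch of this rule (illustrative only, with hypothetical names): the filtering radius of each layer is its own sub-radius plus the sub-radii of all later layers.

```python
def layer_filter_radii(sub_radii):
    """sub_radii[i] is the preset sub-filtering radius of layer i (0-based)."""
    radii = []
    remaining = sum(sub_radii)
    for r in sub_radii:
        radii.append(remaining)  # radius of this layer = its sub-radius + all later ones
        remaining -= r
    return radii

# With sub-radii r1=1, r2=2, r3=3 the first layer's filtering radius is r1 + r2 + r3 = 6.
assert layer_filter_radii([1, 2, 3]) == [6, 5, 3]
```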
Referring to fig. 2, fig. 3 and fig. 8, in some embodiments, the image processing method further includes the following steps:
015: determining an output area of a filtering area of the current filtering layer according to the sub-filtering radius of the filtering layer behind the current filtering layer;
016: determining a calculation area of the current image block to be processed according to the position information and the output area of the current image block to be processed and the output area of the image block to be processed which is adjacent to the current image block to be processed and has finished filtering processing;
017: re-determining a first filtering area of the current image block to be processed according to the sub-filtering radius of the current filtering layer and the calculation area;
step 013, comprising:
0131: and acquiring image data corresponding to the re-determined first filtering area in the set consisting of the first image data and the second image data.
In some embodiments, the image processing apparatus 10 further comprises a third determining module 15, a fourth determining module 16 and a fifth determining module 17. The third determining module 15, the fourth determining module 16, the fifth determining module 17 and the second acquiring module 13 are configured to perform step 015, step 016, step 017 and step 0131, respectively. That is, the third determining module 15 is configured to determine an output area of a filtering area of the current filtering layer according to the sub-filtering radii of the filtering layers after the current filtering layer; the fourth determining module 16 is configured to determine a calculation area of the current image block to be processed according to the position information and the output area of the current image block to be processed and the output area of the image block to be processed which is adjacent to the current image block to be processed and has finished filtering; the fifth determining module 17 is configured to re-determine the first filtering area of the current image block to be processed according to the sub-filtering radius of the current filtering layer and the calculation area; and the second acquiring module 13 is configured to acquire, from the set consisting of the first image data and the second image data, the image data corresponding to the re-determined first filtering area.
In some embodiments, the processor 30 is further configured to determine an output region of the filtering region of the current filtering layer according to the sub-filtering radii of the filtering layers subsequent to the current filtering layer; determining a calculation area of the current image block to be processed according to the position information and the output area of the current image block to be processed and the output area of the image block to be processed which is adjacent to the current image block to be processed and has finished filtering processing; re-determining a first filtering area of the current image block to be processed according to the sub-filtering radius of the current filtering layer and the calculation area; and acquiring image data corresponding to the re-determined first filtering area in the set consisting of the first image data and the second image data. That is, step 015, step 016, step 017 and step 0131 may be implemented by the processor 30.
Specifically, the filtering radius of the current filtering layer is determined according to the sub-filtering radius of the current filtering layer (denoted R1 below) and the sub-filtering radii of the filtering layers after the current filtering layer (whose total is denoted R2 below). After the current filtering layer has performed its filtering, it outputs the filtering data of the region where the image block to be processed is located together with the filtering data of the surrounding region corresponding to R2; that is, the output region of the filtering region can be determined from the image block to be processed and R2.
It can be understood, referring to fig. 10, that the outer dashed box represents the filtering region S1 and the inner dashed box represents the output region S2, and that the output regions S2 of adjacent image blocks to be processed a1 have an overlapping portion C2. In the prior art, after two adjacent image blocks a1 have been filtered, both of them output the filtering data of the overlapping portion C2, so the overlapping portion C2 has to be filtered twice and output twice, which reduces the efficiency of the filtering processing.
The filtering window used in the filtering processing is generally determined by the sub-filtering radius of the current layer (i.e., R1) and is generally (2R1+1) × (2R1+1). Therefore, the image data used by each pixel of the overlapping portion during filtering lies inside the overlapping area of the filtering regions of the two image blocks to be processed. Consequently, when the two image blocks are filtered separately, the image data inside the filtering window of each pixel of the overlapping portion is the same, and after the two image blocks have been filtered, the filtering data of the overlapping portion is also the same, so the filtering data of the overlapping portion can be shared by the two image blocks to be processed.
Therefore, after one of the two image blocks to be processed has been filtered, the filtering data of the overlapping portion can be output, so that when the other image block is filtered, the overlapping portion does not have to be filtered again and only the area of its output region outside the overlapping portion needs to be calculated.
Accordingly, the processor 30 may determine the calculation area of the current image block to be processed from the output area of the current image block to be processed and the output area of the adjacent image block to be processed that has already been filtered, so that the current image block can reuse the filtering data of the overlapping portion as part of the filtering data it finally outputs.
For example, suppose the width of the image block to be processed a1 is TW and its height is TH. After the left image block a1 in fig. 10 has been filtered, the filtering data of its output region S2 is output; the width of this output region S2 is TW + 2 × R2 and its height is TH + 2 × R2, and the width of the overlapping portion C2 is 2 × R2 and its height is TH + 2 × R2. The overlapping portion C2 therefore corresponds to a horizontal offset of 2 × R2 with respect to the output region S2 of the right image block a1, so the calculation area S3 (the filled portion in fig. 10) that actually needs to be calculated within the output region S2 of the right image block a1 can be determined quickly from this offset of 2 × R2.
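The following sketch (illustrative only; rectangles are (left, top, right, bottom) and R2 denotes the total sub-radius of the later layers) computes an output region and the calculation region left over once the already-filtered neighbours' outputs are reused:

```python
def output_region(block_rect, r2):
    """Output region S2: the image block expanded on every side by R2."""
    left, top, right, bottom = block_rect
    return (left - r2, top - r2, right + r2, bottom + r2)

def calculation_region(own_output, left_output=None, top_output=None):
    """Calculation region S3: the part of the output region not already produced
    by the already-filtered neighbours on the left and/or above."""
    left, top, right, bottom = own_output
    if left_output is not None:
        left = max(left, left_output[2])  # skip the columns the left neighbour output
    if top_output is not None:
        top = max(top, top_output[3])     # skip the rows the upper neighbour output
    return (left, top, right, bottom)

# Fig. 10 example: TW = TH = 8, R2 = 2; adjacent output regions overlap over 2*R2 columns.
left_blk, right_blk = (0, 0, 8, 8), (8, 0, 16, 8)
s2_left, s2_right = output_region(left_blk, 2), output_region(right_blk, 2)
assert calculation_region(s2_right, left_output=s2_left) == (10, -2, 18, 10)
```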
Of course, it is understood that when the image to be processed cannot be divided into an integer number of image blocks a1 of the preset size, some image blocks may have a size other than TW × TH. In that case, the overlapping area C1 in which the input can be reused and the overlapping portion C2 in which the output can be reused are determined from the actual width and height of the adjacent image blocks a1, which guarantees the correctness of the reuse.
The positions of the to-be-processed image blocks a1 in the to-be-processed image are different, and the positions and the numbers of the adjacent to-be-processed image blocks a1 which are not subjected to filtering processing are also different, but the principles of determining the calculation areas are basically the same, and are not described herein again.
In addition, when all the image blocks of the image to be processed are filtered, they may be processed row by row, from left to right within each row. For example, referring to fig. 7 again, the image to be processed is divided into 9 image blocks arranged 3 × 3; when filtering is performed, the image block at the upper left corner may be processed first, then the row is processed towards the right until the first row is finished, after which the second row is processed starting from its leftmost block. Of course, the image blocks could also be processed column by column from top to bottom; this is not limited here. In this embodiment, processing the image blocks row by row and from left to right within each row is taken as an example.
Accordingly, the processor 30 may obtain the position information of the current image block to be processed in order to determine which adjacent image blocks have already been filtered, so that the filtering data of the overlapping portion can be reused. The position information includes, for example, at least one of the upper edge, the left edge and the middle position of the image to be processed. When the current image block is at the upper left corner, its position information includes both the upper edge and the left edge, i.e., it lies on the upper edge and on the left edge at the same time. When its position information includes only the upper edge, it lies on the upper edge but not on the left edge; when its position information includes only the left edge, it lies on the left edge but not on the upper edge; and when its position information includes only the middle position, it lies neither on the upper edge nor on the left edge.
Therefore, when the calculation area is determined, if the position information includes an upper edge, it indicates that there are adjacent to-be-processed image blocks on the left side, the right side, and the lower side of the current to-be-processed image block. Because the filtering processing sequence of the image blocks to be processed is from left to right and line by line, only the image blocks to be processed on the left side of the current image blocks to be processed have been subjected to filtering processing. Therefore, the processor 30 may determine the calculation area according to the output area of the current image block to be processed and the output area of the image block to be processed adjacent to the left side of the current image block to be processed, for example, an area of the output area of the current image block to be processed, which is located outside the output area of the image block to be processed adjacent to the left side of the current image block to be processed, is used as the calculation area.
And if the position information comprises a left edge, the position information indicates that adjacent to-be-processed image blocks exist on the upper side, the right side and the lower side of the current to-be-processed image block. Because the filtering processing sequence of the image blocks to be processed is from left to right and line by line, only the image blocks to be processed on the upper side of the current image blocks to be processed have completed the filtering processing. Therefore, the processor 30 may determine the calculation area according to the output area of the current image block to be processed and the output area of the image block to be processed adjacent to the upper side of the current image block to be processed, for example, an area of the output area of the current image block to be processed, which is located outside the output area of the image block to be processed adjacent to the upper side of the current image block to be processed, is used as the calculation area.
If the position information comprises a left edge and an upper edge, the current image block to be processed is positioned at the upper left corner of the image to be processed, so that the current image block to be processed is the first image block to be subjected to filtering processing, and at the moment, the current image block to be processed has no reusable filtering data. Therefore, the processor 30 directly uses the output area of the current image block to be processed as the calculation area.
And if the position information comprises the middle position, the image blocks to be processed adjacent to each other exist on the upper side, the left side, the right side and the lower side of the current image block to be processed. The filtering processing sequence of the image blocks to be processed is from left to right and line by line, so that the image blocks to be processed on the upper side and the left side of the current image block to be processed are already subjected to filtering processing. Therefore, the processor 30 may determine the calculation area according to the output area of the current image block to be processed and the output areas of the two image blocks to be processed respectively adjacent to the upper side and the left side of the current image block to be processed, for example, an area of the output area of the current image block to be processed, which is located outside the output area of the image block to be processed adjacent to the upper side of the current image block to be processed and the output area of the image block to be processed adjacent to the left side, is used as the calculation area.
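Putting the four position cases together, a minimal sketch (hypothetical names, reusing calculation_region from the sketch above) could look like this:

```python
def calc_region_by_position(own_output, at_top, at_left,
                            left_output=None, top_output=None):
    """Pick the calculation region according to the block's position, assuming
    blocks are filtered row by row and left to right within each row."""
    if at_top and at_left:  # upper-left corner: nothing has been filtered yet
        return own_output
    if at_top:              # upper edge: only the left neighbour is done
        return calculation_region(own_output, left_output=left_output)
    if at_left:             # left edge: only the upper neighbour is done
        return calculation_region(own_output, top_output=top_output)
    # middle position: both the upper and the left neighbours are done
    return calculation_region(own_output, left_output=left_output, top_output=top_output)
```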
Then, after the calculation area has been determined, the first filtering area of the current image block to be processed can be re-determined from the sub-filtering radius of the current filtering layer and the calculation area, and after the re-determined filtering area has been filtered, the filtering data of the calculation area can be output.
In this way, the calculation area of an adjacent, not-yet-filtered image block is determined from the output area of an already-filtered image block, so that the not-yet-filtered block can reuse the filtering data of the overlapping portion of the already-filtered block's output area and only has to compute the filtering data of its own calculation area, which reduces the computation required for filtering and improves filtering efficiency.
Referring to fig. 2, 3 and 11, in some embodiments, the filtering radius includes a fifth filtering radius and a sixth filtering radius, and the image processing method further includes the following steps:
018: filtering the image data corresponding to the re-determined first filtering area to obtain first filtering data;
019: acquiring second filtering data of a superposed part which is superposed with the output area of the current image block to be processed in the output area of the image block to be processed adjacent to the current image block to be processed according to the position information of the current image block to be processed;
020: and outputting the filtering data of the output area of the current image block to be processed according to the first filtering data and the second filtering data.
In some embodiments, the image processing apparatus 10 further includes a filtering module 18, a third obtaining module 19, and an output module 20. The filtering module 18, the third obtaining module 19 and the output module 20 are respectively used for executing step 018, step 019 and step 020. That is, the filtering module 18 is configured to perform filtering processing on the image data corresponding to the redetermined first filtering area to obtain first filtering data; the third obtaining module 19 is configured to obtain, according to the position information of the current image block to be processed, second filtering data of a portion, which is overlapped with the output area of the current image block to be processed, of the output area of the image block to be processed adjacent to the current image block to be processed; the output module 20 is configured to output the filtered data of the output area of the current image block to be processed according to the first filtered data and the second filtered data.
In some embodiments, the processor 30 is further configured to perform filtering processing on the image data corresponding to the re-determined first filtering region to obtain first filtering data; acquiring second filtering data of a superposed part which is superposed with the output area of the current image block to be processed in the output area of the image block to be processed adjacent to the current image block to be processed according to the position information of the current image block to be processed; and outputting the filtering data of the output area of the current image block to be processed according to the first filtering data and the second filtering data. That is, step 018, step 019 and step 020 can be implemented by processor 30.
Specifically, after filtering the current to-be-processed image block, filtering data of an output area of the current to-be-processed image block needs to be output, where the output area includes a calculation area and a portion where the output area of the current to-be-processed image block overlaps with an output area of an adjacent to-be-processed image block that has undergone filtering.
Therefore, after filtering the re-determined filtering area, the processor 30 obtains the first filtering data of the calculation area, and then directly obtains the second filtering data of the overlapped portion from the adjacent, already-filtered image block to be processed, where this overlapped portion is the part of the output area of the adjacent, already-filtered image block that lies inside the output area of the current image block to be processed. Together these yield the filtering data of the output area of the current image block to be processed.
Finally, the processor 30 outputs the first filtering data and the second filtering data as the filtering data of the output area of the current image block to be processed.
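As an illustrative sketch of this output reuse (hypothetical names; the cache stands in for the transfer memory in the memory 60):

```python
output_cache = {}  # filtering data of overlapped output portions, kept for reuse

def assemble_block_output(calc_rect, first_filter_data, reused_rect=None):
    """Output of one block = newly computed data of its calculation region plus,
    if present, the second filtering data already produced by a filtered neighbour."""
    pieces = {calc_rect: first_filter_data}
    if reused_rect is not None:
        pieces[reused_rect] = output_cache.pop(reused_rect)
    return pieces

def keep_for_neighbour(rect, filter_data):
    """After a block finishes, store the filtering data of the part of its output
    region that a not-yet-filtered neighbour's output region overlaps."""
    output_cache[rect] = filter_data
```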
In other embodiments, after the current to-be-processed image block completes the filtering process and outputs the filtered data of the output area of the current to-be-processed image block, the to-be-processed image block (hereinafter referred to as a target image block) adjacent to the current to-be-processed image block but not subjected to the filtering process may be determined according to the position information of the current to-be-processed image block.
For example, the position information may include at least one of an upper edge, a left edge, and a middle position, and since a filtering processing order of the to-be-processed image blocks is preset (for example, the to-be-processed image blocks are filtered line by line from top to bottom, and filtering processing is performed one by one from left to right when each line of the to-be-processed image blocks is processed), the to-be-processed image blocks that have been filtered and the to-be-processed image blocks that have not been filtered in one or more adjacent to the current to-be-processed image blocks may be quickly determined according to the position information. If the position information includes the middle position, it may be determined that two to-be-processed image blocks adjacent to the upper side and the left side of the current to-be-processed image block have been subjected to filtering processing, and two to-be-processed image blocks adjacent to the lower side and the right side of the current to-be-processed image block have not been subjected to filtering processing.
Therefore, the filtering data of the portion where the output area of the current image block to be processed overlaps the output area of the target image block is stored in the transfer memory set aside in the memory 60, so that after the target image block has been filtered, this filtering data together with the filtering data of the target image block's calculation area is used as the filtering data of the target image block's output area, thereby reusing the output data.
Referring to fig. 2, fig. 3 and fig. 12, in some embodiments, the image processing method further includes:
021: and re-determining the filtering radius according to the preset down-sampling parameters, and performing filtering processing on the image blocks to be processed.
In some embodiments, the image processing apparatus 10 further comprises a sixth determining module 21. The sixth determining module 21 is configured to execute step 021. That is, the sixth determining module 21 is configured to re-determine the filtering radius according to the preset downsampling parameter, and perform filtering processing on the image block to be processed.
In some embodiments, the processor 30 is further configured to re-determine the filtering radius according to a preset downsampling parameter, and perform filtering processing on the image block to be processed. That is, step 021 may be implemented by processor 30.
Specifically, when the image blocks to be processed are large, the processor 30 may, to further improve filtering efficiency, perform the filtering according to a preset downsampling parameter, i.e., process the pixels of the image block at an interval of a predetermined number of pixels. For example, when the downsampling parameter is 1/2, the pixels of the image block are processed with an interval of 1 pixel during filtering, so the number of pixels to be filtered is reduced to 1/2 and the amount of filtering is reduced.
Then, after the filtering of the image block to be processed is completed, the filtered image block can be upsampled, i.e., the pixels that were not filtered are interpolated from the pixels that were filtered, so that their pixel values are obtained quickly. This reduces the amount of filtering processing while preserving the filtering effect.
In order to ensure that every pixel can still be filtered normally when filtering is performed according to the downsampling parameter, the filtering radius of the current filtering layer needs to be re-determined according to the downsampling parameter. For example, when the downsampling parameter is 1/2, the sub-filtering radius of the current filtering layer may be enlarged 2 times, and when the downsampling parameter is 1/3, it may be enlarged 3 times, after which the filtering radius of the current filtering layer is re-determined. In this way, every pixel can still be filtered normally when filtering is performed according to the downsampling parameter.
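A small sketch of this adjustment (an illustration of the rule stated above, with hypothetical names):

```python
def downsampled_setup(sub_radius, downsample):
    """For a downsampling parameter of 1/n, process every n-th pixel and enlarge
    the sub-filtering radius n times so that filtering still covers every pixel."""
    stride = round(1 / downsample)  # 1/2 -> every 2nd pixel, 1/3 -> every 3rd pixel
    return stride, sub_radius * stride

assert downsampled_setup(2, 1 / 2) == (2, 4)  # radius enlarged 2 times
```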
In other embodiments, when the image to be processed is small and difficult to filter directly, the processor 30 may upsample the image to be processed to improve the filtering effect; in that case the filtering radius is re-determined according to a preset upsampling parameter and the image block to be processed is then filtered.
For example, an upsampling parameter of 2 means that each pixel is acquired twice during filtering, so the number of pixels taking part in the filtering is increased 2 times and the filtering effect is improved. In order to ensure that every pixel can still be filtered normally when filtering is performed according to the upsampling parameter, the filtering radius of the current filtering layer needs to be re-determined according to the upsampling parameter. For example, when the upsampling parameter is 2, the sub-filtering radius of the current filtering layer may be decreased by the upsampling parameter/2 + 1 (i.e., 2), and when the upsampling parameter is 4, it may be decreased by the upsampling parameter/2 + 1 (i.e., 3), after which the filtering radius of the current filtering layer is re-determined. In this way, every pixel can still be filtered normally when filtering is performed according to the upsampling parameter.
Referring to fig. 13, an embodiment of the present disclosure provides a non-volatile computer-readable storage medium 300 storing a computer program 302. When the computer program 302 is executed by one or more processors 30, the processors 30 can execute the image processing method of any of the above embodiments.
For example, referring to fig. 1, the computer program 302, when executed by the one or more processors 30, causes the processors 30 to perform the steps of:
011: determining a filtering area of each image block to be processed in the image to be processed according to the filtering radius of the current filtering layer;
012: acquiring an overlapping area between the filtering areas of adjacent image blocks to be processed; and
013: acquiring first image data and second image data, wherein the first image data comprises image data of the part of the filtering area of an image block to be processed that lies outside the overlapping area, the second image data comprises image data of the overlapping area of that image block, and two adjacent image blocks to be processed corresponding to the same overlapping area multiplex the second image data when the filtering processing is performed.
For another example, referring to fig. 7, when the computer program 302 is executed by the one or more processors 30, the processors 30 may further perform the following steps:
014: and determining the filtering radius of the current filtering layer according to the preset sub-filtering radius of the current filtering layer and the preset sub-filtering radius of the filtering layer behind the current filtering layer.
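The exact relation between the per-layer sub-filtering radii and the filtering radius of the current layer is not spelled out in this passage; one plausible reading, sketched here purely as an assumption, is that the current layer's radius accumulates its own sub-filtering radius and those of the layers that follow it, so that every later layer still finds valid data at the tile border:

```python
def layer_filter_radius(sub_radii, layer):
    """Assumed reading of step 014: the filtering radius of the current
    layer is the sum of its own sub-filtering radius and the sub-filtering
    radii of all the filtering layers after it."""
    return sum(sub_radii[layer:])

# e.g. per-layer sub-filtering radii (2, 1, 1) -> radii 4, 2, 1 for layers 0, 1, 2
```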
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more program modules for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (11)

1. An image processing method, characterized by comprising:
determining a filtering area of each image block to be processed in the image to be processed according to the filtering radius of the current filtering layer;
acquiring an overlapping area between a first filtering area of a current image block to be processed and a second filtering area of an image block to be processed adjacent to the current image block to be processed; and
acquiring first image data and second image data, wherein the first image data comprises image data of an area in the first filtering area and outside the overlapping area, the second image data comprises image data of the overlapping area, and the second image data is used for filtering the adjacent to-be-processed image blocks.
2. The image processing method according to claim 1, wherein the filter layer includes a plurality of layers, the image to be processed is sequentially subjected to filter processing by the plurality of filter layers, and the image processing method further comprises:
and determining the filtering radius of the current filtering layer according to the preset sub-filtering radius of the current filtering layer and the preset sub-filtering radius of the filtering layer behind the current filtering layer.
3. The image processing method according to claim 2, further comprising:
determining an output region of the filtering region of the current filtering layer according to the sub-filtering radius of the filtering layer after the current filtering layer;
determining a calculation area of the current image block to be processed according to the position information and the output area of the current image block to be processed and the output area of the image block to be processed which is adjacent to the current image block to be processed and has finished filtering processing;
re-determining the first filtering area according to the sub-filtering radius of the current filtering layer and the calculation area;
the acquiring of the first image data and the second image data includes:
and acquiring image data corresponding to the re-determined first filtering area in the set of the first image data and the second image data.
4. The image processing method according to claim 3, wherein the position information includes at least one of a first edge, a second edge, and a middle position of the image to be processed, and the determining the calculation region of the current image block to be processed according to the position information and the output region of the current image block to be processed and the output region of the image block to be processed adjacent to the current image block to be processed, which has been subjected to the filtering process, includes:
when the position information comprises the first edge, determining the calculation area according to the output area of the current image block to be processed and the output area of the image block to be processed adjacent to the first edge of the current image block to be processed;
when the position information comprises the second edge, determining the calculation area according to the output area of the current image block to be processed and the output area of the image block to be processed adjacent to the second edge of the current image block to be processed;
when the position information comprises the first edge and the second edge, determining the calculation area according to the output area of the current image block to be processed;
and when the position information is the middle position, determining the calculation area according to the output area of the current image block to be processed and the output areas of the two image blocks to be processed which are respectively adjacent to the first edge and the second edge of the current image block to be processed.
5. The image processing method according to claim 3, further comprising:
filtering the image data corresponding to the re-determined first filtering area to obtain first filtering data;
acquiring second filtering data of a superposed part which is superposed with the output area of the current image block to be processed and is in the output area of the image block to be processed which is adjacent to the current image block to be processed and has finished filtering processing according to the position information of the current image block to be processed;
and outputting the filtering data of the output area of the current image block to be processed according to the first filtering data and the second filtering data.
6. The image processing method according to claim 1, wherein the filtering radius comprises a first filtering radius, a second filtering radius, a third filtering radius, and a fourth filtering radius, and the first filtering radius, the second filtering radius, the third filtering radius, and the fourth filtering radius are all the same; or the first filtering radius is the same as the third filtering radius, and the second filtering radius is the same as the fourth filtering radius; or the first filtering radius, the second filtering radius, the third filtering radius and the fourth filtering radius are different from each other.
7. The image processing method according to claim 1, further comprising:
and after the filtering processing of the image block to be processed is finished, storing the second image data into a memory.
8. The image processing method according to claim 1, further comprising:
and re-determining the filtering radius according to a preset down-sampling parameter, and filtering the image block to be processed.
9. An image processing apparatus characterized by comprising:
a first determining module, configured to determine a filtering area of each image block to be processed in the image to be processed according to the filtering radius of the current filtering layer;
a first acquisition module, configured to acquire an overlapping area between a first filtering area of a current image block to be processed and a second filtering area of an image block to be processed adjacent to the current image block to be processed; and
a second obtaining module, configured to obtain first image data and second image data, wherein the first image data comprises image data of an area in the first filtering area and outside the overlapping area, the second image data comprises image data of the overlapping area, and the second image data is used when the adjacent image block to be processed is filtered.
10. A terminal, characterized by comprising a processor, a storage unit and a processing unit, wherein the processor is configured to: determine a filtering area of each image block to be processed in an image to be processed according to the filtering radius of a current filtering layer; acquire an overlapping area between a first filtering area of a current image block to be processed and a second filtering area of an image block to be processed adjacent to the current image block to be processed; and acquire first image data and second image data, wherein the first image data comprises image data of an area in the first filtering area and outside the overlapping area, the second image data comprises image data of the overlapping area, and the second image data is used when the adjacent image block to be processed is filtered.
11. A non-transitory computer-readable storage medium comprising a computer program which, when executed by a processor, causes the processor to perform the image processing method of any one of claims 1 to 8.
CN202210126024.6A 2022-02-10 2022-02-10 Image processing method, device, terminal and readable storage medium Pending CN114519661A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210126024.6A CN114519661A (en) 2022-02-10 2022-02-10 Image processing method, device, terminal and readable storage medium
PCT/CN2022/139725 WO2023151385A1 (en) 2022-02-10 2022-12-16 Image processing method and device, terminal, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210126024.6A CN114519661A (en) 2022-02-10 2022-02-10 Image processing method, device, terminal and readable storage medium

Publications (1)

Publication Number Publication Date
CN114519661A true CN114519661A (en) 2022-05-20

Family

ID=81596717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210126024.6A Pending CN114519661A (en) 2022-02-10 2022-02-10 Image processing method, device, terminal and readable storage medium

Country Status (2)

Country Link
CN (1) CN114519661A (en)
WO (1) WO2023151385A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023151385A1 (en) * 2022-02-10 2023-08-17 Oppo广东移动通信有限公司 Image processing method and device, terminal, and readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968769B (en) * 2012-11-27 2015-07-22 宁波盈芯信息科技有限公司 Image consistency enhancing device
JP6992351B2 (en) * 2017-09-19 2022-01-13 富士通株式会社 Information processing equipment, information processing methods and information processing programs
CN108282597A (en) * 2018-01-05 2018-07-13 佛山市顺德区中山大学研究院 A kind of real-time target tracing system and method based on FPGA
CN109255771B (en) * 2018-08-22 2021-02-19 广州兴森快捷电路科技有限公司 Image filtering method and device
CN110148087B (en) * 2019-05-22 2023-08-04 湖南华凯数字科技有限公司 Image compression and reconstruction method based on sparse representation
CN114519661A (en) * 2022-02-10 2022-05-20 Oppo广东移动通信有限公司 Image processing method, device, terminal and readable storage medium

Also Published As

Publication number Publication date
WO2023151385A1 (en) 2023-08-17

Similar Documents

Publication Publication Date Title
CN110532955B (en) Example segmentation method and device based on feature attention and sub-upsampling
CN109961401B (en) Image correction method and storage medium for binocular camera
KR101617059B1 (en) Method and system for image resizing based on interpolation enhanced seam operations
CN109996023B (en) Image processing method and device
US9959600B2 (en) Motion image compensation method and device, display device
US10080007B2 (en) Hybrid tiling strategy for semi-global matching stereo hardware acceleration
CN108346131A (en) A kind of digital image scaling method, device and display equipment
CN111630560B (en) Method and system for correcting a distorted input image
US20150062171A1 (en) Method and device for providing a composition of multi image layers
CN114519661A (en) Image processing method, device, terminal and readable storage medium
EP4040376B1 (en) Picture processing method and apparatus, and electronic device and storage medium
CN115035128A (en) Image overlapping sliding window segmentation method and system based on FPGA
CN107220930A (en) Fish eye images processing method, computer installation and computer-readable recording medium
CN110689061B (en) Image processing method, device and system based on alignment feature pyramid network
GB2536754A (en) Graphics processing
CN113592720B (en) Image scaling processing method, device, equipment and storage medium
WO2023151386A1 (en) Data processing method and apparatus, and terminal and readable storage medium
EP1575298B1 (en) Data storage apparatus, data storage control apparatus, data storage control method, and data storage control program
CN110866875A (en) Image texture correction method and device
CN115797194A (en) Image denoising method, image denoising device, electronic device, storage medium, and program product
CN115393682A (en) Target detection method, target detection device, electronic device, and medium
CN110660013B (en) Semiconductor device, image recognition system, and image processing method
CN101739696A (en) System and method for rasterizing convex polygon
CN111709419A (en) Method, system and equipment for positioning banknote serial number and readable storage medium
JP4993355B2 (en) Occlusion processing method for free-viewpoint video generation with local region division

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination