WO2022228434A1 - Image processing method, apparatus, electronic device and readable storage medium - Google Patents

Image processing method, apparatus, electronic device and readable storage medium

Info

Publication number
WO2022228434A1
WO2022228434A1 · PCT/CN2022/089303
Authority
WO
WIPO (PCT)
Prior art keywords: image, target, image block, blocks, weight
Application number: PCT/CN2022/089303
Other languages: English (en), French (fr)
Inventor: 吴仆
Original Assignee: 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2022228434A1
Priority to US18/384,369 (published as US20240054620A1)

Classifications

    • G06T 5/73: Deblurring; Sharpening
    • G06T 7/11: Region-based segmentation
    • G06N 3/08: Learning methods (neural networks)
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/77: Retouching; Inpainting; Scratch removal
    • G06T 2207/10024: Color image
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; Image merging

Definitions

  • Embodiments of the present invention relate to the technical field of image processing, and in particular, to an image processing method, apparatus, electronic device, and readable storage medium.
  • the existing image splitting method is to simply split the image into several image blocks.
  • the split image blocks are input into the deep learning network.
  • the deep learning network needs to pad the image blocks with redundant information to fit the network's input size. After processing, the network outputs image blocks that are spliced into the target image. Because the network padded the blocks with redundant information, abnormal color spots appear at the borders of each image block in the spliced target image, making the boundaries obvious.
  • the purpose of the embodiments of the present application is to provide an image processing method, apparatus, electronic device, and readable storage medium, so as to solve the problem of obvious boundaries between image blocks in a target image existing in the prior art.
  • an embodiment of the present application provides an image processing method, the method includes: dividing an image to be processed into multiple image blocks, wherein each image block includes an effective image area and an extended area;
  • the target network model is used to process the image data of each of the image blocks to obtain each target image block; to extract the target effective image area in each of the target image blocks respectively; and to splice the target effective image areas to generate the target image.
  • an embodiment of the present application provides an image processing apparatus, the apparatus includes: a segmentation module for dividing an image to be processed into blocks to obtain a plurality of image blocks, wherein each image block includes an effective image area and an extended area; a processing module for processing the image data of each of the image blocks through the target network model to obtain each target image block; an extraction module for extracting the target effective image area in each of the target image blocks respectively; and a splicing module for splicing each of the target effective image areas to generate a target image.
  • embodiments of the present application provide an electronic device, the electronic device includes a processor, a memory, and a program or instruction stored on the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
  • an embodiment of the present application provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the method according to the first aspect are implemented.
  • an embodiment of the present application provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the method described in the first aspect.
  • since each image block contains an extended area, there is no need to pad redundant information to adapt to the target network model, so the operations of the target network model on the boundaries of each image block involve no redundant information. As a result, the target image generated from the image blocks processed by the target network model has no abnormal color spots at the boundary of each image block and no obvious boundaries between the image blocks.
  • FIG. 1 is a flowchart showing the steps of an image processing method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram showing an image processing process according to an embodiment of the present application.
  • FIG. 3 is a structural block diagram showing an image processing apparatus according to an embodiment of the present application.
  • FIG. 4 is a structural block diagram showing an electronic device according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram showing a hardware structure of an electronic device according to an embodiment of the present application.
  • the terms "first", "second" and the like in the description and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited; for example, the first object may be one or more than one.
  • "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
  • FIG. 1 a flowchart of steps of an image processing method according to an embodiment of the present application is shown.
  • Step 101 Divide the image to be processed into blocks to obtain multiple image blocks.
  • the image to be processed can be a RAW domain image.
  • the electronic equipment introduces a series of noises due to the influence of the real environment, which affects the quality of the generated target image.
  • the original meaning of RAW is unprocessed.
  • the RAW domain image is the original data that the image sensor converts the captured light source signal into a digital signal.
  • acquiring image information in the RAW domain retains the most original image information; therefore, image processing algorithms usually work on RAW domain images.
  • an image to be processed in the RAW domain is used as an example for description.
  • the image to be processed is divided into several image blocks for separate processing.
  • Each image block obtained after splitting is input into the target network model for processing.
  • Image block alignment rules are preset in the target network model, and the image block alignment rules include: the size of each image block, the arrangement of the image blocks, and the like.
  • the image to be processed may be split into image blocks according to the image block alignment rules preset in the target network model, so the image data of the split image blocks does not need to be padded with redundant boundary information after being input into the target network model.
  • Fig. 2 is a schematic diagram of an image processing process, in which 201 is the image to be processed in the RAW domain, 202 shows the image blocks stored in the memory corresponding to the target network model (the target network model reads the image data of the image blocks from this memory for deep learning), and 203 is the finally generated target image.
  • the image processing method of the embodiment of the present application needs to copy image data twice. The first copy is from the original RAW domain image into the memory corresponding to the target network model; the second copy moves the processed data out of that memory to generate the target image.
  • the blocking stage of the image to be processed corresponds to step 101; the first image data copying stage corresponds to step 102; step 103 is the stage of processing the image data based on the target network model; and step 104 covers the second copy of the image data and the generation of the target image based on the copied image data.
  • FIG. 2 is only an example; in an actual implementation, the image to be processed is not limited to being divided into 9 image blocks, and may also be divided into 4 or 16 image blocks, for example.
  • the splitting rules of image blocks in the image to be processed can be flexibly adjusted according to the preset image block alignment rules in the target network model.
  • the RAW domain image is divided into a plurality of image blocks, and each image block includes an effective image area and an extended area.
  • in 202, the effective image area is the area enclosed by the solid line, and the extended area is the area outside the effective image area enclosed by the dotted line.
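  • The block division described above can be sketched as follows. This is a minimal NumPy illustration, not the embodiment's implementation: the function and field names are hypothetical, and the extended area is taken as a fixed margin around each effective area, clipped at the image border.

```python
import numpy as np

def split_into_blocks(image, eff_h, eff_w, margin):
    """Split `image` so the effective areas tile it; each block also
    carries an extended margin around its effective area, clipped at
    the image border (illustrative sketch)."""
    H, W = image.shape[:2]
    blocks = []
    for top in range(0, H, eff_h):
        for left in range(0, W, eff_w):
            # extended block bounds: effective area plus margin, clipped
            t = max(top - margin, 0)
            l = max(left - margin, 0)
            b = min(top + eff_h + margin, H)
            r = min(left + eff_w + margin, W)
            blocks.append({
                "data": image[t:b, l:r],                      # extended block
                "eff": (top, left,
                        min(top + eff_h, H), min(left + eff_w, W)),
                "origin": (t, l),  # top-left of extended block in image coords
            })
    return blocks
```

With a 6×6 image, an effective area of 3×3, and a margin of 1, this yields four overlapping blocks whose effective areas exactly tile the image.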
  • Step 102 Process the image data of each image block through the target network model to obtain each target image block.
  • the image data of each image block can be copied to the memory corresponding to the target network model, and the image data can be obtained in the memory of the target network model.
  • the image data of the image block includes, but is not limited to, the pixel value of each pixel included in the image block.
  • the storage location of each image block in the memory can be determined according to the position of each image block in the image to be processed; each image block is stored at its corresponding location in memory, aligned along its boundaries.
  • the target network model can extract the image data of the image block from the memory multiple times for deep learning, and generate the target image block according to the processed image data.
  • the target network model can extract image data of one or more image blocks at a time, and the number of image blocks extracted by the target network model each time can be flexibly adjusted according to the computing power of the target network model.
  • Step 103 Extract target effective image areas in each target image block respectively.
  • for each image block, the image data of the image block is input into the target network model for processing to form the target image block, indicated by the dotted-line area in 202 of Figure 2.
  • the target effective image area needs to be extracted from the target image block.
  • the effective image area is the area enclosed by the solid line in 202.
  • the image data in the target effective image area is the image data output from the target network model, that is, the image data copied for the second time.
  • each target effective image area corresponds to an area enclosed by the solid line in 203.
  • Step 104 Splice each target effective image area to generate a target image.
  • through steps 101 to 103, the processing of the image to be processed based on the target network model has been completed, finally obtaining the target effective image area corresponding to each image block; each target effective image area is then spliced to generate the target image.
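  • Extracting each target effective image area and pasting it at its position can be sketched as follows. This is a minimal NumPy illustration with assumed names and tuple layout; the weighting of overlapping pixels described in the following paragraphs is omitted here for brevity.

```python
import numpy as np

def stitch_effective_areas(target_blocks, image_shape):
    """Paste each target effective image area back at its position.
    Each entry is (processed_block, origin, eff): `origin` is the
    top-left of the extended block and `eff` the effective area,
    both in image coordinates (illustrative sketch)."""
    canvas = np.zeros(image_shape, dtype=target_blocks[0][0].dtype)
    for block, (t0, l0), (top, left, bottom, right) in target_blocks:
        # cut the effective area out of the extended block, then paste it
        eff_area = block[top - t0:bottom - t0, left - l0:right - l0]
        canvas[top:bottom, left:right] = eff_area
    return canvas
```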
  • the target pixel value of each pixel point in the overlapping part can be determined by combining the weight of the corresponding image block and the pixel value of the pixel point.
  • the weight value of the pixel point can be flexibly determined based on the position of each pixel point in the overlapping part, and the target pixel value of each pixel point is determined according to the weight value and pixel value of the pixel point.
  • the image processing method provided by the embodiment of the present application divides the image to be processed into a plurality of image blocks each including an effective image area and an extended area. After each image block is input into the target network model, since each image block contains an extended area, no redundant information needs to be padded to adapt to the target network model, so the operations of the target network model on the boundary of each image block involve no redundant information. As a result, the target image generated from the image blocks processed by the target network model has no abnormal color spots at the boundary of each image block and no obvious boundaries between the image blocks.
  • the image data of each image block may also be copied to the memory corresponding to the target network model.
  • the method of copying the image data of each image block to the memory corresponding to the target network model includes the following steps:
  • Step 1 According to the position of each image block in the image to be processed, determine the storage position of the image data of each image block in the memory corresponding to the target network model;
  • for example, if an image to be processed is divided into nine image blocks in three rows and three columns, the storage location of the image block in the first row and first column in the memory is also the first row and first column.
  • Step 2 Store the image data of the central image block along the center of the central image block, and store the image data of each non-central image block along the target boundary of that image block.
  • the image blocks include a central image block and a plurality of non-central image blocks, and the overlapping boundary between the effective area boundary and the image block boundary in each image block is taken as the target boundary.
  • the image blocks can be placed in the memory along their respective boundaries, so that the image data of the image blocks can be aligned with the network boundary when the image data of the image block is subsequently input into the target network model.
  • the image data of the central image block is stored along the center of the central image block, and the image data of each non-central image block is stored along its target boundary; for example, it can be stored as follows:
  • taking a division into nine blocks as an example: A00 is the first image block, A01 the second, A02 the third, A10 the fourth, A11 the fifth, A12 the sixth, A20 the seventh, A21 the eighth, and A22 the ninth.
  • the image data of the first image block is stored along the upper border and the left border of the first image block in the first row and the first column;
  • the image data of the seventh image block is stored along the lower border and the left border of the seventh image block in the third row and the first column;
  • the image data of the ninth image block is stored along the lower border and the right border of the ninth image block in the third row and third column.
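  • The placement rule above (corner and edge blocks anchored to their target boundaries, the central block centered) can be sketched as follows. This is a NumPy illustration under the assumption of one uniform memory cell per grid position; all names are hypothetical.

```python
import numpy as np

def place_block(memory, cell, grid_pos, grid_shape, block):
    """Copy `block` into its memory cell, anchored along its target
    boundary: top-row blocks stick to the cell's upper border, bottom-row
    blocks to the lower border, and likewise for columns; the central
    block is centered (illustrative sketch)."""
    cell_h, cell_w = cell
    r, c = grid_pos
    rows, cols = grid_shape
    h, w = block.shape[:2]
    # anchor offsets inside the cell, per grid position
    top = 0 if r == 0 else (cell_h - h if r == rows - 1 else (cell_h - h) // 2)
    left = 0 if c == 0 else (cell_w - w if c == cols - 1 else (cell_w - w) // 2)
    memory[r * cell_h + top : r * cell_h + top + h,
           c * cell_w + left : c * cell_w + left + w] = block
    return top, left
```

For a 3×3 grid, the first block (A00) lands flush against the upper-left of its cell and the ninth block (A22) flush against the lower-right, matching the stated storage rule.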
  • the method of splicing each target effective image area to generate the target image includes the following steps:
  • Step 1 Determine the position of each target effective image area according to the position of each image block in the image to be processed
  • Step 2 For each pixel point in the overlapping part of any two adjacent target effective image areas to be spliced, determine the first weight and the second weight corresponding to the pixel point according to the distance between the pixel point and the centers of the two target effective image areas to be spliced;
  • the distance is inversely proportional to the first weight and the second weight.
  • Step 3 Determine the target pixel value corresponding to the pixel point according to the first weight and the second weight and the pixel value of the pixel point in the two target effective image areas to be spliced;
  • when determining the target pixel value corresponding to the pixel point according to the first weight, the second weight, and the pixel values of the pixel point in the two target effective image areas to be spliced, the following method can optionally be used:
  • calculate the first product of the first weight and the pixel value in the first target effective image area, and the second product of the second weight and the pixel value in the second target effective image area; the average value of the first product and the second product is determined as the target pixel value corresponding to the pixel point.
  • the first weight is the weight of the pixel in the first target effective image area
  • the second weight is the weight of the pixel in the second target effective image area
  • this optional way of determining the target pixel value corresponding to the pixel point requires only a small amount of calculation.
  • the above only enumerates one optional way to determine the target pixel value of the pixel based on the first weight and the second weight; the actual implementation process is not limited to this, and those skilled in the art can flexibly set how the target pixel value is determined according to actual needs.
  • Step 4 Adjust each pixel point of the overlapping part to the corresponding target pixel value, so as to complete the splicing of two adjacent target effective image areas to be spliced.
  • each splicing boundary in the spliced target image has no color spots and has a natural transition.
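  • The weighted blending of the overlapping part can be sketched as follows. This is a minimal NumPy illustration: the embodiment only states that the weights are inversely proportional to the distances from the two area centers and that the average of the two products is taken, so the normalization w1 + w2 = 2 (which keeps the averaged result in range) and all names are assumptions.

```python
import numpy as np

def blend_overlap(p1, p2, center1, center2, coords):
    """For each overlapping pixel, weight its two candidate values
    inversely to the pixel's distance from each area's center, then
    take the average of the two weighted products (illustrative
    sketch; weights normalized so w1 + w2 = 2)."""
    out = np.empty(len(coords), dtype=float)
    for i, (y, x) in enumerate(coords):
        d1 = np.hypot(y - center1[0], x - center1[1]) + 1e-9
        d2 = np.hypot(y - center2[0], x - center2[1]) + 1e-9
        # inverse-distance weights, normalized to sum to 2
        w1 = 2.0 / d1 / (1.0 / d1 + 1.0 / d2)
        w2 = 2.0 - w1
        out[i] = (w1 * p1[i] + w2 * p2[i]) / 2.0
    return out
```

A pixel equidistant from both centers gets the plain average of the two values, while a pixel near one center takes almost entirely that area's value, which is what produces the natural transition at the splicing boundary.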
  • the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method.
  • the image processing apparatus provided by the embodiments of the present application is described by taking an image processing apparatus executing an image processing method as an example.
  • FIG. 3 is a structural block diagram of an image processing apparatus implementing an embodiment of the present application.
  • the image processing apparatus 300 in this embodiment of the present application includes the following functional modules:
  • the segmentation module 301 is used for segmenting the image to be processed to obtain a plurality of image blocks, wherein each image block includes an effective image area and an extended area;
  • the processing module 302 is configured to process the image data of each of the image blocks through the target network model to obtain each of the target image blocks;
  • the extraction module 303 is used to extract the target effective image area in each of the target image blocks respectively;
  • the splicing module 304 is used for splicing each of the target effective image regions to generate a target image.
  • the device further includes:
  • a position determination module, configured to, before the image data of each of the image blocks is processed through the target network model to obtain each target image block, determine, according to the position of each image block in the image to be processed, the storage position of the image data of each image block in the memory corresponding to the target network model;
  • a data copying module, configured to store the image data of the central image block along the center of the central image block, and store the image data of each non-central image block along the target boundary in that image block, wherein the image blocks include a central image block and a plurality of non-central image blocks, and the overlapping boundary between the boundary of the effective area and the boundary of the image block in each image block is taken as the target boundary.
  • the data copying module is specifically configured to:
  • store the image data of the first image block along the upper border and the left border of the first image block in the first row and the first column;
  • the image data of the ninth image block is stored along the lower border and the right border of the ninth image block in the third row and third column.
  • the splicing module includes:
  • a first sub-module configured to determine the position of each of the target effective image areas according to the position of each of the image blocks in the to-be-processed image
  • a second sub-module, configured to, for each pixel in the overlapping portion of any two adjacent target effective image areas to be spliced, determine the first weight and the second weight corresponding to the pixel point according to the distance between the pixel point and the centers of the two target effective image areas to be spliced, wherein the distance is inversely proportional to the first weight and the second weight;
  • the third sub-module is configured to determine the target pixel corresponding to the pixel point according to the first weight and the second weight and the pixel value of the pixel point in the two target effective image areas to be spliced value;
  • the fourth sub-module is configured to adjust each pixel of the overlapping portion to the corresponding target pixel value, so as to complete the splicing of the two adjacent target effective image areas to be spliced.
  • the third submodule includes:
  • a first unit, configured to calculate the first product of the first weight and the first pixel value of the pixel in the first target effective image area to be spliced, wherein the first weight is the weight of the pixel in the first target effective image area;
  • a second unit, configured to calculate the second product of the second weight and the second pixel value of the pixel in the second target effective image area to be spliced, wherein the second weight is the weight of the pixel in the second target effective image area;
  • the third unit is configured to determine the average value of the first product and the second product as the target pixel value corresponding to the pixel point.
  • the image processing apparatus divides the image to be processed into a plurality of image blocks each including an effective image area and an extended area. After each image block is input into the target network model, since each image block includes an extended area, no redundant information needs to be padded to adapt to the target network model, so the operations of the target network model on the boundary of each image block involve no redundant information. As a result, the target image generated from the image blocks processed by the target network model has no abnormal color spots at the boundary of each image block and no obvious boundaries between the image blocks.
  • the image processing apparatus shown in FIG. 3 in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal.
  • the apparatus may be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
  • the non-mobile electronic device may be a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application impose no specific limitation.
  • the image processing apparatus shown in FIG. 3 in the embodiment of the present application may be an apparatus having an operating system.
  • the operating system may be an Android (Android) operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
  • the image processing apparatus shown in FIG. 3 provided in this embodiment of the present application can implement each process implemented by the method embodiments in FIG. 1 to FIG. 2 , and to avoid repetition, details are not described here.
  • an embodiment of the present application further provides an electronic device 400, including a processor 401, a memory 402, and a program or instruction stored in the memory 402 and executable on the processor 401. When the program or instruction is executed by the processor 401, each process of the above image processing method embodiments can be implemented with the same technical effect; to avoid repetition, details are not described here.
  • the electronic devices in the embodiments of the present application include the aforementioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 5 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 500 includes but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and other components.
  • the electronic device 500 may also include a power supply (such as a battery) for supplying power to the various components; the power supply may be logically connected to the processor 510 through a power management system, so as to implement functions such as charging, discharging, and power consumption management through the power management system.
  • the structure of the electronic device shown in FIG. 5 does not constitute a limitation on the electronic device.
  • the electronic device may include more or fewer components than shown, or combine some components, or arrange the components differently; details are not repeated here.
  • the processor 510 is configured to divide the image to be processed into blocks to obtain multiple image blocks, wherein each image block includes an effective image area and an extended area; process the image data of each image block through the target network model to obtain each target image block; extract the target effective image area in each of the target image blocks respectively; and splice each of the target effective image areas to generate a target image.
  • the electronic device divides the image to be processed into a plurality of image blocks each including an effective image area and an extended area. After each image block is input into the target network model, since each image block includes an extended area, there is no need to pad redundant information to adapt to the target network model, so the operations of the target network model on the boundaries of each image block involve no redundant information. As a result, the target image generated from the image blocks processed by the target network model has no abnormal color spots at the boundary of each image block and no obvious boundaries between the image blocks.
  • before processing the image data of each of the image blocks through the target network model to obtain each target image block, the processor 510 is further configured to:
  • the image data of the center image block is stored along the center of the center image block, and the image data of each image block is stored along the target boundary in each non-center image block, wherein each of the image blocks includes a center image block and a plurality of non-central image blocks, and the overlapping boundary between the effective area boundary and the image block boundary in each of the image blocks is regarded as the target boundary.
  • the processor 510 stores the image data of the central image block along the center of the central image block, and, when storing the image data of each non-central image block along its target boundary, is specifically configured to:
  • the image data of the ninth image block is stored along the lower border and the right border of the ninth image block in the third row and third column.
  • when splicing the target effective image areas to generate the target image, the processor 510 is specifically configured to: for each pixel in the overlapping part of any two adjacent target effective image areas to be spliced, determine a first weight and a second weight corresponding to the pixel according to the distances between the pixel and the centers of the two target effective image areas, wherein the distances are inversely proportional to the first weight and the second weight; determine the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced; and adjust each pixel of the overlapping part to its corresponding target pixel value, so as to complete the splicing of the two adjacent target effective image areas.
  • when determining the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced, the processor 510 is specifically configured to: determine the average of the first product (the first weight times the pixel value in the first area) and the second product (the second weight times the pixel value in the second area) as the target pixel value corresponding to the pixel.
  • the input unit 504 may include a graphics processing unit (Graphics Processing Unit, GPU) 5041 and a microphone 5042; the graphics processing unit 5041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode.
  • the display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 507 includes a touch panel 5071 and other input devices 5072 .
  • the touch panel 5071 is also called a touch screen.
  • the touch panel 5071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 5072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which are not described herein again.
  • Memory 509 may be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • the processor 510 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 510.
  • Embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium; when the program or instruction is executed by a processor, the processes of the foregoing image processing method embodiments are implemented, with the same technical effects.
  • the processor is the processor in the electronic device described in the foregoing embodiments.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
  • An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the above image processing method embodiments.
  • the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

An image processing method and apparatus, an electronic device, and a readable storage medium, belonging to the technical field of image processing. The method comprises: dividing an image to be processed into blocks to obtain a plurality of image blocks, wherein each image block comprises an effective image area and an extension area; processing the image data of each image block through a target network model to obtain target image blocks; extracting a target effective image area from each target image block; and splicing the target effective image areas to generate a target image.

Description

Image processing method and apparatus, electronic device, and readable storage medium
This application claims priority to Chinese Patent Application No. 202110484867.9, filed with the China National Intellectual Property Administration on April 30, 2021 and entitled "Image processing method and apparatus, electronic device, and readable storage medium", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present invention relate to the technical field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a readable storage medium.
Background
With the development of deep learning, many image processing algorithms process images in combination with a deep learning network, which places high computing-power requirements on the hardware involved. Although the processor in a current electronic device can process an entire high-resolution image in one pass, the processing speed is very slow. Some hardware in an electronic device, limited by its own capability, cannot process an entire image at once and can only split the image into several image blocks to be processed separately.
In the existing splitting approach, the image is simply split into several image blocks. The split image blocks are input into a deep learning network, which must pad the blocks with redundant information to fit the network; the padded blocks are processed and output, and the output blocks are spliced into the target image. Because the deep learning network pads the blocks with redundant information during processing, abnormal color patches appear at the block boundaries of the spliced target image, and the boundaries between blocks are clearly visible.
Summary
The purpose of the embodiments of this application is to provide an image processing method and apparatus, an electronic device, and a readable storage medium, so as to solve the prior-art problem that the boundaries between image blocks in the target image are clearly visible.
To solve the above technical problem, this application is implemented as follows:
In a first aspect, an embodiment of this application provides an image processing method, comprising: dividing an image to be processed into blocks to obtain a plurality of image blocks, wherein each image block comprises an effective image area and an extension area; processing the image data of each image block through the target network model to obtain target image blocks; extracting a target effective image area from each target image block; and splicing the target effective image areas to generate a target image.
In a second aspect, an embodiment of this application provides an image processing apparatus, comprising: a segmentation module configured to divide an image to be processed into blocks to obtain a plurality of image blocks, wherein each image block comprises an effective image area and an extension area; a processing module configured to process the image data of each image block through the target network model to obtain target image blocks; an extraction module configured to extract a target effective image area from each target image block; and a splicing module configured to splice the target effective image areas to generate a target image.
In a third aspect, an embodiment of this application provides an electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of this application provides a readable storage medium on which a program or instruction is stored, wherein the program or instruction, when executed by a processor, implements the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of this application provides a chip, the chip comprising a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instruction to implement the method according to the first aspect.
In the embodiments of this application, the image to be processed is divided into a plurality of image blocks each containing an effective image area and an extension area. After each image block is input into the target network model, since every block already contains an extension area, there is no need to pad redundant information into the target network model for adaptation, so the model's operations on the block boundaries involve no redundant information. Consequently, the target image generated from the image blocks processed by the target network model has no abnormal color patches at the block boundaries, and there are no visible boundaries between blocks.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the steps of an image processing method according to an embodiment of this application;
Fig. 2 is a schematic diagram of an image processing procedure according to an embodiment of this application;
Fig. 3 is a structural block diagram of an image processing apparatus according to an embodiment of this application;
Fig. 4 is a structural block diagram of an electronic device according to an embodiment of this application;
Fig. 5 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application fall within the protection scope of this application.
The terms "first", "second", and the like in the specification and claims of this application are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application can be implemented in orders other than those illustrated or described here; objects distinguished by "first", "second", and the like are usually of one class, and the number of objects is not limited—for example, there may be one or more first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiments of this application is described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.
Referring to Fig. 1, a flowchart of the steps of an image processing method according to an embodiment of this application is shown.
The image processing method of the embodiments of this application includes the following steps:
Step 101: divide the image to be processed into blocks to obtain a plurality of image blocks.
The image to be processed may be a RAW-domain image. During imaging, an electronic device introduces a series of noise under the influence of the real environment, which affects the quality of the generated target image. "RAW" originally means unprocessed; a RAW-domain image is the raw data obtained when the image sensor converts the captured light signal into a digital signal, and collecting image information in the RAW domain preserves the most original image information. Image processing algorithms therefore usually operate on RAW-domain images, and the embodiments of this application are described taking a RAW-domain image to be processed as an example.
Limited by the computing power of the image processing hardware, the entire image cannot be processed in one pass, so in the embodiments of this application the image to be processed is split into several image blocks that are processed separately. The split image blocks are input into the target network model for processing. Block alignment rules are preset in the target network model, including the size of each image block, the arrangement of the blocks, and so on. If the image is simply split into blocks and the resulting blocks do not match the block alignment rules preset in the target network model, the unaligned parts must be padded when the blocks are input into the model, and the padded content is regarded as boundary redundant information. When the model's operations on the block boundaries involve redundant information, the network boundaries of the processed blocks cannot truly align with the real image boundaries, so abnormal color patches appear at the block boundaries of the spliced target image and the boundaries between blocks are clearly visible.
In the embodiments of this application, the image to be processed can be split into image blocks according to the block alignment rules preset in the target network model, so that no boundary redundant information needs to be padded after the image data of the split blocks is input into the model.
Fig. 2 is a schematic diagram of the image processing procedure, where 201 in Fig. 2 is the RAW-domain image to be processed, 202 is the image blocks stored in the memory corresponding to the target network model (from which the model fetches block image data for deep learning), and 203 is the finally generated target image. As shown in Fig. 2, the image processing method of the embodiments of this application performs two image-data copies. The first copy is from the original RAW-domain image to the memory corresponding to the target network model; here one must consider from which position of the input image to start copying, how much to copy, and how to store the data in that memory. The second copy occurs after the model finishes processing: the output of the target network model is copied to finally generate the target image; here one must consider from which position of each target image block output by the model to start copying, how much to copy, and where to place it in the target image.
The specific flow of the image processing method of the embodiments of this application is described below with reference to Fig. 2. The flow can be roughly divided into four stages: the blocking stage of the image to be processed comprises step 101; the first image-data copy stage comprises step 102; step 103 is the stage of processing the image data based on the target network model; and step 104 is the stage of the second image-data copy and of generating the target image from the copied data. It should be noted that Fig. 2 is only an example; in actual implementation, the image to be processed is not limited to being split into 9 image blocks and may also be split into 4 blocks, 16 blocks, and so on. The splitting rule for the image blocks can be adjusted flexibly according to the block alignment rules preset in the target network model.
As shown by 201 in Fig. 2, the RAW-domain image, after being split, comprises a plurality of image blocks, and each image block includes an effective image area and an extension area, where the effective image area is the region enclosed by solid lines and the extension area is the remaining region enclosed by dashed lines outside the effective image area. When image data is input into the target network model, the image data of each block enclosed by dashed lines is copied into the memory corresponding to the model, and the input image blocks correspond to the dashed regions in 201. As shown by 202 in Fig. 2, although what actually needs to be processed is the image data in the solid-line regions, to guarantee image continuity the region inside the dashed lines but outside the solid lines is also filled with actually valid image data, so block-boundary alignment is achieved without padding redundant information.
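The partition described above (solid-line effective areas plus dashed-line extension context, clamped at the image border so that only real pixels are read) can be sketched as follows. This is a minimal illustration, not code from the patent; the 3x3 grid, the margin size, and the function name are assumptions for the sketch:

```python
def split_with_margins(height, width, rows=3, cols=3, margin=8):
    """Split an image plane into rows*cols effective areas, each read
    together with an extension margin clamped to the image border."""
    blocks = []
    for r in range(rows):
        for c in range(cols):
            # Effective (solid-line) area: a plain, non-overlapping tile.
            y0, y1 = r * height // rows, (r + 1) * height // rows
            x0, x1 = c * width // cols, (c + 1) * width // cols
            # Extended (dashed-line) read window: the effective area grown
            # by `margin` on each side, clamped so it stays inside the
            # image; real pixel data, not padding, fills the border context.
            ry0, ry1 = max(0, y0 - margin), min(height, y1 + margin)
            rx0, rx1 = max(0, x0 - margin), min(width, x1 + margin)
            blocks.append({"effective": (y0, y1, x0, x1),
                           "read": (ry0, ry1, rx0, rx1)})
    return blocks
```

Note that the effective areas tile the image exactly and without overlap; only the read windows overlap, which is what removes the need for redundant padding at block boundaries.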
Step 102: process the image data of each image block through the target network model to obtain target image blocks.
In actual implementation, the image data of each image block can be copied into the memory corresponding to the target network model, and the target network model fetches the image data from that memory.
The image data of an image block includes, but is not limited to, the pixel values of the pixels contained in the block. When copying an image block, its storage position in the memory can be determined according to its position in the image to be processed; each image block is stored at the corresponding position in the memory along its boundary.
The target network model may fetch the image data of the image blocks from the memory in multiple passes for deep learning and generate the target image blocks from the processed data. The model may fetch the data of one or more image blocks per pass, and the number of blocks fetched per pass can be adjusted flexibly according to the model's computing power.
Step 103: extract the target effective image area from each target image block.
For each image block, inputting its image into the target network model for processing forms the target image block indicated by the dashed region in 202 of Fig. 2. When the target image is finally spliced, the target effective image area must be extracted from each target image block. The target effective image area is the region enclosed by solid lines in 202; its image data is the image data output from the target network model, i.e., the data of the second copy, and the target effective image areas correspond to the solid-line regions in 203.
Step 104: splice the target effective image areas to generate the target image.
Through steps 101 to 103, the processing of the image to be processed based on the target network model has been completed, yielding the target effective image area corresponding to each image block; the target effective image areas are spliced to generate the target image.
The specific splicing manner of the target effective image areas can be set by those skilled in the art according to actual needs and is not specifically limited in the embodiments of this application. For example, for the overlapping part in the splicing, the target pixel value of each pixel of the overlapping part can be determined by combining the weight of the corresponding image block with the pixel values. As another example, for the overlapping part, the weight of each pixel can be determined flexibly based on its position within the overlapping part, and the target pixel value of each pixel can then be determined from the pixel's weight and pixel value.
In the image processing method provided by the embodiments of this application, the image to be processed is divided into a plurality of image blocks each containing an effective image area and an extension area. After each image block is input into the target network model, since every block contains an extension area, there is no need to pad redundant information into the model for adaptation, so the model's operations on the block boundaries involve no redundant information. Consequently, the target image generated from the blocks processed by the model has no abnormal color patches at the block boundaries, and there are no visible boundaries between blocks.
In an optional embodiment, before the image data of each image block is processed through the target network model to obtain the target image blocks, the image data of each image block may be copied into the memory corresponding to the target network model through the following steps:
Step 1: according to the position of each image block in the image to be processed, determine the storage position of the image data of each image block in the memory corresponding to the target network model.
For example, when the image to be processed is split into 9 image blocks in three rows and three columns, the block in the first row and first column is also stored at the first row and first column in the memory.
Step 2: store the image data of the central image block along the center of the central image block, and store the image data of each non-central image block along its target boundary.
The image blocks include one central image block and a plurality of non-central image blocks, and in each image block the overlapping boundary between the effective-area boundary and the image-block boundary is regarded as the target boundary.
This optional manner of copying the image data places each block in memory along its own boundary, which facilitates aligning the block with the network boundary when its image data is subsequently input into the target network model.
Optionally, when the image to be processed is divided into 9 image blocks arranged in three rows and three columns as shown by 201 in Fig. 2, storing the image data of the central image block along its center and storing the image data of each non-central image block along its target boundary can be done as follows.
In the image 201 to be processed in Fig. 2, A00 is the first image block, A01 the second, A02 the third, A10 the fourth, A11 the fifth, A12 the sixth, A20 the seventh, A21 the eighth, and A22 the ninth.
The image data of the first image block, in the first row and first column, is stored along its upper and left borders;
the image data of the second image block, in the first row and second column, is stored along its upper border;
the image data of the third image block, in the first row and third column, is stored along its upper and right borders;
the image data of the fourth image block, in the second row and first column, is stored along its left border;
the image data of the fifth image block, in the second row and second column, is stored along its center, where the fifth image block is the central image block;
the image data of the sixth image block, in the second row and third column, is stored along its right border;
the image data of the seventh image block, in the third row and first column, is stored along its lower and left borders;
the image data of the eighth image block, in the third row and second column, is stored along its lower border;
the image data of the ninth image block, in the third row and third column, is stored along its lower and right borders.
This optional manner of placing the image data of each block in memory along its own boundary gives high accuracy of alignment between the block boundaries and the network boundaries after the block data is input into the target network model.
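The border-anchored placement above can be sketched as offset arithmetic. The 3x3 layout and the border anchors are from the description; the fixed slot size, the tile and margin values, and the helper name below are illustrative assumptions:

```python
def slot_offset(r, c, rows=3, cols=3, tile=30, margin=8):
    """Return (dy, dx, h, w): the offset of a block's extended read window
    inside its fixed-size memory slot of (tile + 2*margin) per side,
    anchoring the data to the borders named in the text: the top row along
    the upper border, the bottom row along the lower border, the middle
    row along the center, and likewise for the columns."""
    slot = tile + 2 * margin

    def axis(i, n):
        # Edge blocks lack a margin on their clamped side, so they are
        # smaller than the slot along that axis.
        size = tile + margin if i in (0, n - 1) else tile + 2 * margin
        if i == 0:            # first row/column: anchor to that border
            return 0, size
        if i == n - 1:        # last row/column: anchor to the far border
            return slot - size, size
        return (slot - size) // 2, size   # interior: centered

    (dy, h), (dx, w) = axis(r, rows), axis(c, cols)
    return dy, dx, h, w
```

With these anchors, the real image boundary of every block coincides with the boundary the network expects, which is the alignment property the description relies on.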
In an optional embodiment, splicing the target effective image areas to generate the target image includes the following steps:
Step 1: according to the position of each image block in the image to be processed, determine the position of each target effective image area.
Step 2: for each pixel in the overlapping part of any two adjacent target effective image areas to be spliced, determine a first weight and a second weight corresponding to the pixel according to the distances between the pixel and the centers of the two target effective image areas,
where the distances are inversely proportional to the first weight and the second weight.
Step 3: determine the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced.
Optionally, when determining the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced, the following manner can be used:
calculate a first product of the first weight and a first pixel value of the pixel in the first target effective image area to be spliced; calculate a second product of the second weight and a second pixel value of the pixel in the second target effective image area to be spliced; and determine the average of the first product and the second product as the target pixel value corresponding to the pixel.
Here the first weight is the weight of the pixel in the first target effective image area, and the second weight is the weight of the pixel in the second target effective image area.
This optional manner of determining the target pixel value requires little computation.
The above merely lists one optional way of determining the target pixel value of a pixel based on the first weight and the second weight; actual implementation is not limited to this, and those skilled in the art can flexibly set the way the target pixel value is determined according to actual needs.
Step 4: adjust each pixel of the overlapping part to its corresponding target pixel value, so as to complete the splicing of the two adjacent target effective image areas to be spliced.
With this optional splicing manner, the splicing boundaries in the resulting target image have no color patches and transition naturally.
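The weighting in steps 2 and 3 can be sketched as below. The average-of-products formula is the one stated in the description; the exact inverse-distance weight function is not fixed there, so the choice `w1 = 2*d2/(d1+d2)` (a larger own distance gives a smaller weight, and the two weights average to 1 so overall brightness is preserved) is an assumption made for illustration:

```python
def blend_pixel(p1, p2, d1, d2):
    """Blend one overlapping pixel from two adjacent target effective
    image areas. d1/d2 are the pixel's distances to the two area centers;
    each weight is inversely proportional to its own distance, and the
    target value is the average of the weight*pixel products."""
    if d1 + d2 == 0:          # degenerate case: both centers at the pixel
        return (p1 + p2) / 2
    w1 = 2 * d2 / (d1 + d2)   # far from center 2 -> area 1 dominates
    w2 = 2 * d1 / (d1 + d2)
    first_product = w1 * p1
    second_product = w2 * p2
    return (first_product + second_product) / 2
```

On the seam midline (d1 == d2) this reduces to the plain average of the two pixel values, and at an area's center it returns that area's pixel unchanged, which is why the transition across the seam is smooth.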
It should be noted that the image processing method provided by the embodiments of this application may be performed by an image processing apparatus, or by a control module in the image processing apparatus for performing the image processing method. In the embodiments of this application, the image processing apparatus provided by the embodiments of this application is described taking the apparatus performing the image processing method as an example.
Fig. 3 is a structural block diagram of an image processing apparatus implementing an embodiment of this application.
The image processing apparatus 300 of the embodiments of this application includes the following functional modules:
a segmentation module 301 configured to divide the image to be processed into blocks to obtain a plurality of image blocks, wherein each image block includes an effective image area and an extension area;
a processing module 302 configured to process the image data of each image block through the target network model to obtain target image blocks;
an extraction module 303 configured to extract the target effective image area from each target image block;
a splicing module 304 configured to splice the target effective image areas to generate the target image.
Optionally, the apparatus further includes:
a position determination module configured to: before the processing module processes the image data of each image block through the target network model to obtain the target image blocks, determine, according to the position of each image block in the image to be processed, the storage position of the image data of each image block in the memory corresponding to the target network model;
a data copy module configured to store the image data of the central image block along the center of the central image block, and store the image data of each non-central image block along its target boundary, wherein the image blocks include one central image block and a plurality of non-central image blocks, and in each image block the overlapping boundary between the effective-area boundary and the image-block boundary is regarded as the target boundary.
Optionally, the data copy module is specifically configured to:
when the image to be processed is divided into 9 image blocks arranged in three rows and three columns, store the image data of the first image block, in the first row and first column, along its upper and left borders;
store the image data of the second image block, in the first row and second column, along its upper border;
store the image data of the third image block, in the first row and third column, along its upper and right borders;
store the image data of the fourth image block, in the second row and first column, along its left border;
store the image data of the fifth image block, in the second row and second column, along its center, where the fifth image block is the central image block;
store the image data of the sixth image block, in the second row and third column, along its right border;
store the image data of the seventh image block, in the third row and first column, along its lower and left borders;
store the image data of the eighth image block, in the third row and second column, along its lower border;
store the image data of the ninth image block, in the third row and third column, along its lower and right borders.
Optionally, the splicing module includes:
a first submodule configured to determine the position of each target effective image area according to the position of each image block in the image to be processed;
a second submodule configured to: for each pixel in the overlapping part of any two adjacent target effective image areas to be spliced, determine a first weight and a second weight corresponding to the pixel according to the distances between the pixel and the centers of the two target effective image areas, wherein the distances are inversely proportional to the first weight and the second weight;
a third submodule configured to determine the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced;
a fourth submodule configured to adjust each pixel of the overlapping part to its corresponding target pixel value, so as to complete the splicing of the two adjacent target effective image areas to be spliced.
Optionally, the third submodule includes:
a first unit configured to calculate a first product of the first weight and a first pixel value of the pixel in the first target effective image area to be spliced, wherein the first weight is the weight of the pixel in the first target effective image area;
a second unit configured to calculate a second product of the second weight and a second pixel value of the pixel in the second target effective image area to be spliced, wherein the second weight is the weight of the pixel in the second target effective image area;
a third unit configured to determine the average of the first product and the second product as the target pixel value corresponding to the pixel.
The image processing apparatus provided by the embodiments of this application divides the image to be processed into a plurality of image blocks each containing an effective image area and an extension area. After each block is input into the target network model, since every block contains an extension area, there is no need to pad redundant information into the model for adaptation, so the model's operations on the block boundaries involve no redundant information. Consequently, the target image generated from the processed blocks has no abnormal color patches at the block boundaries and no visible boundaries between blocks.
The image processing apparatus shown in Fig. 3 in the embodiments of this application may be an apparatus, or may be a component, integrated circuit, or chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, tablet computer, laptop computer, palmtop computer, in-vehicle electronic device, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), and the non-mobile electronic device may be a network attached storage (NAS), personal computer (PC), television (TV), teller machine, self-service machine, or the like; the embodiments of this application are not specifically limited thereto.
The image processing apparatus shown in Fig. 3 in the embodiments of this application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
The image processing apparatus shown in Fig. 3 provided by the embodiments of this application can implement the processes implemented by the method embodiments of Figs. 1 and 2; to avoid repetition, details are not repeated here.
Optionally, as shown in Fig. 4, an embodiment of this application further provides an electronic device 400, including a processor 401, a memory 402, and a program or instruction stored in the memory 402 and executable on the processor 401; when the program or instruction is executed by the processor 401, the processes of the above image processing method embodiments are implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of this application include the mobile electronic devices and non-mobile electronic devices described above.
Fig. 5 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of this application.
The electronic device 500 includes but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and other components. Those skilled in the art will understand that the electronic device 500 may further include a power supply (such as a battery) for supplying power to the components; the power supply may be logically connected to the processor 510 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system. The structure of the electronic device shown in Fig. 5 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently, which will not be repeated here.
The processor 510 is configured to divide the image to be processed into blocks to obtain a plurality of image blocks, wherein each image block includes an effective image area and an extension area; process the image data of each image block through the target network model to obtain target image blocks; extract the target effective image area from each target image block; and splice the target effective image areas to generate the target image.
The electronic device provided by the embodiments of this application divides the image to be processed into a plurality of image blocks each containing an effective image area and an extension area. After each block is input into the target network model, since every block contains an extension area, there is no need to pad redundant information into the model for adaptation, so the model's operations on the block boundaries involve no redundant information. Consequently, the target image generated from the processed blocks has no abnormal color patches at the block boundaries and no visible boundaries between blocks.
Optionally, before the image data of each image block is processed through the target network model to obtain the target image blocks, the processor 510 is further configured to:
determine, according to the position of each image block in the image to be processed, the storage position of the image data of each image block in the memory corresponding to the target network model;
store the image data of the central image block along the center of the central image block, and store the image data of each non-central image block along its target boundary, wherein the image blocks include one central image block and a plurality of non-central image blocks, and in each image block the overlapping boundary between the effective-area boundary and the image-block boundary is regarded as the target boundary.
Optionally, when the image to be processed is divided into 9 image blocks arranged in three rows and three columns, in storing the image data of the central image block along its center and the image data of each non-central image block along its target boundary, the processor 510 is specifically configured to:
store the image data of the first image block, in the first row and first column, along its upper and left borders;
store the image data of the second image block, in the first row and second column, along its upper border;
store the image data of the third image block, in the first row and third column, along its upper and right borders;
store the image data of the fourth image block, in the second row and first column, along its left border;
store the image data of the fifth image block, in the second row and second column, along its center, where the fifth image block is the central image block;
store the image data of the sixth image block, in the second row and third column, along its right border;
store the image data of the seventh image block, in the third row and first column, along its lower and left borders;
store the image data of the eighth image block, in the third row and second column, along its lower border;
store the image data of the ninth image block, in the third row and third column, along its lower and right borders.
Optionally, when splicing the target effective image areas to generate the target image, the processor 510 is specifically configured to:
determine the position of each target effective image area according to the position of each image block in the image to be processed;
for each pixel in the overlapping part of any two adjacent target effective image areas to be spliced, determine a first weight and a second weight corresponding to the pixel according to the distances between the pixel and the centers of the two target effective image areas, wherein the distances are inversely proportional to the first weight and the second weight;
determine the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced;
adjust each pixel of the overlapping part to its corresponding target pixel value, so as to complete the splicing of the two adjacent target effective image areas to be spliced.
Optionally, when determining the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced, the processor 510 is specifically configured to:
calculate a first product of the first weight and a first pixel value of the pixel in the first target effective image area to be spliced, wherein the first weight is the weight of the pixel in the first target effective image area;
calculate a second product of the second weight and a second pixel value of the pixel in the second target effective image area to be spliced, wherein the second weight is the weight of the pixel in the second target effective image area;
determine the average of the first product and the second product as the target pixel value corresponding to the pixel.
It should be understood that, in the embodiments of this application, the input unit 504 may include a graphics processing unit (Graphics Processing Unit, GPU) 5041 and a microphone 5042; the graphics processing unit 5041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071 is also called a touch screen. The touch panel 5071 may include two parts: a touch detection apparatus and a touch controller. The other input devices 5072 may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which will not be repeated here. The memory 509 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 510 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 510.
An embodiment of this application further provides a readable storage medium on which a program or instruction is stored; when the program or instruction is executed by a processor, the processes of the above image processing method embodiments are implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
An embodiment of this application further provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instruction to implement the processes of the above image processing method embodiments, with the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of this application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the implementations of this application is not limited to performing the functions in the order shown or discussed; it may also include performing the functions in a substantially simultaneous manner or in reverse order according to the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk), including several instructions to cause a terminal (which may be a mobile phone, computer, server, network device, or the like) to perform the methods described in the embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the specific implementations described above, which are merely illustrative rather than restrictive. Inspired by this application, those of ordinary skill in the art can devise many further forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (13)

  1. An image processing method, comprising:
    dividing an image to be processed into blocks to obtain a plurality of image blocks, wherein each image block comprises an effective image area and an extension area;
    processing the image data of each of the image blocks through the target network model to obtain target image blocks;
    extracting a target effective image area from each of the target image blocks; and
    splicing the target effective image areas to generate a target image.
  2. The method according to claim 1, wherein before the step of processing the image data of each of the image blocks through the target network model to obtain the target image blocks, the method further comprises:
    determining, according to the position of each of the image blocks in the image to be processed, the storage position of the image data of each of the image blocks in the memory corresponding to the target network model; and
    storing the image data of the central image block along the center of the central image block, and storing the image data of each of the image blocks along the target boundary in each non-central image block, wherein the image blocks comprise one central image block and a plurality of non-central image blocks, and the overlapping boundary between the effective-area boundary and the image-block boundary in each of the image blocks is regarded as the target boundary.
  3. The method according to claim 2, wherein when the image to be processed is divided into 9 image blocks arranged in three rows and three columns, the step of storing the image data of the central image block along the center of the central image block and storing the image data of each of the image blocks along the target boundary in each non-central image block comprises:
    storing the image data of the first image block, in the first row and first column, along the upper border and the left border of the first image block;
    storing the image data of the second image block, in the first row and second column, along the upper border of the second image block;
    storing the image data of the third image block, in the first row and third column, along the upper border and the right border of the third image block;
    storing the image data of the fourth image block, in the second row and first column, along the left border of the fourth image block;
    storing the image data of the fifth image block, in the second row and second column, along the center of the fifth image block, wherein the fifth image block is the central image block;
    storing the image data of the sixth image block, in the second row and third column, along the right border of the sixth image block;
    storing the image data of the seventh image block, in the third row and first column, along the lower border and the left border of the seventh image block;
    storing the image data of the eighth image block, in the third row and second column, along the lower border of the eighth image block; and
    storing the image data of the ninth image block, in the third row and third column, along the lower border and the right border of the ninth image block.
  4. The method according to claim 1, wherein the step of splicing the target effective image areas to generate the target image comprises:
    determining the position of each of the target effective image areas according to the position of each of the image blocks in the image to be processed;
    for each pixel in the overlapping part of any two adjacent target effective image areas to be spliced, determining a first weight and a second weight corresponding to the pixel according to the distances between the pixel and the centers of the two target effective image areas to be spliced, wherein the distances are inversely proportional to the first weight and the second weight;
    determining the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced; and
    adjusting each pixel of the overlapping part to the corresponding target pixel value, so as to complete the splicing of the two adjacent target effective image areas to be spliced.
  5. The method according to claim 4, wherein the step of determining the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced comprises:
    calculating a first product of the first weight and a first pixel value of the pixel in the first target effective image area to be spliced, wherein the first weight is the weight of the pixel in the first target effective image area;
    calculating a second product of the second weight and a second pixel value of the pixel in the second target effective image area to be spliced, wherein the second weight is the weight of the pixel in the second target effective image area; and
    determining the average of the first product and the second product as the target pixel value corresponding to the pixel.
  6. An image processing apparatus, the apparatus comprising:
    a segmentation module configured to divide an image to be processed into blocks to obtain a plurality of image blocks, wherein each image block comprises an effective image area and an extension area;
    a processing module configured to process the image data of each of the image blocks through the target network model to obtain target image blocks;
    an extraction module configured to extract a target effective image area from each of the target image blocks; and
    a splicing module configured to splice the target effective image areas to generate a target image.
  7. The apparatus according to claim 6, wherein the apparatus further comprises:
    a position determination module configured to: before the processing module processes the image data of each of the image blocks through the target network model to obtain the target image blocks, determine, according to the position of each of the image blocks in the image to be processed, the storage position of the image data of each of the image blocks in the memory corresponding to the target network model; and
    a data copy module configured to store the image data of the central image block along the center of the central image block, and store the image data of each of the image blocks along the target boundary in each non-central image block, wherein the image blocks comprise one central image block and a plurality of non-central image blocks, and the overlapping boundary between the effective-area boundary and the image-block boundary in each of the image blocks is regarded as the target boundary.
  8. The apparatus according to claim 7, wherein the data copy module is specifically configured to:
    when the image to be processed is divided into 9 image blocks arranged in three rows and three columns, store the image data of the first image block, in the first row and first column, along the upper border and the left border of the first image block;
    store the image data of the second image block, in the first row and second column, along the upper border of the second image block;
    store the image data of the third image block, in the first row and third column, along the upper border and the right border of the third image block;
    store the image data of the fourth image block, in the second row and first column, along the left border of the fourth image block;
    store the image data of the fifth image block, in the second row and second column, along the center of the fifth image block, wherein the fifth image block is the central image block;
    store the image data of the sixth image block, in the second row and third column, along the right border of the sixth image block;
    store the image data of the seventh image block, in the third row and first column, along the lower border and the left border of the seventh image block;
    store the image data of the eighth image block, in the third row and second column, along the lower border of the eighth image block; and
    store the image data of the ninth image block, in the third row and third column, along the lower border and the right border of the ninth image block.
  9. The apparatus according to claim 6, wherein the splicing module comprises:
    a first submodule configured to determine the position of each of the target effective image areas according to the position of each of the image blocks in the image to be processed;
    a second submodule configured to: for each pixel in the overlapping part of any two adjacent target effective image areas to be spliced, determine a first weight and a second weight corresponding to the pixel according to the distances between the pixel and the centers of the two target effective image areas to be spliced, wherein the distances are inversely proportional to the first weight and the second weight;
    a third submodule configured to determine the target pixel value corresponding to the pixel according to the first weight, the second weight, and the pixel values of the pixel in the two target effective image areas to be spliced; and
    a fourth submodule configured to adjust each pixel of the overlapping part to the corresponding target pixel value, so as to complete the splicing of the two adjacent target effective image areas to be spliced.
  10. The apparatus according to claim 9, wherein the third submodule comprises:
    a first unit configured to calculate a first product of the first weight and a first pixel value of the pixel in the first target effective image area to be spliced, wherein the first weight is the weight of the pixel in the first target effective image area;
    a second unit configured to calculate a second product of the second weight and a second pixel value of the pixel in the second target effective image area to be spliced, wherein the second weight is the weight of the pixel in the second target effective image area; and
    a third unit configured to determine the average of the first product and the second product as the target pixel value corresponding to the pixel.
  11. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
  12. A readable storage medium, wherein a program or instruction is stored on the readable storage medium, and the program or instruction, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.
  13. A chip, the chip comprising a processor and a communication interface, the communication interface being coupled to the processor, wherein the processor is configured to run a program or instruction to implement the image processing method according to any one of claims 1 to 5.
PCT/CN2022/089303 2021-04-30 2022-04-26 Image processing method and apparatus, electronic device, and readable storage medium WO2022228434A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/384,369 US20240054620A1 (en) 2021-04-30 2023-10-26 Image processing method, apparatus, electronic device, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110484867.9 2021-04-30
CN202110484867.9A CN113506304A (zh) Image processing method and apparatus, electronic device, and readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/384,369 Continuation US20240054620A1 (en) 2021-04-30 2023-10-26 Image processing method, apparatus, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022228434A1 true WO2022228434A1 (zh) 2022-11-03

Family

ID=78008423

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/089303 WO2022228434A1 (zh) 2022-04-26 Image processing method and apparatus, electronic device, and readable storage medium

Country Status (3)

Country Link
US (1) US20240054620A1 (zh)
CN (1) CN113506304A (zh)
WO (1) WO2022228434A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506304A (zh) 2021-04-30 2021-10-15 艾酷软件技术(上海)有限公司 Image processing method and apparatus, electronic device, and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070286522A1 (en) * 2006-03-27 2007-12-13 Sony Deutschland Gmbh Method for sharpness enhancing an image
CN106611401A * 2015-10-22 2017-05-03 阿里巴巴集团控股有限公司 Method and apparatus for storing an image in texture memory
CN109493281A * 2018-11-05 2019-03-19 北京旷视科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110390679A * 2019-07-03 2019-10-29 上海联影智能医疗科技有限公司 Image processing method, computer device, and readable storage medium
CN110490803A * 2018-10-25 2019-11-22 北京连心医疗科技有限公司 Splicing method, device, and storage medium for blockwise prediction in image semantic segmentation
CN113506304A * 2021-04-30 2021-10-15 艾酷软件技术(上海)有限公司 Image processing method and apparatus, electronic device, and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164848B * 2011-12-09 2015-04-08 腾讯科技(深圳)有限公司 Image processing method and system
CN111598779B * 2020-05-14 2023-07-14 Oppo广东移动通信有限公司 Image super-resolution processing method and apparatus, electronic device, and storage medium
CN112233062A * 2020-09-10 2021-01-15 浙江大华技术股份有限公司 Ground-feature change detection method, electronic device, and storage medium


Also Published As

Publication number Publication date
US20240054620A1 (en) 2024-02-15
CN113506304A (zh) 2021-10-15

Similar Documents

Publication Publication Date Title
WO2020207191A1 (zh) Method and apparatus for determining occluded region of virtual object, and terminal device
CN109064390B (zh) Image processing method, image processing apparatus, and mobile terminal
WO2017101250A1 (zh) Loading progress display method and terminal
CN108961183B (zh) Image processing method, terminal device, and computer-readable storage medium
CN112102164B (zh) Image processing method and apparatus, terminal, and storage medium
CN108961267B (zh) Picture processing method, picture processing apparatus, and terminal device
US20160191604A1 (en) Remoting Application User Interfaces
Huang et al. RGB-D salient object detection by a CNN with multiple layers fusion
WO2022048598A1 (zh) Image acquisition method and apparatus, server, and electronic device
CN103294360B (zh) Virtual surface backing lists and gutters
US20230326110A1 (en) Method, apparatus, device and media for publishing video
WO2022228434A1 (zh) Image processing method and apparatus, electronic device, and readable storage medium
US20200234072A1 (en) Method and Apparatus for Detecting Target Objects in Images
US20220182554A1 (en) Image display method, mobile terminal, and computer-readable storage medium
CN110618852B (zh) View processing method, view processing apparatus, and terminal device
CN109065001B (zh) Image downsampling method and apparatus, terminal device, and medium
CN112199149A (zh) Interface rendering method and apparatus, and electronic device
CN108932704B (zh) Picture processing method, picture processing apparatus, and terminal device
WO2022135219A1 (zh) Image display method and apparatus, and electronic device
CN104867109A (zh) Display method and electronic device
CN114253449A (zh) Screenshot method, apparatus, device, and medium
CN106228519B (zh) Image inpainting method and terminal
US20220075583A1 (en) Information processing method, server, terminal, and computer storage medium
CN111784607A (zh) Image tone mapping method, apparatus, terminal device, and storage medium
KR101911947B1 (ko) Screen design method and system for improving information readability and harmonizing with background image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794904

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22794904

Country of ref document: EP

Kind code of ref document: A1