CN116228525A - Image processing method, device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN116228525A
CN116228525A
Authority
CN
China
Prior art keywords
image
sub
target
pixel point
format
Prior art date
Legal status
Pending
Application number
CN202310182153.1A
Other languages
Chinese (zh)
Inventor
朱鑫炎
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310182153.1A
Publication of CN116228525A

Classifications

    • G06T3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses an image processing method, an image processing device, an electronic device and a readable storage medium, and belongs to the technical field of image processing. The method comprises the following steps: dividing an image to be processed in a first image format into N first sub-images, wherein N is a positive integer; performing image processing on each first sub-image according to a target processing mode corresponding to a target scaling factor, so as to obtain N second sub-images in a second image format; and combining the N second sub-images in the second image format to obtain a target image in the second image format. In the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode. The first processing mode is: performing image scaling first and then image format conversion; the second processing mode is: performing image format conversion first and then image scaling.

Description

Image processing method, device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method, an image processing device, electronic equipment and a readable storage medium.
Background
Currently, in order to reduce the bandwidth resources occupied by image transmission, electronic devices generally constrain the preview data collected by a camera to a certain image format (for example, the YUV420 format). However, for final processing or display it may be necessary to convert the image to another image format (for example, the RGB format), and the image format conversion process is typically accompanied by image scaling.
For example, in the related art, it is necessary to convert an image from one image format to another, store the converted image in a memory, read the format-converted image back from the memory, perform linear interpolation, and scale the image to obtain the final displayed image.
Therefore, the converted image has to be stored in memory during format conversion and read back for post-processing during scaling; the whole process involves repeated reads and writes, the computation load is excessive, and the time consumption is long.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an apparatus, an electronic device, and a readable storage medium, which can solve the problems of repeated reading and writing, excessive calculation amount, and long time consumption in the whole process of image format conversion.
In a first aspect, an embodiment of the present application provides an image processing method, including: dividing an image to be processed in a first image format into N first sub-images, wherein N is a positive integer; performing image processing on each first sub-image according to a target processing mode corresponding to a target scaling factor, so as to obtain N second sub-images in a second image format; and combining the N second sub-images in the second image format to obtain a target image in the second image format; wherein, in the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode; the first processing mode is: performing image scaling first and then image format conversion; the second processing mode is: performing image format conversion first and then image scaling.
In a second aspect, an embodiment of the present application provides an image processing apparatus including: a processing module and a combining module; the processing module is used for dividing an image to be processed in a first image format into N first sub-images, wherein N is a positive integer; the processing module is further used for performing image processing on each first sub-image according to a target processing mode corresponding to a target scaling factor, so as to obtain N second sub-images in a second image format; the combining module is used for combining the N second sub-images in the second image format to obtain a target image in the second image format; wherein, in the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode; the first processing mode is: performing image scaling first and then image format conversion; the second processing mode is: performing image format conversion first and then image scaling.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, an image to be processed in a first image format is divided into N first sub-images, wherein N is a positive integer; image processing is performed on each first sub-image according to a target processing mode corresponding to a target scaling factor, so as to obtain N second sub-images in a second image format; and the N second sub-images in the second image format are combined to obtain a target image in the second image format; wherein, in the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode; the first processing mode is: performing image scaling first and then image format conversion; the second processing mode is: performing image format conversion first and then image scaling. Therefore, the electronic device can select different image processing modes based on the target scaling factor: if the image needs to be reduced, the image is reduced first and then format-converted, which avoids format-converting pixel points that will be discarded by the reduction; if the image needs to be enlarged, the image is format-converted first and then enlarged, which avoids format-converting the extra interpolated pixel points of the enlarged image, thereby reducing the computation load. In addition, the electronic device can divide the image to be processed into N sub-images and process them separately, so that the electronic device does not need to repeatedly read data because the image is too large, which improves the efficiency of image processing.
Drawings
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of acquiring a pixel position in an image processing method according to an embodiment of the present application;
fig. 3 is a second schematic diagram of acquiring a pixel position in an image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural view of an image processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 6 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application; it is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The following explains terms related to the embodiments of the present application:
1) Image format
Different image formats are defined by different color coding methods; for example, the YUV image format, the RGB image format, and the CMYK image format.
2) YUV image format
Y represents luminance (Luminance or Luma); U and V represent chrominance (Chrominance or Chroma).
The YUV data format stores image data as three planes: a Y plane, a U plane and a V plane. The Y plane stores the luminance information of the image, and the U and V planes store the chrominance information of the image. The YUV420 coding mode is a special YUV coding mode: because its U and V planes are chroma-subsampled, the YUV420 format can reduce the image size while maintaining image quality. If the Y component of each pixel is represented by a black dot, the UV components are represented by open circles, and one group of UV components is shared by every four Y samples, the YUV420 acquisition mode is as shown in fig. 1.
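The 2x2 chroma subsampling described above determines the plane sizes of a YUV420 buffer. A minimal Python sketch (not part of the patent) that computes the byte count of each plane:

```python
def yuv420_plane_sizes(width: int, height: int):
    """Byte counts of the Y, U and V planes of a planar YUV420 image.

    The U and V planes are subsampled 2x both horizontally and
    vertically, so every four Y samples share one U and one V sample.
    """
    y_size = width * height
    uv_size = (width // 2) * (height // 2)  # per chroma plane
    return y_size, uv_size, uv_size

# A 1920x1080 frame needs 2073600 luma bytes and 518400 bytes per chroma plane,
# i.e. 1.5 bytes per pixel in total versus 3 bytes per pixel for RGB888.
```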
3) RGB image format
RGB has three channels, R, G and B, corresponding respectively to the red, green and blue components; the color is determined by the values of the three components. For the same image size, the data amount of an image in YUV420 format is only 1/2 of that in RGB888 format. The formulas for converting YUV to RGB are as follows:
R = 1.164*(Y-16) + 1.596*(V-128)
G = 1.164*(Y-16) - 0.813*(V-128) - 0.391*(U-128)
B = 1.164*(Y-16) + 2.018*(U-128)
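A minimal Python sketch of this conversion, using the standard BT.601 limited-range matrix (in which both chroma terms of G are subtracted), with the result clamped to the displayable 8-bit range:

```python
def yuv_to_rgb(y: float, u: float, v: float):
    """Convert one BT.601 limited-range YUV sample to RGB."""
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.813 * (v - 128) - 0.391 * (u - 128)
    b = 1.164 * (y - 16) + 2.018 * (u - 128)
    # Clamp each channel to [0, 255] before 8-bit storage.
    clamp = lambda x: max(0.0, min(255.0, x))
    return clamp(r), clamp(g), clamp(b)
```

For example, black in limited-range YUV is (16, 128, 128), which maps exactly to RGB (0, 0, 0).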
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The image processing method, the device, the electronic equipment and the readable storage medium provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Currently, in order to reduce the bandwidth resources occupied by image transmission, electronic devices generally constrain the preview data collected by a camera to a certain image format (for example, the YUV420 format). However, for final processing or display it may be necessary to convert the image to another image format (for example, the RGB format), and the image format conversion process is typically accompanied by image scaling. As a result, repeated reads and writes exist in the whole image format conversion process, the computation load is excessive, and the time consumption is long.
For example, in the related art, the whole YUV image is generally converted into the RGB image format, the result is stored in the memory, the RGB image is then read from the memory, and after linear interpolation, the RGB image is scaled to obtain the final scaled RGB image.
Therefore, in the image processing method provided in the embodiment of the present application, an image to be processed in a first image format is divided into N first sub-images, where N is a positive integer; image processing is performed on each first sub-image according to a target processing mode corresponding to a target scaling factor, so as to obtain N second sub-images in a second image format; and the N second sub-images in the second image format are combined to obtain a target image in the second image format; wherein, in the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode; the first processing mode is: performing image scaling first and then image format conversion; the second processing mode is: performing image format conversion first and then image scaling. Therefore, the electronic device can select different image processing modes based on the target scaling factor: if the image needs to be reduced, the image is reduced first and then format-converted, which avoids format-converting pixel points that will be discarded by the reduction; if the image needs to be enlarged, the image is format-converted first and then enlarged, which avoids format-converting the extra interpolated pixel points of the enlarged image, thereby reducing the computation load. In addition, the electronic device can divide the image to be processed into N sub-images and process them separately, so that the electronic device does not need to repeatedly read data because the image is too large, which improves the efficiency of image processing.
The main execution body of the image processing method provided in this embodiment may be an image processing apparatus, and the image processing apparatus may be an electronic device, or may be a control module or a processing module in the electronic device. The technical solutions provided in the embodiments of the present application are described below by taking an electronic device as an example.
An embodiment of the present application provides an image processing method, and fig. 1 shows a flowchart of the image processing method provided in the embodiment of the present application, where the method may be applied to an electronic device. As shown in fig. 1, the image processing method provided in the embodiment of the present application may include the following steps 201 to 203.
Step 201, the electronic device divides an image to be processed in a first image format into N first sub-images, where N is a positive integer.
In this embodiment of the present application, the first image format may be a YUV image format, an RGB image format, or a CMYK image format, which is not limited in this embodiment of the present application.
In the embodiment of the present application, the electronic device may divide the image to be processed according to a preset division size, or may divide the image to be processed according to size information of the image to be processed.
Optionally, in the embodiment of the present application, in the step 201 "the electronic device divides the image to be processed in the first image format into N first sub-images", the method includes steps 201a to 201c:
Step 201a, the electronic device calculates the target sub-image height based on the number of cores of a central processing unit (Central Processing Unit, CPU) of the electronic device and the height of the image to be processed.
The electronic device calculates the height of the target sub-image by the following first formula, based on the number of CPU cores of the electronic device and the height of the image to be processed, for example.
Illustratively, the first formula computes the target sub-image height from H, threads, and basic_tile (the formula itself appears in the source only as an image, BDA0004102677650000061, and is not reproduced here).
Wherein H represents the height of the image to be processed, threads represents the number of CPU cores, and basic_tile is typically a fixed parameter, set to 8 or 16.
Step 201b, the electronic device calculates the width of the target sub-image based on the buffer sizes of all levels of the electronic device and the width of the image to be processed.
Illustratively, the electronic device calculates the target sub-image width by the following second formula based on the buffer sizes of the respective levels of the electronic device and the width of the image to be processed.
Illustratively, the second formula: W' = cache_size / basic_tile.
Wherein cache_size represents the cache size of each level, and basic_tile is typically a fixed parameter, set to 8 or 16.
In step 201c, the electronic device divides the image to be processed in the first image format into N first sub-images based on the target sub-image height and the target sub-image width.
Illustratively, the image to be processed in the first image format is divided according to the determined target sub-image height and target sub-image width, so as to obtain first sub-images of N target sub-image heights and target sub-image widths.
For example, after acquiring its own hardware information, such as the number of CPU cores, the size of each level of Cache (Cache), etc., and the image size of the image to be processed, the electronic device may determine the height of the sub-image (i.e., the target sub-image height) according to the number of CPU cores and the height of the whole image by using a formula, determine the width of the sub-image (i.e., the target sub-image width) according to each level of Cache size by using a formula, and then divide the whole image (i.e., the image to be processed) according to the determined height and width of the sub-image to obtain a plurality of sub-images (i.e., the first sub-image). And finally, binding each sub-graph to a corresponding CPU physical core, and executing image processing operation.
In this way, the whole image is divided into sub-images that are processed by different CPU physical cores, so that the data of each sub-image can be kept in the cache, avoiding the performance loss caused by frequently accessing Double Data Rate (DDR) memory during image processing.
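The tile-sizing steps above can be sketched in Python. Note the caveats: the patent's exact height formula survives only as an image, so the ceiling-division-and-round-to-basic_tile scheme below is an assumption consistent with the surrounding description (split the height across CPU cores, keep tiles aligned to basic_tile); the width follows the second formula, W' = cache_size / basic_tile.

```python
def tile_dimensions(height: int, threads: int, cache_size: int,
                    basic_tile: int = 8):
    """Hypothetical reconstruction of the sub-image sizing step.

    Height: divide the image rows evenly across CPU cores, rounded up
    to a multiple of basic_tile (assumed; the source formula is lost).
    Width:  W' = cache_size / basic_tile, per the second formula.
    """
    rows_per_thread = -(-height // threads)                  # ceil division
    tile_h = -(-rows_per_thread // basic_tile) * basic_tile  # round up to tile
    tile_w = cache_size // basic_tile
    return tile_w, min(tile_h, height)
```

For a 1080-row image on 8 cores with a 256-unit cache budget, this yields 32-wide, 136-high tiles, each bound to one physical core.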
Step 202, the electronic device performs image processing on each first sub-image according to a target processing mode corresponding to the target scaling factor, so as to obtain second sub-images in N second image formats.
In this embodiment of the present application, the target scaling factor may be preset by a system, or may be calculated by an electronic device based on a formula.
In this embodiment of the present application, the target scaling factor is calculated by a third formula and a fourth formula.
Illustratively, the third formula: scale_w = output image width / input image width.
The input image width is the width of the image to be processed, and the output image width is the width required after format conversion.
Illustratively, the fourth formula: scale_h = output image height / input image height.
The input image height is the height of the image to be processed, and the output image height is the height required after format conversion.
Illustratively, after acquiring the image to be processed and the information of the required format conversion, such as the input/output image sizes and the data bit width, the electronic device calculates the scaling factors of the height and width of the image to be processed according to the input/output image sizes.
In this embodiment of the present application, the target processing manner includes a first processing manner and a second processing manner.
In an example, in a case where the target scaling factor is smaller than 1, the target processing mode is a first processing mode.
Illustratively, the first processing manner is: image scaling is performed first and then image format conversion. It may be appreciated that, in the case where the target scaling factor is less than 1, the electronic device may reduce each first sub-image according to the target scaling factor to obtain N third sub-images, and then perform image format conversion on each third sub-image to obtain N second sub-images in the second image format.
In this way, the reduced image contains fewer pixels, and the electronic device does not need to perform image format conversion on the pixels discarded by the reduction.
In another example, in a case where the target scaling factor is greater than 1, the target processing mode is the second processing mode.
Illustratively, the second processing manner is: image format conversion is performed first and then image scaling. It can be understood that, in the case that the target scaling factor is greater than 1, the electronic device performs image format conversion on each first sub-image to obtain N fourth sub-images in the second image format, and then enlarges each fourth sub-image according to the target scaling factor to obtain N second sub-images in the second image format.
Thus, the electronic device performs image format conversion on the image first and then enlarges it; compared with performing image format conversion after enlargement, this reduces the computation load.
It should be noted that, in the case that the target scaling factor is greater than 1, it is determined that the image to be processed needs to be enlarged; in the case that the target scaling factor is smaller than 1, it is determined that the image to be processed needs to be reduced.
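The order-selection logic of step 202 can be sketched as a small Python dispatcher. The helpers `resize` and `convert` are hypothetical stand-ins for the scaling and format-conversion kernels, which the patent does not name:

```python
def process_subimage(sub, scale, resize, convert):
    """Pick the cheaper order of the two steps based on the scale factor.

    scale < 1: shrink first, so format conversion touches fewer pixels.
    scale > 1: convert the smaller original first, then enlarge, so the
               interpolated extra pixels are never format-converted.
    """
    if scale < 1:
        return convert(resize(sub, scale))   # first processing mode
    if scale > 1:
        return resize(convert(sub), scale)   # second processing mode
    return convert(sub)                      # scale == 1: format change only
```

Each of the N first sub-images is run through this dispatcher independently, one per CPU core.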
Step 203, the electronic device combines the second sub-images in the N second image formats to obtain a target image in the second image format.
In this embodiment of the present application, the second image format may be a YUV image format, an RGB image format, or a CMYK image format, which is not limited in this embodiment of the present application.
In this embodiment of the present application, the electronic device combines the N second sub-images into a complete image, that is, the target image in the second image format, through a preset function.
Illustratively, the electronic device waits until all sub-threads complete traversing their sub-graphs (i.e., the second sub-images described above), at which point the complete RGB image after resizing (i.e., the target image described above) can be obtained.
In the image processing method provided by the embodiment of the application, an image to be processed in a first image format is divided into N first sub-images, wherein N is a positive integer; image processing is performed on each first sub-image according to a target processing mode corresponding to a target scaling factor, so as to obtain N second sub-images in a second image format; and the N second sub-images in the second image format are combined to obtain a target image in the second image format; wherein, in the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode; the first processing mode is: performing image scaling first and then image format conversion; the second processing mode is: performing image format conversion first and then image scaling. Therefore, the electronic device can select different image processing modes based on the target scaling factor: if the image needs to be reduced, the image is reduced first and then format-converted, which avoids format-converting pixel points that will be discarded by the reduction; if the image needs to be enlarged, the image is format-converted first and then enlarged, which avoids format-converting the extra interpolated pixel points of the enlarged image, thereby reducing the computation load. In addition, the electronic device can divide the image to be processed into N sub-images and process them separately, so that the electronic device does not need to repeatedly read data because the image is too large, which improves the efficiency of image processing.
The following describes the above-described target processing manner in two possible embodiments.
In a first possible embodiment, the target processing manner is a first processing manner.
Optionally, in this embodiment of the present application, in the step 202 "the electronic device performs image processing on each first sub-image according to the target processing manner corresponding to the target scaling factor to obtain the N second sub-images in the second image format", the method includes steps 202a to 202c:
step 202a, the electronic device performs image reduction on each first sub-image according to the target scaling factor, so as to obtain N third sub-images.
In an exemplary embodiment, if the target scaling factor is smaller than 1, it is determined that the image to be processed needs to be scaled down, that is, each first sub-image is scaled down to obtain N third sub-images.
Step 202b, the electronic device obtains, for each third sub-image, image data information in a first image format of each pixel point in the third sub-image based on the position information of each pixel point in the third sub-image in the first sub-image corresponding to the third sub-image and the image data information of each pixel point in the first sub-image corresponding to the third sub-image.
Illustratively, the position information of each pixel point in any third sub-image in the first sub-image corresponding to the third sub-image includes: and the height information and the width information of each pixel point in the third sub-image in the first sub-image corresponding to the third sub-image.
For any third sub-image, the height information of each pixel in the first sub-image corresponding to the third sub-image is calculated by traversing all the pixels of the third sub-image in the vertical direction, and the width information of each pixel in the first sub-image corresponding to the third sub-image is calculated by traversing all the pixels of the third sub-image in the horizontal direction.
Illustratively, traversing all the pixel points of the third sub-image in the vertical direction is equivalent to traversing along the height of the third sub-image, and the vertical direction may be simply referred to as the H direction; traversing all the pixel points of the third sub-image in the horizontal direction is equivalent to traversing along the width of the third sub-image, and the horizontal direction may be simply referred to as the W direction.
For example, the electronic device may calculate the height information of each pixel point in the first sub-image corresponding to the third sub-image using the following fifth formula.
A fifth formula: src_h = scale_h * (dst_h + 0.5) - 0.5
Wherein src_h represents the position height of each pixel point in the first sub-image, scale_h represents the scaling factor of the height of each sub-image, and dst_h represents the position height of each pixel point in the third sub-image.
For example, the electronic device may calculate the width information of each pixel point in the first sub-image corresponding to the third sub-image using the following sixth formula.
A sixth formula: src_w = scale_w * (dst_w + 0.5) - 0.5
Wherein src_w represents the position width of each pixel point in the first sub-image, scale_w represents the scaling factor of the width of each sub-image, and dst_w represents the position width of each pixel point in the third sub-image.
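The fifth and sixth formulas are the standard pixel-center-aligned mapping from an output coordinate back to a fractional source coordinate. A minimal Python sketch; note that, as the formulas read, the scale factor here is the source/destination size ratio (the inverse of the output/input factor defined in the third and fourth formulas):

```python
def dst_to_src(dst: int, scale: float) -> float:
    """Map an output pixel index to its fractional source position.

    The +0.5 / -0.5 terms align pixel centres rather than pixel
    corners; `scale` is the src/dst size ratio.
    """
    return scale * (dst + 0.5) - 0.5
```

For instance, when halving an image (src/dst ratio 2.0), output pixel 0 samples source position 0.5, exactly between the first two source pixels.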
In an exemplary embodiment, after the height information or the width information of each pixel point of the third sub-image in the corresponding first sub-image is calculated, if a pixel point falls between two adjacent pixel points, the obtained position information of the pixel point in the first sub-image is weighted by using the quantization coefficient, thereby obtaining the quantization weight value of the pixel point. Because the quantization weight value is an integer, the amount of calculation and the calculation difficulty are both reduced.
For example, the electronic device may perform quantization weight calculation on the position information of each pixel point using the following seventh formula.
The seventh formula: weight1 = Q * (src_h - h); weight2 = Q - weight1
Wherein Q is the quantization coefficient, typically set to Q = 2^7, and is determined by the data bit width of the original image and the CPU register bit width; h is the integer part of src_h; and weight1 and weight2 respectively represent the quantized distances between the pixel point and its two adjacent pixel points.
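A minimal sketch of the seventh formula, assuming Q = 2^7 and that h is the integer part of src_h (names here are illustrative):

```python
import math

Q = 1 << 7  # quantization coefficient, Q = 2^7 = 128

def quantized_weights(src_pos: float):
    # Seventh formula: weight1 = Q * (src_pos - h), weight2 = Q - weight1,
    # where h is the lower of the two neighboring integer positions.
    h = math.floor(src_pos)
    weight1 = round(Q * (src_pos - h))
    weight2 = Q - weight1
    return h, weight1, weight2

print(quantized_weights(2.25))  # a point a quarter of the way from pixel 2 to pixel 3
```

Since weight1 + weight2 = Q, dividing by Q after a weighted sum keeps interpolated values in the original range.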
For example, as shown in fig. 2, taking a pixel point A in the H direction as an example, the position of pixel point A in the corresponding src image (i.e., the first sub-image) lies between two adjacent pixel points (e.g., pixel point B and pixel point C), and the height information of pixel point A in the src image is src_h. Quantization weighting is performed on pixel point A to obtain the distance information between pixel point A and the two adjacent pixel points (pixel point B and pixel point C), thereby obtaining the corresponding quantization weight value. According to the quantization weight value of pixel point A, the height information of pixel point A in the dst image (i.e., the third sub-image) is, for example, dst_h.
For example, as shown in fig. 3, taking a pixel point A1 in the W direction as an example, the position of pixel point A1 in the corresponding src image (i.e., the first sub-image) lies between two adjacent pixel points (e.g., pixel point B1 and pixel point C1), and the width information of pixel point A1 in the src image is src_w. Quantization weighting is performed on pixel point A1 to obtain the distance information between pixel point A1 and the two adjacent pixel points (pixel point B1 and pixel point C1), thereby obtaining the corresponding quantization weight value. According to the quantization weight value of pixel point A1, the width information of pixel point A1 in the dst image (i.e., the third sub-image) is, for example, dst_w.
Optionally, in the embodiment of the present application, the foregoing step 202b includes steps 202b1 to 202b3:
step 202b1, the electronic device performs interpolation calculation on the corresponding pixel point of the first pixel point in the vertical direction in the first sub-image corresponding to the third sub-image based on the height information of the first pixel point in the third sub-image in the first sub-image corresponding to the third sub-image, to obtain the first image data information of the first image format of the first pixel point.
The first pixel is illustratively any pixel in a third sub-image.
The first pixel point is located between two adjacent pixel points in the corresponding first sub-image in the third sub-image.
Illustratively, interpolation calculation is performed on the first pixel point based on the image data information of the first image format of the two adjacent pixel points, so as to obtain the first image data information of the first image format of the first pixel point.
For example, the electronic device may calculate first image data information in a first image format for the first pixel using the first formula set.
A first formula set:
Y_dst = (weight2 * Y(h) + weight1 * Y(h+1)) / Q
UV_dst = (weight2 * UV(h) + weight1 * UV(h+1)) / Q
wherein Y (h+1) and Y (h) represent Y values of two adjacent pixel points, and UV (h+1) and UV (h) represent UV values of two adjacent pixel points.
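The H-direction interpolation of the first formula set can be sketched in fixed point, assuming the weights come from the seventh formula and that division by Q is realized as a right shift (the function name is illustrative):

```python
Q_SHIFT = 7  # Q = 2^7, so dividing by Q is a right shift by 7

def lerp_q(v_h: int, v_h1: int, weight1: int, weight2: int) -> int:
    # weight1 = Q * (src_h - h) is the quantized distance to position h,
    # so it weights the far sample v_h1; weight2 weights the near sample v_h.
    return (weight2 * v_h + weight1 * v_h1) >> Q_SHIFT

# Interpolating Y values 40 and 200 a quarter of the way (weight1 = 32, weight2 = 96):
print(lerp_q(40, 200, 32, 96))
```

The same routine serves both the Y and UV components, since the weights depend only on position.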
Step 202b2, the electronic device performs interpolation calculation on the corresponding pixel point of the first pixel point in the horizontal direction in the first sub-image corresponding to the third sub-image based on the width information of the first pixel point in the third sub-image in the first sub-image corresponding to the third sub-image, to obtain the second image data information of the first image format of the first pixel point.
Illustratively, interpolation calculation is performed on the first pixel point based on the image data information of the first image format of the two adjacent pixel points, so as to obtain the second image data information of the first image format of the first pixel point.
For example, the electronic device may calculate the second image data information in the first image format for the first pixel using the second formula set.
A second formula set:
Y_dst = (weight2 * Y(w) + weight1 * Y(w+1)) / Q
U_dst = (weight2 * U(w) + weight1 * U(w+1)) / Q
V_dst = (weight2 * V(w) + weight1 * V(w+1)) / Q
wherein Y (w+1) and Y (w) represent Y values of two adjacent pixel points, U (w+1) and U (w) represent U values of two adjacent pixel points, and V (w+1) and V (w) represent V values of two adjacent pixel points.
Step 202b3, the electronic device obtains target image data information of the first image format of the first pixel based on the first image data information of the first image format of the first pixel and the second image data information of the first image format of the first pixel.
Illustratively, the first image data information of the first image format of the first pixel point and the second image data information of the first image format of the first pixel point are fused, so as to obtain the target image data information of the first image format of the first pixel point.
Step 202c, the electronic device performs image format conversion on the third sub-image based on the image data information of the first image format of each pixel point in the third sub-image, so as to obtain a second sub-image of the second image format corresponding to the third sub-image.
For any one of the third sub-images, the electronic device performs color space image conversion on the third sub-image based on the image data information of the first image format of each pixel in the third sub-image, so as to obtain a second sub-image of the second image format corresponding to the third sub-image.
For example, the electronic device may perform image format conversion on the third sub-image by using the following third formula set to obtain a second sub-image in a second image format corresponding to the third sub-image.
A third formula group:
R = Y + 1.402 * (V - 128)
G = Y - 0.344 * (U - 128) - 0.714 * (V - 128)
B = Y + 1.772 * (U - 128)
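The third formula group describes a quantized color space conversion. As a reference sketch, a fixed-point form of the standard BT.601 YUV-to-RGB conversion is shown below, with coefficients pre-scaled by 2^8; the exact constants used by the patent are not given in the text and are an assumption here:

```python
def yuv_to_rgb(y: int, u: int, v: int):
    # BT.601 full-range conversion with coefficients quantized to 8 fractional
    # bits: 1.402 -> 359, 0.344 -> 88, 0.714 -> 183, 1.772 -> 454.
    r = y + ((359 * (v - 128)) >> 8)
    g = y - ((88 * (u - 128) + 183 * (v - 128)) >> 8)
    b = y + ((454 * (u - 128)) >> 8)
    clamp = lambda c: max(0, min(255, c))
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))  # neutral gray stays gray
```

Dequantization here is the right shift by 8; clamping keeps results in the 8-bit range.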
for example, the first image format is the YUV format and the second image format is the RGB format. Firstly, the pixels in the target sub-image (namely the third sub-image) are traversed in the H direction, and the corresponding position of each pixel point in the original image (namely the first sub-image) is calculated according to the target scaling coefficient. If the calculation result is a floating point number, it needs to be quantized into a fixed-point weight to facilitate subsequent quantized acceleration calculation. All quantization weights corresponding to all target sub-image pixel points can be calculated in parallel through single instruction multiple data (Single Instruction Multiple Data, SIMD) and stored in a quantization weight table (Y direction).
Then, the pixels in the target sub-image (i.e., the third sub-image) are traversed in the W direction, and the corresponding position of each pixel point in the original image (i.e., the first sub-image) is calculated according to the target scaling factor. Quantization weight calculation is performed to obtain the quantization weights of the target sub-image pixel points, which are stored in a quantization weight table (W direction).
Next, for each pixel point in the YUV domain, the H-direction interpolation target image is calculated by using the Y-direction quantization weight table and the original image data, so as to obtain the YUV data information (i.e., the first image data information) of the target sub-image pixel point in the H direction. Interpolation is then performed in the W direction by using the W-direction quantization weight table to obtain the YUV data information (i.e., the second image data information) of the target sub-image pixel point in the W direction.
And then, the YUV data information in the H direction and the YUV data information in the W direction are fused and calculated to obtain the YUV data information of each pixel point.
Finally, the quantized color space conversion formula is used to convert the YUV data information in the registers into RGB, and the RGB value of the corresponding pixel point is obtained after dequantization, so that the target sub-image in RGB format is obtained.
It should be noted that the linear interpolation can be accelerated in parallel by SIMD: a single operation on a CPU with 128-bit registers yields the interpolation result for 16 pixels. Since the scaling of the Y and UV domains is consistent, the weights in the quantization table can be used for both the Y and UV domains, and the interpolation calculation result (the H-direction scaled image in YUV format) is saved in memory.
It should be noted that, since the sampling rate of the UV domain in the W direction is only half that of the Y domain, the UV data needs to be split first, interpolation is calculated on the Y, U and V components separately, and the corresponding results are temporarily stored in registers.
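The downscale-then-convert steps above can be sketched end to end for a single plane (scalar Python standing in for the SIMD implementation; a full-resolution plane of at least 2x2 is assumed, whereas the patent additionally handles the half-sampled UV data):

```python
Q = 128  # quantization coefficient, 2^7

def weight_table(dst_len: int, src_len: int):
    # One (index, weight1, weight2) entry per target position,
    # per the fifth and seventh formulas.
    scale = src_len / dst_len
    table = []
    for d in range(dst_len):
        pos = min(max(scale * (d + 0.5) - 0.5, 0.0), src_len - 1)
        i = min(int(pos), src_len - 2)  # keep i+1 inside the plane
        w1 = round(Q * (pos - i))
        table.append((i, w1, Q - w1))
    return table

def resize_plane(src, dst_h, dst_w):
    # H-direction interpolation followed by W-direction interpolation.
    th = weight_table(dst_h, len(src))
    tw = weight_table(dst_w, len(src[0]))
    out = []
    for (i, w1h, w2h) in th:
        row = []
        for (j, w1w, w2w) in tw:
            a = (w2h * src[i][j] + w1h * src[i + 1][j]) // Q          # H direction
            b = (w2h * src[i][j + 1] + w1h * src[i + 1][j + 1]) // Q  # H direction
            row.append((w2w * a + w1w * b) // Q)                      # W direction
        out.append(row)
    return out

plane = [[100] * 4 for _ in range(4)]
print(resize_plane(plane, 2, 2))  # a constant plane stays constant
```

In the patented scheme, the same two weight tables are shared by the Y and UV planes, and the color space conversion runs once on the reduced result.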
In this way, format conversion is performed after the sub-image is reduced, so that unnecessary pixel points are excluded from image processing; in addition, performing color space conversion only once after the sub-image is reduced improves the efficiency of image processing.
In a second possible embodiment, the target processing manner is a second processing manner.
Optionally, in this embodiment of the present application, the step 202 in which "the electronic device performs image processing on each first sub-image according to the target processing manner corresponding to the target scaling factor to obtain N second sub-images in the second image format" includes steps 202A and 202B:
step 202A, the electronic device performs image format conversion on each first sub-image based on the position information and the image data information of each pixel point in the first sub-image, to obtain a fourth sub-image in the second image format corresponding to the first sub-image.
For any first sub-image, the electronic device directly performs color space conversion on the first sub-image according to the position information and the image data information of each pixel point in the first sub-image, so as to obtain a fourth sub-image in a second image format corresponding to the first sub-image.
Step 202B, the electronic device performs image amplification on the fourth sub-image according to the target scaling factor, so as to obtain a second sub-image in a second image format.
For example, in the case where the target scaling factor is greater than 1, it is determined that the image to be processed needs to be enlarged.
In an exemplary case, if the target scaling factor is greater than 1, it is determined that the image to be processed needs to be enlarged, that is, each fourth sub-image is enlarged, so as to obtain a second sub-image in the second image format.
Illustratively, the electronic device obtains the second sub-image in the second image format by interpolation.
For example, the electronic device interpolates the fourth sub-image using the fourth set of formulas.
A fourth formula set:
R_dst = (weight2 * R(w) + weight1 * R(w+1)) / Q
G_dst = (weight2 * G(w) + weight1 * G(w+1)) / Q
B_dst = (weight2 * B(w) + weight1 * B(w+1)) / Q
(and similarly in the H direction)
thus, the image format conversion is performed before the sub-image is enlarged, unnecessary data calculation is avoided, and the efficiency of image processing is improved.
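The efficiency argument can be made concrete by counting per-pixel color space conversions under each ordering (a back-of-the-envelope sketch, not taken from the patent):

```python
def conversions_per_ordering(src_pixels: int, scale: float):
    # scale applies to both dimensions, so the pixel count scales by scale**2.
    dst_pixels = int(src_pixels * scale * scale)
    scale_then_convert = dst_pixels   # first processing mode converts the scaled image
    convert_then_scale = src_pixels   # second processing mode converts the original
    return scale_then_convert, convert_then_scale

print(conversions_per_ordering(1_000_000, 0.5))  # shrinking: scaling first is cheaper
print(conversions_per_ordering(1_000_000, 2.0))  # enlarging: converting first is cheaper
```

Whichever image is smaller at conversion time determines the cheaper ordering, which is exactly the selection rule keyed on the target scaling factor.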
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or an electronic device, or may be a functional module or entity in the electronic device. In the embodiment of the present application, an image processing apparatus provided in the embodiment of the present application will be described by taking an example in which the image processing apparatus executes an image processing method.
Fig. 4 shows a schematic diagram of one possible configuration of an image processing apparatus involved in an embodiment of the present application. As shown in fig. 4, the image processing apparatus 700 may include: a processing module 701 and a combining module 702;
the processing module 701 is configured to divide an image to be processed in a first image format into N first sub-images, where N is a positive integer; the processing module 701 is further configured to perform image processing on each first sub-image according to a target processing manner corresponding to the target scaling factor, so as to obtain N second sub-images in a second image format; the combining module 702 is configured to combine the N second sub-images in the second image format to obtain a target image in the second image format; under the condition that the target scaling coefficient is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is the second processing mode; the first processing mode is as follows: firstly, performing image scaling and then performing image format conversion, wherein the second processing mode is as follows: the image format conversion is performed first and then the image scaling is performed.
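The dispatch performed by the processing module amounts to choosing an order of operations per sub-image; a minimal sketch (the scaling and conversion callables are placeholders supplied by the caller, not patent APIs):

```python
def process_sub_images(sub_images, scale, scale_fn, convert_fn):
    # First processing mode (scale < 1): image scaling, then format conversion.
    # Second processing mode (scale > 1): format conversion, then image scaling.
    if scale < 1:
        return [convert_fn(scale_fn(s, scale)) for s in sub_images]
    return [scale_fn(convert_fn(s), scale) for s in sub_images]

# Trace the order of operations with tag-appending placeholders:
trace_scale = lambda s, k: s + ["scaled"]
trace_convert = lambda s: s + ["converted"]
print(process_sub_images([[]], 0.5, trace_scale, trace_convert))
print(process_sub_images([[]], 2.0, trace_scale, trace_convert))
```

The combining module would then stitch the returned second sub-images back into the target image.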
Optionally, in this embodiment of the present application, the target processing manner is a first processing manner;
The processing module 701 is specifically configured to:
according to the target scaling factor, carrying out image reduction on each first sub-image to obtain N third sub-images;
for each third sub-image, acquiring image data information of a first image format of each pixel point in the third sub-image based on the position information of each pixel point in the third sub-image in the first sub-image corresponding to the third sub-image and the image data information of each pixel point in the first sub-image corresponding to the third sub-image;
and carrying out image format conversion on the third sub-image based on the image data information of the first image format of each pixel point in the third sub-image to obtain a second sub-image of the second image format corresponding to the third sub-image.
Optionally, in the embodiment of the present application, the processing module 701 is specifically configured to:
performing interpolation calculation on corresponding pixel points of the first pixel points in the vertical direction in the first sub-image corresponding to the third sub-image based on the height information of the first pixel points in the third sub-image in the first sub-image corresponding to the third sub-image to obtain first image data information of a first image format of the first pixel points;
Performing interpolation calculation on corresponding pixel points of the first pixel points in the horizontal direction in the first sub-image corresponding to the third sub-image based on the width information of the first pixel points in the third sub-image in the first sub-image corresponding to the third sub-image to obtain second image data information of the first image format of the first pixel points;
obtaining target image data information of a first image format of the first pixel point based on the first image data information and the second image data information; wherein the first pixel point is any pixel point in the third sub-image.
Optionally, in this embodiment of the present application, the target processing manner is a second processing manner;
the processing module 701 is specifically configured to:
and converting the image format of the first sub-image based on the position information and the image data information of each pixel point in the first sub-image aiming at each first sub-image to obtain a fourth sub-image in a second image format corresponding to the first sub-image, and amplifying the fourth sub-image according to a target scaling factor to obtain a second sub-image in the second image format.
Optionally, in the embodiment of the present application, the processing module 701 is specifically configured to:
Calculating the height of a target sub-image based on the number of CPU cores of the electronic equipment and the height of the image to be processed;
calculating the width of a target sub-image based on the buffer sizes of all levels of the electronic equipment and the width of the image to be processed;
the image to be processed in the first image format is divided into N first sub-images based on the target sub-image height and the target sub-image width.
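The two partition calculations above are named but not specified in this passage; one plausible heuristic (every formula in this sketch is an assumption, not from the patent) splits rows evenly across CPU cores and caps the tile width so one row of a tile fits within a cache budget:

```python
def tile_dimensions(img_h: int, img_w: int, num_cores: int,
                    cache_bytes: int, bytes_per_pixel: int = 2):
    # Height: one horizontal stripe per core (ceiling division).
    tile_h = -(-img_h // num_cores)
    # Width: keep one row of a tile within the cache budget.
    tile_w = min(img_w, max(1, cache_bytes // bytes_per_pixel))
    return tile_h, tile_w

print(tile_dimensions(1080, 1920, 8, 32 * 1024))
```

Sizing tiles this way is what lets each core process its sub-image without repeatedly re-reading data from main memory.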
In the image processing device provided by the embodiment of the application, the device divides an image to be processed in a first image format into N first sub-images, where N is a positive integer; performs image processing on each first sub-image according to a target processing mode corresponding to the target scaling coefficient, so as to obtain N second sub-images in a second image format; and combines the N second sub-images in the second image format to obtain a target image in the second image format. In the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode. The first processing mode is: image scaling first, then image format conversion; the second processing mode is: image format conversion first, then image scaling. Therefore, the electronic device can select different image processing modes based on the target scaling factor: if the image needs to be reduced, the image is reduced first and the format is converted afterwards, which avoids performing format conversion on pixel points rendered unnecessary by the reduction; if the image needs to be enlarged, the format is converted first and the image is enlarged afterwards, which avoids performing format conversion on the extra pixel points of the enlarged image. The amount of calculation is thereby reduced. In addition, the electronic device can divide the image to be processed into N sub-images and process them separately, so that the electronic device does not need to repeatedly read data because the image is too large, which improves the efficiency of image processing.
The image processing apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (television, TV), teller machine or self-service machine, etc.; the embodiments of the present application are not specifically limited in this respect.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in this embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 3, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 5, the embodiment of the present application further provides an electronic device 800, including a processor 801 and a memory 802, where a program or an instruction capable of running on the processor 801 is stored in the memory 802, and the program or the instruction implements each step of the embodiment of the image processing method when executed by the processor 801, and the steps achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically coupled to the processor 110 via a power management system so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not described in detail herein.
The processor 110 is configured to divide an image to be processed in a first image format into N first sub-images, where N is a positive integer; the processor 110 is further configured to perform image processing on each of the first sub-images according to a target processing manner corresponding to the target scaling factor, so as to obtain N second sub-images in a second image format; the processor 110 is further configured to combine the N second sub-images in the second image format to obtain a target image in the second image format; under the condition that the target scaling coefficient is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is the second processing mode; the first processing mode is as follows: firstly, performing image scaling and then performing image format conversion, wherein the second processing mode is as follows: the image format conversion is performed first and then the image scaling is performed.
Optionally, in this embodiment of the present application, the target processing manner is a first processing manner;
the processor 110 is specifically configured to:
according to the target scaling factor, carrying out image reduction on each first sub-image to obtain N third sub-images;
for each third sub-image, acquiring image data information of a first image format of each pixel point in the third sub-image based on the position information of each pixel point in the third sub-image in the first sub-image corresponding to the third sub-image and the image data information of each pixel point in the first sub-image corresponding to the third sub-image;
And carrying out image format conversion on the third sub-image based on the image data information of the first image format of each pixel point in the third sub-image to obtain a second sub-image of the second image format corresponding to the third sub-image.
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to:
performing interpolation calculation on corresponding pixel points of the first pixel points in the vertical direction in the first sub-image corresponding to the third sub-image based on the height information of the first pixel points in the third sub-image in the first sub-image corresponding to the third sub-image to obtain first image data information of a first image format of the first pixel points;
performing interpolation calculation on corresponding pixel points of the first pixel points in the horizontal direction in the first sub-image corresponding to the third sub-image based on the width information of the first pixel points in the third sub-image in the first sub-image corresponding to the third sub-image to obtain second image data information of the first image format of the first pixel points;
obtaining target image data information of a first image format of the first pixel point based on the first image data information and the second image data information; wherein the first pixel point is any pixel point in the third sub-image.
Optionally, in this embodiment of the present application, the target processing manner is a second processing manner;
the processor 110 is specifically configured to:
and converting the image format of the first sub-image based on the position information and the image data information of each pixel point in the first sub-image aiming at each first sub-image to obtain a fourth sub-image in a second image format corresponding to the first sub-image, and amplifying the fourth sub-image according to a target scaling factor to obtain a second sub-image in the second image format.
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to:
calculating the height of a target sub-image based on the number of CPU cores of the electronic equipment and the height of the image to be processed;
calculating the width of a target sub-image based on the buffer sizes of all levels of the electronic equipment and the width of the image to be processed;
the image to be processed in the first image format is divided into N first sub-images based on the target sub-image height and the target sub-image width.
In the electronic device provided by the embodiment of the application, the electronic device divides an image to be processed in a first image format into N first sub-images, where N is a positive integer; performs image processing on each first sub-image according to a target processing mode corresponding to the target scaling coefficient, so as to obtain N second sub-images in a second image format; and combines the N second sub-images in the second image format to obtain a target image in the second image format. In the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode. The first processing mode is: image scaling first, then image format conversion; the second processing mode is: image format conversion first, then image scaling. Therefore, the electronic device can select different image processing modes based on the target scaling factor: if the image needs to be reduced, the image is reduced first and the format is converted afterwards, which avoids performing format conversion on pixel points rendered unnecessary by the reduction; if the image needs to be enlarged, the format is converted first and the image is enlarged afterwards, which avoids performing format conversion on the extra pixel points of the enlarged image. The amount of calculation is thereby reduced. In addition, the electronic device can divide the image to be processed into N sub-images and process them separately, so that the electronic device does not need to repeatedly read data because the image is too large, which improves the efficiency of image processing.
It should be appreciated that in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 109 may include volatile memory or nonvolatile memory, or the memory 109 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically Erasable PROM, EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (Static RAM, SRAM), dynamic RAM (Dynamic RAM, DRAM), synchronous DRAM (Synchronous DRAM, SDRAM), double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), enhanced SDRAM (Enhanced SDRAM, ESDRAM), synchlink DRAM (Synchlink DRAM, SLDRAM), or direct Rambus RAM (Direct Rambus RAM, DRRAM). Memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the image processing method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, a system-on-a-chip, or the like.
An embodiment of the present application provides a computer program product stored in a storage medium. The program product is executed by at least one processor to implement the processes of the above image processing method embodiments and achieve the same technical effects, which are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in a reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or, of course, by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Enlightened by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (12)

1. An image processing method, the method comprising:
dividing an image to be processed in a first image format into N first sub-images, wherein N is a positive integer;
performing image processing on each first sub-image according to a target processing mode corresponding to a target scaling factor, so as to obtain N second sub-images in a second image format;
combining the N second sub-images in the second image format to obtain a target image in the second image format;
wherein, in the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode; the first processing mode is: performing image scaling first and then performing image format conversion; and the second processing mode is: performing image format conversion first and then performing image scaling.
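The order-selection rule of claim 1 can be sketched in Python. This is an illustrative sketch, not the patent's implementation: `scale_image` and `convert_format` are hypothetical stand-ins (a nearest-neighbour resize and a tagging stub) for the real scaling and format-conversion steps; the point is only that the costly per-pixel format conversion always runs on the smaller of the two image sizes.

```python
def scale_image(img, factor):
    """Nearest-neighbour resize of a 2-D list of pixels (illustrative only)."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    return [[img[min(h - 1, int(y / factor))][min(w - 1, int(x / factor))]
             for x in range(nw)] for y in range(nh)]

def convert_format(img):
    """Stand-in for a real format conversion (e.g. YUV -> RGB): it just
    tags each pixel so the call order is observable."""
    return [[("converted", p) for p in row] for row in img]

def process_sub_image(sub_image, scale_factor):
    """Claim 1's dispatch rule for one sub-image."""
    if scale_factor < 1:
        # First processing mode: shrink first, then convert the format,
        # so conversion touches fewer pixels.
        return convert_format(scale_image(sub_image, scale_factor))
    # Second processing mode: convert first, then enlarge, so conversion
    # runs on the smaller, original-size image.
    return scale_image(convert_format(sub_image), scale_factor)
```

Either branch produces a sub-image in the target format; only the relative cost of the two steps changes with the scaling factor.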
2. The method of claim 1, wherein the target processing mode is the first processing mode;
wherein the performing image processing on each first sub-image according to the target processing mode corresponding to the target scaling factor to obtain the N second sub-images in the second image format comprises:
performing image reduction on each first sub-image according to the target scaling factor to obtain N third sub-images;
for each third sub-image, acquiring image data information of the first image format of each pixel point in the third sub-image based on the position information of each pixel point in the third sub-image in the first sub-image corresponding to the third sub-image and the image data information of each pixel point in the first sub-image corresponding to the third sub-image;
and converting the image format of the third sub-image based on the image data information of the first image format of each pixel point in the third sub-image to obtain a second sub-image of the second image format corresponding to the third sub-image.
3. The method according to claim 2, wherein the acquiring the image data information of the first image format of each pixel point in the third sub-image based on the position information of each pixel point in the third sub-image in the first sub-image corresponding to the third sub-image and the image data information of each pixel point in the first sub-image corresponding to the third sub-image comprises:
performing interpolation calculation on corresponding pixel points of the first pixel point in the vertical direction in the first sub-image corresponding to the third sub-image based on the height information of the first pixel point in the third sub-image in the first sub-image corresponding to the third sub-image to obtain first image data information of the first image format of the first pixel point;
performing interpolation calculation on corresponding pixel points of the first pixel point in the horizontal direction in the first sub-image corresponding to the third sub-image based on the width information of the first pixel point in the third sub-image in the first sub-image corresponding to the third sub-image to obtain second image data information of the first image format of the first pixel point;
obtaining target image data information of the first image format of the first pixel point based on the first image data information and the second image data information;
the first pixel point is any pixel point in the third sub-image.
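The two-pass interpolation of claim 3 (vertical first, then horizontal) is ordinary bilinear sampling. A minimal sketch, under the assumption of a single-channel source plane and fractional coordinates `(fy, fx)` that map the output pixel back into the corresponding first sub-image; the function name and signature are illustrative, not from the patent:

```python
def bilinear_sample(src, fy, fx):
    """One output pixel per claim 3: interpolate vertically between the
    two source rows bracketing fy (the height information), then combine
    the two column results horizontally at fx (the width information).
    src is a 2-D list of channel values."""
    h, w = len(src), len(src[0])
    y0, x0 = int(fy), int(fx)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    wy, wx = fy - y0, fx - x0
    # Vertical interpolation at each of the two bracketing columns
    # (the "first image data information").
    left = src[y0][x0] * (1 - wy) + src[y1][x0] * wy
    right = src[y0][x1] * (1 - wy) + src[y1][x1] * wy
    # Horizontal interpolation combines the two column results into the
    # target image data information for this pixel.
    return left * (1 - wx) + right * wx
```

Running this once per pixel of the reduced (third) sub-image yields its first-image-format data, which can then be format-converted as in claim 2.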
4. The method of claim 1, wherein the target processing mode is the second processing mode;
wherein the performing image processing on each first sub-image according to the target processing mode corresponding to the target scaling factor to obtain the N second sub-images in the second image format comprises:
for each first sub-image, performing image format conversion on the first sub-image based on the position information and the image data information of each pixel point in the first sub-image to obtain a fourth sub-image in the second image format corresponding to the first sub-image, and performing image magnification on the fourth sub-image according to the target scaling factor to obtain a second sub-image in the second image format corresponding to the first sub-image.
5. The method of claim 1, wherein the dividing the image to be processed in the first image format into N first sub-images comprises:
calculating a target sub-image height based on the number of CPU cores of the electronic device and the height of the image to be processed;
calculating a target sub-image width based on the size of each level of cache of the electronic device and the width of the image to be processed;
dividing the image to be processed in the first image format into the N first sub-images based on the target sub-image height and the target sub-image width.
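Claim 5 derives the tile size from the CPU core count and the cache sizes but does not give exact formulas. The sketch below assumes one plausible choice (height split evenly across cores so each core gets a band, width capped so one tile row fits in the chosen cache level); `bytes_per_pixel` and the specific arithmetic are assumptions for illustration only:

```python
import math

def tile_dimensions(image_h, image_w, cpu_cores, cache_bytes,
                    bytes_per_pixel=4):
    """Illustrative tile sizing in the spirit of claim 5."""
    # Height: one horizontal band per CPU core, so the resulting N first
    # sub-images can be processed in parallel across cores.
    tile_h = math.ceil(image_h / cpu_cores)
    # Width: cap it so a single tile row fits in the given cache level,
    # keeping the per-row working set cache-resident.
    tile_w = min(image_w, max(1, cache_bytes // bytes_per_pixel))
    return tile_h, tile_w
```

For example, on a hypothetical 8-core device with a 32 KiB per-core data cache, a 1920x1080 image would be cut into 135-pixel-high bands whose rows already fit in cache.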
6. An image processing apparatus, characterized in that the image processing apparatus comprises: a processing module and a combining module;
the processing module is configured to divide an image to be processed in a first image format into N first sub-images, wherein N is a positive integer;
The processing module is further configured to perform image processing on each of the first sub-images according to a target processing manner corresponding to the target scaling factor, so as to obtain N second sub-images in a second image format;
the combining module is configured to combine the N second sub-images in the second image format to obtain a target image in the second image format;
wherein, in the case that the target scaling factor is smaller than 1, the target processing mode is a first processing mode; or, in the case that the target scaling factor is greater than 1, the target processing mode is a second processing mode; the first processing mode is: performing image scaling first and then performing image format conversion; and the second processing mode is: performing image format conversion first and then performing image scaling.
7. The apparatus of claim 6, wherein the target processing mode is the first processing mode;
the processing module is specifically configured to:
performing image reduction on each first sub-image according to the target scaling factor to obtain N third sub-images;
for each third sub-image, acquiring image data information of the first image format of each pixel point in the third sub-image based on the position information of each pixel point in the third sub-image in the first sub-image corresponding to the third sub-image and the image data information of each pixel point in the first sub-image corresponding to the third sub-image;
and converting the image format of the third sub-image based on the image data information of the first image format of each pixel point in the third sub-image to obtain a second sub-image of the second image format corresponding to the third sub-image.
8. The apparatus of claim 7, wherein
the processing module is specifically configured to:
performing interpolation calculation on corresponding pixel points of the first pixel point in the vertical direction in the first sub-image corresponding to the third sub-image based on the height information of the first pixel point in the third sub-image in the first sub-image corresponding to the third sub-image to obtain first image data information of the first image format of the first pixel point;
performing interpolation calculation on corresponding pixel points of the first pixel point in the horizontal direction in the first sub-image corresponding to the third sub-image based on the width information of the first pixel point in the third sub-image in the first sub-image corresponding to the third sub-image to obtain second image data information of the first image format of the first pixel point;
obtaining target image data information of the first image format of the first pixel point based on the first image data information and the second image data information;
The first pixel point is any pixel point in the third sub-image.
9. The apparatus of claim 6, wherein the target processing mode is the second processing mode;
the processing module is specifically configured to:
for each first sub-image, performing image format conversion on the first sub-image based on the position information and the image data information of each pixel point in the first sub-image to obtain a fourth sub-image in the second image format corresponding to the first sub-image, and performing image magnification on the fourth sub-image according to the target scaling factor to obtain a second sub-image in the second image format corresponding to the first sub-image.
10. The apparatus of claim 6, wherein
the processing module is specifically configured to:
calculating a target sub-image height based on the number of CPU cores of the electronic device and the height of the image to be processed;
calculating a target sub-image width based on the size of each level of cache of the electronic device and the width of the image to be processed;
dividing the image to be processed in the first image format into the N first sub-images based on the target sub-image height and the target sub-image width.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any of claims 1 to 5.
CN202310182153.1A 2023-02-27 2023-02-27 Image processing method, device, electronic equipment and readable storage medium Pending CN116228525A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310182153.1A CN116228525A (en) 2023-02-27 2023-02-27 Image processing method, device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN116228525A true CN116228525A (en) 2023-06-06

Family

ID=86578259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310182153.1A Pending CN116228525A (en) 2023-02-27 2023-02-27 Image processing method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116228525A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination