CN117408884A - Image processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117408884A
Authority
CN
China
Prior art keywords
image
rendering
processing
parameters
resolution
Prior art date
Legal status
Pending
Application number
CN202311543160.6A
Other languages
Chinese (zh)
Inventor
刘国祥
陈春辉
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311543160.6A
Publication of CN117408884A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076 Scaling based on super-resolution, using the original low-resolution images to iteratively correct the high-resolution images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023 Scaling of whole images or parts thereof based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046 Scaling of whole images or parts thereof using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device and a storage medium, and belongs to the technical field of image processing. The method includes: rendering a first image and a second image, where the first image and the second image are rendered based on the field of view of a virtual camera while that field of view moves according to first motion information, and the resolution of both the first image and the second image is a first resolution; determining sampling parameters and fusion parameters of the first image and the second image; upsampling the first image and the second image based on the sampling parameters to obtain a processed third image and a processed fourth image, where the resolution of the third image and the fourth image is a second resolution greater than the first resolution; and fusing the third image and the fourth image according to the fusion parameters to obtain a processed fifth image, where the resolution of the fifth image is the second resolution.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method, an image processing device, electronic equipment and a storage medium.
Background
In the related art, for an electronic game, both the quality of the rendered picture and the final frame rate have a large impact on the gaming experience. When the computing power of the electronic device is fixed, image quality and frame rate trade off against each other: the higher the image quality, the lower the frame rate.
To balance frame rate and image quality, the rendering resolution can be reduced to save computing power, and super-resolution processing can then be applied to the rendered low-resolution image to improve its quality. However, the low-resolution image loses some picture detail, and because that detail information is missing during super-resolution processing, the processed image still lacks detail, impairing image quality.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device and a storage medium, which can solve the problem of image quality impairment caused by the loss of picture detail.
In a first aspect, an embodiment of the present application provides an image processing method, including:
rendering a first image and a second image, where the first image and the second image are rendered based on the field of view of a virtual camera while that field of view moves according to first motion information, and the resolution of both the first image and the second image is a first resolution;
determining sampling parameters and fusion parameters of the first image and the second image;
upsampling the first image and the second image based on the sampling parameters to obtain a processed third image and a processed fourth image, where the resolution of the third image and the fourth image is a second resolution greater than the first resolution; and
fusing the third image and the fourth image according to the fusion parameters to obtain a processed fifth image, where the resolution of the fifth image is the second resolution.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a rendering module, configured to render a first image and a second image, where the first image and the second image are rendered based on the field of view of a virtual camera while that field of view moves according to first motion information, and the resolution of both the first image and the second image is a first resolution;
a determining module, configured to determine sampling parameters and fusion parameters of the first image and the second image; and
a processing module, configured to upsample the first image and the second image based on the sampling parameters to obtain a processed third image and a processed fourth image, where the resolution of the third image and the fourth image is a second resolution greater than the first resolution,
and to fuse the third image and the fourth image according to the fusion parameters to obtain a processed fifth image, where the resolution of the fifth image is the second resolution.
In a third aspect, embodiments of the present application provide an electronic device including a processor and a memory storing a program or instructions executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the method of the first aspect.
In a fifth aspect, embodiments of the present application provide a chip including a processor and a communication interface coupled to the processor, the processor being configured to run a program or instructions to implement the steps of the method of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method of the first aspect.
In the embodiment of the application, a camera shake approach is used: the rendering pipeline renders the first image and the second image with the field of view of the virtual camera at different positions, so that the first image and the second image contain different, mutually complementary image information. In the upsampling stage of super-resolution processing, the first image and the second image are upsampled separately, and the image details recorded in the higher-resolution third image and fourth image are fused, so that a high-resolution image with more picture detail is obtained and the problems of picture detail loss and picture quality impairment are effectively reduced.
Drawings
FIG. 1 illustrates a flow chart of an image processing method of some embodiments of the present application;
FIG. 2 illustrates a rendering schematic of camera shake of some embodiments of the present application;
FIG. 3 illustrates a schematic diagram of an information processing network according to some embodiments of the present application;
fig. 4 shows a block diagram of the image processing apparatus of some embodiments of the present application;
FIG. 5 shows a block diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, there may be one or more first objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method, the device, the electronic equipment and the storage medium provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
In some embodiments of the present application, an image processing method is provided. Fig. 1 shows a flowchart of the image processing method of some embodiments of the present application; as shown in fig. 1, the image processing method includes:
step 102, rendering a first image and a second image.
The first image and the second image are rendered based on the field of view of a virtual camera while that field of view moves according to first motion information, and the resolution of both the first image and the second image is a first resolution.
In the embodiment of the application, the first image and the second image are low-resolution images rendered using a camera shake method, which is an image sampling technique. When rendering a low-resolution image, common practice keeps the field of view of the virtual camera fixed and samples pixel values within that field of view; because the pixel density is fixed, image information in the spaces between pixels is lost.
In view of the above problem, the embodiment of the present application controls the field of view of the virtual camera to move according to the preset first motion information, so as to render multiple frames that are consecutive in time but differ in field-of-view position, specifically including the first image and the second image.
The virtual camera corresponds to the user's viewpoint: its field of view determines the range of the game scene that the user can see, and objects within that field of view are rendered as part of the final on-screen image.
For example, fig. 2 shows a rendering schematic of camera shake according to some embodiments of the present application. As shown in fig. 2, the field of view of the virtual camera is shifted in different directions according to the preset first motion information, obtaining multiple images that record different pixel information, such as the first image 202 and the second image 204 shown in fig. 2.
As shown in fig. 2, the first image 202 records a plurality of pixels 2022, and information of the interval region between the pixels 2022 is not recorded. The second image 204 has a different field of view position than the first image 202, and the plurality of pixels 2042 recorded in the second image 204 include information of the interval region between the pixels 2022 of the first image 202.
That is, the first image 202 and the second image 204 record different picture details. Combining the details recorded in the two images allows the rendered result to include more detail and provides more temporal information for super-resolution upsampling, so that more picture detail is retained in the high-resolution image obtained by super-resolution processing.
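The patent leaves the "preset first motion information" unspecified. Temporal upscalers commonly derive per-frame sub-pixel offsets from a low-discrepancy sequence such as the Halton sequence; the sketch below illustrates that common choice and is not taken from the patent (`halton` and `jitter_offsets` are illustrative names):

```python
def halton(index: int, base: int) -> float:
    """Radical inverse of `index` in the given base, in [0, 1)."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def jitter_offsets(n: int):
    """Sub-pixel (x, y) view offsets in [-0.5, 0.5) for n consecutive frames."""
    return [(halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5) for i in range(n)]
```

Consecutive frames rendered with such offsets sample different positions in the gaps between pixel centres, which is what makes the first image 202 and the second image 204 complementary.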
Step 104, determining sampling parameters and fusion parameters of the first image and the second image.
In the embodiment of the application, during super-resolution processing, each pixel in the image is sampled within a certain sampling radius, so that pixels can be filled in and the low-resolution image supplemented into a high-resolution image.
In the embodiment of the application, the first image and the second image are rendered using camera shake, and because their field-of-view positions differ, they record different picture details. Fusion parameters therefore need to be determined, so that the picture details in the first image and in the second image can be fused based on those parameters to obtain a high-resolution image with more picture detail.
And 106, performing up-sampling processing on the first image and the second image based on the sampling parameters to obtain a processed third image and a processed fourth image.
The resolution of the third image and the fourth image is a second resolution, and the second resolution is larger than the first resolution.
In the embodiment of the application, the first image and the second image are each upsampled using the sampling parameters. Upsampling fills in pixels in the low-resolution image, supplementing it into a high-resolution image.
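As a minimal illustration of the resolution change only (the patent's sampler is learned and content-adaptive; this nearest-neighbour version merely shows what "filling in pixels" means dimensionally):

```python
import numpy as np

def upsample_2x_nearest(img: np.ndarray) -> np.ndarray:
    """Double each spatial dimension by repeating pixels (nearest neighbour)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
```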
Illustratively, the first image and the second image are rendered at a first resolution of 960×540, and the upsampling process may super-resolve the first image and the second image to a second resolution of 1920×1080, resulting in a third image and a fourth image of 1920×1080 resolution.
Illustratively, the first image and the second image are rendered at a first resolution of 1280×720, and the upsampling process may super-resolve the first image and the second image to a second resolution of 2560×1440, resulting in a third image and a fourth image with a resolution of 2560×1440.
Taking an original rendering resolution of 1920×1080 as an example, reducing the resolution from 1920×1080 to 960×540 cuts the number of pixels to be rendered from 2,073,600 to 518,400, that is, only one quarter of the pixels are rendered. This greatly reduces the computing power required to render each image, and the saved computing power can be used to render more frames, improving the game frame rate.
A lower resolution, however, means lower picture quality. Here, the 960×540 low-resolution image is super-resolved into a 1920×1080 high-resolution image by upsampling, so that the picture quality finally displayed on screen approaches that of the original rendering resolution; that is, a balance between picture quality and frame rate is achieved.
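The pixel arithmetic above can be checked directly (a sketch; `pixel_savings` is an illustrative helper, not part of the patent):

```python
def pixel_savings(full_w: int, full_h: int, low_w: int, low_h: int):
    """Pixel counts at full and reduced resolution, and the fraction of
    per-frame shading work saved by rendering at the reduced resolution."""
    full, low = full_w * full_h, low_w * low_h
    return full, low, 1 - low / full

full, low, saved = pixel_savings(1920, 1080, 960, 540)
# full = 2073600, low = 518400, saved = 0.75
```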
And step 108, carrying out fusion processing on the third image and the fourth image according to the fusion parameters to obtain a processed fifth image, wherein the resolution of the fifth image is the second resolution.
In the embodiment of the application, to address the loss of picture detail caused by super-resolution methods in the prior art, the field of view of the virtual camera is controlled, in a camera shake manner, to move according to preset first motion information, so that multiple frames that are consecutive in time but differ in field-of-view position are rendered.
Since the field-of-view positions of these images differ, the picture details they record also differ, and after super-resolution processing the third image and the fourth image each record different picture details. Therefore, fusing the third image and the fourth image based on the determined fusion parameters yields a fifth image that includes the details recorded in both, so that the fifth image retains more picture detail.
In the embodiment of the application, the camera shake approach renders, in the rendering pipeline, the first image and the second image with the field of view of the virtual camera at different positions, so that the two images contain different, mutually complementary image information. In the upsampling stage of super-resolution processing, the first image and the second image are upsampled separately, and the image details recorded in the higher-resolution third image and fourth image are then fused, so that a high-resolution image with more picture detail is obtained and the problems of detail loss and quality impairment are effectively reduced.
In some embodiments of the present application, rendering the first image and the second image includes:
determining image rendering parameters according to the field of view of the virtual camera and the first motion information, wherein the image rendering parameters comprise image information to be rendered, depth information and motion vector information;
rendering based on the image rendering parameters to obtain a first image and a second image;
determining sampling parameters and fusion parameters of the first image and the second image includes:
performing information processing on the image rendering parameters to obtain processed network input data; and
inputting the network input data into an information processing network to obtain the sampling parameters and the fusion parameters.
In the embodiment of the application, the electronic device renders the first image and the second image in a rendering pipeline, specifically, renders the first image and the second image in the rendering pipeline through a rasterization process or a ray tracing process and other rendering technologies, and stores image rendering parameters in the rendering process, specifically including image information to be rendered, depth information and motion vector information.
The image information to be rendered includes the scene and objects within the field of view of the virtual camera. The motion vector information indicates how a pixel moves between two temporally consecutive frames: because the field of view of the virtual camera moves according to the preset first motion information, the field-of-view positions of the first image and the second image differ, so pixels recording the same object appear at different positions in the two images. Recording motion vector information allows the same pixel content to be matched between the two images, such as the first image and the second image.
The depth information is used to judge whether objects in two temporally adjacent frames are occluded, so that the same pixels in the two frames can be matched more accurately.
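The depth test can be sketched as a simple discontinuity check (the function name and threshold are illustrative assumptions; the patent does not give a concrete criterion):

```python
def is_occluded(depth_prev: float, depth_curr: float, eps: float = 1e-2) -> bool:
    """Flag a matched pixel pair as occluded/disoccluded when the depth of
    the two samples differs too much, so the pair is not fused."""
    return abs(depth_prev - depth_curr) > eps
```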
After the first image and the second image are rendered, the image rendering parameters generated during rendering are preprocessed to obtain network input data containing more effective information, such as luminance information, velocity information and occlusion information, so that consecutive frames can be fused better.
Fig. 3 is a schematic diagram of an information processing network according to some embodiments of the present application. As shown in fig. 3, the preprocessed network Input data is fed into the information processing network, and the sampling parameters and fusion parameters are obtained by network inference. Specifically, the dimension of the network Input data is <1×3×540×960>. The information processing network includes 3 convolutional layers, namely Conv1, Conv2 and Conv3 shown in fig. 3, where the convolution kernel W of layer Conv1 has size <32×3×3×3> and its bias term B has size <32>; the convolution kernel W of layer Conv2 has size <32×32×3×3> and its bias term B has size <32>; and the convolution kernel W of layer Conv3 has size <4×32×3×3> and its bias term B has size <4>. The sizes of the convolution kernels W and bias terms B may be selected according to practical requirements, which the embodiment of the present application does not specifically limit.
Layers Conv1 and Conv2, and layers Conv2 and Conv3, are each connected through a ReLU function, i.e. a rectified linear unit.
After the network Input data passes through the 3 convolutional layers and 2 ReLU functions and is processed by a Sigmoid activation function, a DepthToSpace layer finally rearranges the data from the depth (channel) dimension into the spatial dimension, giving a final Output with dimension <1×1×1080×1920>, namely the sampling parameters and the fusion parameters.
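The DepthToSpace step rearranges the network's 4 output channels into 2×2 spatial blocks, which is what turns a 540×960 feature map into a 1080×1920 output. A NumPy sketch, assuming NCHW layout and the common depth-column-row channel ordering (the patent does not state the ordering):

```python
import numpy as np

def depth_to_space(x: np.ndarray, block: int) -> np.ndarray:
    """Move each group of block*block channels into a block×block spatial tile."""
    n, c, h, w = x.shape
    assert c % (block * block) == 0
    x = x.reshape(n, block, block, c // (block * block), h, w)
    x = x.transpose(0, 3, 4, 1, 5, 2)  # n, c', h, block_h, w, block_w
    return x.reshape(n, c // (block * block), h * block, w * block)
```

With block=2, an input of shape (1, 4, 540, 960) becomes (1, 1, 1080, 1920).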
In the embodiment of the application, the image rendering parameters generated during image rendering are preprocessed for the network, and the sampling parameters and fusion parameters are inferred by the processing network, so that better upsampling and fusion effects can be obtained, the finally processed high-resolution image contains more picture detail, and image quality is improved.
In some embodiments of the present application, rendering the first image and the second image includes:
rendering the first image and the second image through an image rendering pipeline;
after determining the sampling parameters and the fusion parameters of the first image and the second image, the method further comprises:
transmitting the first image, the second image, the sampling parameters and the fusion parameters from the image rendering pipeline to an image processing pipeline;
Upsampling the first image and the second image based on the sampling parameters, comprising:
upsampling the first image and the second image through an image processing pipeline;
and carrying out fusion processing on the third image and the fourth image according to the fusion parameters, wherein the fusion processing comprises the following steps:
performing fusion processing on the third image and the fourth image through an image processing pipeline; and
the method further comprises the steps of:
the fifth image is sent to the image rendering pipeline through the image processing pipeline.
In the embodiment of the application, the image rendering pipeline is specifically a rendering pipeline running in a graphics processor (Graphics Processing Unit, GPU), and the rendering pipeline is used for rendering images, such as game images, user interface images, and the like, wherein the first image and the second image are both rendered through the image rendering pipeline.
In a game scenario, the efficiency of the rendering pipeline determines the number of frames that can be generated per unit time, and thus the game frame rate. To avoid adding extra load to the rendering pipeline, the embodiment of the application provides a separate image processing pipeline.
The image processing pipeline upsamples the low-resolution images rendered by the image rendering pipeline and fuses the results into a high-resolution image. In some embodiments, the image processing pipeline may run in a neural network processing unit (NPU) of the electronic device, or on a separate external computing chip.
After the image rendering pipeline renders the first image and the second image, it sends them, together with the sampling parameters and fusion parameters generated during rendering, to the image processing pipeline. The image processing pipeline upsamples the first image and the second image and fuses the resulting third and fourth images, obtaining a fifth image with higher resolution and more image detail.
During this process, the image rendering pipeline can continue rendering, avoiding the frame-rate drop that would result from upsampling, image fusion and similar operations occupying GPU computing power.
After producing the fifth image, the image processing pipeline transmits it back to the image rendering pipeline, which continues processing on top of it, for example overlaying the user interface and icons, to obtain a final frame for on-screen display.
By providing a separate image processing pipeline to upsample and fuse the images rendered in the image rendering pipeline, the embodiment of the application does not occupy the computing power used for rendering, so that rendering efficiency is preserved and both picture quality and game frame rate are ensured.
In some embodiments of the present application, the third image comprises a first pixel, and the sampling parameter comprises a sampling radius;
and carrying out fusion processing on the third image and the fourth image according to fusion parameters to obtain a processed fifth image, wherein the fusion processing comprises the following steps:
determining the pixel position of the first pixel in the fourth image according to the pixel position of the first pixel in the third image and the motion vector information;
determining a first image area in the third image according to the sampling radius and the pixel position of the first pixel in the third image;
determining a second image area in the fourth image according to the sampling radius and the pixel position of the first pixel in the fourth image, wherein the image content in the second image area is the same as the image content in the first image area;
and carrying out fusion processing on the first image area and the second image area based on the fusion parameters to obtain a fifth image.
In an embodiment of the present application, the sampling parameter includes a sampling radius. Specifically, during super-resolution processing, each pixel in the image is sampled according to a certain sampling radius, so that the pixels are filled, and the low-resolution image is supplemented to the high-resolution image. In the related art, the same sampling radius is used for different pixels in the image, and when the pixels are positioned at the corners of the image, the sampling radius covers a large number of invalid areas, so that the calculation force is wasted.
According to the embodiment of the application, an appropriate sampling radius is inferred through a network and adaptively acquired for each pixel, so fine detail content can be reproduced more efficiently. By perceiving the content around the sampling point, the network determines whether a pixel lies at a picture edge and adjusts the sampling radius for different image content.
In this embodiment of the present application, the third image is obtained by performing super-resolution up-sampling on the first image, and the fourth image is obtained by performing super-resolution up-sampling on the second image. The first image and the second image are two temporally adjacent image frames. Here, the case in which the second image is the frame following the first image is taken as an example.
The first pixel may be any pixel in the third image. For each pixel in the third image, its position in the fourth image of the next frame is tracked according to its pixel position in the third image and the stored motion vector information, thereby tracking the same image content across the third image and the fourth image.
Then, according to the sampling radius determined for the first pixel, the surrounding pixels of the first pixel are sampled in the third image and in the fourth image respectively, obtaining a first image area in the third image and a second image area in the fourth image. The image contents of the first image area and the second image area are the same, that is, the content within the same virtual-camera field of view.
Thus, content alignment between two temporally successive image frames is achieved. The content-aligned first image area and second image area are fused, and the different image details recorded in the two frames complement each other, yielding a fifth image with more picture detail.
According to the embodiment of the application, the sampling radius can be adaptively acquired for different pixels, so fine detail content can be reproduced well; the sampling radius is adjusted according to the content around the sampling point, such as whether it lies on an edge. Meanwhile, image frames of different time sequences are fused according to the fusion parameters, and because the fusion granularity is pixel-level, a finer fusion effect is achieved.
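As a concrete illustration of the region alignment and pixel-level fusion described above, the following sketch assumes single-channel images stored as NumPy arrays. The function name, the per-pixel fusion weight map `alpha_map`, and the use of a simple mean over each region are illustrative assumptions, not the application's actual implementation:

```python
import numpy as np

def fuse_frames(third, fourth, motion, radius_map, alpha_map):
    # third, fourth: HxW up-sampled frames; motion: HxWx2 per-pixel motion
    # vectors mapping positions in the third image into the fourth image;
    # radius_map: per-pixel sampling radius; alpha_map: per-pixel fusion
    # weight in [0, 1] (hypothetical stand-in for the fusion parameters).
    h, w = third.shape
    fifth = np.empty_like(third, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # Track the pixel into the next frame via its motion vector.
            dy, dx = motion[y, x]
            y2 = int(np.clip(y + dy, 0, h - 1))
            x2 = int(np.clip(x + dx, 0, w - 1))
            r = int(radius_map[y, x])
            # First image area around the pixel in the third image.
            a = third[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            # Second image area around the tracked pixel in the fourth image.
            b = fourth[max(0, y2 - r):y2 + r + 1, max(0, x2 - r):x2 + r + 1]
            # Pixel-level temporal blend of the two aligned areas.
            t = float(alpha_map[y, x])
            fifth[y, x] = t * a.mean() + (1.0 - t) * b.mean()
    return fifth
```

A per-pixel `radius_map` lets border pixels use a smaller region, avoiding the wasted sampling of invalid areas described for the related art.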
In some embodiments of the present application, the image processing method is performed by an electronic device comprising an image processing chip for generating a fifth image;
after rendering the first image and the second image based on the image rendering parameters, the method further comprises:
the image rendering parameters, the first image and the second image are sent to an image processing chip.
In the embodiment of the application, the electronic device includes an image processing chip, which may be an external (off-chip) component; that is, the image processing chip does not occupy the computing power of the central processing unit (Central Processing Unit, CPU) or the graphics processing unit (Graphics Processing Unit, GPU) of the electronic device to perform super-resolution processing, saving more computing power to increase the game frame rate.
Specifically, it is assumed that the module performing the super-resolution and image-fusion processing is defined as a VNSS module, and all operations of the VNSS module are performed on the image processing chip. After the first image and the second image are rendered, the rendered images and the image rendering parameters generated during rendering are sent to the image processing chip, where the image rendering parameters specifically include image information to be rendered, depth information, motion vector information, and the like.
Super-resolution up-sampling and fusion processing are then performed by the image processing chip, obtaining a fifth image with higher resolution that contains more picture details.
After the fifth image is obtained, the image processing chip sends it back to the original rendering pipeline; after the display-resolution fifth image undergoes picture post-processing in the rendering pipeline, a target image is obtained and displayed on the screen.
According to the embodiment of the application, super-resolution up-sampling is performed on the low-resolution rendered image by the external, independent image processing chip, so the CPU and GPU computing power of the electronic device is not occupied; more computing power can be freed, and the game frame rate can be improved.
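The hand-off between the rendering pipeline and the off-chip image processor can be summarized with a minimal sketch; `render`, `chip_process`, and `post_process` are hypothetical stand-ins for the stages named above, not real APIs:

```python
def offchip_superres(render, chip_process, post_process, params):
    # Render the two low-resolution frames in the ordinary rendering pipeline.
    first, second = render(params)
    # Hand frames plus rendering parameters (image info, depth, motion
    # vectors) to the external image processing chip, which performs the
    # super-resolution up-sampling and fusion off the CPU/GPU.
    fifth = chip_process(first, second, params)
    # The fifth image returns to the rendering pipeline for picture
    # post-processing before being displayed as the target image.
    return post_process(fifth)
```

The point of the structure is that `chip_process` runs on separate hardware, so the rendering stages before and after it keep their full CPU/GPU budget.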
In some embodiments of the present application, an image processing method is performed by an electronic device comprising a central processor, a graphics processor, and a neural network processor for generating a fifth image;
Rendering based on the image rendering parameters to obtain a first image and a second image, including:
rendering in a graphics processor based on the image rendering parameters to obtain a first image and a second image; and
the method further comprises the steps of:
transmitting the image rendering parameters, the first image and the second image to a central processing unit through a graphics processor;
the image rendering parameters, the first image and the second image are sent to the neural network processor by the central processor.
In the embodiment of the present application, the electronic device includes a central processing unit (CPU), a graphics processing unit (GPU), and a neural network processor (Neural Network Processing Unit, NPU). Specifically, it is assumed that the module performing the super-resolution and image-fusion processing is defined as a VNSS module, and the operations of the VNSS module are performed on the NPU.
The game rendering pipeline runs on the GPU. Because there is currently no direct communication path between the GPU and the NPU, after the first image and the second image are rendered in the GPU rendering pipeline, the rendered images and the image rendering parameters generated during rendering (specifically including image information to be rendered, depth information, motion vector information, and the like) are sent from the GPU memory (i.e., video memory) to the CPU memory. The CPU then copies the data from the CPU memory to the NPU memory, the NPU inputs the processed data into the network, and the sampling radius adjustment coefficient and the time-sequence fusion coefficient are obtained through network inference.
Then, the NPU copies the predicted data from the NPU memory to the CPU memory, the data are copied from the CPU memory to the GPU memory, and the image processing steps continue in the GPU rendering pipeline.
According to the embodiment of the application, the CPU schedules the data interaction between the graphics processor and the neural network processor, realizing data intercommunication between the GPU and the NPU and improving image processing efficiency.
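A dict-based sketch of the CPU-mediated staging described above; the function names and the use of dictionaries as stand-ins for the three device memories are assumptions made purely for illustration:

```python
def relay_gpu_to_npu(gpu_mem, cpu_mem, npu_mem, keys):
    # With no direct GPU<->NPU path, every buffer (frames, depth, motion
    # vectors, ...) is staged through CPU memory on its way to the NPU.
    for k in keys:
        cpu_mem[k] = gpu_mem[k]   # video memory -> CPU memory
        npu_mem[k] = cpu_mem[k]   # CPU memory -> NPU memory

def relay_npu_to_gpu(npu_mem, cpu_mem, gpu_mem, keys):
    # Inference results follow the reverse route back into the GPU
    # rendering pipeline.
    for k in keys:
        cpu_mem[k] = npu_mem[k]
        gpu_mem[k] = cpu_mem[k]
```

In a real system each assignment would be a device-to-host or host-to-device copy scheduled by the CPU; the double copy is exactly the overhead this scheduling scheme accepts in exchange for GPU/NPU interoperability.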
For the image processing method provided by the embodiment of the application, the execution subject may be an image processing apparatus. In the embodiment of the present application, the image processing apparatus is described by taking, as an example, the case in which the image processing apparatus executes the image processing method.
In some embodiments of the present application, an image processing apparatus is provided. Fig. 4 shows a block diagram of the image processing apparatus of some embodiments of the present application. As shown in fig. 4, the image processing apparatus 400 includes:
a rendering module 402, configured to render a first image and a second image, where the first image and the second image are rendered based on a field of view of the virtual camera when the field of view of the virtual camera moves with the first motion information, and a resolution of the first image and a resolution of the second image are both the first resolution;
A determining module 404, configured to determine sampling parameters and fusion parameters of the first image and the second image;
the processing module 406 is configured to perform upsampling processing on the first image and the second image based on the sampling parameter to obtain a processed third image and a processed fourth image, where the resolutions of the third image and the fourth image are a second resolution, and the second resolution is greater than the first resolution; and carrying out fusion processing on the third image and the fourth image according to the fusion parameters to obtain a processed fifth image, wherein the resolution of the fifth image is the second resolution.
According to the embodiment of the application, a camera-jitter mode is adopted: the first image and the second image are rendered in the rendering pipeline while the field of view of the virtual camera is at different positions, so the two images contain different, mutually complementary image information. In the up-sampling stage of super-resolution processing, the first image and the second image are up-sampled separately, and the image details recorded in the higher-resolution third image and fourth image are fused, so a high-resolution image with more picture detail can be obtained, effectively reducing loss of picture detail and damage to picture quality.
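The camera-jitter idea above, rendering successive frames with the virtual camera's field of view shifted by small sub-pixel offsets so the frames record complementary samples, can be sketched as follows. Using a Halton sequence for the offsets is a common choice in temporal super-resolution and is an assumption here, not something the application specifies:

```python
def halton(i, base):
    # Radical-inverse (Halton) sequence value in (0, 1) for index i >= 1.
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def jitter_offsets(n):
    # Sub-pixel (x, y) jitter offsets in (-0.5, 0.5) applied to the virtual
    # camera each frame, so consecutive frames sample complementary positions.
    return [(halton(i, 2) - 0.5, halton(i, 3) - 0.5) for i in range(1, n + 1)]
```

Two consecutive offsets from this list would correspond to the camera positions used when rendering the first image and the second image.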
In some embodiments of the present application, the determining module is further configured to determine an image rendering parameter according to a field of view of the virtual camera and the first motion information, where the image rendering parameter includes image information to be rendered, depth information, and motion vector information;
the rendering module is specifically used for rendering to obtain a first image and a second image based on the image rendering parameters;
the processing module is also used for carrying out information processing on the image rendering parameters to obtain processed network input data; and inputting the network input data into an information processing network to obtain sampling parameters and fusion parameters.
According to the embodiment of the application, network preprocessing is performed on the image rendering parameters generated during image rendering, and the sampling parameters and fusion parameters are inferred through the processing network, so a better up-sampling effect and fusion effect can be obtained; the final high-resolution image contains more picture details, improving image quality.
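One plausible form of the network preprocessing step is packing the rendering parameters into a single input tensor. The channel layout below is an assumption for illustration only; the application does not specify its actual network input format:

```python
import numpy as np

def pack_network_input(color, depth, motion):
    # color: HxWx3 image information, depth: HxW depth buffer,
    # motion: HxWx2 motion vectors -> one HxWx6 network input tensor.
    return np.concatenate([color, depth[..., None], motion], axis=-1)
```

The information processing network would then infer the sampling parameters and fusion parameters from this stacked input.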
In some embodiments of the present application, the rendering module is further configured to render the first image and the second image through the image rendering pipeline, and to send the first image, the second image, the sampling parameters, and the fusion parameters to the image processing pipeline through the image rendering pipeline;
The processing module is also used for carrying out up-sampling processing on the first image and the second image through the image processing pipeline; performing fusion processing on the third image and the fourth image through an image processing pipeline; and sending the fifth image to the image rendering pipeline through the image processing pipeline.
According to the embodiment of the application, an independent image processing pipeline is provided to perform the up-sampling processing and the fusion processing on images rendered in the image rendering pipeline, so the computing power originally used for rendering images is not occupied; image rendering efficiency can thus be ensured, and the game frame rate is maintained while the game picture quality is preserved.
In some embodiments of the present application, the third image comprises a first pixel, and the sampling parameter comprises a sampling radius;
the determining module is further used for determining the pixel position of the first pixel in the fourth image according to the pixel position of the first pixel in the third image and the motion vector information; determining a first image area in the third image according to the sampling radius and the pixel position of the first pixel in the third image; determining a second image area in the fourth image according to the sampling radius and the pixel position of the first pixel in the fourth image, wherein the image content in the second image area is the same as the image content in the first image area;
And the processing module is also used for carrying out fusion processing on the first image area and the second image area based on the fusion parameters to obtain a fifth image.
According to the embodiment of the application, the sampling radius can be adaptively acquired for different pixels, so fine detail content can be reproduced well; the sampling radius is adjusted according to the content around the sampling point, such as whether it lies on an edge. Meanwhile, image frames of different time sequences are fused according to the fusion parameters, and because the fusion granularity is pixel-level, a finer fusion effect is achieved.
In some embodiments of the present application, the image processing apparatus includes an image processing chip for generating a fifth image;
and the processing module is also used for sending the image rendering parameters, the first image and the second image to the image processing chip.
According to the embodiment of the application, super-resolution up-sampling is performed on the low-resolution rendered image by the external, independent image processing chip, so the CPU and GPU computing power of the electronic device is not occupied; more computing power can be freed, and the game frame rate can be improved.
In some embodiments of the present application, the image processing apparatus includes a central processor, a graphics processor, and a neural network processor for generating a fifth image;
The processing module is also used for rendering the first image and the second image based on the image rendering parameters in the graphic processor;
the apparatus further comprises:
the data interaction module is used for sending the image rendering parameters, the first image and the second image to the central processing unit through the graphic processor; the image rendering parameters, the first image and the second image are sent to the neural network processor by the central processor.
According to the embodiment of the application, the CPU schedules the data interaction between the graphics processor and the neural network processor, realizing data intercommunication between the GPU and the NPU and improving image processing efficiency.
The image processing apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (Virtual Reality, VR) device, robot, wearable device, ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), netbook, or personal digital assistant (personal digital assistant, PDA), etc., and may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine, or self-service machine, etc.; the embodiments of the present application are not specifically limited in this respect.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The image processing device provided in the embodiment of the present application can implement each process implemented by the embodiment of the method, and in order to avoid repetition, details are not repeated here.
Optionally, an electronic device is further provided in the embodiments of the present application. Fig. 5 shows a block diagram of the structure of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device 500 includes a processor 502, a memory 504, and a program or instruction stored in the memory 504 and executable on the processor 502. When executed by the processor 502, the program or instruction implements each process of the foregoing method embodiment and achieves the same technical effects, which are not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 6 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, and processor 610.
Those skilled in the art will appreciate that the electronic device 600 may further include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 610 through a power management system, so that functions such as charge management, discharge management, and power consumption management are performed through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components, which is not described in detail here.
The processor 610 is configured to render a first image and a second image, where the first image and the second image are rendered based on a field of view of the virtual camera when the field of view of the virtual camera moves with the first motion information, and a resolution of the first image and a resolution of the second image are both the first resolution; determining sampling parameters and fusion parameters of the first image and the second image; based on the sampling parameters, carrying out up-sampling processing on the first image and the second image to obtain a processed third image and a processed fourth image, wherein the resolution of the third image and the resolution of the fourth image are second resolution, and the second resolution is larger than the first resolution; and carrying out fusion processing on the third image and the fourth image according to the fusion parameters to obtain a processed fifth image, wherein the resolution of the fifth image is the second resolution.
According to the embodiment of the application, a camera-jitter mode is adopted: the first image and the second image are rendered in the rendering pipeline while the field of view of the virtual camera is at different positions, so the two images contain different, mutually complementary image information. In the up-sampling stage of super-resolution processing, the first image and the second image are up-sampled separately, and the image details recorded in the higher-resolution third image and fourth image are fused, so a high-resolution image with more picture detail can be obtained, effectively reducing loss of picture detail and damage to picture quality.
Optionally, the processor 610 is further configured to determine an image rendering parameter according to the field of view of the virtual camera and the first motion information, where the image rendering parameter includes image information to be rendered, depth information, and motion vector information; rendering based on the image rendering parameters to obtain a first image and a second image; information processing is carried out on the image rendering parameters, and processed network input data are obtained; and inputting the network input data into an information processing network to obtain sampling parameters and fusion parameters.
According to the embodiment of the application, network preprocessing is performed on the image rendering parameters generated during image rendering, and the sampling parameters and fusion parameters are inferred through the processing network, so a better up-sampling effect and fusion effect can be obtained; the final high-resolution image contains more picture details, improving image quality.
Optionally, the third image comprises a first pixel, and the sampling parameter comprises a sampling radius;
the processor 610 is further configured to determine a pixel position of the first pixel in the fourth image according to the pixel position of the first pixel in the third image and the motion vector information; determining a first image area in the third image according to the sampling radius and the pixel position of the first pixel in the third image; determining a second image area in the fourth image according to the sampling radius and the pixel position of the first pixel in the fourth image, wherein the image content in the second image area is the same as the image content in the first image area; and carrying out fusion processing on the first image area and the second image area based on the fusion parameters to obtain a fifth image.
According to the embodiment of the application, the sampling radius can be adaptively acquired for different pixels, so fine detail content can be reproduced well; the sampling radius is adjusted according to the content around the sampling point, such as whether it lies on an edge. Meanwhile, image frames of different time sequences are fused according to the fusion parameters, and because the fusion granularity is pixel-level, a finer fusion effect is achieved.
Optionally, the processor 610 is further configured to send the image rendering parameters, the first image and the second image to the image processing chip.
According to the embodiment of the application, super-resolution up-sampling is performed on the low-resolution rendered image by the external, independent image processing chip, so the CPU and GPU computing power of the electronic device is not occupied; more computing power can be freed, and the game frame rate can be improved.
Optionally, the processor 610 is further configured to render, in the graphics processor, a first image and a second image based on the image rendering parameters; transmitting the image rendering parameters, the first image and the second image to a central processing unit through a graphics processor; the image rendering parameters, the first image and the second image are sent to the neural network processor by the central processor.
According to the embodiment of the application, the CPU schedules the data interaction between the graphics processor and the neural network processor, realizing data intercommunication between the GPU and the NPU and improving image processing efficiency.
It should be understood that in the embodiment of the present application, the input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042, and the graphics processor 6041 processes image data of still pictures or videos obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes at least one of a touch panel 6071 and other input devices 6072. The touch panel 6071 is also called a touch screen. The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 609 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (Static RAM, SRAM), dynamic RAM (Dynamic RAM, DRAM), synchronous DRAM (Synchronous DRAM, SDRAM), double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), enhanced SDRAM (Enhanced SDRAM, ESDRAM), synch-link DRAM (Synch-Link DRAM, SLDRAM), or direct Rambus RAM (Direct Rambus RAM, DRRAM). The memory 609 in the present embodiment includes, but is not limited to, these and any other suitable types of memory.
The processor 610 may include one or more processing units; optionally, the processor 610 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the application further provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The processor is a processor in the electronic device in the above embodiment. Readable storage media include computer readable storage media such as Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic or optical disks, and the like.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running a program or instructions, the processes of the above method embodiment are realized, the same technical effects can be achieved, and in order to avoid repetition, the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-on-chip, a chip system, or a system-on-a-chip, etc.
The embodiments of the present application provide a computer program product, which is stored in a storage medium, and the program product is executed by at least one processor to implement the respective processes of the above method embodiments, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed substantially simultaneously or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described methods may be implemented by means of software plus a necessary general hardware platform, or by hardware alone, though in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many forms may be made by those of ordinary skill in the art in light of the present application without departing from the spirit of the present application and the scope of the claims, and these forms all fall within the protection of the present application.

Claims (14)

1. An image processing method, the method comprising:
rendering a first image and a second image, wherein the first image and the second image are rendered based on the field of view of a virtual camera under the condition that the field of view of the virtual camera moves with first motion information, and the resolution of the first image and the resolution of the second image are both a first resolution;
determining sampling parameters and fusion parameters of the first image and the second image;
performing up-sampling processing on the first image and the second image based on the sampling parameters to obtain a processed third image and a processed fourth image, wherein the resolution of the third image and the resolution of the fourth image are both a second resolution, and the second resolution is higher than the first resolution; and
performing fusion processing on the third image and the fourth image according to the fusion parameters to obtain a processed fifth image, wherein the resolution of the fifth image is the second resolution.
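The method of claim 1 can be illustrated with a minimal sketch (not part of the claimed subject matter), assuming nearest-neighbour up-sampling and a single scalar fusion weight standing in for the sampling and fusion parameters, which the claims leave far more general:

```python
import numpy as np

def upsample_nearest(img, scale):
    # Nearest-neighbour up-sampling: repeat each pixel `scale` times per axis,
    # taking a first-resolution image to the second (higher) resolution.
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def process(first, second, scale=2, fusion_weight=0.5):
    # The first and second images share the first (low) resolution.
    assert first.shape == second.shape
    # Up-sample both to obtain the third and fourth images.
    third = upsample_nearest(first, scale)
    fourth = upsample_nearest(second, scale)
    # Fuse the up-sampled images into the fifth image at the second resolution.
    return fusion_weight * third + (1.0 - fusion_weight) * fourth
```

Here the sampling parameter is reduced to an integer scale factor and the fusion parameter to one blend weight; both names are illustrative.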
2. The image processing method of claim 1, wherein rendering the first image and the second image comprises:
determining an image rendering parameter according to the field of view of the virtual camera and the first motion information, wherein the image rendering parameter comprises image information to be rendered, depth information and motion vector information;
rendering the first image and the second image based on the image rendering parameters;
the determining sampling parameters and fusion parameters of the first image and the second image comprises:
performing information processing on the image rendering parameters to obtain processed network input data;
and inputting the network input data into an information processing network to obtain the sampling parameters and the fusion parameters.
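Claim 2's flow, from rendering parameters to network input data to the two output parameters, might look as follows; the packing scheme and the tiny one-layer "information processing network" are illustrative assumptions, not the network described in the application:

```python
import numpy as np

def prepare_network_input(color, depth, motion):
    # Pack the image rendering parameters (image information to be rendered,
    # depth information, motion vector information) into one per-pixel array.
    return np.concatenate([color, depth[..., None], motion], axis=-1)

def information_processing_network(x, W, b):
    # Hypothetical stand-in: global average pooling followed by a linear head
    # that emits a sampling radius and a fusion weight.
    feat = x.mean(axis=(0, 1))       # pool H x W x C down to C features
    radius, weight = W @ feat + b    # linear projection to two scalars
    return max(int(round(radius)), 1), float(np.clip(weight, 0.0, 1.0))
```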
3. The image processing method of claim 1, wherein rendering the first image and the second image comprises:
rendering the first image and the second image through an image rendering pipeline;
after the determining of the sampling parameters and the fusion parameters of the first image and the second image, the method further comprises:
sending the first image, the second image, the sampling parameters, and the fusion parameters to an image processing pipeline through the image rendering pipeline;
the upsampling of the first image and the second image based on the sampling parameters includes:
upsampling the first image and the second image through the image processing pipeline;
the performing of fusion processing on the third image and the fourth image according to the fusion parameters comprises:
performing fusion processing on the third image and the fourth image through the image processing pipeline; and
the method further comprises the steps of:
the fifth image is sent to the image rendering pipeline by the image processing pipeline.
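The hand-off in claim 3 — rendering in one pipeline, up-sampling and fusion in a separate processing pipeline, with the fused fifth image returned — might be organized as below. The class names, and the idea of passing the up-sampling and fusion steps as callables, are illustrative assumptions rather than structures taken from the application:

```python
class ImageProcessingPipeline:
    """Illustrative stand-in for the processing stage of claim 3."""
    def __init__(self, upsample, fuse):
        self.upsample = upsample  # callable implementing the up-sampling step
        self.fuse = fuse          # callable implementing the fusion step

    def process(self, first, second, sampling_params, fusion_params):
        third = self.upsample(first, sampling_params)
        fourth = self.upsample(second, sampling_params)
        return self.fuse(third, fourth, fusion_params)  # the fifth image


class ImageRenderingPipeline:
    """Renders the frames, delegates post-processing, receives the result."""
    def __init__(self, processing_pipeline):
        self.processing = processing_pipeline

    def render_frame(self, first, second, sampling_params, fusion_params):
        # Send the images and parameters to the processing pipeline, then
        # receive the fifth image back for display.
        return self.processing.process(first, second,
                                       sampling_params, fusion_params)
```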
4. The image processing method according to claim 2, wherein the third image includes a first pixel, and the sampling parameter includes a sampling radius;
the performing of fusion processing on the third image and the fourth image according to the fusion parameters to obtain a processed fifth image comprises:
determining a pixel position of the first pixel in the fourth image according to the pixel position of the first pixel in the third image and the motion vector information;
determining a first image region in the third image according to the sampling radius and the pixel position of the first pixel in the third image;
determining a second image region in the fourth image according to the sampling radius and the pixel position of the first pixel in the fourth image, wherein the image content in the second image region is the same as the image content in the first image region;
and performing fusion processing on the first image region and the second image region based on the fusion parameters to obtain the fifth image.
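Claim 4's region fusion — locating the corresponding pixel via the motion vector, cutting windows of the sampling radius from both up-sampled images, and blending them — can be sketched as follows, under the simplifying assumptions of an integer pixel motion vector, a scalar fusion weight, and windows that stay inside the image bounds:

```python
import numpy as np

def fuse_regions(third, fourth, px, py, motion, radius, weight):
    # Pixel position of the first pixel in the fourth image, obtained by
    # shifting its position in the third image by the motion vector.
    qx, qy = px + motion[0], py + motion[1]
    # First image region: a window of the sampling radius around (px, py).
    a = third[py - radius:py + radius + 1, px - radius:px + radius + 1]
    # Second image region: the same-sized window around (qx, qy).
    b = fourth[qy - radius:qy + radius + 1, qx - radius:qx + radius + 1]
    # Fuse the two regions into the corresponding patch of the fifth image.
    return weight * a + (1.0 - weight) * b
```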
5. The image processing method according to claim 2, wherein the image processing method is performed by an electronic device including an image processing chip for generating the fifth image;
after the rendering of the first image and the second image based on the image rendering parameters, the method further comprises:
and sending the image rendering parameters, the first image and the second image to the image processing chip.
6. The image processing method according to claim 2, wherein the image processing method is performed by an electronic device including a central processor, a graphics processor, and a neural network processor for generating the fifth image;
the rendering, based on the image rendering parameters, the first image and the second image includes:
rendering, in the graphics processor, the first image and the second image based on the image rendering parameters; and
the method further comprises:
transmitting, by the graphics processor, the image rendering parameters, the first image, and the second image to a central processor;
and sending the image rendering parameters, the first image and the second image to the neural network processor through the central processor.
7. An image processing apparatus, characterized in that the image processing apparatus comprises:
the rendering module is used for rendering a first image and a second image, wherein the first image and the second image are rendered based on the field of view of a virtual camera in a case where the field of view of the virtual camera moves according to first motion information, and the resolution of the first image and the resolution of the second image are both a first resolution;
the determining module is used for determining sampling parameters and fusion parameters of the first image and the second image;
the processing module is used for performing up-sampling processing on the first image and the second image based on the sampling parameters to obtain a processed third image and a processed fourth image, wherein the resolution of the third image and the resolution of the fourth image are both a second resolution, and the second resolution is higher than the first resolution; and
performing fusion processing on the third image and the fourth image according to the fusion parameters to obtain a processed fifth image, wherein the resolution of the fifth image is the second resolution.
8. The image processing apparatus according to claim 7, wherein,
the determining module is further configured to determine an image rendering parameter according to a field of view of the virtual camera and the first motion information, where the image rendering parameter includes image information to be rendered, depth information, and motion vector information;
the rendering module is specifically configured to render the first image and the second image based on the image rendering parameters;
the processing module is also used for carrying out information processing on the image rendering parameters to obtain processed network input data; and
and inputting the network input data into an information processing network to obtain the sampling parameters and the fusion parameters.
9. The image processing apparatus according to claim 7, wherein,
the rendering module is further configured to render the first image and the second image through an image rendering pipeline, and to send the first image, the second image, the sampling parameters, and the fusion parameters to an image processing pipeline through the image rendering pipeline;
The processing module is further used for carrying out up-sampling processing on the first image and the second image through the image processing pipeline; performing fusion processing on the third image and the fourth image through the image processing pipeline; and sending the fifth image to the image rendering pipeline through the image processing pipeline.
10. The image processing apparatus of claim 8, wherein the third image comprises a first pixel and the sampling parameter comprises a sampling radius;
the determining module is further configured to determine a pixel position of the first pixel in the fourth image according to a pixel position of the first pixel in the third image and the motion vector information; and
determining a first image region in the third image according to the sampling radius and the pixel position of the first pixel in the third image;
determining a second image region in the fourth image according to the sampling radius and the pixel position of the first pixel in the fourth image, wherein the image content in the second image region is the same as the image content in the first image region;
and the processing module is further configured to perform fusion processing on the first image region and the second image region based on the fusion parameters to obtain the fifth image.
11. The image processing apparatus according to claim 8, wherein the image processing apparatus includes an image processing chip for generating the fifth image;
the processing module is further configured to send the image rendering parameter, the first image, and the second image to the image processing chip.
12. The image processing apparatus of claim 8, wherein the image processing apparatus comprises a central processor, a graphics processor, and a neural network processor, the neural network processor to generate the fifth image;
the processing module is further used for rendering the first image and the second image based on the image rendering parameters in the graphics processor; and
the apparatus further comprises:
the data interaction module is used for sending the image rendering parameters, the first image, and the second image to the central processor through the graphics processor; and sending the image rendering parameters, the first image, and the second image to the neural network processor through the central processor.
13. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method of any one of claims 1 to 6.
14. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the method according to any of claims 1 to 6.
CN202311543160.6A 2023-11-20 2023-11-20 Image processing method, device, electronic equipment and storage medium Pending CN117408884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311543160.6A CN117408884A (en) 2023-11-20 2023-11-20 Image processing method, device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117408884A true CN117408884A (en) 2024-01-16

Family

ID=89498019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311543160.6A Pending CN117408884A (en) 2023-11-20 2023-11-20 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117408884A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination