CN113066166A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN113066166A
Authority
CN
China
Prior art keywords
panoramic
image
initial
sample image
training sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110311566.6A
Other languages
Chinese (zh)
Inventor
王光伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202110311566.6A priority Critical patent/CN113066166A/en
Publication of CN113066166A publication Critical patent/CN113066166A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the disclosure disclose an image processing method and device, and an electronic device. One embodiment of the method comprises: acquiring a panoramic image to be processed; and generating a target panoramic image based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed, wherein the color panoramic image is obtained by removing illumination from the panoramic image to be processed. A target panoramic image with a target illumination effect can thus be reconstructed, improving the user's viewing experience.

Description

Image processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
Image reconstruction is a technique for obtaining shape information of a three-dimensional object through digital processing of data measured outside the object, and it has been widely used in fields such as medical imaging and industrial non-destructive testing. In some application scenarios, after a panoramic image is reconstructed using an image reconstruction model, it is often necessary to further process the panoramic image, for example by changing its color tones or adding shadows, to improve the user's viewing experience.
Disclosure of Invention
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiment of the disclosure provides an image processing method and device and electronic equipment.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: acquiring a panoramic image to be processed; and generating a target panoramic image based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed, wherein the color panoramic image is obtained by removing illumination from the panoramic image to be processed.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including: an acquisition module configured to acquire a panoramic image to be processed; and a generation module configured to generate a target panoramic image based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed, wherein the color panoramic image is obtained by removing illumination from the panoramic image to be processed.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the image processing method of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, and the program, when executed by a processor, implements the steps of the image processing method described in the first aspect.
According to the image processing method and apparatus and the electronic device provided by the embodiments of the disclosure, a panoramic image to be processed is acquired, and a target panoramic image is then generated based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed, where the color panoramic image is obtained by removing illumination from the panoramic image to be processed. A target panoramic image with a target illumination effect can thus be reconstructed, improving the user's viewing experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow diagram of one embodiment of an image processing method according to the present disclosure;
FIG. 2 is a schematic flow chart diagram illustrating one embodiment of a training target image processing model according to the present disclosure;
FIG. 3 is a schematic flow chart diagram illustrating another embodiment of a training target image processing model according to the present disclosure;
FIG. 4 is a schematic block diagram of one embodiment of an image processing apparatus according to the present disclosure;
FIG. 5 is an exemplary system architecture to which the image processing method of one embodiment of the present disclosure may be applied;
FIG. 6 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict.
Referring to FIG. 1, a flowchart of one embodiment of an image processing method according to the present disclosure is shown. As shown in FIG. 1, the image processing method includes the following steps 101 and 102.
Step 101, acquiring a panoramic image to be processed;
In some application scenarios, after an image is captured by a device capable of capturing images, such as a panoramic camera or a mobile phone, the panoramic image to be processed may be generated based on the captured image. The panoramic image to be processed here may include, for example, a landscape panoramic image, an indoor panoramic image, and the like.
Step 102, generating a target panoramic image based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed, where the color panoramic image is obtained by removing illumination from the panoramic image to be processed.
The target illumination information may include, for example, the brightness and incident angle of a target light source. In some application scenarios, the target illumination information may be predetermined, so that the generated target panoramic image has a target illumination effect; this effect may be embodied, for example, by the shadows in the scene. The color panoramic image may be a panoramic image that carries color information but no illumination information, where the color information may include, for example, the color value of each pixel. The luminance influence matrix may include a precomputed radiance transfer matrix (PRT matrix), which characterizes the influence of the light source and/or color values on each pixel.
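For illustration, the recombination in step 102 can be sketched in code. The patent does not spell out the exact formula; the sketch below assumes the common diffuse PRT formulation, in which each pixel's transfer vector is dotted with global lighting coefficients and the result modulates the illumination-free color. The function name and array shapes are assumptions for this sketch only.

    import numpy as np

    def relight(transport, light_coeffs, albedo):
        """Recombine a de-illuminated color panorama with new target lighting.

        transport    : (H*W, K) precomputed radiance transfer (PRT) matrix,
                       one K-dimensional transfer vector per pixel
        light_coeffs : (K,) target illumination, e.g. spherical-harmonic coefficients
        albedo       : (H*W, 3) color panoramic image with illumination removed
        """
        shading = transport @ light_coeffs   # per-pixel scalar irradiance
        return albedo * shading[:, None]     # shade the illumination-free colors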
In some application scenarios, the panoramic image to be processed can be relit to obtain a target panoramic image with a target illumination effect. Specifically, the target illumination information may be applied to the panoramic image to be processed based on the predetermined PRT matrix and the color panoramic image, so as to generate a target panoramic image with the target illumination effect. In these application scenarios, multiple sets of target illumination information may be preset; they may represent, for example, the illumination produced when the same light source illuminates the scene of the panoramic image to be processed from different positions. In this way, multiple target panoramic images with target illumination effects can be generated from the predetermined PRT matrix, the color panoramic image and the multiple sets of target illumination information, realizing a dynamic sequence in which the scene of the panoramic image to be processed changes as the light source moves. For example, if predetermined target illumination information A1, A2 and A3 corresponds to the illumination in the morning, at noon and in the evening respectively, then corresponding target panoramic images A1, A2 and A3 may be generated based on the luminance influence matrix, the color panoramic image and each set of target illumination information. When the three target panoramic images are played in sequence, the scene of the panoramic image to be processed over one day can be obtained, making it convenient for the user to experience the scene's illumination at different times.
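Continuing the sketch above with the morning/noon/evening example (every input below is a random stand-in; in practice the PRT matrix and the de-illuminated panorama come from the trained model described later):

    H, W, K = 256, 512, 9                    # toy sizes; K = 9 for 2nd-order SH
    rng = np.random.default_rng(0)
    transport = rng.random((H * W, K))       # stand-in for the PRT matrix
    albedo = rng.random((H * W, 3))          # stand-in for the de-lit panorama

    # Three assumed lighting states playing the roles of the target illumination
    # information A1 (morning), A2 (noon) and A3 (evening) in the example above.
    lights = [rng.random(K) for _ in range(3)]
    frames = [relight(transport, l, albedo).reshape(H, W, 3) for l in lights]
    # Played in sequence, the frames approximate the scene's lighting over a day.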
In this embodiment, a panoramic image to be processed is acquired first; a target panoramic image is then generated based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed, where the color panoramic image is obtained by removing illumination from the panoramic image to be processed. A target panoramic image with a target illumination effect can thus be reconstructed, improving the user's viewing experience.
In some alternative implementations, the predetermined luminance influence matrix and the de-illuminated color panoramic image in step 102 above are obtained as follows: at least one reference panoramic image to be processed, captured in the same scene as the panoramic image to be processed, is input into a pre-trained target image processing model, which outputs the luminance influence matrix and the color panoramic image.
The target image processing model may, for example, comprise an image segmentation network such as U-Net.
The scene of the reference panoramic image to be processed is consistent with that of the panoramic image to be processed, so the two images share the same brightness influence matrix and the same color panoramic image. In some application scenarios, for example, two panoramic images may be acquired in the same scene, one used as the panoramic image to be processed and the other as the reference panoramic image to be processed.
After the reference panoramic image to be processed is input into the target image processing model, the model outputs the brightness influence matrix and the color panoramic image corresponding to it. Based on the luminance influence matrix, the color panoramic image and the target illumination information, a target panoramic image with the target illumination effect in this scene can then be obtained.
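The patent names U-Net but gives no head layout, so the following stub only illustrates the shape of the inference call: one shared trunk with one head per output. Every layer size here is an assumption, and a real U-Net would add downsampling, upsampling and skip connections.

    import torch
    import torch.nn as nn

    class TwoHeadUNetStub(nn.Module):
        """Greatly simplified stand-in for the target image processing model:
        a shared trunk plus one head for the per-pixel PRT vectors and one for
        the de-illuminated color panorama."""
        def __init__(self, k=9):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.prt_head = nn.Conv2d(16, k, 1)     # K-dim transfer vector per pixel
            self.albedo_head = nn.Conv2d(16, 3, 1)  # illumination-free colors

        def forward(self, x):
            h = self.trunk(x)
            return self.prt_head(h), self.albedo_head(h)

    model = TwoHeadUNetStub()
    reference_pano = torch.rand(1, 3, 256, 512)     # reference image of the scene
    with torch.no_grad():
        prt_matrix, color_pano = model(reference_pano)
    # prt_matrix and color_pano can be reused for every panorama of this scene,
    # since both are assumed constant within one scene.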
Referring to FIG. 2, a schematic flowchart of one embodiment of training the target image processing model according to the present disclosure is shown. As shown in FIG. 2, the target image processing model is obtained through the following steps:
step 201, obtaining a training sample image set, wherein the training sample image set comprises at least one initial panoramic training sample image; the initial panoramic training sample image corresponds to preset initial illumination information;
in some application scenarios, panoramic images without illumination information may be generated first, and then known illumination information may be added to these non-illuminated panoramic images to obtain the initial panoramic training sample image. In this way, the initial illumination information can be used for carrying out comparison training on the relevant illumination parameters of the target image processing model, and the convergence of the target image processing model is promoted.
In some optional implementations, the step 201 may include the following sub-steps:
firstly, acquiring at least one initial environment map in the same scene;
the initial environment map can be used for simulating the mapping effect of a smooth surface existing in a scene on the surrounding scene, and the illumination information corresponding to the scene can be determined based on the mapping effect. In some application scenarios, the initial environment map may be labeled with initial illumination information in advance, so that the illumination information in the initial panoramic training sample image generated by the initial environment map may be known. The plurality of initial environment maps correspond to the same scene, so that the PRT matrix and the color panoramic image obtained based on the initial environment maps can be the same, and the PRT matrix and the image color information can be restrained conveniently in the training process of the image processing model.
Then, at least one initial panoramic training sample image of the training sample image set is generated based on the at least one initial environment map.
After the multiple initial environment maps are acquired, a corresponding initial panoramic training sample image can be generated for each of them, and the resulting images can be collected into the training sample image set.
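The patent does not detail this synthesis step, but one plausible sketch, reusing relight() from the earlier sketch, bakes each environment map's labeled lighting into an unlit panorama of the scene and keeps the label as ground truth:

    import numpy as np

    rng = np.random.default_rng(1)
    H, W, K = 256, 512, 9
    transport = rng.random((H * W, K))    # shared within the scene (stand-in)
    albedo = rng.random((H * W, 3))       # unlit panorama of the scene (stand-in)

    # One labeled lighting vector per initial environment map (stand-in values).
    env_light_labels = [rng.random(K) for _ in range(4)]

    training_set = [
        {"image": relight(transport, light, albedo), "light": light}
        for light in env_light_labels
    ]  # each sample pairs a lit panorama with its known initial illumination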
Step 202, performing the following model training operation by using the at least one initial panoramic training sample image:
substep 2021, inputting the initial panoramic training sample image into an initial image processing model to obtain an output result;
In some application scenarios, after the training sample image set is obtained, the initial panoramic training sample images in it may be input into the initial image processing model to obtain corresponding output results.
Substep 2022, reconstructing a panoramic image according to the output result to obtain a reconstructed panoramic sample image corresponding to the initial panoramic training sample image;
after the output result corresponding to the initial panoramic training sample image is obtained by using the initial image processing model, the output result can be used for reconstructing a reconstructed panoramic sample image corresponding to the initial panoramic training sample image.
In some alternative implementations, the output result includes a predicted luminance influence matrix, predicted illumination information and a de-illuminated predicted color panoramic image, where the predicted illumination information is determined based on a predicted environment map. Step 2022 may then include: reconstructing a panoramic image according to the predicted brightness influence matrix, the predicted environment map and the de-illuminated predicted color panoramic image to obtain the reconstructed panoramic sample image.
in some application scenarios, the initial image processing model may output a predicted luminance impact matrix, a predicted color panoramic image, and a predicted environment map. Then, a corresponding reconstructed panoramic sample image can be obtained according to the rendering of the three parts.
Substep 2023, determining whether a loss value of a preset loss function meets a preset condition according to the reconstructed panoramic sample image and the initial panoramic training sample image;
the predetermined loss function may include, for example, an average absolute error loss function (also referred to as an L1 loss function).
In some application scenarios, after the reconstructed panoramic sample image is obtained, the loss between it and the initial panoramic training sample image may be calculated using the preset loss function to determine the loss value. It may then be determined whether this loss value satisfies a preset condition; the preset condition may include, for example, that the loss value is smaller than a preset loss threshold. That is, when the loss value calculated by the preset loss function between the reconstructed panoramic sample image and the initial panoramic training sample image is smaller than the preset threshold, the loss value may be considered to satisfy the preset condition.
Substep 2024, if yes, stopping training, and determining the initial image processing model when the training is stopped as the target image processing model;
if it is determined that the loss value satisfies the preset condition, the training operation on the initial image processing model may be stopped, and the initial image processing model at this time may be determined as the target image processing model. The luminance impact matrix and the color panorama image may then be determined using the target image processing model.
Substep 2025, otherwise, adjusting the model parameters of the initial image processing model according to the loss value of the preset loss function, and performing the model training operation again.
If the loss value is determined not to satisfy the preset condition, the current initial image processing model can be regarded as not yet able to output a sufficiently accurate brightness influence matrix and color panoramic image for the reference panoramic image to be processed. The model parameters of the initial image processing model may then be adjusted in the direction that makes the loss value satisfy the preset condition, and the model training operation performed again, until the initial image processing model converges into the target image processing model.
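Putting substeps 2021 to 2025 together, the loop can be sketched as follows. The three-headed stub, the rendering step and the 0.01 stopping threshold are all assumptions for the sketch, not values taken from the patent.

    import torch
    import torch.nn as nn

    class ThreeHeadStub(nn.Module):
        """The inference stub from above plus a global lighting head, so the
        model also predicts illumination information during training."""
        def __init__(self, k=9):
            super().__init__()
            self.trunk = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
            self.prt_head = nn.Conv2d(16, k, 1)
            self.albedo_head = nn.Conv2d(16, 3, 1)
            self.light_head = nn.Linear(16, k)           # predicted illumination

        def forward(self, x):
            h = self.trunk(x)
            light = self.light_head(h.mean(dim=(2, 3)))  # (B, K) coefficients
            return self.prt_head(h), light, self.albedo_head(h)

    def render(prt, light, albedo):
        """Diffuse PRT recombination: per-pixel <transfer, light>, times albedo."""
        shading = (prt * light[:, :, None, None]).sum(dim=1, keepdim=True)
        return albedo * shading

    model = ThreeHeadStub()
    l1 = nn.L1Loss()                                     # the preset loss function
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    threshold = 0.01                                     # assumed preset condition

    samples = [torch.rand(1, 3, 64, 128) for _ in range(8)]  # toy training set
    for sample in samples * 25:
        prt, light, albedo = model(sample)               # substep 2021: forward pass
        recon = render(prt, light, albedo)               # substep 2022: reconstruction
        loss = l1(recon, sample)                         # substep 2023: L1 loss value
        if loss.item() < threshold:                      # substep 2024: stop training
            break
        optimizer.zero_grad()                            # substep 2025: adjust params
        loss.backward()
        optimizer.step()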
In this implementation, the initial image processing model is trained on multiple initial panoramic training sample images from the training sample image set, yielding a target image processing model that can output a relatively accurate luminance influence matrix and color panoramic image for the reference panoramic image to be processed.
Within the same scene, the corresponding PRT matrix and color panoramic image should be identical. In some application scenarios, multiple panoramic images acquired in the same scene may therefore be relit based on the same PRT matrix. That is, from the scene's PRT matrix and color panoramic image, corresponding reconstructed panoramic sample images can be produced under target illumination information that varies according to some rule, realizing a dynamic effect of the scene under different illumination. Such a rule may be, for example, the sun rising in the east and setting in the west, in which case the reconstructed panoramic sample images exhibit the scene's illumination changes over the course of a day. Accordingly, one initial panoramic training sample image A may be randomly selected from a training sample image set acquired in the same scene as a calibration initial panoramic training sample image. A calibration reconstructed panoramic sample image A' may then be rendered from the predicted PRT matrix B and predicted color panoramic image B obtained from another initial panoramic training sample image B, together with the calibration initial environment map A corresponding to the calibration image A. Ideally, the calibration reconstructed panoramic sample image A' should coincide with the calibration initial panoramic training sample image A, which implements a calibration of the predicted PRT matrix and the predicted color panoramic image. This check is sketched below.
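Expressed with the stubs from the training sketch above, the calibration amounts to one extra reconstruction check (the sample and lighting tensors below are stand-ins):

    sample_a = torch.rand(1, 3, 64, 128)        # calibration sample A
    sample_b = torch.rand(1, 3, 64, 128)        # another sample from the same scene
    light_a = torch.rand(1, 9)                  # known lighting of A's environment map

    prt_b, _, albedo_b = model(sample_b)        # predictions from image B
    recon_a = render(prt_b, light_a, albedo_b)  # relit with A's calibration lighting
    calibration_loss = l1(recon_a, sample_a)    # ideally near zero within one scene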
Referring to FIG. 3, a schematic flowchart of another embodiment of training the target image processing model according to the present disclosure is shown. As shown in FIG. 3, the target image processing model is obtained through the following steps:
step 301, obtaining a training sample image set, where the training sample image set includes an initial fuzzy panoramic training sample image, the initial fuzzy panoramic training sample image is determined by an initial fuzzy environment map, and the initial fuzzy environment map is generated based on an initial spherical harmonic function corresponding to the initial environment map;
In some application scenarios, after the initial environment map is obtained, its initial spherical harmonic coefficients may be computed, an initial blurred environment map may be generated from these coefficients, and the blurred map may then be used to generate an initial blurred panoramic training sample image. Generating the initial blurred environment map through the spherical harmonics reduces the number of parameters to be computed, which in turn speeds up convergence of the initial image processing model.
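The patent does not give the projection itself. A standard 2nd-order spherical-harmonic projection of an equirectangular environment map, which is one common way to obtain such a low-frequency (blurred) version, could look like the following sketch; the 9-term basis constants are the usual real-SH values.

    import numpy as np

    def sh_basis(x, y, z):
        """Real spherical harmonics up to 2nd order (9 terms)."""
        return np.stack([
            0.282095 * np.ones_like(x),
            0.488603 * y, 0.488603 * z, 0.488603 * x,
            1.092548 * x * y, 1.092548 * y * z,
            0.315392 * (3 * z ** 2 - 1),
            1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2)], axis=-1)

    def blur_env_map(env):
        """Project an (H, W, 3) equirectangular map onto the SH basis and
        reconstruct from the coefficients alone, keeping only low frequencies."""
        H, W, _ = env.shape
        theta = (np.arange(H) + 0.5) * np.pi / H             # polar angle per row
        phi = (np.arange(W) + 0.5) * 2 * np.pi / W           # azimuth per column
        t, p = np.meshgrid(theta, phi, indexing="ij")
        x, y, z = np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)
        basis = sh_basis(x, y, z)                            # (H, W, 9)
        d_omega = np.sin(t) * (np.pi / H) * (2 * np.pi / W)  # per-pixel solid angle
        coeffs = np.einsum("hwc,hwk,hw->kc", env, basis, d_omega)
        return np.einsum("hwk,kc->hwc", basis, coeffs)       # blurred environment map

    env = np.random.default_rng(2).random((64, 128, 3))      # stand-in environment map
    blurred = blur_env_map(env)                              # initial blurred env map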
In these application scenarios, after a plurality of initial blurred panoramic training sample images are obtained, they may be collected into a training sample image set.
Step 302, performing the following model training operation using the at least one initial blurred panoramic training sample image:
Substep 3021, inputting the initial blurred panoramic training sample image into an initial image processing model to obtain an output result;
after the initial fuzzy panoramic training sample image is obtained, the initial fuzzy panoramic training sample image can be input into an initial image processing model, and an output result of the initial image processing model is obtained.
Substep 3022, reconstructing a panoramic image according to the output result to obtain a reconstructed blurred panoramic sample image corresponding to the initial blurred panoramic training sample image;
In some application scenarios, the output result may include a predicted luminance influence matrix, a predicted spherical harmonic function and a de-illuminated predicted color panoramic image.
In this way, after the initial image processing model outputs the predicted brightness influence matrix, the predicted spherical harmonic function and the predicted color panoramic image, the reconstructed blurred panoramic sample image corresponding to the initial blurred panoramic training sample image can be reconstructed.
Substep 3023, determining whether the loss value of the preset loss function satisfies a preset condition according to the reconstructed blurred panoramic sample image and the initial blurred panoramic training sample image;
the predetermined loss function may also include, for example, an average absolute error loss function (also referred to as an L1 loss function).
After the reconstructed blurred panoramic sample image is obtained, the loss between it and the initial blurred panoramic training sample image may be calculated using the preset loss function to determine whether the loss value satisfies a preset condition. In some application scenarios, the preset condition may include, for example, that the loss value is smaller than a preset loss threshold. That is, when the loss value calculated by the preset loss function between the reconstructed blurred panoramic sample image and the initial blurred panoramic training sample image is smaller than the preset threshold, the loss value may be considered to satisfy the preset condition.
Substep 3024, if yes, stopping training, and determining the initial image processing model when the training is stopped as the target image processing model;
Substep 3025, otherwise, adjusting the model parameters of the initial image processing model according to the loss value of the preset loss function, and performing the model training operation again.
The implementation process and technical effects of substeps 3024 and 3025 may be the same as or similar to those of substeps 2024 and 2025 shown in FIG. 2, and are not described again here.
This implementation highlights training the initial image processing model on initial blurred panoramic training sample images, which speeds up the model's convergence.
Referring to FIG. 4, a schematic structural diagram of one embodiment of an image processing apparatus according to the present disclosure is shown. As shown in FIG. 4, the image processing apparatus includes an acquisition module 401 and a generation module 402. The acquisition module 401 is configured to acquire a panoramic image to be processed; the generation module 402 is configured to generate a target panoramic image based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed, where the color panoramic image is obtained by removing illumination from the panoramic image to be processed.
It should be noted that, for the specific processing of the acquisition module 401 and the generation module 402 of the image processing apparatus and the technical effects it brings, reference may be made to the descriptions of steps 101 and 102 in the embodiment corresponding to FIG. 1, which are not repeated here.
In some optional implementations of this embodiment, the predetermined luminance influence matrix and the de-illuminated color panoramic image are obtained as follows: at least one reference panoramic image to be processed, captured in the same scene as the panoramic image to be processed, is input into a pre-trained target image processing model, which outputs the luminance influence matrix and the color panoramic image.
In some optional implementations of this embodiment, the target image processing model is obtained through the following steps: acquiring a training sample image set, wherein the training sample image set comprises at least one initial panoramic training sample image, and the initial panoramic training sample image corresponds to preset initial illumination information; and performing the following model training operation using the at least one initial panoramic training sample image: inputting the initial panoramic training sample image into an initial image processing model to obtain an output result; reconstructing a panoramic image according to the output result to obtain a reconstructed panoramic sample image corresponding to the initial panoramic training sample image; determining whether a loss value of a preset loss function meets a preset condition according to the reconstructed panoramic sample image and the initial panoramic training sample image; if so, stopping training, and determining the initial image processing model when the training is stopped as the target image processing model; otherwise, adjusting the model parameters of the initial image processing model according to the loss value of the preset loss function, and performing the model training operation again.
In some optional implementations of this embodiment, the obtaining of the training sample image set includes: acquiring at least one initial environment map in the same scene; and generating at least one initial panoramic training sample image of the training sample image set based on the at least one initial environment map.
In some optional implementations of this embodiment, the output result includes a predicted luminance influence matrix, predicted illumination information and a de-illuminated predicted color panoramic image; the predicted illumination information is determined based on a predicted environment map, and the reconstructing of a panoramic image according to the output result to obtain a reconstructed panoramic sample image corresponding to the initial panoramic training sample image includes: reconstructing the panoramic image according to the predicted brightness influence matrix, the predicted environment map and the de-illuminated predicted color panoramic image to obtain the reconstructed panoramic sample image.
In some optional implementations of this embodiment, the target image processing model is obtained through the following steps: acquiring a training sample image set, wherein the training sample image set comprises at least one initial blurred panoramic training sample image, the initial blurred panoramic training sample image is determined by an initial blurred environment map, and the initial blurred environment map is generated based on an initial spherical harmonic function corresponding to the initial environment map; and performing the following model training operation using the at least one initial blurred panoramic training sample image: inputting the initial blurred panoramic training sample image into an initial image processing model to obtain an output result; reconstructing a panoramic image according to the output result to obtain a reconstructed blurred panoramic sample image corresponding to the initial blurred panoramic training sample image; determining whether the loss value of the preset loss function meets a preset condition according to the reconstructed blurred panoramic sample image and the initial blurred panoramic training sample image; if so, stopping training, and determining the initial image processing model when the training is stopped as the target image processing model; otherwise, adjusting the model parameters of the initial image processing model according to the loss value of the preset loss function, and performing the model training operation again.
Referring to FIG. 5, an exemplary system architecture to which the image processing method of an embodiment of the present disclosure may be applied is shown.
As shown in FIG. 5, the system architecture may include terminal devices 501, 502 and 503, a network 504 and a server 505. The network 504 serves as a medium for communication links between the terminal devices 501, 502 and 503 and the server 505. The network 504 may include various connection types, such as wired or wireless communication links or fiber optic cables. The terminal devices and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The terminal devices 501, 502, 503 may interact with a server 505 over a network 504 to receive or send messages or the like. The terminal devices 501, 502, 503 may have various client applications installed thereon, such as a video distribution application, a search application, and a news application.
The terminal devices 501, 502 and 503 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (MPEG Audio Layer III), MP4 players (MPEG Audio Layer IV), laptop computers, desktop computers and the like. When the terminal devices 501, 502 and 503 are software, they may be installed in the electronic devices listed above and implemented as multiple pieces of software or software modules (e.g., software or software modules for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 505 may be a server that can provide various services, for example, receives an image acquisition request transmitted by the terminal apparatuses 501, 502, 503, performs analysis processing on the image acquisition request, and transmits the analysis processing result (for example, image data corresponding to the above-described acquisition request) to the terminal apparatuses 501, 502, 503.
It should be noted that the image processing method provided by the embodiment of the present disclosure may be executed by a server, and accordingly, the image processing apparatus may be provided in the server.
It should be understood that the numbers of terminal devices, networks and servers in FIG. 5 are merely illustrative. There may be any number of terminal devices, networks and servers as required by the implementation.
Referring now to FIG. 6, a block diagram of an electronic device (e.g., the server of FIG. 5) suitable for implementing embodiments of the present disclosure is shown. The electronic device shown in FIG. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device are also stored. The processing device 601, the ROM 602 and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While FIG. 6 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a panoramic image to be processed; generating a target panoramic image based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed; and the color panoramic image is obtained on the basis of removing illumination from the panoramic image to be processed.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the acquisition module 401 may also be described as "a module that acquires a panoramic image to be processed".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only a description of the preferred embodiments of the disclosure and the technical principles employed. Those skilled in the art should appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (9)

1. An image processing method, comprising:
acquiring a panoramic image to be processed;
generating a target panoramic image based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed, wherein the color panoramic image is obtained by removing illumination from the panoramic image to be processed.
2. The method according to claim 1, wherein the predetermined luminance influence matrix and the de-illuminated color panoramic image are obtained based on the following step:
inputting at least one reference panoramic image to be processed, captured in the same scene as the panoramic image to be processed, into a pre-trained target image processing model to obtain the luminance influence matrix and the color panoramic image.
3. The method of claim 2, wherein the target image processing model is derived based on the steps of:
acquiring a training sample image set, wherein the training sample image set comprises at least one initial panoramic training sample image; the initial panoramic training sample image corresponds to preset initial illumination information;
performing the following model training operation using the at least one initial panoramic training sample image:
inputting the initial panoramic training sample image into an initial image processing model to obtain an output result;
reconstructing a panoramic image according to the output result to obtain a reconstructed panoramic sample image corresponding to the initial panoramic training sample image;
determining whether a loss value of a preset loss function meets a preset condition according to the reconstructed panoramic sample image and the initial panoramic training sample image;
if so, stopping training, and determining the initial image processing model when the training is stopped as the target image processing model;
otherwise, adjusting the model parameters of the initial image processing model according to the loss value of the preset loss function, and re-executing the model training operation.
4. The method of claim 3, wherein the obtaining of a training sample image set comprises:
acquiring at least one initial environment map in the same scene; and
generating at least one initial panoramic training sample image of the training sample image set based on the at least one initial environment map.
5. The method of claim 3, wherein the output result includes a predicted luminance influence matrix, predicted illumination information and a de-illuminated predicted color panoramic image;
the predicted illumination information is determined based on a predicted environment map; and
the reconstructing of a panoramic image according to the output result to obtain a reconstructed panoramic sample image corresponding to the initial panoramic training sample image includes:
reconstructing the panoramic image according to the predicted brightness influence matrix, the predicted environment map and the de-illuminated predicted color panoramic image to obtain the reconstructed panoramic sample image.
6. The method of claim 2, wherein the target image processing model is derived based on the steps of:
acquiring a training sample image set, wherein the training sample image set comprises at least one initial blurred panoramic training sample image, the initial blurred panoramic training sample image is determined by an initial blurred environment map, and the initial blurred environment map is generated based on an initial spherical harmonic function corresponding to the initial environment map;
performing the following model training operation using the at least one initial blurred panoramic training sample image:
inputting the initial blurred panoramic training sample image into an initial image processing model to obtain an output result;
reconstructing a panoramic image according to the output result to obtain a reconstructed blurred panoramic sample image corresponding to the initial blurred panoramic training sample image;
determining whether the loss value of the preset loss function meets a preset condition according to the reconstructed blurred panoramic sample image and the initial blurred panoramic training sample image;
if so, stopping training, and determining the initial image processing model when the training is stopped as the target image processing model;
otherwise, adjusting the model parameters of the initial image processing model according to the loss value of the preset loss function, and performing the model training operation again.
7. An image processing apparatus characterized by comprising:
an acquisition module, configured to acquire a panoramic image to be processed; and
a generation module, configured to generate a target panoramic image based on a predetermined brightness influence matrix, target illumination information and a color panoramic image corresponding to the panoramic image to be processed, wherein the color panoramic image is obtained by removing illumination from the panoramic image to be processed.
8. An electronic device, comprising:
one or more processors;
storage means having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202110311566.6A 2021-03-23 2021-03-23 Image processing method and device and electronic equipment Pending CN113066166A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110311566.6A CN113066166A (en) 2021-03-23 2021-03-23 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110311566.6A CN113066166A (en) 2021-03-23 2021-03-23 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113066166A true CN113066166A (en) 2021-07-02

Family

ID=76561594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110311566.6A Pending CN113066166A (en) 2021-03-23 2021-03-23 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113066166A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066722A (en) * 2021-11-03 2022-02-18 北京字节跳动网络技术有限公司 Method and device for acquiring image and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107222680A (en) * 2017-06-30 2017-09-29 维沃移动通信有限公司 The image pickup method and mobile terminal of a kind of panoramic picture
CN108012078A (en) * 2017-11-28 2018-05-08 广东欧珀移动通信有限公司 Brightness of image processing method, device, storage medium and electronic equipment
CN112330788A (en) * 2020-11-26 2021-02-05 北京字跳网络技术有限公司 Image processing method, image processing device, readable medium and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107222680A (en) * 2017-06-30 2017-09-29 维沃移动通信有限公司 The image pickup method and mobile terminal of a kind of panoramic picture
CN108012078A (en) * 2017-11-28 2018-05-08 广东欧珀移动通信有限公司 Brightness of image processing method, device, storage medium and electronic equipment
CN112330788A (en) * 2020-11-26 2021-02-05 北京字跳网络技术有限公司 Image processing method, image processing device, readable medium and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066722A (en) * 2021-11-03 2022-02-18 北京字节跳动网络技术有限公司 Method and device for acquiring image and electronic equipment
CN114066722B (en) * 2021-11-03 2024-03-19 抖音视界有限公司 Method and device for acquiring image and electronic equipment

Similar Documents

Publication Publication Date Title
CN110021052B (en) Method and apparatus for generating fundus image generation model
CN110516678B (en) Image processing method and device
CN110728622B (en) Fisheye image processing method, device, electronic equipment and computer readable medium
CN110310299B (en) Method and apparatus for training optical flow network, and method and apparatus for processing image
CN110825286B (en) Image processing method and device and electronic equipment
CN110349107B (en) Image enhancement method, device, electronic equipment and storage medium
CN111459364B (en) Icon updating method and device and electronic equipment
CN110766780A (en) Method and device for rendering room image, electronic equipment and computer readable medium
CN112330788A (en) Image processing method, image processing device, readable medium and electronic equipment
US20230132137A1 (en) Method and apparatus for converting picture into video, and device and storage medium
CN116934577A (en) Method, device, equipment and medium for generating style image
CN111586295B (en) Image generation method and device and electronic equipment
CN113066166A (en) Image processing method and device and electronic equipment
CN112235563B (en) Focusing test method and device, computer equipment and storage medium
CN112258622A (en) Image processing method, image processing device, readable medium and electronic equipment
CN111292406A (en) Model rendering method and device, electronic equipment and medium
CN114125485B (en) Image processing method, device, equipment and medium
CN112070888B (en) Image generation method, device, equipment and computer readable medium
CN113364993B (en) Exposure parameter value processing method and device and electronic equipment
CN115170395A (en) Panoramic image stitching method, panoramic image stitching device, electronic equipment, panoramic image stitching medium and program product
CN115170714A (en) Scanned image rendering method and device, electronic equipment and storage medium
CN111680754B (en) Image classification method, device, electronic equipment and computer readable storage medium
CN113034670A (en) Training method and device for image reconstruction model and electronic equipment
CN114693860A (en) Highlight rendering method, highlight rendering device, highlight rendering medium and electronic equipment
CN112492230A (en) Video processing method and device, readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination