WO2024055837A1 - Image processing method, apparatus, device and medium - Google Patents

Image processing method, apparatus, device and medium

Info

Publication number
WO2024055837A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
map
radiation
illumination
normal vector
Prior art date
Application number
PCT/CN2023/115438
Other languages
English (en)
French (fr)
Inventor
范帝楷
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2024055837A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50Lighting effects

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular, to an image processing method, device, equipment and medium.
  • the present disclosure provides an image processing method, device, equipment and medium.
  • Embodiments of the present disclosure provide an image processing method, which includes:
  • An embodiment of the present disclosure also provides an image processing device, which includes:
  • the acquisition module is used to acquire the environment key frame images and corresponding depth images captured by the extended reality device;
  • a processing module configured to determine an illumination model based on the key frame image and the depth image
  • a rendering module configured to render the extended reality object to be rendered according to the lighting model.
  • An embodiment of the present disclosure also provides an electronic device.
  • the electronic device includes: a processor; and a memory used to store instructions executable by the processor; the processor is used to read the executable instructions from the memory and execute the instructions to implement the image processing method as provided by embodiments of the present disclosure.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, the storage medium stores a computer program, and the computer program is used to execute the image processing method provided by the embodiments of the present disclosure.
  • An embodiment of the present disclosure also provides a computer program product, including: a computer program/instruction, which when executed by a processor implements the image processing method described in any embodiment of the present disclosure.
  • According to the image processing method, device and storage medium of the embodiments of the present disclosure, the environment key frame image and the corresponding depth image captured by the extended reality device are obtained, the lighting model is determined based on the environment key frame image and the depth image, and the extended reality object to be rendered is rendered according to the lighting model.
  • With this solution, an image of the environmental scene in which the device is located is obtained to determine the corresponding lighting model.
  • Based on the lighting model, the lighting of the real scene can be restored and the material of the real scene can be recovered, thereby achieving more realistic extended reality scenes such as virtual reality, mixed reality and augmented reality, and further enhancing the immersion of extended reality application scenarios.
  • Figure 1 is a schematic flowchart of an image processing method provided by some embodiments of the present disclosure
  • FIG. 2 is a schematic flowchart of another image processing method provided by some embodiments of the present disclosure.
  • Figure 3 is a schematic structural diagram of an image processing device provided by some embodiments of the present disclosure.
  • Figure 4 is a schematic structural diagram of an electronic device provided by some embodiments of the present disclosure.
  • the term “include” and its variations are open-ended, ie, “including but not limited to.”
  • the term “based on” means “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; and the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • Figure 1 is a schematic flowchart of an image processing method provided by some embodiments of the present disclosure.
  • the method can be executed by an image processing device, where the device can be implemented using software and/or hardware, and can generally be integrated in an electronic device.
  • the method includes:
  • Step 101 Obtain the environment key frame image and corresponding depth image captured by the extended reality device.
  • the extended reality device can be any device related to virtual reality, augmented reality or mixed reality.
  • the embodiments of the present disclosure do not limit the form of the extended reality device.
  • the extended reality device can be a head display device or AR glasses.
  • the environmental scene refers to the scene of the space where the user wears the extended reality device, such as a small room.
  • Environmental keyframe images refer to color images taken of environmental scenes, such as RGB images, or grayscale images; depth images refer to images taken of environmental scenes with three-dimensional depth feature information.
  • In some embodiments of the present disclosure, there are many ways to obtain the environment key frame images and corresponding depth images captured by the extended reality device.
  • In some implementations, after the extended reality device rotates by a preset angle each time, the environmental scene is photographed to obtain an environment key frame image and a corresponding depth image.
  • In other implementations, each shooting position is set in advance, and the extended reality device is controlled to shoot at each shooting position to obtain an environment key frame image and a corresponding depth image.
  • The above two methods are only examples of obtaining the environment key frame images and corresponding depth images captured by the extended reality device; this disclosure does not limit the specific way of obtaining them.
  • Specifically, while the user is wearing the extended reality device, the extended reality device can be controlled to rotate in the environmental scene, and after each rotation of the extended reality device by a preset angle, the environmental scene is photographed to obtain the environment key frame image and corresponding depth image.
  • Step 102 Determine the lighting model based on the environment key frame image and depth image.
  • In some implementations, the environment key frame image is processed to obtain a radiation map, and the depth image is processed to obtain a normal vector map.
  • A calculation is performed based on the preset internal parameter matrix and the pixel coordinates of the radiation map to obtain the direction vector of the texture model; the pixel radiation value is determined based on the radiation map, and the normal vector is obtained based on the normal vector map; a calculation is then performed based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model.
  • Based on the illumination intensity and illumination color of each azimuth coordinate of the texture model, the illumination model is determined.
  • the preset texture model can be set according to the needs of the application scenario, such as a hemisphere model.
  • all depth maps are transferred to standard position coordinates to obtain the radiation map corresponding to the environment key frame image
  • all radiation maps are transferred to standard position coordinates to obtain the normal vector map corresponding to the depth map.
  • the standard position can be set as needed, such as the position of the extended reality object to be rendered as the standard position.
  • the lighting model can be determined based on the environment key frame image and the depth image.
  • Step 103 Render the extended reality object to be rendered according to the lighting model.
  • Extended reality objects to be rendered refer to extended reality objects that will be displayed in the extended reality scene.
  • the extended reality object to be rendered can be set according to the needs of the application scene. For example, select a chair as an extended reality object to render in the extended reality scene, or select a pot of flowers as an extended reality object to render in the extended reality scene.
  • rendering the extended reality object to be rendered according to the lighting model can be understood as sending the lighting model, which includes the ambient lighting (light intensity and lighting color), to the rendering engine, and the rendering engine renders the extended reality object to be rendered based on the lighting model.
  • the image processing solution provided by the embodiments of the present disclosure obtains the environment key frame image and the corresponding depth image captured by the extended reality device, determines the lighting model based on the environment key frame image and the depth image, and renders the extended reality object to be rendered according to the lighting model.
  • With this solution, an image of the environmental scene in which the device is located is obtained to determine the corresponding lighting model.
  • Based on the lighting model, the lighting of the real scene can be restored and the material of the real scene can be recovered, thereby achieving more realistic extended reality scenes such as virtual reality, mixed reality and augmented reality, and further enhancing the immersion of extended reality application scenarios.
  • obtaining the environment key frame image and the corresponding depth image captured by the extended reality device includes: capturing the environment scene after each rotation of the extended reality device by a preset angle to obtain the environment key frame image and the depth image.
  • Specifically, after an extended reality device such as a head-mounted display device or AR glasses starts to work normally, the angle through which the user rotates is recorded, and, for example, one RGB image (environment key frame image) and one depth map are selected every 5 degrees.
  • determining the lighting model based on the key frame image and the depth image includes: processing the environment key frame image to obtain a radiation map, and processing the depth image to obtain a normal vector map; performing a calculation based on the preset internal parameter matrix and the pixel coordinates of the radiation map to obtain the direction vector of the texture model; determining the pixel radiation value based on the radiation map, and obtaining the normal vector based on the normal vector map; performing a calculation based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model; and determining the illumination model based on the illumination intensity and illumination color of each azimuth coordinate of the texture model.
  • In some embodiments, before processing the depth image to obtain the normal vector map, the method further includes: obtaining the shooting position corresponding to the radiation map and the depth map.
  • When the shooting position is not the preset standard position, coordinate conversion is performed on the radiation map and the depth image to obtain a new radiation map and a new depth image.
  • The new radiation map is used as the radiation map and the new depth image is used as the depth image for subsequent processing.
  • the standard position can be set according to the application scenario, such as the position of the extended reality object to be rendered.
  • the radiation map and depth map at non-standard positions need to be converted to standard position image coordinates for calculation.
  • processing the environment key frame image to obtain the radiation map includes: obtaining the color channel corresponding to the environment key frame image, and processing according to the calibration mapping table corresponding to each color channel to obtain the radiation map.
  • processing the depth image to obtain the normal vector map includes: calculating the normal vector on the three-dimensional object corresponding to each pixel based on the depth value and pixel coordinates of the depth image to obtain the normal vector map.
  • the camera of the extended reality device needs to be offline calibrated in advance, that is, photometric calibration.
  • the main purpose is to remove the impact of camera exposure on imaging, thereby obtaining the irradiation map.
  • When the key frame image is an RGB image, each of the three color channels of the RGB image is photometrically calibrated to obtain a different calibration mapping table for each of the three color channels, which are used for preprocessing of the RGB image, so that the radiation maps of the three color channels are obtained respectively.
  • When the key frame image is a grayscale image, a photometric calibration is performed on the color channel of the grayscale image to obtain one calibration mapping table, which is used for preprocessing of the grayscale image to obtain one radiation map.
  • That is, for an RGB image, the color maps of the three color channels are mapped into radiation maps based on the calibration mapping table for each color channel; for the depth map, the normal vector on the three-dimensional space object corresponding to each pixel is calculated based on the depth value and pixel coordinates, to obtain a normal vector map.
  • For the standard position, the radiation map is attached to the texture model (such as a hemisphere model) based on the radiation map and the normal vector map.
  • In the direction vector formula f = N(K^(-1)·(u, v, 1)^T) (formula (1)), K represents the camera internal parameter matrix, u and v represent the x and y coordinates of the image respectively, and the function N represents normalizing the vector (that is, making its modulus length equal to 1).
  • The pixel radiation value corresponding to pixel (u, v) is obtained by multiplying the illumination color value (material) at the corresponding point on the spherical surface of the hemisphere model by the illumination intensity value (light intensity), and then by the cosine of n and f.
  • Denoting the pixel radiation value as E(u,v), the light intensity as I(α,β) and the material as C(α,β), this is shown in formula (2):
  • E(u,v) = I(α,β)·C(α,β)·|<n,f>|     (2)
  • In formula (3), P represents the coordinate value, in the standard frame, of the point observed in the current frame; d(u,v) represents the value of the depth image at pixel coordinates (u,v); R_wc represents the rotation from c to w; and t_wc represents the translation from c to w. Formula (3) is the coordinate transformation formula of the point.
  • In this way, a new depth map and a new radiation map at the standard position, with the translation eliminated, can be generated, and the mapping operation can then be performed.
  • The illumination function to be solved is established based on the pixel radiation value, the direction vector, the normal vector, the illumination intensity and the illumination color; the illumination function to be solved is converted to obtain the target illumination function; the target energy function is obtained based on the target illumination function and the discretized azimuth angles; and the target energy function is calculated to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model.
  • Specifically, the illumination function to be solved is established based on the radiation value, the direction vector, the normal vector, the illumination intensity and the illumination color, as in formula (2).
  • The known quantities are E(u,v) and |<n,f>|, and the unknowns are I(α,β) and C(α,β).
  • For the light intensity I, it can be considered that indoors it is basically uniform, that is, a change in azimuth angle will not cause too large a change in brightness.
  • For the material C, it can be considered that there are only a limited number of indoor materials, so its solution is sparse; here, material can be understood as the different colors produced by light shining on different objects.
  • The azimuth angles are then discretized.
  • The value of α ranges from 0 to 360 degrees, and the value of β ranges from 0 to 90 degrees.
  • For example, α takes a value every 3 degrees, recorded as α_p, and β takes a value every degree, recorded as β_q, with the setting as shown in formula (11).
  • In formula (12), the first term of the loss function represents the measurement data, the second term expresses that the lighting is basically uniform, and the third term expresses that the material is sparse.
  • In this way, images from different angles of the environmental scene are obtained based on the extended reality device to determine the illumination intensity and illumination color, and the illumination model is then determined based on the illumination intensity and illumination color. Based on the illumination model, the illumination of the real scene can be restored and the materials of the real scene can be recovered, thereby achieving more realistic extended reality scenes and further improving the immersion of extended reality application scenarios.
  • FIG. 2 is a schematic flowchart of another image processing method provided by some embodiments of the present disclosure. Based on the above embodiments, this embodiment further optimizes the above image processing method. As shown in Figure 2, the method includes:
  • Step 201 After the extended reality device rotates to a preset angle each time, the environmental scene is photographed to obtain an environmental key frame image and a depth image.
  • For example, the extended reality device is a pair of AR glasses and the environmental scene is room A.
  • The user wears the AR glasses, controls the AR glasses to start working, and controls the AR glasses to rotate by a preset angle; each time they rotate by a certain angle, the corresponding environment key frame image and depth image are obtained.
  • For example, room A is photographed every 5 degrees to obtain environment key frame images and depth images.
  • Step 202 Obtain the color channel corresponding to the environment key frame image, and process it according to the calibration mapping table corresponding to each color channel to obtain the radiation map.
  • For example, the environment key frame image is an RGB image including the three RGB color channels.
  • The three calibration mapping tables corresponding to R, G and B are obtained respectively, namely calibration mapping table 1, calibration mapping table 2 and calibration mapping table 3.
  • The R color channel is processed according to calibration mapping table 1 to obtain radiation map a1, the G color channel is processed according to calibration mapping table 2 to obtain radiation map a2, and the B color channel is processed according to calibration mapping table 3 to obtain radiation map a3; thus, for one RGB image, three radiation maps a1, a2 and a3 are obtained.
  • Step 203 Obtain the shooting position corresponding to the radiation map and the depth map.
  • When the shooting position is not the preset standard position, coordinate conversion is performed on the radiation map and the depth image to obtain a new radiation map and a new depth image, and the new radiation map is used as the radiation map and the new depth image as the depth image for subsequent processing.
  • The standard position can be set according to the application scenario, for example as the position of the extended reality object to be rendered.
  • In order to ensure the accuracy of subsequent calculations, the radiation maps and depth maps at non-standard positions need to be converted into the image coordinates of the standard position for calculation; that is, for non-standard positions, the new radiation map and the new depth image are used in the subsequent calculation and processing.
  • Step 204 Calculate the normal vector on the three-dimensional object corresponding to each pixel based on the depth value and pixel coordinates of the depth image to obtain a normal vector map.
  • the normal vector on the three-dimensional object corresponding to each pixel is calculated to obtain a normal vector map.
  • Step 205 Calculate based on the preset internal parameter matrix and the pixel coordinates of the radiation map to obtain the direction vector of the texture model.
  • Step 206 Determine the pixel radiation value based on the radiation map, and obtain the normal vector based on the normal vector map.
  • Step 207 Calculate based on the pixel radiation value, direction vector and normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model, and determine the illumination model based on the illumination intensity and illumination color of each azimuth coordinate of the texture model.
  • The illumination function to be solved is established based on the pixel radiation value, the direction vector, the normal vector, the illumination intensity and the illumination color; the illumination function to be solved is converted to obtain the target illumination function; the target energy function is obtained based on the target illumination function and the discretized azimuth angles; and the target energy function is calculated to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model.
  • The preset internal parameter matrix can be the camera internal parameter matrix.
  • Using the pixel coordinates of the image and the internal parameter matrix, the direction vector of each point on the texture model can be calculated; the pixel radiation value is determined based on the radiation map, and the normal vector is obtained based on the normal vector map.
  • A calculation is then performed based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model, and the illumination model is determined based on the illumination intensity and illumination color of each azimuth coordinate of the texture model.
  • Step 208 Render the extended reality object to be rendered according to the lighting model.
  • Extended reality objects to be rendered refer to extended reality objects that will be displayed in the extended reality scene.
  • the extended reality objects to be rendered can be set according to the needs of the application scene. For example, select a chair as an extended reality object to be rendered in the extended reality scene, and another example is to select a pot of flowers as an extended reality object to be rendered in the extended reality scene.
  • rendering the extended reality object to be rendered based on the lighting model to obtain the extended reality scene can be understood as sending the lighting model, which includes the ambient lighting (light intensity and lighting color), to the rendering engine, and the rendering engine renders the extended reality object to be rendered based on the lighting model.
  • The image processing solution provided by the embodiments of the present disclosure photographs the environmental scene after each rotation of the extended reality device by a preset angle to obtain the environment key frame image and the depth image; obtains the color channels corresponding to the environment key frame image and processes each color channel according to the corresponding pre-calibrated mapping table to obtain the radiation map; obtains the shooting position corresponding to the radiation map and the depth map, and, when the shooting position is not the preset standard position, performs coordinate conversion on the radiation map and the depth image to obtain a new radiation map and a new depth image; and calculates the normal vector on the three-dimensional space object corresponding to each pixel based on the depth value and pixel coordinates of the depth image, to obtain the normal vector map.
  • A calculation is performed based on the preset internal parameter matrix and the pixel coordinates of the radiation map, and the direction vector of the texture model is obtained.
  • The pixel radiation value is determined based on the radiation map, and the normal vector is obtained based on the normal vector map.
  • The illumination intensity and illumination color of each azimuth coordinate are then calculated to determine the illumination model, and the extended reality object to be rendered is rendered according to the illumination model.
  • With this solution, the lighting of extended reality scenes such as MR can be restored, and the materials of the real scene can be recovered, thereby achieving a more realistic extended reality scene; that is, after image samples in different directions are obtained based on devices such as head-mounted display devices, a good model of the ambient lighting and environmental materials of a scene such as a small room can be recovered, which is friendly to VR/AR and other scenarios and can bring a better sense of immersion to extended reality applications.
  • FIG 3 is a schematic structural diagram of an image processing device provided by some embodiments of the present disclosure.
  • the device can be implemented by software and/or hardware, and can generally be integrated in electronic equipment. As shown in Figure 3, the device includes:
  • the acquisition module 301 is used to acquire the environment key frame images and corresponding depth images captured by the extended reality device;
  • Processing module 302 configured to determine an illumination model according to the key frame image and the depth image
  • the rendering module 303 is configured to render the extended reality object to be rendered according to the lighting model.
  • the acquisition module 301 is specifically used to:
  • the extended reality device rotates a preset angle each time, the environmental scene is photographed to obtain the environmental key frame image and the depth image.
  • processing module 302 includes:
  • a first processing unit configured to process the environment key frame image to obtain a radiation pattern
  • a second processing unit configured to process the depth image to obtain a normal vector map
  • a calculation unit configured to perform calculations based on a preset internal parameter matrix and the pixel coordinates of the radiation pattern to obtain the direction vector of the texture model
  • a determination and acquisition unit configured to determine a pixel radiation value based on the radiation map, and obtain a normal vector based on the normal vector map;
  • a calculation unit configured to perform calculations based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model;
  • a determining unit configured to determine the illumination model based on the illumination intensity and illumination color of each azimuth angle coordinate of the texture model.
  • processing module 302 also includes:
  • An acquisition unit used to acquire the shooting position corresponding to the radiation map and the depth map
  • a conversion unit configured to, when the shooting position is not a preset standard position, perform coordinate conversion on the radiation map and the depth image to obtain a new radiation map and a new depth image, and take the new radiation map as the radiation map and the new depth image as the depth image for subsequent processing.
  • the first processing unit is specifically used for:
  • the color channels corresponding to the environment key frame image are obtained, and processed according to the calibration mapping table corresponding to each color channel, to obtain the radiation map.
  • the second processing unit is specifically used for: calculating the normal vector on the three-dimensional object corresponding to each pixel based on the depth value and pixel coordinates of the depth image, to obtain the normal vector map.
  • the computing unit is specifically used for: establishing the illumination function to be solved based on the pixel radiation value, the direction vector, the normal vector, the illumination intensity and the illumination color; converting the illumination function to be solved to obtain the target illumination function; obtaining the target energy function based on the target illumination function and the discretized azimuth angles; and calculating the target energy function to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model.
  • the image processing device provided by the embodiments of the present disclosure can execute the image processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • modules included are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be realized; in addition, each functional module
  • the specific names are only for the convenience of mutual distinction and are not used to limit the scope of protection of the present disclosure.
  • the modules described above may be implemented as software components executing on one or more general-purpose processors, or as hardware that performs certain functions, such as programmable logic devices and/or application-specific integrated circuits.
  • these modules may be embodied in the form of a software product that may be stored in a non-volatile storage medium.
  • such non-volatile storage media enable computer equipment (such as personal computers, servers, network equipment or mobile terminals) to execute the methods described in the embodiments of the present disclosure.
  • the above modules can also be implemented on a single device or distributed on multiple devices. The functionality of these modules can be combined with each other or further split into sub-modules.
  • Some embodiments of the present disclosure also provide a computer program product, which includes a computer program/instruction that, when executed by a processor, implements the image processing method provided by any embodiment of the present disclosure.
  • Some embodiments of the present disclosure further provide a computer program/instruction, which, when executed by a processor, implements the image processing method provided by any embodiment of the present disclosure.
  • Some embodiments of the present disclosure also provide an electronic device, which may include a processing device and a storage device, and the storage device may be used to store executable instructions.
  • the processing device may be configured to read executable instructions from the storage device and execute the executable instructions to implement the image processing method in the above embodiment.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by some embodiments of the present disclosure.
  • the electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable multimedia players (PMP) and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and fixed terminals such as digital televisions (TV) and desktop computers.
  • the electronic device shown in FIG. 4 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 400 may include a processing device (such as a central processing unit or a graphics processor) 401, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. Various programs and data required for the operation of the electronic device 400 are also stored in the RAM 403.
  • the processing device 401, ROM 402 and RAM 403 are connected to each other via a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404.
  • the following devices can be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 408 including a magnetic tape, a hard disk, etc.; and a communication device 409.
  • the communication device 409 may allow the electronic device 400 to communicate wirelessly or wiredly with other devices to exchange data.
  • FIG. 4 illustrates electronic device 400 with various means, it should be understood that implementation or availability of all illustrated means is not required. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 409, or from storage device 408, or from ROM 402.
  • When the computer program is executed by the processing device 401, the above-mentioned functions defined in the image processing method of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
  • Computer readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device .
  • Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wires, optical cables, radio frequency (RF), etc., or any suitable combination of the above.
  • the client and server can communicate using any currently known or future developed network protocol, such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain the environment key frame image and the corresponding depth image captured by the extended reality device, determine the lighting model based on the environment key frame image and the depth image, and render the extended reality object to be rendered according to the lighting model.
  • computer program code for performing operations of the present disclosure may be written in one or more programming languages, or a combination thereof, including, but not limited to, object-oriented programming languages—such as Java, Smalltalk, C++, but also conventional procedural programming languages - such as "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or operations, or by a combination of specialized hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure can be implemented in software or hardware. Among them, the name of a unit does not constitute a limitation on the unit itself under certain circumstances.
  • exemplary types of hardware logic components include: field programmable gate array (Field Programmable Gate Array, FPGA), application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), application-specific standard products (Application Specific Standard Parts, ASSP), system on chip (System on Chip, SOC), complex programmable logic device (Complex Programming logic device, CPLD), etc.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, laptop disks, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • the present disclosure provides an image processing method, including:
  • obtaining the environment key frame image and the corresponding depth image captured by the extended reality device includes:
  • the extended reality device rotates a preset angle each time, the environmental scene is photographed to obtain the key frame image and the depth image.
  • determining an illumination model based on the environment key frame image and the depth image includes:
  • the illumination model is determined based on the illumination intensity and illumination color of each azimuth angle coordinate of the texture model.
  • before processing the depth image to obtain the normal vector map, the image processing method further includes:
  • processing the environment key frame image to obtain a radiation map includes:
  • the color channels corresponding to the environment key frame image are obtained, and processed according to the calibration mapping table corresponding to each color channel, to obtain the radiation map.
  • the processing of the depth image to obtain a normal vector map includes:
  • performing the calculation based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model includes:
  • the target energy function is obtained
  • the target energy function is calculated to obtain the illumination intensity and illumination color of each azimuth angle coordinate of the texture model.
  • an image processing device including:
  • the acquisition module is used to acquire the environment key frame images and corresponding depth images captured by the extended reality device;
  • a processing module configured to determine a lighting model based on the environment key frame image and the depth image
  • a rendering module configured to render the extended reality object to be rendered according to the lighting model.
  • the acquisition module is specifically used to:
  • the extended reality device rotates a preset angle each time, the environmental scene is photographed to obtain the key frame image and the depth image.
  • the processing module includes:
  • a first processing unit configured to process the environment key frame image to obtain a radiation pattern
  • a second processing unit configured to process the depth image to obtain a normal vector map
  • a calculation unit configured to perform calculations based on a preset internal parameter matrix and the pixel coordinates of the radiation pattern to obtain the direction vector of the texture model
  • a determination and acquisition unit configured to determine a pixel radiation value based on the radiation map, and obtain a normal vector based on the normal vector map;
  • a calculation unit configured to perform calculations based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model;
  • a determining unit configured to determine the illumination model based on the illumination intensity and illumination color of each azimuth angle coordinate of the texture model.
  • the processing module further includes:
  • An acquisition unit used to acquire the shooting position corresponding to the radiation map and the depth map
  • a conversion unit configured to, when the shooting position is not a preset standard position, perform coordinate conversion on the radiation map and the depth image to obtain a new radiation map and a new depth image, and take the new radiation map as the radiation map and the new depth image as the depth image.
  • the first processing unit is specifically used for:
  • the color channel corresponding to the environment key frame image is obtained, and processed according to the calibration mapping table corresponding to each color channel, to obtain the radiation map.
  • the second processing unit is specifically used for:
  • the computing unit is specifically used for:
  • the target energy function is obtained
  • the target energy function is calculated to obtain the illumination intensity and illumination color of each azimuth angle coordinate of the texture model.
  • the present disclosure provides an electronic device, including:
  • memory for storing instructions executable by the processor
  • the processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the image processing methods provided by this disclosure.
  • the present disclosure provides a computer-readable storage medium storing a computer program, the computer program being used to execute any of the image processing methods provided by the present disclosure.
  • Some embodiments of the present disclosure also provide a computer program product, which includes a computer program/instruction that, when executed by a processor, implements the image processing method provided by any embodiment of the present disclosure.
  • Some embodiments of the present disclosure also provide a computer program/instruction, which when executed by a processor implements the image processing method provided by any embodiment of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure relate to an image processing method, apparatus, device and medium, wherein the method includes: acquiring an environment key frame image and a corresponding depth image captured by an extended reality device; determining a lighting model according to the environment key frame image and the depth image; and rendering an extended reality object to be rendered according to the lighting model.

Description

Image processing method, apparatus, device and medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, the Chinese patent application with application number 202211122991.1 filed on September 15, 2022, the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the field of image processing technology, and in particular to an image processing method, apparatus, device and medium.
Background
With the development of technology, human-computer interaction modes such as virtual reality (VR), augmented reality (AR) and mixed reality (MR) are applied in various scenarios.
In the related art, for an MR scenario a virtual object needs to be inserted into the scene for display. The extended reality object is displayed based only on the camera viewpoint, without taking factors such as the illumination intensity and illumination color in the scene into account, so that the displayed result is poorly restored, which affects the immersion of mixed reality applications.
Summary
In order to solve the above technical problems, or at least partially solve the above technical problems, the present disclosure provides an image processing method, apparatus, device and medium.
Embodiments of the present disclosure provide an image processing method, the method including:
acquiring an environment key frame image and a corresponding depth image captured by an extended reality device;
determining a lighting model according to the key frame image and the depth image; and
rendering an extended reality object to be rendered according to the lighting model.
Embodiments of the present disclosure further provide an image processing apparatus, the apparatus including:
an acquisition module configured to acquire an environment key frame image and a corresponding depth image captured by an extended reality device;
a processing module configured to determine a lighting model according to the key frame image and the depth image; and
a rendering module configured to render an extended reality object to be rendered according to the lighting model.
Embodiments of the present disclosure further provide an electronic device, the electronic device including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image processing method provided by the embodiments of the present disclosure.
Embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program, where the computer program is used to execute the image processing method provided by the embodiments of the present disclosure.
Embodiments of the present disclosure further provide a computer program product, including a computer program/instruction which, when executed by a processor, implements the image processing method described in any embodiment of the present disclosure.
According to the image processing method, apparatus and storage medium of the embodiments of the present disclosure, an environment key frame image and a corresponding depth image captured by an extended reality device are acquired, a lighting model is determined according to the environment key frame image and the depth image, and an extended reality object to be rendered is rendered according to the lighting model. With the above technical solution, an image of the environmental scene in which the extended reality device is located is acquired to determine the corresponding lighting model; based on the lighting model, the illumination of the real scene can be restored and the materials of the real scene can be recovered, so that more realistic extended reality scenes such as virtual reality, mixed reality and augmented reality are achieved, further improving the immersion of extended reality application scenarios.
Brief Description of the Drawings
The above and other features, advantages and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
Figure 1 is a schematic flowchart of an image processing method provided by some embodiments of the present disclosure;
Figure 2 is a schematic flowchart of another image processing method provided by some embodiments of the present disclosure;
Figure 3 is a schematic structural diagram of an image processing apparatus provided by some embodiments of the present disclosure;
Figure 4 is a schematic structural diagram of an electronic device provided by some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as being limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
The term "include" and its variations as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; and the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules or units, and are not used to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules or units.
It should be noted that the modifiers "a"/"one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
Figure 1 is a schematic flowchart of an image processing method provided by some embodiments of the present disclosure. The method may be executed by an image processing apparatus, where the apparatus may be implemented using software and/or hardware and may generally be integrated in an electronic device. As shown in Figure 1, the method includes:
Step 101: acquire an environment key frame image and a corresponding depth image captured by an extended reality device.
The extended reality device may be any device related to virtual reality, augmented reality or mixed reality. The embodiments of the present disclosure do not limit the form of the extended reality device; for example, the extended reality device may be a head-mounted display device or AR glasses. The environmental scene refers to the scene of the space in which the user wearing the extended reality device is located, such as a small room. An environment key frame image refers to a color image, such as an RGB image, or a grayscale image captured of the environmental scene; a depth image refers to an image of the environmental scene that carries three-dimensional depth feature information.
In some embodiments, there are usually multiple environment key frame images and depth images.
In some embodiments of the present disclosure, there are many ways to obtain the environment key frame images and corresponding depth images captured by the extended reality device. In some implementations, after the extended reality device rotates by a preset angle each time, the environmental scene is photographed to obtain an environment key frame image and a corresponding depth image.
In other implementations, shooting positions are set in advance, and the extended reality device is controlled to shoot at each shooting position to obtain an environment key frame image and a corresponding depth image. The above two manners are only examples of obtaining the environment key frame images and corresponding depth images captured by the extended reality device, and the present disclosure does not limit the specific manner of obtaining them.
Specifically, while the user is wearing the extended reality device, the extended reality device may be controlled to rotate in the environmental scene, and after each rotation of the extended reality device by a preset angle, the environmental scene is photographed to obtain the environment key frame image and the corresponding depth image.
Step 102: determine a lighting model according to the environment key frame image and the depth image.
In some embodiments of the present disclosure, there are many ways to determine the lighting model according to the environment key frame image and the depth image. In some implementations, the environment key frame image is processed to obtain a radiation map, and the depth image is processed to obtain a normal vector map; a calculation is performed based on a preset intrinsic parameter matrix and the pixel coordinates of the radiation map to obtain the direction vectors of the texture model; a pixel radiation value is determined based on the radiation map, and a normal vector is obtained based on the normal vector map; a calculation is performed based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model; and the lighting model is determined based on the illumination intensity and illumination color of each azimuth coordinate of the texture model. In some embodiments, the preset texture model may be selected and set according to the needs of the application scenario, for example a hemisphere model.
In other implementations, all depth maps are transformed into the standard position coordinates and the radiation map corresponding to the environment key frame image is obtained; all radiation maps are transformed into the standard position coordinates and the normal vector map corresponding to the depth map is obtained; the direction vectors of the texture model are determined based on the radiation map; and finally the illumination intensity and illumination color of each azimuth coordinate of the texture model are calculated based on the pixel radiation values of the radiation map, the normal vectors of the normal vector map and the direction vectors, to obtain the lighting model. The standard position may be set as needed, for example the position of the extended reality object to be rendered is taken as the standard position.
In some embodiments of the present disclosure, after the environment key frame image and the corresponding depth image are obtained, the lighting model may be determined based on the environment key frame image and the depth image.
Step 103: render the extended reality object to be rendered according to the lighting model.
The extended reality object to be rendered refers to an extended reality object that will be displayed in the extended reality scene. The extended reality object to be rendered may be selected and set according to the needs of the application scenario; for example, a chair is selected as the extended reality object to be rendered in the extended reality scene, or a pot of flowers is selected as the extended reality object to be rendered in the extended reality scene.
In some embodiments of the present disclosure, rendering the extended reality object to be rendered according to the lighting model can be understood as sending the lighting model, which includes the ambient lighting (illumination intensity and illumination color), to a rendering engine, and the rendering engine renders the extended reality object to be rendered based on the lighting model.
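To make the hand-off to the renderer concrete, the following is a minimal sketch, not taken from the disclosure, of how the discretized illumination intensity I(α,β) and illumination color C(α,β) could be packed into a single environment texture for a rendering engine; the grid shapes and the set_environment_map call are assumptions, since the disclosure does not name a specific engine API.

```python
import numpy as np

def pack_environment_map(intensity, color):
    """Combine per-azimuth illumination intensity and color into one
    hemisphere environment texture (radiance = intensity * color).

    intensity: (P, Q) array of I(alpha_p, beta_q)
    color:     (P, Q, 3) array of C(alpha_p, beta_q) per RGB channel
    Returns a (Q, P, 3) float32 texture (rows = elevation, cols = azimuth).
    """
    radiance = intensity[..., None] * color            # (P, Q, 3)
    return np.transpose(radiance, (1, 0, 2)).astype(np.float32)

# The resulting texture could then be handed to a renderer as an
# ambient/image-based-lighting map, e.g.:
# renderer.set_environment_map(env_tex)   # hypothetical engine call
```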
According to the image processing solution provided by the embodiments of the present disclosure, an environment key frame image and a corresponding depth image captured by an extended reality device are acquired, a lighting model is determined according to the environment key frame image and the depth image, and the extended reality object to be rendered is rendered according to the lighting model. With the above technical solution, an image of the environmental scene in which the extended reality device is located is acquired to determine the corresponding lighting model; based on the lighting model, the illumination of the real scene can be restored and the materials of the real scene can be recovered, thereby achieving more realistic extended reality scenes such as virtual reality, mixed reality and augmented reality, and further improving the immersion of extended reality application scenarios.
In some embodiments, obtaining the environment key frame image and the corresponding depth image captured by the extended reality device includes: after the extended reality device rotates by a preset angle each time, photographing the environmental scene to obtain the environment key frame image and the depth image.
Specifically, after an extended reality device such as a head-mounted display device or AR glasses starts to work normally, the angle through which the user rotates is recorded, and, for example, one frame of RGB image (environment key frame image) and one depth map are selected every 5 degrees.
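As an illustration of the key-frame selection described above, the following sketch picks one (RGB, depth) pair whenever the recorded rotation angle has changed by the preset step; the yaw_angles_deg input and the 5-degree default are assumptions based on the example in the text.

```python
def select_keyframes(frames, yaw_angles_deg, step_deg=5.0):
    """Pick one (rgb, depth) pair every `step_deg` degrees of rotation.

    frames: list of (rgb, depth) tuples in capture order
    yaw_angles_deg: recorded headset rotation angle (degrees) per frame
    """
    keyframes, last_yaw = [], None
    for frame, yaw in zip(frames, yaw_angles_deg):
        if last_yaw is None or abs(yaw - last_yaw) >= step_deg:
            keyframes.append(frame)   # keep this frame as a key frame
            last_yaw = yaw
        # frames between key frames are simply skipped
    return keyframes
```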
In this way, by rotating through angles, environment key frame images and depth images at different angles can be obtained, so that the illumination intensity and illumination color at different angles are obtained as the lighting environment, which improves the accuracy of the subsequent lighting model and finally ensures the realism of the rendering result, satisfying the user's needs.
In some embodiments, determining the lighting model according to the key frame image and the depth image includes: processing the environment key frame image to obtain a radiation map, and processing the depth image to obtain a normal vector map; performing a calculation based on a preset intrinsic parameter matrix and the pixel coordinates of the radiation map to obtain the direction vectors of the texture model; determining a pixel radiation value based on the radiation map, and obtaining a normal vector based on the normal vector map; performing a calculation based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model; and determining the lighting model based on the illumination intensity and illumination color of each azimuth coordinate of the texture model.
In some embodiments of the present disclosure, before the depth image is processed to obtain the normal vector map, the method further includes: obtaining the shooting positions corresponding to the radiation map and the depth map; and, when a shooting position is not the preset standard position, performing coordinate conversion on the radiation map and the depth image to obtain a new radiation map and a new depth image, taking the new radiation map as the radiation map and the new depth image as the depth image for subsequent processing.
In some embodiments of the present disclosure, the standard position may be set according to the application scenario, for example as the position of the extended reality object to be rendered. In order to ensure the accuracy of subsequent calculations, the radiation maps and depth maps at non-standard positions need to be converted into the image coordinates of the standard position for calculation.
In some embodiments of the present disclosure, processing the environment key frame image to obtain the radiation map includes: obtaining the color channels corresponding to the environment key frame image, and processing them according to the calibration mapping table corresponding to each color channel to obtain the radiation map.
In some embodiments of the present disclosure, processing the depth image to obtain the normal vector map includes: calculating, based on the depth values and pixel coordinates of the depth image, the normal vector on the three-dimensional object corresponding to each pixel, to obtain the normal vector map.
Specifically, an offline calibration step, namely photometric calibration, needs to be performed on the camera of the extended reality device in advance; its main purpose is to remove the influence of camera exposure on imaging, thereby obtaining the radiation map. More specifically, when the key frame image is an RGB image, a photometric calibration is performed on each of the three color channels of the RGB image to obtain a different calibration mapping table for each channel, which are used for preprocessing the RGB image to obtain the radiation maps of the three color channels respectively; when the key frame image is a grayscale image, a photometric calibration is performed on the color channel of the grayscale image to obtain one calibration mapping table, which is used for preprocessing the grayscale image to obtain one radiation map.
That is to say, for an RGB image, the color maps of the three color channels are mapped into radiation maps based on the calibration mapping table of each color channel; for the depth map, the normal vector on the three-dimensional object corresponding to each pixel is calculated based on the depth values and pixel coordinates, to obtain a normal vector map.
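A minimal sketch of the per-channel mapping step, assuming the calibration mapping tables are simple 256-entry lookup tables produced by the offline photometric calibration (the disclosure does not specify their exact form):

```python
import numpy as np

def to_radiance(rgb_image, calibration_tables):
    """Map an 8-bit RGB key frame image to per-channel radiation maps
    using the calibration mapping tables from photometric calibration.

    rgb_image: (H, W, 3) uint8 image
    calibration_tables: list of three (256,) float arrays, one per channel,
        mapping a raw pixel value to radiance with exposure effects removed
    Returns an (H, W, 3) float radiation map (a1, a2, a3 stacked).
    """
    channels = [calibration_tables[c][rgb_image[..., c]] for c in range(3)]
    return np.stack(channels, axis=-1)
```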
Specifically, for the standard position, the radiation map is attached to the texture model (for example a hemisphere model) based on the radiation map and the normal vector map.
Taking the hemisphere model as an example, the environment is generally not considered to be a true sphere or cube, and the environment is modeled with reference to the hemisphere model. The direction vector f of each point on the hemisphere model can be obtained by projecting the pixel onto the normalized plane and then normalizing it to unit length, as shown in formula (1):
f = N(K^(-1)·(u, v, 1)^T)     (1)
where K represents the camera intrinsic parameter matrix; u and v represent the x and y coordinates of the image, respectively; and the function N represents normalizing the vector (i.e., making its modulus equal to 1).
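The direction vectors of formula (1) can be computed for every pixel at once; the following sketch is an illustrative implementation, not code from the disclosure:

```python
import numpy as np

def pixel_direction_vectors(K, height, width):
    """Direction vector f = N(K^-1 (u, v, 1)^T) for every pixel (formula (1))."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                                      # 3 x N
    rays /= np.linalg.norm(rays, axis=0, keepdims=True)                # unit length
    return rays.T.reshape(height, width, 3)
```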
Specifically, the pixel radiation value corresponding to pixel (u, v) is obtained by multiplying the illumination color value (material) at the corresponding point on the spherical surface of the hemisphere model by the illumination intensity value (light intensity), and then by the cosine of n and f. Denoting the pixel radiation value as E(u,v), the light intensity as I(α,β) and the material as C(α,β), this is shown in formula (2):
E(u,v) = I(α,β)·C(α,β)·|<n,f>|      (2)
In formula (2), |<n,f>| represents the absolute value of the inner product of the two vectors n and f, and (α,β) represents the azimuth coordinates on the hemisphere.
Based on the foregoing description, coordinate conversion is also required for non-standard positions; that is, for a non-standard position the camera has not only a rotation but also a translation, so during the mapping process the influence of the camera's translation relative to the standard position needs to be eliminated.
Specifically, let the image frame represented by the standard position be denoted w and the current camera frame be denoted c. Projecting the current frame image to the standard position based on the depth map and the camera is shown in formula (3):
P = R_wc·K^(-1)·(u, v, 1)^T·d(u,v) + t_wc      (3)
where P represents the coordinate value, in the standard frame, of the point observed in the current frame; d(u,v) represents the value of the depth image at pixel coordinates (u,v); R_wc represents the rotation from c to w; and t_wc represents the translation from c to w. The above formula (3) is the coordinate transformation formula of the point.
Specifically, as shown in formulas (4) and (5):
where the resulting coordinates represent the pixel coordinate values of the image at the standard position, and Px, Py and Pz represent the x, y and z values of the point P, respectively.
Specifically, in the pixel coordinate system, the depth value obtained at the standard position is as shown in formula (6).
Specifically, in the pixel coordinate system, the intensity value obtained at the standard position is as shown in formula (7).
According to the above formulas (6) and (7), a new depth map and a new radiation map at the standard position, with the translation eliminated, can be generated, and the mapping operation can then be performed.
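Since formulas (4) to (7) are not reproduced above, the following sketch only illustrates the transformation of formula (3) followed by an ordinary pinhole re-projection into the standard frame; treating that projection as a standard pinhole model is an assumption about how the new depth map and new radiation map would be generated.

```python
import numpy as np

def warp_to_standard_frame(radiance, depth, K, R_wc, t_wc, out_shape):
    """Re-project a frame's radiation map (H, W, C) and depth map (H, W)
    into the standard frame w using formula (3) plus an assumed pinhole
    projection back to pixel coordinates."""
    H, W = depth.shape
    new_rad = np.zeros(out_shape + (radiance.shape[-1],), dtype=radiance.dtype)
    new_depth = np.zeros(out_shape, dtype=depth.dtype)
    K_inv = np.linalg.inv(K)
    for v in range(H):
        for u in range(W):
            d = depth[v, u]
            if d <= 0:
                continue
            P = R_wc @ (K_inv @ np.array([u, v, 1.0])) * d + t_wc   # formula (3)
            if P[2] <= 0:
                continue
            q = K @ (P / P[2])                     # pinhole projection (assumed)
            u2, v2 = int(round(q[0])), int(round(q[1]))
            if 0 <= u2 < out_shape[1] and 0 <= v2 < out_shape[0]:
                new_depth[v2, u2] = P[2]           # depth in the standard frame
                new_rad[v2, u2] = radiance[v, u]   # radiation value carried over
    return new_rad, new_depth
```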
After all azimuth angles α, β have been covered, a globally optimal I(α,β) and C(α,β) needs to be computed for all mapping relations, i.e., for formula (2).
In the embodiments of the present disclosure, an illumination function to be solved is established based on the pixel radiation value, the direction vector, the normal vector, the illumination intensity and the illumination color; the illumination function to be solved is converted to obtain a target illumination function; a target energy function is obtained based on the target illumination function and the discretized azimuth angles; and the target energy function is calculated to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model.
Specifically, the illumination function to be solved is established based on the radiation value, the direction vector, the normal vector, the illumination intensity and the illumination color, as in formula (2), where E(u,v) and |<n,f>| are known and I(α,β) and C(α,β) are unknown. For the light intensity I, it can be considered that indoors it is basically uniform, i.e., a change in azimuth angle will not cause too large a change in brightness; for the material C, it can be considered that there are only a limited number of indoor materials, so its solution is sparse. Here, material can be understood as the different colors produced by light shining on different objects.
Specifically, the logarithm of formula (2) is taken so as to convert the multiplication into an addition, where i is a constant, giving formulas (8) to (10):
ln I(α,β) + ln C(α,β) = λ_i(α,β)     (9)
i(α,β) + c(α,β) = λ_i(α,β)      (10)
Specifically, the azimuth angles are discretized: α ranges from 0 to 360 degrees and β ranges from 0 to 90 degrees; for example, one value is taken every 3 degrees for α, denoted α_p, and one value is taken every degree for β, denoted β_q, with the setting as shown in formula (11).
From this, the target energy function F is obtained as shown in formula (12).
In formula (12), the first term of the loss function represents the measurement data, the second term expresses that the illumination is basically uniform, and the third term expresses that the material is sparse. By solving the target energy function with, for example, the alternating direction method of multipliers, the optimal i_{p,q} and c_{p,q} can be obtained, and the final illumination intensity and illumination color (material) are then obtained by taking the exponential.
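The exact forms of formulas (11) and (12) are not reproduced above, so the following sketch only illustrates the general structure described in the text: a data term on the log-domain unknowns i and c, a smoothness term on i and a sparsity term on c, minimized here with L-BFGS rather than the alternating direction method of multipliers for brevity. The weights, the smoothed L1 term and the binning of observations are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def solve_log_lighting(lam, counts, w_smooth=1.0, w_sparse=0.1, eps=1e-6):
    """Estimate i = ln I and c = ln C on the discretized (alpha_p, beta_q) grid.

    lam:    (P, Q) mean observed log values per bin (zero where counts == 0)
    counts: (P, Q) number of pixel observations per bin
    Returns (I, C) after exponentiating the optimal log variables.
    """
    P, Q = lam.shape

    def energy(x):
        i = x[:P * Q].reshape(P, Q)
        c = x[P * Q:].reshape(P, Q)
        data = np.sum(counts * (i + c - lam) ** 2)                 # measurements
        smooth = np.sum(np.diff(i, axis=0) ** 2) + np.sum(np.diff(i, axis=1) ** 2)
        sparse = np.sum(np.sqrt(c ** 2 + eps))                     # smoothed |c|
        return data + w_smooth * smooth + w_sparse * sparse

    res = minimize(energy, np.zeros(2 * P * Q), method="L-BFGS-B")
    i_opt = res.x[:P * Q].reshape(P, Q)
    c_opt = res.x[P * Q:].reshape(P, Q)
    return np.exp(i_opt), np.exp(c_opt)   # back to intensity I and material C
```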
In this way, images of the environmental scene at different angles are acquired based on the extended reality device to determine the illumination intensity and illumination color, so that the lighting model is determined based on the illumination intensity and illumination color; based on the lighting model, the illumination of the real scene can be restored and the materials of the real scene can be recovered, thereby achieving more realistic extended reality scenes and further improving the immersion of extended reality application scenarios.
Figure 2 is a schematic flowchart of another image processing method provided by some embodiments of the present disclosure. On the basis of the above embodiments, this embodiment further optimizes the above image processing method. As shown in Figure 2, the method includes:
Step 201: after the extended reality device rotates by a preset angle each time, photograph the environmental scene to obtain an environment key frame image and a depth image.
For example, the extended reality device is a pair of AR glasses and the environmental scene is room A. The user wears the AR glasses, controls the AR glasses to start working, and controls the AR glasses to rotate by a preset angle; each time they rotate by a certain angle, the corresponding environment key frame image and depth image are obtained, for example room A is photographed every 5 degrees to obtain environment key frame images and depth images.
Step 202: obtain the color channels corresponding to the environment key frame image, and process them according to the calibration mapping table corresponding to each color channel to obtain radiation maps.
For example, the environment key frame image is an RGB image including the three color channels R, G and B. The three calibration mapping tables corresponding to R, G and B, namely calibration mapping table 1, calibration mapping table 2 and calibration mapping table 3, are obtained respectively. The R color channel is processed according to calibration mapping table 1 to obtain radiation map a1, the G color channel is processed according to calibration mapping table 2 to obtain radiation map a2, and the B color channel is processed according to calibration mapping table 3 to obtain radiation map a3, so that for one RGB image three radiation maps a1, a2 and a3 are obtained.
Step 203: obtain the shooting positions corresponding to the radiation map and the depth map, and, when a shooting position is not the preset standard position, perform coordinate conversion on the radiation map and the depth image to obtain a new radiation map and a new depth image, taking the new radiation map as the radiation map and the new depth image as the depth image for subsequent processing.
In some embodiments of the present disclosure, the standard position may be set according to the application scenario, for example as the position of the extended reality object to be rendered. In order to ensure the accuracy of subsequent calculations, the radiation maps and depth maps at non-standard positions need to be converted into the image coordinates of the standard position for calculation; that is, for non-standard positions, the new radiation map and the new depth image are used in the subsequent calculation and processing.
Step 204: calculate, based on the depth values and pixel coordinates of the depth image, the normal vector on the three-dimensional object corresponding to each pixel, to obtain a normal vector map.
Specifically, for the depth map, the normal vector on the three-dimensional object corresponding to each pixel is calculated based on the depth values and pixel coordinates, to obtain a normal vector map.
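A minimal sketch of this normal-map computation from the depth values and pixel coordinates, using finite differences of back-projected 3D points; the cross-product construction is an illustrative choice, as the disclosure does not fix a particular formula.

```python
import numpy as np

def normal_map_from_depth(depth, K):
    """Per-pixel normal vectors of the 3D surface reconstructed from a depth
    image, using cross products of neighbouring back-projected points."""
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # back-project every pixel to a 3D point
    X = (u - cx) / fx * depth
    Y = (v - cy) / fy * depth
    pts = np.stack([X, Y, depth], axis=-1)
    # finite differences along the image axes approximate the tangent vectors
    dx = np.zeros_like(pts)
    dx[:, :-1] = pts[:, 1:] - pts[:, :-1]
    dy = np.zeros_like(pts)
    dy[:-1, :] = pts[1:, :] - pts[:-1, :]
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-8)
```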
Step 205: perform a calculation based on the preset intrinsic parameter matrix and the pixel coordinates of the radiation map to obtain the direction vectors of the texture model.
Step 206: determine the pixel radiation value based on the radiation map, and obtain the normal vector based on the normal vector map.
Step 207: perform a calculation based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model, and determine the lighting model based on the illumination intensity and illumination color of each azimuth coordinate of the texture model.
In some embodiments of the present disclosure, an illumination function to be solved is established based on the pixel radiation value, the direction vector, the normal vector, the illumination intensity and the illumination color; the illumination function to be solved is converted to obtain a target illumination function; a target energy function is obtained based on the target illumination function and the discretized azimuth angles; and the target energy function is calculated to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model.
The preset intrinsic parameter matrix may be the camera intrinsic parameter matrix. Using the pixel coordinates of the image and the intrinsic parameter matrix, the direction vector of each point on the texture model can be calculated; the pixel radiation value is determined based on the radiation map, and the normal vector is obtained based on the normal vector map; a calculation is then performed based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model, and the lighting model is determined based on the illumination intensity and illumination color of each azimuth coordinate of the texture model.
Step 208: render the extended reality object to be rendered according to the lighting model.
The extended reality object to be rendered refers to an extended reality object that will be displayed in the extended reality scene, and may be selected and set according to the needs of the application scenario; for example, a chair is selected as the extended reality object to be rendered in the extended reality scene, or a pot of flowers is selected as the extended reality object to be rendered in the extended reality scene.
In some embodiments of the present disclosure, rendering the extended reality object to be rendered based on the lighting model to obtain the extended reality scene can be understood as sending the lighting model, which includes the ambient lighting (illumination intensity and illumination color), to a rendering engine, and the rendering engine renders the extended reality object to be rendered based on the lighting model.
According to the image processing solution provided by the embodiments of the present disclosure, after the extended reality device rotates by a preset angle each time, the environmental scene is photographed to obtain an environment key frame image and a depth image; the color channels corresponding to the environment key frame image are obtained and processed according to each color channel and the corresponding pre-calibrated mapping table to obtain radiation maps; the shooting positions corresponding to the radiation map and the depth map are obtained, and when a shooting position is not the preset standard position, coordinate conversion is performed on the radiation map and the depth image to obtain a new radiation map and a new depth image; the normal vector on the three-dimensional object corresponding to each pixel is calculated based on the depth values and pixel coordinates of the depth image to obtain a normal vector map; a calculation is performed based on the preset intrinsic parameter matrix and the pixel coordinates of the radiation map to obtain the direction vectors of the texture model; the pixel radiation value is determined based on the radiation map, and the normal vector is obtained based on the normal vector map; a calculation is performed based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model, the lighting model is determined based on the illumination intensity and illumination color of each azimuth coordinate of the texture model, and the extended reality object to be rendered is rendered according to the lighting model. With the above technical solution, the illumination of extended reality scenes such as MR can be restored and the materials of the real scene can be recovered, thereby achieving more realistic extended reality scenes; that is, after image samples in different orientations are acquired based on a device such as a head-mounted display device, a good model of the ambient lighting and environmental materials of a scene such as a small room can be recovered, which is friendly to scenarios such as VR/AR and can bring a better sense of immersion to extended reality applications.
Figure 3 is a schematic structural diagram of an image processing apparatus provided by some embodiments of the present disclosure. The apparatus may be implemented by software and/or hardware and may generally be integrated in an electronic device. As shown in Figure 3, the apparatus includes:
an acquisition module 301, configured to acquire an environment key frame image and a corresponding depth image captured by an extended reality device;
a processing module 302, configured to determine a lighting model according to the key frame image and the depth image; and
a rendering module 303, configured to render an extended reality object to be rendered according to the lighting model.
Optionally, the acquisition module 301 is specifically configured to:
after the extended reality device rotates by a preset angle each time, photograph the environmental scene to obtain the environment key frame image and the depth image.
Optionally, the processing module 302 includes:
a first processing unit, configured to process the environment key frame image to obtain a radiation map;
a second processing unit, configured to process the depth image to obtain a normal vector map;
a calculation unit, configured to perform a calculation based on a preset intrinsic parameter matrix and the pixel coordinates of the radiation map to obtain the direction vectors of the texture model;
a determination and acquisition unit, configured to determine a pixel radiation value based on the radiation map, and obtain a normal vector based on the normal vector map;
a calculation unit, configured to perform a calculation based on the pixel radiation value, the direction vector and the normal vector to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model; and
a determination unit, configured to determine the lighting model based on the illumination intensity and illumination color of each azimuth coordinate of the texture model.
Optionally, the processing module 302 further includes:
an acquisition unit, configured to obtain the shooting positions corresponding to the radiation map and the depth map; and
a conversion unit, configured to, when a shooting position is not the preset standard position, perform coordinate conversion on the radiation map and the depth image to obtain a new radiation map and a new depth image, and take the new radiation map as the radiation map and the new depth image as the depth image for subsequent processing.
Optionally, the first processing unit is specifically configured to:
obtain the color channels corresponding to the environment key frame image, and process them according to the calibration mapping table corresponding to each color channel, to obtain the radiation map.
Optionally, the second processing unit is specifically configured to:
calculate, based on the depth values and pixel coordinates of the depth image, the normal vector on the three-dimensional object corresponding to each pixel, to obtain the normal vector map.
Optionally, the calculation unit is specifically configured to:
establish an illumination function to be solved based on the pixel radiation value, the direction vector, the normal vector, the illumination intensity and the illumination color;
convert the illumination function to be solved to obtain a target illumination function;
obtain a target energy function based on the target illumination function and the discretized azimuth angles; and
calculate the target energy function to obtain the illumination intensity and illumination color of each azimuth coordinate of the texture model.
The image processing apparatus provided by the embodiments of the present disclosure can execute the image processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method.
It is worth noting that, in the above embodiments of the image processing apparatus, the included modules are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be realized; in addition, the specific names of the functional modules are only for the convenience of distinguishing them from one another and are not used to limit the protection scope of the present disclosure. The above modules may be implemented as software components executed on one or more general-purpose processors, or as hardware that performs certain functions, such as programmable logic devices and/or application-specific integrated circuits. In some embodiments, these modules may be embodied in the form of a software product that may be stored in a non-volatile storage medium, which enables a computer device (such as a personal computer, a server, a network device or a mobile terminal) to execute the methods described in the embodiments of the present disclosure. In some embodiments, the above modules may also be implemented on a single device or distributed over multiple devices, and the functions of these modules may be combined with each other or further split into multiple sub-modules.
Some embodiments of the present disclosure further provide a computer program product, including a computer program/instruction which, when executed by a processor, implements the image processing method provided by any embodiment of the present disclosure.
Some embodiments of the present disclosure further provide a computer program/instruction which, when executed by a processor, implements the image processing method provided by any embodiment of the present disclosure.
Some embodiments of the present disclosure further provide an electronic device, which may include a processing apparatus and a storage apparatus, where the storage apparatus may be used to store executable instructions, and the processing apparatus may be used to read the executable instructions from the storage apparatus and execute the executable instructions to implement the image processing method in the above embodiments.
FIG. 4 is a schematic structural diagram of an electronic device provided by some embodiments of the present disclosure. Referring to FIG. 4, it shows a schematic structural diagram of an electronic device 400 suitable for implementing the embodiments of the present disclosure. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable multimedia players (PMP) and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), and fixed terminals such as digital televisions (TV) and desktop computers. The electronic device shown in FIG. 4 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 4, the electronic device 400 may include a processing apparatus (for example, a central processing unit, a graphics processing unit, etc.) 401, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage apparatus 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400. The processing apparatus 401, the ROM 402 and the RAM 403 are connected to one another through a bus 404, and an input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following apparatuses may be connected to the I/O interface 405: an input apparatus 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 409. The communication apparatus 409 may allow the electronic device 400 to perform wireless or wired communication with other devices to exchange data. Although FIG. 4 shows the electronic device 400 with various apparatuses, it should be understood that it is not required to implement or provide all of the illustrated apparatuses; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 409, or installed from the storage apparatus 408, or installed from the ROM 402. When the computer program is executed by the processing apparatus 401, the above functions defined in the image processing method of the embodiments of the present disclosure are performed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to electrical wires, optical cables, radio frequency (RF), etc., or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as the HyperText Transfer Protocol (HTTP), and may be interconnected with digital data communication (for example, a communication network) in any form or medium. Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (for example, the Internet) and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire an environment key frame image captured by an extended reality device and a corresponding depth image; determine an illumination model according to the environment key frame image and the depth image; and render an extended reality object to be rendered according to the illumination model.
In some embodiments of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment or a portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and the name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared or semiconductor systems, apparatuses or devices, or any suitable combination of the above. More specific examples of the machine-readable storage medium include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, including:
acquiring an environment key frame image captured by an extended reality device and a corresponding depth image;
determining an illumination model according to the environment key frame image and the depth image;
rendering an extended reality object to be rendered according to the illumination model.
According to one or more embodiments of the present disclosure, in the image processing method provided by the present disclosure, acquiring the environment key frame image captured by the extended reality device and the corresponding depth image includes:
photographing the environment scene each time the extended reality device rotates by a preset angle, to obtain the environment key frame image and the depth image.
According to one or more embodiments of the present disclosure, in the image processing method provided by the present disclosure, determining the illumination model according to the environment key frame image and the depth image includes:
processing the environment key frame image to obtain a radiation map, and processing the depth image to obtain a normal vector map;
performing calculation based on a preset intrinsic parameter matrix and pixel coordinates of the radiation map to obtain direction vectors of the texture map model;
determining pixel radiation values based on the radiation map, and obtaining normal vectors based on the normal vector map;
performing calculation based on the pixel radiation values, the direction vectors and the normal vectors to obtain light intensity and light color at each azimuth coordinate of the texture map model;
determining the illumination model based on the light intensity and light color at each azimuth coordinate of the texture map model.
According to one or more embodiments of the present disclosure, in the image processing method provided by the present disclosure, before processing the depth image to obtain the normal vector map, the image processing method further includes:
acquiring a capture position corresponding to the radiation map and the depth image;
when the capture position is not a preset standard position, performing coordinate transformation on the radiation map and the depth image to obtain a new radiation map and a new depth image, and using the new radiation map as the radiation map and the new depth image as the depth image.
According to one or more embodiments of the present disclosure, in the image processing method provided by the present disclosure, processing the environment key frame image to obtain the radiation map includes:
obtaining color channels of the environment key frame image, and processing each color channel according to its corresponding calibration mapping table to obtain the radiation map.
According to one or more embodiments of the present disclosure, in the image processing method provided by the present disclosure, processing the depth image to obtain the normal vector map includes:
calculating, based on depth values and pixel coordinates of the depth image, the normal vector on the three-dimensional object corresponding to each pixel, to obtain the normal vector map.
According to one or more embodiments of the present disclosure, in the image processing method provided by the present disclosure, performing calculation based on the pixel radiation values, the direction vectors and the normal vectors to obtain the light intensity and light color at each azimuth coordinate of the texture map model includes:
establishing an illumination function to be solved based on the pixel radiation values, the direction vectors, the normal vectors, the light intensity and the light color;
transforming the illumination function to be solved to obtain a target illumination function;
obtaining a target energy function based on the target illumination function and the discretized azimuth angles;
solving the target energy function to obtain the light intensity and light color at each azimuth coordinate of the texture map model.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing apparatus, including:
an acquisition module, configured to acquire an environment key frame image captured by an extended reality device and a corresponding depth image;
a processing module, configured to determine an illumination model according to the environment key frame image and the depth image;
a rendering module, configured to render an extended reality object to be rendered according to the illumination model.
According to one or more embodiments of the present disclosure, in the image processing apparatus provided by the present disclosure, the acquisition module is specifically configured to:
photograph the environment scene each time the extended reality device rotates by a preset angle, to obtain the environment key frame image and the depth image.
According to one or more embodiments of the present disclosure, in the image processing apparatus provided by the present disclosure, the processing module includes:
a first processing unit, configured to process the environment key frame image to obtain a radiation map;
a second processing unit, configured to process the depth image to obtain a normal vector map;
a calculation unit, configured to perform calculation based on a preset intrinsic parameter matrix and pixel coordinates of the radiation map to obtain direction vectors of the texture map model;
a determining and acquiring unit, configured to determine pixel radiation values based on the radiation map and obtain normal vectors based on the normal vector map;
a calculation unit, configured to perform calculation based on the pixel radiation values, the direction vectors and the normal vectors to obtain the light intensity and light color at each azimuth coordinate of the texture map model;
a determining unit, configured to determine the illumination model based on the light intensity and light color at each azimuth coordinate of the texture map model.
According to one or more embodiments of the present disclosure, in the image processing apparatus provided by the present disclosure, the processing module further includes:
an acquisition unit, configured to acquire a capture position corresponding to the radiation map and the depth image;
a transformation unit, configured to, when the capture position is not a preset standard position, perform coordinate transformation on the radiation map and the depth image to obtain a new radiation map and a new depth image, and use the new radiation map as the radiation map and the new depth image as the depth image.
According to one or more embodiments of the present disclosure, in the image processing apparatus provided by the present disclosure, the first processing unit is specifically configured to:
obtain the color channels of the environment key frame image, and process each color channel according to its corresponding calibration mapping table to obtain the radiation map.
According to one or more embodiments of the present disclosure, in the image processing apparatus provided by the present disclosure, the second processing unit is specifically configured to:
calculate, based on the depth values and pixel coordinates of the depth image, the normal vector on the three-dimensional object corresponding to each pixel, to obtain the normal vector map.
According to one or more embodiments of the present disclosure, in the image processing apparatus provided by the present disclosure, the calculation unit is specifically configured to:
establish an illumination function to be solved based on the pixel radiation values, the direction vectors, the normal vectors, the light intensity and the light color;
transform the illumination function to be solved to obtain a target illumination function;
obtain a target energy function based on the target illumination function and the discretized azimuth angles;
solve the target energy function to obtain the light intensity and light color at each azimuth coordinate of the texture map model.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, including:
a processor;
a memory configured to store instructions executable by the processor;
the processor being configured to read the executable instructions from the memory and execute the instructions to implement any of the image processing methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium, the storage medium storing a computer program, and the computer program being used to perform any of the image processing methods provided by the present disclosure.
The above description is only a description of preferred embodiments of the present disclosure and the technical principles employed. Those skilled in the art should understand that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above; rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (14)

  1. An image processing method, comprising:
    acquiring an environment key frame image captured by an extended reality device and a corresponding depth image;
    determining an illumination model according to the environment key frame image and the depth image;
    rendering an extended reality object to be rendered according to the illumination model.
  2. The image processing method according to claim 1, wherein acquiring the environment key frame image captured by the extended reality device and the corresponding depth image comprises:
    photographing an environment scene each time the extended reality device rotates by a preset angle, to obtain the environment key frame image and the depth image.
  3. The image processing method according to any one of claims 1-2, wherein determining the illumination model according to the environment key frame image and the depth image comprises:
    processing the environment key frame image to obtain a radiation map, and processing the depth image to obtain a normal vector map;
    performing calculation based on a preset intrinsic parameter matrix and pixel coordinates of the radiation map to obtain direction vectors of a texture map model;
    determining pixel radiation values based on the radiation map, and obtaining normal vectors based on the normal vector map;
    performing calculation based on the pixel radiation values, the direction vectors and the normal vectors to obtain light intensity and light color at each azimuth coordinate of the texture map model;
    determining the illumination model based on the light intensity and light color at each azimuth coordinate of the texture map model.
  4. The image processing method according to claim 3, wherein before processing the depth image to obtain the normal vector map, the image processing method further comprises:
    acquiring a capture position corresponding to the radiation map and the depth image;
    when the capture position is not a preset standard position, performing coordinate transformation on the radiation map and the depth image to obtain a new radiation map and a new depth image, and using the new radiation map as the radiation map and the new depth image as the depth image.
  5. The image processing method according to any one of claims 3-4, wherein processing the environment key frame image to obtain the radiation map comprises:
    obtaining color channels of the environment key frame image, and processing each color channel according to its corresponding calibration mapping table to obtain the radiation map.
  6. The image processing method according to any one of claims 3-5, wherein processing the depth image to obtain the normal vector map comprises:
    calculating, based on depth values and pixel coordinates of the depth image, the normal vector on the three-dimensional object corresponding to each pixel, to obtain the normal vector map.
  7. The image processing method according to any one of claims 3-6, wherein performing calculation based on the pixel radiation values, the direction vectors and the normal vectors to obtain the light intensity and light color at each azimuth coordinate of the texture map model comprises:
    establishing an illumination function to be solved based on the pixel radiation values, the direction vectors, the normal vectors, the light intensity and the light color;
    transforming the illumination function to be solved to obtain a target illumination function;
    obtaining a target energy function based on the target illumination function and discretized azimuth angles;
    solving the target energy function to obtain the light intensity and light color at each azimuth coordinate of the texture map model.
  8. An image processing apparatus, comprising:
    an acquisition module, configured to acquire an environment key frame image captured by an extended reality device and a corresponding depth image;
    a processing module, configured to determine an illumination model according to the environment key frame image and the depth image;
    a rendering module, configured to render an extended reality object to be rendered according to the illumination model.
  9. The image processing apparatus according to claim 8, wherein the acquisition module is further configured to:
    photograph an environment scene each time the extended reality device rotates by a preset angle, to obtain the environment key frame image and the depth image.
  10. The image processing apparatus according to any one of claims 8-9, wherein the processing module comprises:
    a first processing unit, configured to process the environment key frame image to obtain a radiation map;
    a second processing unit, configured to process the depth image to obtain a normal vector map;
    a calculation unit, configured to perform calculation based on a preset intrinsic parameter matrix and pixel coordinates of the radiation map to obtain direction vectors of the texture map model;
    a determining and acquiring unit, configured to determine pixel radiation values based on the radiation map and obtain normal vectors based on the normal vector map;
    a calculation unit, configured to perform calculation based on the pixel radiation values, the direction vectors and the normal vectors to obtain light intensity and light color at each azimuth coordinate of the texture map model;
    a determining unit, configured to determine the illumination model based on the light intensity and light color at each azimuth coordinate of the texture map model.
  11. The image processing apparatus according to claim 10, wherein before the second processing unit is used to process the depth image to obtain the normal vector map, the processing module further comprises:
    an acquisition unit, configured to acquire a capture position corresponding to the radiation map and the depth image;
    a transformation unit, configured to, when the capture position is not a preset standard position, perform coordinate transformation on the radiation map and the depth image to obtain a new radiation map and a new depth image, and use the new radiation map as the radiation map and the new depth image as the depth image for the subsequent processing.
  12. An electronic device, wherein the electronic device comprises:
    a processor;
    a memory configured to store instructions executable by the processor;
    the processor being configured to read the executable instructions from the memory and execute the instructions to implement the image processing method according to any one of claims 1-7.
  13. A computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program is used to perform the image processing method according to any one of claims 1-7.
  14. A computer program product, comprising: a computer program/instructions which, when executed by a processor, implements the image processing method according to any one of claims 1-7.
PCT/CN2023/115438 2022-09-15 2023-08-29 一种图像处理方法、装置、设备及介质 WO2024055837A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211122991.1 2022-09-15
CN202211122991.1A CN117745928A (zh) 2022-09-15 2022-09-15 一种图像处理方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2024055837A1 true WO2024055837A1 (zh) 2024-03-21

Family

ID=90274209

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/115438 WO2024055837A1 (zh) 2022-09-15 2023-08-29 一种图像处理方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN117745928A (zh)
WO (1) WO2024055837A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050012757A1 (en) * 2003-07-14 2005-01-20 Samsung Electronics Co., Ltd. Image-based rendering and editing method and apparatus
CN105825544A (zh) * 2015-11-25 2016-08-03 维沃移动通信有限公司 一种图像处理方法及移动终端
CN111402392A (zh) * 2020-01-06 2020-07-10 香港光云科技有限公司 光照模型计算方法、材质参数的处理方法及其处理装置
CN112233220A (zh) * 2020-10-15 2021-01-15 洛阳众智软件科技股份有限公司 基于OpenSceneGraph的体积光生成方法、装置、设备和存储介质
CN114782613A (zh) * 2022-04-29 2022-07-22 北京字跳网络技术有限公司 图像渲染方法、装置、设备及存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117994477A (zh) * 2024-04-02 2024-05-07 虚拟现实(深圳)智能科技有限公司 Xr扩展现实场景的实现方法、装置、设备及存储介质
CN117994477B (zh) * 2024-04-02 2024-06-11 虚拟现实(深圳)智能科技有限公司 Xr扩展现实场景的实现方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN117745928A (zh) 2024-03-22
