WO2023026543A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2023026543A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
unit
information processing
component
Prior art date
Application number
PCT/JP2022/011543
Other languages
English (en)
Japanese (ja)
Inventor
久之 館野
Original Assignee
ソニーグループ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニーグループ株式会社
Publication of WO2023026543A1

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • G03B15/02Illuminating scene
    • G03B15/03Combinations of cameras with lighting apparatus; Flash units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing

Definitions

  • the present disclosure relates to an information processing device, an information processing method, and a program.
  • the present disclosure proposes an information processing device, an information processing method, and a program that enable high-quality video shooting.
  • an information processing apparatus according to the present disclosure includes an acquisition unit that acquires an IR image that is a captured image obtained by irradiating an object with infrared light and that includes a visible light component and an IR component; an extraction unit that extracts IR component information from the IR image; and an image processing unit that performs image processing related to luminance or brightness on the captured image of the target based on the IR component information.
  • FIG. 1 is a diagram showing how a visible light is used for moving image shooting.
  • FIG. 2 is a diagram showing how a moving image is captured using an IR light.
  • FIG. 3 is a diagram showing an overview of image processing according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram showing a configuration example of an information processing device according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram showing an example of an infrared illumination unit.
  • FIG. 6 is a diagram showing frequency characteristics of a filter included in an imaging unit.
  • FIG. 7 is a diagram for explaining basic method 1.
  • FIG. 8 is a flowchart showing image output processing for realizing basic method 1.
  • FIG. 9 is a diagram for explaining basic method 2.
  • FIG. 10 is a flowchart showing image output processing for realizing basic method 2.
  • FIG. 11 and FIG. 12 are diagrams for explaining the advanced method.
  • FIG. 13 is a flowchart showing image output processing for realizing the advanced method.
  • FIG. 14 is a flowchart showing the estimation process.
  • FIG. 15 is a diagram showing an example of a photography studio of the live-action volumetric photography system of this embodiment.
  • FIG. 16 is a diagram showing a processing example of the information processing device 10 in the live-action volumetric imaging system.
  • FIG. 17 is a diagram showing a state in which a plurality of visible light lights and a plurality of IR lights are arranged omnidirectionally.
  • FIG. 1 is a diagram showing how a visible light is used for moving image shooting.
  • in the example of FIG. 1, a ring-shaped visible light LED (Light Emitting Diode) light is used for illumination.
  • the impression of the image changes depending on how the lighting is applied, so it is difficult to decide how to apply the lighting.
  • if the user wears glasses, there is a problem that the light is reflected on the glasses when the user is illuminated from the front.
  • FIG. 2 is a diagram showing how moving images are captured using an IR light. Then, the information processing apparatus of the present embodiment performs image processing on the captured image based on the infrared light irradiation information as if the subject were irradiated with visible light. This realizes simple relighting that is effective for close-up scenes.
  • FIG. 3 is a diagram showing an overview of image processing according to this embodiment.
  • An information processing apparatus obtains an IR image obtained by irradiating an object (eg, a user and surrounding objects) with infrared light.
  • An IR image is a captured image containing a visible light component and an IR component obtained by irradiating an object with infrared light.
  • the information processing device performs image processing related to luminance or brightness on the captured image of the target based on the IR component information extracted from the IR image.
  • as a result, the user can shoot moving images with stable lighting without being dazzled and without complicated lighting adjustments.
  • IR ring light: for infrared light irradiation, it is desirable to use an IR ring light in which infrared light emitting elements are arranged in a ring around the lens.
  • a polarizing filter may be used to prevent reflection of light on the glasses.
  • the information processing device 10 is a computer used by the user for video shooting.
  • the information processing device 10 is typically a personal computer, but is not limited to a personal computer.
  • the information processing device 10 may be a mobile terminal such as a mobile phone, a smart device (smartphone or tablet), a PDA (Personal Digital Assistant), or a notebook PC.
  • the information processing device 10 may be a wearable device such as a smart watch.
  • the information processing apparatus 10 may also be an xR device such as an AR (Augmented Reality) device, a VR (Virtual Reality) device, or an MR (Mixed Reality) device.
  • the xR device may be a glasses-type device such as AR glasses or MR glasses, or a head-mounted device such as a VR head-mounted display.
  • the information processing device 10 may also be a portable IoT (Internet of Things) device. Also, the information processing apparatus 10 may be a motorcycle, a mobile relay vehicle, or the like equipped with a communication device such as an FPU (Field Pickup Unit). Further, the information processing device 10 may be a server device such as a PC server, a midrange server, or a mainframe server. In addition, the information processing apparatus 10 can employ any form of computer.
  • FIG. 4 is a diagram showing a configuration example of the information processing device 10 according to the embodiment of the present disclosure.
  • the information processing apparatus 10 includes a communication section 11 , a storage section 12 , a control section 13 , an output section 14 , an infrared illumination section 15 , a synchronization signal generation section 16 and an imaging section 17 .
  • the configuration shown in FIG. 4 is a functional configuration, and the hardware configuration may differ from this. Also, the functions of the information processing apparatus 10 may be distributed and implemented in a plurality of physically separated configurations.
  • the communication unit 11 is a communication interface for communicating with other devices.
  • the communication unit 11 is a LAN (Local Area Network) interface such as a NIC (Network Interface Card).
  • the communication unit 11 may be a device connection interface such as USB (Universal Serial Bus).
  • the communication unit 11 may be a wired interface or a wireless interface.
  • the communication unit 11 communicates with an external device under the control of the control unit 13 .
  • the storage unit 12 is a data readable/writable storage device such as a DRAM (Dynamic Random Access Memory), an SRAM (Static Random Access Memory), a flash memory, a hard disk, or the like.
  • the storage unit 12 functions as storage means of the information processing device 10 .
  • the storage unit 12 functions as a frame buffer for moving images captured by the imaging unit 17 .
  • the control unit 13 is a controller that controls each unit of the information processing device 10 .
  • the control unit 13 is implemented by a processor such as a CPU (Central Processing Unit), MPU (Micro Processing Unit), GPU (Graphics Processing Unit), or the like.
  • the control unit 13 is implemented by the processor executing various programs stored in the storage device inside the information processing apparatus 10 using a RAM (Random Access Memory) or the like as a work area.
  • the control unit 13 may be realized by an integrated circuit such as ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array).
  • the control unit 13 includes an acquisition unit 131 , an extraction unit 132 , an image processing unit 133 , an output control unit 134 , a learning unit 135 and an estimation unit 136 .
  • Each block (obtaining unit 131 to estimating unit 136) constituting the control unit 13 is a functional block indicating the function of the control unit 13.
  • these functional blocks may be software blocks or hardware blocks.
  • each of the functional blocks described above may be one software module realized by software (including microprograms), or may be one circuit block on a semiconductor chip (die). Of course, each functional block may be one processor or one integrated circuit.
  • the control unit 13 may be configured in functional units different from the functional blocks described above; the method of configuring the functional blocks is arbitrary. Also, some or all of the blocks (acquisition unit 131 to estimation unit 136) that make up the control unit 13 may be operated by another device. The operation of each block constituting the control unit 13 will be described later.
  • the output unit 14 is a device that performs various outputs such as sound, light, vibration, and images to the outside.
  • the output unit 14 performs various outputs to the user under the control of the control unit 13 .
  • the output unit 14 includes a display device (display unit) that displays various types of information.
  • the display device is, for example, a liquid crystal display or an organic EL display.
  • the output unit 14 may be a touch panel display device. In this case, the output section 14 also functions as an input section.
  • the infrared illumination unit 15 is an IR light (IR illumination light source) that outputs invisible infrared light.
  • the upper limit of the wavelength of light that can be perceived by the human eye is 760-830 nm, and IR illumination light sources on the market mainly use wavelengths such as 850 nm or 940 nm. Therefore, the infrared illuminator 15 is typically an IR light that outputs infrared light with a wavelength of 850 nm or 940 nm. However, the infrared illuminator 15 is not limited to an IR light that outputs infrared light with a wavelength of 850 nm or 940 nm.
  • the infrared illuminator 15 may be capable of outputting infrared light of other wavelengths.
  • FIG. 5 is a diagram showing an example of the infrared illuminator 15. It is desirable that the infrared illuminator 15 be a ring light in order to illuminate the face clearly.
  • the infrared illumination unit 15 is an IR light in which IR light emitting elements are arranged in a ring shape around a lens.
  • the synchronizing signal generating unit 16 is a synchronizing signal generator that generates a synchronizing signal for synchronizing the blinking period of the infrared illumination unit 15 and the frame period of the video (moving image) captured by the imaging unit 17 .
  • the synchronizing signal generator 16 outputs a synchronizing signal under the control of the control unit 13 .
  • the imaging unit 17 is a conversion unit that converts an optical image into an electrical signal.
  • the imaging unit 17 includes, for example, an image sensor and a signal processing circuit that processes analog pixel signals output from the image sensor, and converts light entering from the lens into digital data (image data).
  • An image captured by the imaging unit 17 is not limited to a video (moving image), and may be a still image. Note that the imaging unit can be rephrased as a camera.
  • the imaging unit 17 of this embodiment is a camera (hereinafter also referred to as an IR camera) that can simultaneously acquire visible light and infrared light (IR light).
  • An IR camera can be realized by removing the IR cut filter normally included in commercially available cameras.
  • FIG. 6 is a diagram showing frequency characteristics of a filter included in the imaging unit 17. In the example of FIG. 6, the imaging unit 17 is configured to detect infrared light with a wavelength of 850 nm. However, if the infrared illumination unit 15 is a light source that outputs infrared light with a wavelength of 940 nm, the imaging unit 17 may be configured to detect infrared light with a wavelength of 940 nm.
  • FIG. 7 is a diagram for explaining basic method 1.
  • an outline of basic method 1 will be described below with reference to FIG. 7.
  • the information processing device 10 operates the infrared illumination unit 15 and the imaging unit 17 according to the user's operation. At this time, the information processing apparatus 10 blinks the infrared illuminator 15 while synchronizing with the image (moving image) captured by the imaging unit 17 . For example, the information processing device 10 blinks the infrared illumination unit 15 while synchronizing with the frame period of the video (moving image) captured by the imaging unit 17 . Thereby, the information processing apparatus 10 can acquire a visible light image and an IR image in a time division manner in synchronization with the blinking cycle of infrared light.
  • the information processing apparatus 10 can acquire the image of the frame when the infrared light is not irradiated as the visible light image and the image of the frame when the infrared light is irradiated as the IR image.
  • that is, the frame captured while the IR light is OFF is the visible light image, and the frame captured while the IR light is ON is the IR image.
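  • as a concrete illustration (not taken from the publication), the alternating frame stream can be split in software; the sketch below assumes Python, that frames arrive in capture order, and that the first captured frame is an IR-OFF (visible light) frame.
```python
# Illustrative sketch: split an ordered frame stream into visible light frames
# (IR light OFF) and IR frames (IR light ON). The parity assumption (first
# frame is IR-OFF) is hypothetical and must match the synchronization signal.
def split_frames(frames, ir_on_first=False):
    visible_frames, ir_frames = [], []
    for index, frame in enumerate(frames):
        ir_was_on = (index % 2 == 0) == ir_on_first
        (ir_frames if ir_was_on else visible_frames).append(frame)
    return visible_frames, ir_frames
```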
  • an IR image is a captured image containing a visible light component and an IR component obtained by irradiating an object (the user and surrounding objects in the example of FIG. 7) with infrared light.
  • the information processing device 10 extracts information of the IR component (hereinafter referred to as IR component information) from the IR image.
  • IR component information indicates from which direction the infrared light from the infrared illumination unit 15 hits the object.
  • the information processing device 10 may acquire the difference between the visible light image and the IR image as IR component information.
  • specifically, the information processing apparatus 10 acquires, as IR component information, the difference between the images of two consecutive frames (a visible light image and an IR image) starting from the frame at the timing when infrared light is not irradiated (IR light OFF frame).
  • by taking the difference, the information processing device 10 can exclude, from the IR component information, the IR component of light that is always present, such as the light of a room light (for example, a fluorescent lamp).
  • the information processing device 10 performs image processing related to luminance or brightness on the captured image based on the IR component information. For example, based on the IR component information, the information processing device 10 performs image processing on the image of the frame (visible light image) following the two continuous frames (visible light image and IR image) used to extract the IR component information. In the example of FIG. 7, the information processing apparatus 10 rewrites the luminance (L) information in the HSL color space of the visible light image based on the IR component information. More specifically, the information processing device 10 converts the visible light image from RGB to HSL, and maps the intensity of the IR component to the luminance (L) of the visible light image in the HSL color space.
  • the mapping may be a complete replacement or blending with the original luminance.
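  • a minimal sketch of this step is shown below, assuming Python with OpenCV and NumPy; the function names, the use of BGR input, and the blend parameter are illustrative assumptions, not the publication's implementation (OpenCV's HLS conversion stores the channels in H, L, S order).
```python
import cv2
import numpy as np

def extract_ir_component(visible_bgr, ir_bgr):
    """IR component information: per-pixel difference between the IR-ON frame
    and the adjacent IR-OFF (visible light) frame; steady ambient light cancels."""
    visible_gray = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2GRAY)
    ir_gray = cv2.cvtColor(ir_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.subtract(ir_gray, visible_gray)  # saturates at 0, stays uint8

def apply_ir_relight(target_bgr, ir_component, blend=0.5):
    """Rewrite the L channel of a visible light frame in the HLS color space,
    blending in the IR intensity (blend=1.0 replaces the luminance outright)."""
    hls = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2HLS).astype(np.float32)
    hls[:, :, 1] = (1.0 - blend) * hls[:, :, 1] + blend * ir_component
    hls = np.clip(hls, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hls, cv2.COLOR_HLS2BGR)
```
  • in basic method 1, the pair (visible light image, IR image) would yield the IR component, and apply_ir_relight would then be called on the visible light frame that follows the pair.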
  • the color space used in the visible light image (input image) is RGB, but the color space used in the visible light image (input image) is not limited to RGB.
  • the color space used in the visible light image (input image) may be a color space other than RGB, such as YUV.
  • the YUV color space is a color space that expresses colors with luminance (Y) and color difference components (U, V).
  • the color space used for the visible light image (input image) can be appropriately changed according to the color space of the image output by the camera. Note that if the color space used in the visible light image (input image) is a color space having an axis capable of mapping IR components such as brightness and lightness, this color space conversion step can be omitted.
  • the information processing device 10 may blur the edge of the IR component.
  • for example, the information processing apparatus 10 performs edge blurring processing on the difference image serving as the IR component information, and performs image processing on the image of the frame following the two continuous frames based on the edge-blurred difference image. Thereby, the information processing apparatus 10 can generate an image with little discomfort even in a scene with motion.
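  • a short sketch of the edge blurring step follows, assuming the same OpenCV setting as above; the kernel size is an arbitrary illustrative value.
```python
import cv2

def blur_ir_edges(ir_component, kernel_size=15):
    """Soften the edges of the IR difference image so that slight subject motion
    between the two frames does not leave hard relighting seams."""
    return cv2.GaussianBlur(ir_component, (kernel_size, kernel_size), 0)
```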
  • the information processing apparatus 10 may correct the IR component information based on motion prediction between frames, and perform image processing on the next frame image of two continuous frames based on the corrected IR component information.
  • the information processing device 10 may acquire the optical flow between adjacent frames of the visible light image, transform the IR component, and map it. This also makes it possible to generate an image with little sense of incongruity.
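  • the motion-compensated variant might look like the following sketch, again an assumption built on OpenCV's dense Farneback optical flow rather than the publication's actual method.
```python
import cv2
import numpy as np

def warp_ir_component(ir_component, aligned_visible_bgr, target_visible_bgr):
    """Warp the IR component from the frame it was extracted against into the
    coordinates of the target visible frame using dense optical flow."""
    src_gray = cv2.cvtColor(aligned_visible_bgr, cv2.COLOR_BGR2GRAY)
    dst_gray = cv2.cvtColor(target_visible_bgr, cv2.COLOR_BGR2GRAY)
    # Backward flow: for each target pixel, where it came from in the source frame.
    flow = cv2.calcOpticalFlowFarneback(dst_gray, src_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = ir_component.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(ir_component, map_x, map_y, cv2.INTER_LINEAR)
```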
  • the information processing device 10 converts the image from HSL to RGB and outputs it to the output unit 14 .
  • the information processing device 10 rewrites the luminance (L) information of the captured image in the HSL color space based on the IR component information.
  • the information processing apparatus 10 may rewrite the brightness (V) information of the captured image in the HSV color space based on the IR component information.
  • the information processing device 10 may rewrite the luminance (Y) information of the captured image in the YCoCg color space based on the IR component information.
  • the information processing device 10 may rewrite the values in the RGB color space based on the IR component information.
  • the color space used by the information processing apparatus 10 for image processing is not limited to the color space described above.
  • the color space of the final output image is not limited to RGB depending on the application, and may be YUV, for example.
  • FIG. 8 is a flowchart showing image output processing for realizing basic method 1.
  • the following processing is executed by the control unit 13 of the information processing device 10 .
  • the control unit 13 starts image output processing when the user starts imaging (for example, a video conference).
  • the control unit 13 activates the imaging unit 17 (step S101).
  • the imaging unit 17 is an IR camera that can simultaneously acquire visible light and infrared light (IR light).
  • the control unit 13 blinks the infrared illumination unit 15 while synchronizing with the frame cycle of the video (moving image) captured by the imaging unit 17 (step S102).
  • the infrared illuminator 15 is an IR light that outputs invisible infrared light.
  • the acquisition unit 131 of the information processing device 10 acquires the image captured by the imaging unit 17 . Since the infrared light is blinking in synchronization with the frame period of the video, the acquisition unit 131 alternately acquires the visible light image and the IR image (step S103).
  • the IR image is a captured image including not only the IR component under the influence of the infrared light emitted by the infrared illuminator 15 but also the visible light component.
  • the extraction unit 132 of the information processing device 10 extracts IR component information from the IR image (step S104). Specifically, the extraction unit 132 acquires the difference between the visible light image and the IR image as IR component information. In basic method 1, the extraction unit 132 acquires, as IR component information, the difference between two consecutive frames of images (a visible light image and an IR image) starting from the frame at which the IR light is turned off.
  • the image processing unit 133 of the information processing device 10 performs image processing related to luminance or brightness on the captured image based on the IR component information (step S105). For example, based on the IR component information, the information processing device 10 performs image processing on the image of the frame (visible light image) following the two continuous frames (visible light image and IR image) used to extract the IR component information. For example, the image processing unit 133 rewrites the luminance information of the visible light image based on the IR component information.
  • the output control unit 134 of the information processing device 10 outputs the captured image subjected to the image processing to the output unit 14 (step S106).
  • the control unit 13 of the information processing device 10 determines whether or not the shooting has ended (step S107). If the shooting has not ended (step S107: No), the control unit 13 returns the process to step S103. If the shooting has ended (step S107: Yes), the control unit 13 stops the operations of the imaging unit 17 and the infrared illumination unit 15 (step S108). When the operations of the imaging unit 17 and the infrared illumination unit 15 are stopped, the control unit 13 ends the image output processing.
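  • putting the flowchart together, a skeleton of the loop might look like the sketch below; the camera, ir_light, and display objects and their methods are hypothetical placeholders for the imaging unit 17, infrared illumination unit 15, and output unit 14, and the helper functions are the ones sketched earlier.
```python
def run_basic_method_1(camera, ir_light, display):
    camera.start()                             # S101: activate the imaging unit
    ir_light.blink_synchronized(camera)        # S102: blink the IR light per frame period
    previous_visible = None
    pending_ir_info = None
    ir_was_on = False                          # assume the first frame is an IR-OFF frame
    try:
        while not camera.finished():           # S107: loop until shooting ends
            frame = camera.read_frame()        # S103: frames alternate visible / IR
            if ir_was_on:
                # S104: IR component from the (visible, IR) pair
                pending_ir_info = extract_ir_component(previous_visible, frame)
            else:
                if pending_ir_info is not None:
                    # S105-S106: relight the next visible frame and output it
                    display.show(apply_ir_relight(frame, pending_ir_info))
                previous_visible = frame
            ir_was_on = not ir_was_on
    finally:
        camera.stop()                          # S108: stop imaging and illumination
        ir_light.off()
```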
  • as described above, in basic method 1, the information processing apparatus 10 performs image processing so that an object (for example, a user) appears as if it were illuminated with visible light, based on the irradiation information (that is, IR component information) of invisible infrared light. As a result, the user can shoot moving images with stable lighting without being dazzled.
  • Basic method 2: in basic method 1, the information processing apparatus 10 performs image processing on the image of the frame (visible light image) following the two continuous frames (visible light image and IR image). However, in a scene with motion, this method may result in an unnatural image after image processing. Therefore, in basic method 2, one of the frames used for generating the difference image is used as the frame to be subjected to image processing, so that the image does not look unnatural even in a scene with motion.
  • FIG. 9 is a diagram for explaining basic method 2. The outline of basic method 2 will be described below with reference to FIG. 9.
  • the information processing device 10 operates the infrared illumination unit 15 and the imaging unit 17 according to the user's operation. At this time, the information processing apparatus 10 blinks the infrared illuminator 15 while synchronizing with the image (moving image) captured by the imaging unit 17 . For example, the information processing device 10 blinks the infrared illumination unit 15 while synchronizing with the frame period of the video (moving image) captured by the imaging unit 17 . Thereby, the information processing apparatus 10 can acquire a visible light image and an IR image in a time division manner in synchronization with the blinking cycle of infrared light.
  • the information processing device 10 extracts IR component information from the IR image. At this time, the information processing device 10 acquires the difference between the visible light image and the IR image as IR component information.
  • specifically, the information processing apparatus 10 acquires, as IR component information, the difference between the images of two consecutive frames (an IR image and a visible light image) starting from the frame at the timing when the infrared light is irradiated (IR light ON frame).
  • the information processing device 10 performs image processing related to luminance or brightness on the captured image based on the IR component information. For example, based on the IR component information, the information processing device 10 performs image processing on the image of the last frame (visible light image) of the two consecutive frames (IR image and visible light image) used to extract the IR component information. In the example of FIG. 9, the information processing apparatus 10 rewrites the luminance (L) information in the HSL color space of the visible light image based on the IR component information.
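  • reusing the helpers sketched for basic method 1, the only change in basic method 2 is which frames form the pair and which frame is relit; the sketch below is an illustrative assumption of how that pairing could be expressed.
```python
def relight_basic_method_2(ir_frame_bgr, following_visible_bgr, blend=0.5):
    """Basic method 2: the pair starts at the IR-ON frame, and the relit frame is
    the visible frame inside the same pair, so no third frame is needed."""
    ir_info = extract_ir_component(following_visible_bgr, ir_frame_bgr)
    return apply_ir_relight(following_visible_bgr, ir_info, blend)
```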
  • the HSL color space is a color space that expresses colors with three components of hue (Hue), saturation (Saturation), and brightness (Lightness).
  • the information processing device 10 converts the image from HSL to RGB and outputs it to the output unit 14 .
  • the information processing device 10 rewrites the information of the luminance (L) in the HSL color space of the captured image based on the IR component information.
  • the information processing apparatus 10 may rewrite the brightness (V) information of the captured image in the HSV color space based on the IR component information.
  • the HSV color space is a color space that expresses colors with three components of hue (Hue), saturation (Saturation/Chroma), and brightness (Value/Brightness).
  • the information processing device 10 may rewrite the luminance (Y) information of the captured image in the YCoCg color space based on the IR component information.
  • the YCoCg color space is a color space that expresses colors by luminance (Y) and color difference components (Co (darkness of orange) and Cg (darkness of green)).
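  • for reference, the standard RGB ↔ YCoCg transform can be written as below (NumPy, floating point channels); rewriting Y based on the IR component would then proceed just like the HSL case.
```python
import numpy as np

def rgb_to_ycocg(rgb):
    """Standard RGB -> YCoCg transform for float arrays with a trailing channel axis."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.25 * r + 0.5 * g + 0.25 * b
    co = 0.5 * r - 0.5 * b
    cg = -0.25 * r + 0.5 * g - 0.25 * b
    return np.stack([y, co, cg], axis=-1)

def ycocg_to_rgb(ycocg):
    y, co, cg = ycocg[..., 0], ycocg[..., 1], ycocg[..., 2]
    tmp = y - cg
    return np.stack([tmp + co, y + cg, tmp - co], axis=-1)
```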
  • the information processing device 10 may rewrite the values in the RGB color space based on the IR component information.
  • the color space used by the information processing apparatus 10 for image processing is not limited to the color space described above.
  • FIG. 10 is a flowchart showing image output processing for realizing basic method 2.
  • the following processing is executed by the control unit 13 of the information processing device 10 .
  • the control unit 13 starts image output processing when the user starts imaging (for example, a video conference).
  • the control unit 13 activates the imaging unit 17 (step S201). Then, the control unit 13 blinks the infrared illumination unit 15 while synchronizing with the frame period of the video (moving image) captured by the imaging unit 17 (step S202).
  • the acquisition unit 131 of the information processing device 10 acquires the image captured by the imaging unit 17 . Since the infrared light is blinking in synchronization with the frame cycle of the video, the acquisition unit 131 alternately acquires the visible light image and the IR image (step S203).
  • the extraction unit 132 of the information processing device 10 extracts IR component information from the IR image (step S204). Specifically, the extraction unit 132 acquires the difference between the visible light image and the IR image as IR component information. In basic method 2, the extraction unit 132 acquires, as IR component information, the difference between two consecutive frames of images (an IR image and a visible light image) starting from the frame at which the IR light is ON.
  • the image processing unit 133 of the information processing device 10 performs image processing related to luminance or brightness on the captured image based on the IR component information (step S205). For example, based on the IR component information, the information processing device 10 performs image processing on the image of the last frame (visible light image) of the two consecutive frames (IR image and visible light image) used to extract the IR component information. For example, the image processing unit 133 rewrites the luminance information of the visible light image based on the IR component information.
  • the output control unit 134 of the information processing device 10 outputs the captured image subjected to the image processing to the output unit 14 (step S206).
  • the control unit 13 of the information processing device 10 determines whether or not the shooting has ended (step S207). If the shooting has not ended (step S207: No), the control unit 13 returns the process to step S203. If the shooting has ended (step S207: Yes), the control unit 13 stops the operations of the imaging unit 17 and the infrared illumination unit 15 (step S208). When the operations of the imaging unit 17 and the infrared illumination unit 15 are stopped, the control unit 13 ends the image output processing.
  • in basic method 2, the information processing apparatus 10 performs image processing on one of the frames used to generate the IR component information (difference image), so the time lag between the IR component information and the target image is small. Therefore, the user can obtain an image with less discomfort.
  • in the basic methods described above, the information processing device 10 performs image processing on the captured image based on the IR component information.
  • the information processing apparatus 10 may generate a learning model by learning based on the image before image processing and the image after image processing, and use the generated learning model to estimate the image after image processing from the captured image. Accordingly, the information processing apparatus 10 can acquire an image that looks as if the user were illuminated, without irradiating the user with infrared light.
  • FIG. 11 is a diagram showing processing up to completion of learning of the learning model.
  • FIG. 12 is a diagram showing processing after completion of learning of the learning model. An outline of the advanced method will be described below with reference to FIGS. 11 and 12.
  • the information processing apparatus 10 operates the infrared illumination section 15 and the imaging section 17 according to the user's operation. At this time, the information processing apparatus 10 blinks the infrared illuminator 15 while synchronizing with the image (moving image) captured by the imaging unit 17 . Thereby, the information processing apparatus 10 can acquire a visible light image and an IR image in a time division manner in synchronization with the blinking cycle of infrared light. Then, the information processing device 10 extracts IR component information from the IR image. Then, the information processing apparatus 10 performs image processing regarding brightness or brightness on the captured image based on the IR component information. Then, the information processing device 10 outputs the image after image processing to the output unit 14 .
  • the information processing device 10 learns a learning model based on the image before image processing and the image after image processing.
  • a learning model is, for example, a model for learning the relationship between an image before image processing and an image after image processing.
  • for example, the information processing apparatus 10 learns the learning model so as to minimize the difference between the output obtained when the image before image processing is input and the image after image processing.
  • a learning model is, for example, a machine learning model such as a neural network model.
  • a neural network model is composed of layers called an input layer containing a plurality of nodes, an intermediate layer (or hidden layer), and an output layer, and each node is connected via edges. Each layer has a function called activation function, and each edge is weighted.
  • a learning model has one or more intermediate layers (or hidden layers). When the learning model is a neural network model, learning the learning model means, for example, setting the number of intermediate layers (or hidden layers), the number of nodes in each layer, or the weight of each edge.
  • the neural network model may be a model based on deep learning.
  • the neural network model may be a model called DNN (Deep Neural Network).
  • the neural network model may be a model called a CNN (Convolution Neural Network), RNN (Recurrent Neural Network), or LSTM (Long Short-Term Memory).
  • learning models are not limited to neural network models.
  • the learning model may be a model based on reinforcement learning. In reinforcement learning, actions (settings) that maximize value are learned through trial and error.
  • the learning model may be a logistic regression model.
  • the learning model may consist of multiple models.
  • a learning model may consist of multiple neural network models. More specifically, the learning model may consist of multiple neural network models selected from, for example, CNN, RNN, and LSTM. When a learning model is composed of multiple neural network models, these multiple neural network models may be in a dependent relationship or in a parallel relationship.
  • the information processing device 10 stores, in the storage unit 12, character strings, numerical values, and the like that indicate the model structure and connection coefficients as information that constitutes the learning model.
  • the learning model may be a model that uses pairs of an image before image processing (a captured image such as a visible light image) and an image after image processing as learning data, and that has learned to output an image after image processing (hereinafter referred to as an estimated image) when an image before image processing (for example, a captured image such as a visible light image) is input.
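  • one way to realize such a model is sketched below in PyTorch; the three-layer convolutional architecture, the L1 loss, and the hyperparameters are illustrative assumptions only, not the architecture of the publication.
```python
import torch
import torch.nn as nn

class RelightNet(nn.Module):
    """Tiny image-to-image network: captured image in, estimated (relit) image out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, captured, processed):
    """One update that pushes the model output toward the image produced by the
    IR-based image processing (the 'image after image processing')."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(captured), processed)
    loss.backward()
    optimizer.step()
    return loss.item()

model = RelightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```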
  • the learning model includes an input layer to which a captured image is input, an output layer that outputs an estimated image, a first element belonging to any layer from the input layer to the output layer other than the output layer, and a second element whose value is calculated based on the first element and the weight of the first element.
  • an operation is performed based on the first element and the weight of the first element (that is, the connection coefficient), so that the estimated image is output from the output layer according to the captured image input to the input layer.
  • the learning model is realized by a neural network with one or more hidden layers, such as DNN.
  • the first element included in the learning model corresponds to any node of the input layer or intermediate layer.
  • the second element corresponds to the next node, which is a node to which the value is transmitted from the node corresponding to the first element.
  • the weight of the first element corresponds to the connection coefficient, which is the weight considered for the value transmitted from the node corresponding to the first element to the node corresponding to the second element.
  • when the learning model is realized by a regression model, the first element included in the learning model corresponds to input data (xi) such as x1 and x2.
  • the weight of the first element corresponds to the coefficient ai corresponding to xi.
  • the regression model can be viewed as a simple perceptron with an input layer and an output layer.
  • the first element can be regarded as a node of the input layer
  • the second element can be regarded as a node of the output layer.
  • the information processing device 10 uses a model having an arbitrary structure, such as a neural network or a regression model, to calculate information to be output.
  • the learning model is set with coefficients so that an estimated image is output when a captured image (for example, a visible light image before image processing) is input.
  • the information processing apparatus 10 sets the coefficient based on the degree of similarity between the image after image processing and the value obtained by inputting the captured image (visible light image before image processing) into the learning model.
  • the information processing apparatus 10 uses such a learning model to generate an estimated image from the captured image.
  • as an example of the learning model, a model that outputs an estimated image when a captured image is input has been described above.
  • the learning model according to the embodiment may be a model that is generated based on results obtained by repeatedly inputting and outputting data to the learning model.
  • the learning model may be a model that constitutes part of a GAN (Generative Adversarial Network).
  • the learning device that learns the learning model may be the information processing device 10, or may be another information processing device.
  • the information processing apparatus 10 learns a learning model.
  • for example, the information processing apparatus 10 learns the learning model and stores the learned learning model in the storage unit 12 . More specifically, the information processing apparatus 10 sets the connection coefficients of the learning model so that the learning model outputs an estimated image when a captured image is input to the learning model.
  • for example, the information processing apparatus 10 inputs a captured image to a node in the input layer of the learning model, propagates the data through each intermediate layer to the output layer of the learning model, and outputs an estimated image. Then, the information processing apparatus 10 corrects the connection coefficients of the learning model based on the difference between the estimated image actually output by the learning model and the actual image after image processing. For example, the information processing apparatus 10 may correct the connection coefficients using a technique such as back propagation. At this time, the information processing apparatus 10 may correct the connection coefficients based on the cosine similarity between a vector representing the image after image processing and a vector representing the value actually output by the learning model.
  • the information processing device 10 may learn the learning model using any learning algorithm.
  • the information processing device 10 may learn a learning model using learning algorithms such as neural networks, support vector machines, clustering, and reinforcement learning.
  • when learning of the learning model is completed, the information processing apparatus 10 starts generating an estimated image.
  • the information processing apparatus 10 uses the generated learning model to estimate an image after image processing of the captured image from the newly acquired captured image.
  • the information processing apparatus 10 switches the image output to the output unit 14 from the image generated by the image processing to the image estimated using the learning model (hereinafter referred to as the estimated image).
  • the information processing device 10 may stop outputting infrared light from the infrared illumination unit 15 at the timing when the image output to the output unit 14 is switched from the image generated by the image processing to the estimated image.
  • half of the frames captured by the information processing apparatus 10 are visible light images before the learning of the learning model is completed, but all the frames are visible light images after the learning of the learning model is completed.
  • the information processing apparatus 10 then generates an estimated image of the visible light image using the learning model, and outputs the generated estimated image to the output unit 14 .
  • as a result, after learning is completed, the information processing apparatus 10 can double the frame rate of the video output to the output unit 14 compared to before the completion of learning.
  • FIG. 13 is a flow chart showing image output processing for realizing the advanced method.
  • the following processing is executed by the control unit 13 of the information processing device 10 .
  • the control unit 13 starts image output processing when the user starts imaging (for example, a video conference).
  • the control unit 13 activates the imaging unit 17 (step S301). Then, the control unit 13 blinks the infrared illumination unit 15 while synchronizing with the frame period of the video (moving image) captured by the imaging unit 17 (step S302).
  • the acquisition unit 131 of the information processing device 10 acquires the image captured by the imaging unit 17 . Since the infrared light is blinking in synchronization with the frame cycle of the video, the acquisition unit 131 alternately acquires the visible light image and the IR image (step S303).
  • the extraction unit 132 of the information processing device 10 extracts IR component information from the IR image (step S304). Specifically, the extraction unit 132 acquires the difference between the visible light image and the IR image as IR component information. Then, the image processing unit 133 of the information processing device 10 performs image processing related to luminance or lightness of the captured image based on the IR component information (step S305). Then, the output control unit 134 of the information processing device 10 outputs the processed image to the output unit 14 (step S306).
  • the learning unit 135 of the information processing device 10 performs learning of the learning model based on the image before image processing and the image after image processing (step S307).
  • the control unit 13 determines whether or not the shooting has ended (step S308). If the shooting has ended (step S308: Yes), the control unit 13 advances the process to step S311. If the shooting has not ended (step S308: No), the control unit 13 determines whether learning of the learning model has been completed (step S309). If learning has not been completed (step S309: No), the control unit 13 returns the process to step S303. If the learning has been completed (step S309: Yes), the control unit 13 starts the estimation process (step S310).
  • FIG. 14 is a flow chart showing the estimation process.
  • the control unit 13 of the information processing device 10 stops the operation of the infrared illumination unit 15 (step S401). Then, the acquisition unit 131 of the information processing device 10 acquires the captured image (that is, the visible light image) (step S402). Then, the estimating unit 136 of the information processing apparatus 10 inputs the captured image to the learning model, thereby estimating the image after the image processing of the captured image (step S403). Then, the output control unit 134 of the information processing device 10 outputs the estimated image to the output unit 14 (step S404).
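  • the estimation loop of FIG. 14 might then reduce to the following sketch; camera, ir_light, and display are the same hypothetical objects as before, and to_tensor / to_image stand in for unspecified image-tensor conversion helpers.
```python
import torch

def run_estimation(model, camera, ir_light, display, to_tensor, to_image):
    ir_light.off()                                 # S401: stop infrared irradiation
    model.eval()
    with torch.no_grad():
        while not camera.finished():               # S405: loop until shooting ends
            visible = camera.read_frame()          # S402: every frame is now a visible light image
            estimated = model(to_tensor(visible))  # S403: estimate the image after image processing
            display.show(to_image(estimated))      # S404: output the estimated image
```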
  • the control unit 13 of the information processing device 10 determines whether or not the shooting has ended (step S405). If the shooting has not ended (step S405: No), the control unit 13 returns the process to step S402. If the shooting has ended (step S405: Yes), the control unit 13 returns the processing to the flow of FIG. 13 and stops the operations of the imaging unit 17 and the infrared illumination unit 15 (step S311). When the operations of the imaging unit 17 and the infrared illumination unit 15 are stopped, the control unit 13 ends the image output processing.
  • with the advanced method, the information processing apparatus 10 can acquire an image that looks as if the user were illuminated, without irradiating the user with infrared light. Also, before learning is completed, the IR image is not output to the output unit 14 and the frame rate is halved, but after learning is completed this halving of the frame rate can be eliminated.
  • the method of this embodiment can also be applied to live-action volumetric photography.
  • live-action volumetric capture is a technique that acquires three-dimensional information of a subject (for example, a person) in a studio or the like and converts it into 3DCG as it is.
  • the information processing apparatus 10 surrounds and photographs a subject with multiple cameras. Then, the information processing apparatus 10 converts the subject into three-dimensional data from the image data to generate content. Then, the information processing apparatus 10 renders the content from a free viewpoint based on the user's operation.
  • in order to realize volumetric photography, the information processing apparatus 10 mainly shoots a subject in a studio with a plurality of fixed lighting fixtures on the ceiling. At this time, if the subject is uniformly illuminated with bright lighting, the quality of the texture and modeling improves, but the shading is reduced, resulting in an unnatural CG-like image. On the other hand, if the subject is shot with biased lighting, the shadows and shading increase, but the texture and modeling quality deteriorate. Furthermore, even if some part of the subject is not illuminated because of the shape of the subject and its contrast with the green screen is low, it is difficult to add additional lighting for that part.
  • therefore, in this embodiment, a visible light camera capable of also capturing infrared rays, and IR lights whose turning on and off and irradiation direction can be individually controlled, are additionally arranged in a conventional live-action volumetric imaging system.
  • the information processing apparatus 10 performs image processing (for example, correction or enhancement of shadows) on the visible light image based on the IR component information.
  • FIG. 15 is a diagram showing an example of a photography studio of the live-action volumetric photography system of this embodiment.
  • a plurality of visible light lights 20 and a plurality of IR lights (the infrared illuminators 15 shown in FIG. 15) are arranged in the photography studio.
  • a plurality of IR cameras 30 are arranged in the photography studio.
  • the IR camera 30 is a camera that can simultaneously acquire visible light and infrared light.
  • the configuration of the IR camera 30 is similar to that of the imaging section 17 .
  • FIG. 16 is a diagram showing a processing example of the information processing device 10 in the live-action volumetric imaging system.
  • the information processing device 10 acquires a multi-viewpoint image composed of visible light images from a plurality of directions of a subject and an IR image of the subject from a plurality of directions.
  • a multi-viewpoint image is an image for generating a 3D model of a subject.
  • the information processing device 10 corrects the multi-viewpoint image based on IR component information (shadow information) extracted from the IR image.
  • IR component information can be used as auxiliary information for foreground-background separation.
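  • as one hedged illustration of that auxiliary use (not described in the publication in this form), the IR component could be thresholded into a rough foreground mask, on the assumption that the IR lights mainly brighten the nearby subject rather than the green screen.
```python
import cv2

def rough_foreground_mask(ir_component, threshold=30):
    """Rough foreground hint from the IR difference image (illustrative only)."""
    _, mask = cv2.threshold(ir_component, threshold, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove small speckles
```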
  • the imaging environment of the subject may be a combination of high-speed imaging cameras capable of simultaneously acquiring visible light and infrared light, and visible light lights and IR lights arranged omnidirectionally.
  • FIG. 17 is a diagram showing a state in which a plurality of visible light lights 20 and a plurality of IR lights (infrared illuminators 15) are omnidirectionally arranged.
  • the information processing device 10 simultaneously acquires shading/reflectance (albedo) from an arbitrary light source position while shooting with visible light. Since the imaging frame ratio of IR image:visible light image is not limited to 1:1, shading/albedo from multiple light source positions can be acquired simultaneously depending on camera performance. Since it uses infrared light, it does not affect visible light image capturing. If only the shading in the IR monochrome image is acquired, the frame rate can be increased to the limit independently of the visible light camera. Further, the information processing apparatus 10 can add photo-realistic shadows later based on the albedo.
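  • a very rough sketch of the shading/albedo idea follows; treating the normalized IR difference image as shading and dividing it out of the visible intensity is an illustrative assumption in the spirit of intrinsic image decomposition, not the publication's actual procedure.
```python
import numpy as np

def estimate_albedo(visible_gray, ir_shading, eps=1e-3):
    """Approximate reflectance (albedo) by dividing the visible intensity by the
    shading inferred from the IR difference image (both 8-bit grayscale)."""
    shading = ir_shading.astype(np.float32) / 255.0
    intensity = visible_gray.astype(np.float32) / 255.0
    return np.clip(intensity / np.maximum(shading, eps), 0.0, 1.0)
```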
  • for example, the user may produce new video content by combining the 3D model of the subject generated in this embodiment with 3D data managed by another server. Further, for example, when there is background data acquired by an imaging device such as Lidar, the user can combine the 3D model of the subject generated in the present embodiment with the background data to create content that makes the subject appear to be in the location indicated by the background data.
  • the video content may be 3D video content, or may be 2D video content converted to 2D.
  • the 3D model of the subject generated in the present embodiment includes, for example, a 3D model generated by a 3D model generation unit and a 3D model reconstructed by a rendering unit.
  • for example, the information processing device 10 can arrange the 3D model of the subject (for example, a performer) generated in the present embodiment in a virtual space where users communicate as avatars. In this case, the user becomes an avatar and can view the live-action subject in the virtual space.
  • further, by transmitting the 3D model of the subject to a remote location, a remote user at that location can view the 3D model of the subject.
  • the information processing apparatus 10 can transmit the 3D model of the subject in real time, so that the subject and the remote user can communicate in real time.
  • for example, it is conceivable that the subject is a teacher and the user is a student, or that the subject is a doctor and the user is a patient.
  • the information processing apparatus 10 can also generate a free-viewpoint video of sports or the like based on the 3D models of a plurality of subjects generated in the present embodiment. Also, an individual can distribute himself/herself, which is a 3D model generated in this embodiment, to a distribution platform. As such, the content of the embodiments described herein can be applied to a variety of technologies and services.
  • the information processing apparatus 10 of this embodiment may be implemented by a dedicated computer system or may be implemented by a general-purpose computer system.
  • a communication program for executing the above operations is distributed by storing it in a computer-readable recording medium such as an optical disk, semiconductor memory, magnetic tape, or flexible disk.
  • the control device is configured by installing the program in a computer and executing the above-described processing.
  • the control device may be a device (for example, a personal computer) external to the information processing device 10 .
  • the control device may be a device inside the information processing device 10 (for example, the control unit 13).
  • the above communication program may be stored in a disk device provided in a server device on a network such as the Internet, so that it can be downloaded to a computer.
  • the functions described above may be realized through cooperation between an OS (Operating System) and application software.
  • the parts other than the OS may be stored in a medium and distributed, or the parts other than the OS may be stored in a server device so that they can be downloaded to a computer.
  • each component of each device illustrated is functionally conceptual and does not necessarily need to be physically configured as illustrated.
  • the specific form of distribution and integration of each device is not limited to the illustrated one, and all or part of them can be functionally or physically distributed and integrated in arbitrary units according to various loads and usage conditions. Note that this configuration by distribution and integration may be performed dynamically.
  • each step of one flowchart may be executed by one device, or may be executed by a plurality of devices.
  • the plurality of processes may be executed by one device, or may be shared by a plurality of devices.
  • a plurality of processes included in one step can also be executed as processes of a plurality of steps.
  • the processing described as multiple steps can also be collectively executed as one step.
  • a program executed by a computer may be configured such that the processing of the steps described in the program is executed in chronological order according to the order described in this specification, or is executed in parallel or individually at necessary timings such as when a call is made. That is, as long as there is no contradiction, the processing of each step may be executed in an order different from the order described above. Furthermore, the processing of the steps describing this program may be executed in parallel with the processing of other programs, or may be executed in combination with the processing of other programs.
  • the present embodiment can be applied to any configuration that constitutes a device or system, such as a processor as a system LSI (Large Scale Integration), a module using a plurality of processors, a unit using a plurality of modules, etc. Furthermore, it can also be implemented as a set or the like (that is, a configuration of a part of the device) to which other functions are added.
  • the system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device housing a plurality of modules in one housing, are both systems.
  • this embodiment can take a configuration of cloud computing in which one function is shared by a plurality of devices via a network and processed jointly.
  • as described above, the information processing apparatus 10 extracts IR component information from an IR image obtained by irradiating an object (for example, a user and surrounding objects) with infrared light, and performs image processing relating to luminance or brightness on the captured image of the target based on the extracted IR component information. Since infrared light is invisible to the human eye, the user can obtain an image as if it were illuminated with visible light, without being dazzled.
  • the present technology can also take the following configuration.
  • an information processing device comprising: an acquisition unit that acquires an IR image that is a captured image obtained by irradiating an object with infrared light and that includes a visible light component and an IR component; an extraction unit that extracts IR component information from the IR image; and an image processing unit that performs image processing related to luminance or brightness on the captured image of the target based on the information of the IR component.
  • the acquisition unit acquires a visible light image of the target in addition to the IR image, The extraction unit acquires a difference between the visible light image and the IR image as information on the IR component.
  • the information processing device according to (1) above.
  • the infrared light is blinking
  • the acquisition unit acquires the visible light image and the IR image in a time division manner in synchronization with the blinking cycle of the infrared light.
  • the information processing device according to (2) above.
  • the infrared light blinks in synchronization with the frame period of the video,
  • the acquisition unit acquires the image of the frame at the timing when the infrared light is not irradiated as the visible light image, and acquires the image of the frame at the timing when the infrared light is irradiated as the IR image.
  • the information processing apparatus according to (2) or (3) above.
  • the extracting unit acquires, as the IR component information, a difference between images of two consecutive frames starting from a frame at which the infrared light is not irradiated,
  • the image processing unit performs image processing related to brightness or brightness of an image of the next frame of the two consecutive frames based on the information of the IR component.
  • the information processing device according to (4) above.
  • the IR component information is a difference image of the two consecutive frames;
  • the image processing unit performs a process of blurring the edges of the difference image, and performs image processing related to brightness or brightness of the image of the next frame of the consecutive two frames based on the difference image with the edges blurred.
  • the information processing device according to (5) above.
  • the image processing unit corrects the IR component information based on inter-frame motion prediction, and determines the brightness or brightness of the next frame image of the two continuous frames based on the corrected IR component information. perform image processing, The information processing device according to (5) above.
  • the extracting unit acquires, as the IR component information, a difference between images of two consecutive frames starting from a frame at which the infrared light is irradiated, The image processing unit performs image processing related to brightness or brightness of the image of the last frame of the two consecutive frames based on the information of the IR component.
  • the information processing device according to (4) above.
  • the image processing unit rewrites luminance information in the HSL color space of the captured image based on the IR component information.
  • the information processing apparatus according to any one of (1) to (8) above.
  • the image processing unit rewrites lightness information in the HSV color space of the captured image based on the IR component information.
  • the information processing apparatus according to any one of (1) to (8) above.
  • An output control unit that outputs an image generated by the image processing to an output unit,
  • the information processing apparatus according to any one of (1) to (10) above.
  • the information processing device controls the infrared irradiation unit so that the infrared light blinks in synchronization with a video frame cycle while the image generated by the image processing is output to the output unit, While the infrared light is blinking, the acquisition unit acquires an image of the frame at the timing when the infrared light is not irradiated as a visible light image, and an image of the frame at the timing when the infrared light is irradiated.
  • the output control unit controls the infrared irradiation unit to stop outputting the infrared light at a timing when the image output to the output unit is switched from the image generated by the image processing to the estimated image, After the output of the infrared light is stopped, the acquisition unit acquires images of all frames as the visible light image, The estimating unit uses the learning model to estimate an image after the image processing of the visible light image.
  • the information processing device according to (12) above.
  • the acquisition unit comprises multi-viewpoint images, which are images for generating a 3D model of a subject and which are composed of visible light images from a plurality of directions of the subject, IR images of the subject from a plurality of directions, and get The image processing unit corrects the multi-viewpoint image based on the information of the IR component extracted from the IR image.
  • the information processing device according to (1) above.
  • the computer an acquisition unit that acquires an IR image that is a captured image obtained by irradiating an object with infrared light and that includes a visible light component and an IR component; an extraction unit that extracts IR component information from the IR image; an image processing unit that performs image processing related to brightness or brightness of the captured image of the target based on the information of the IR component;
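As an illustration of configurations (4) to (6) and (10) above, the following sketch in Python with OpenCV pairs consecutive frames, blurs the edges of the difference image, and rewrites the brightness (V) channel of the following frame in the HSV color space. The frame-pairing convention (even frames without IR illumination, odd frames with IR illumination), the blur kernel size, and all function names are assumptions made for illustration and are not taken from this application.

    import cv2
    import numpy as np

    def ir_difference(frame_no_ir, frame_ir, blur_ksize=15):
        """Difference image between an IR-illuminated frame and the preceding
        visible-light-only frame, with its edges softened by a Gaussian blur."""
        no_ir = cv2.cvtColor(frame_no_ir, cv2.COLOR_BGR2GRAY)
        with_ir = cv2.cvtColor(frame_ir, cv2.COLOR_BGR2GRAY)
        diff = cv2.subtract(with_ir, no_ir)
        # Blurring suppresses hard edges caused by small motion between the paired frames.
        return cv2.GaussianBlur(diff, (blur_ksize, blur_ksize), 0)

    def rewrite_brightness(frame, ir_component):
        """Rewrite the V (brightness) channel of the frame in the HSV color space
        by adding the estimated IR component."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        v = hsv[:, :, 2].astype(np.int16) + ir_component.astype(np.int16)
        hsv[:, :, 2] = np.clip(v, 0, 255).astype(np.uint8)
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    def process_stream(frames):
        """Process an alternating stream (even frames: no IR light, odd frames: IR light).
        The correction derived from each (no-IR, IR) pair is applied to the following frame."""
        output = []
        for i in range(0, len(frames) - 2, 2):
            ir_component = ir_difference(frames[i], frames[i + 1])
            output.append(rewrite_brightness(frames[i + 2], ir_component))
        return output

Rewriting the luminance channel in the HSL color space, as in configuration (9), would follow the same pattern with the corresponding color conversion.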

Abstract

An information processing device according to the present invention comprises: an acquisition unit that acquires an infrared (IR) image, which is a captured image obtained by irradiating a subject with infrared light and which contains a visible light component and an IR component; an extraction unit that extracts information of the IR component from the IR image; and an image processing unit that, on the basis of the information of the IR component, subjects the captured image of the subject to image processing related to luminance or brightness.
PCT/JP2022/011543 2021-08-27 2022-03-15 Dispositif de traitement d'informations, procédé de traitement d'informations et programme WO2023026543A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-139250 2021-08-27
JP2021139250 2021-08-27

Publications (1)

Publication Number Publication Date
WO2023026543A1 true WO2023026543A1 (fr) 2023-03-02

Family

ID=85322654

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/011543 WO2023026543A1 (fr) 2021-08-27 2022-03-15 Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Country Status (1)

Country Link
WO (1) WO2023026543A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06342254A (ja) * 1993-06-01 1994-12-13 Mitsubishi Electric Corp モニタ装置
JP2004312301A (ja) * 2003-04-04 2004-11-04 Sumitomo Electric Ind Ltd 画像表示方法、画像表示システム及び表示装置
JP2009017223A (ja) * 2007-07-04 2009-01-22 Sony Corp 撮影装置、画像処理装置、これらにおける画像処理方法およびプログラム
JP2012028837A (ja) * 2010-07-20 2012-02-09 Fujitsu Semiconductor Ltd ダイジェスト値生成装置、及び、ダイジェスト値生成プログラム
JP2017033448A (ja) * 2015-08-05 2017-02-09 大日本印刷株式会社 画像処理装置、プログラム及び画像処理方法
WO2017090462A1 (fr) * 2015-11-27 2017-06-01 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et programme
WO2021106370A1 (fr) * 2019-11-28 2021-06-03 ソニーセミコンダクタソリューションズ株式会社 Capteur d'image à semi-conducteurs, système de capture d'image et procédé de commande de capteur d'image à semi-conducteurs

Similar Documents

Publication Publication Date Title
CN108369457B (zh) 用于混合现实的现实混合器
US20180158246A1 (en) Method and system of providing user facial displays in virtual or augmented reality for face occluding head mounted displays
US11580652B2 (en) Object detection using multiple three dimensional scans
WO2017176349A1 (fr) Cinémagraphe automatique
US11967014B2 (en) 3D conversations in an artificial reality environment
KR20190041586A (ko) 복수의 이미지들을 합성하는 전자장치 및 방법
US11941729B2 (en) Image processing apparatus, method for controlling image processing apparatus, and storage medium
CN107743637A (zh) 用于处理外围图像的方法和设备
CN112272296B (zh) 使用深度和虚拟光的视频照亮
CN109427089B (zh) 基于环境光照条件的混合现实对象呈现
WO2023026543A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
WO2020215263A1 (fr) Dispositif et procédé de traitement d'image
JP2007102478A (ja) 画像処理装置、画像処理方法、及び半導体集積回路
US11418723B1 (en) Increasing dynamic range of a virtual production display
US11818325B2 (en) Blended mode three dimensional display systems and methods
JP2023099443A (ja) Ar処理方法及び装置
JP2021510442A (ja) 深度データを用いた拡張現実画像提供方法及びプログラム
US20230056459A1 (en) Image processing device, method of generating 3d model, learning method, and program
WO2022011621A1 (fr) Appareil et procédé de génération d'image d'éclairage de visage
CN111612915B (zh) 渲染对象以匹配相机噪声
CN116245741B (zh) 图像处理方法及其相关设备
JP7304484B2 (ja) 赤外光を利用したポートレート再照明
US11823343B1 (en) Method and device for modifying content according to various simulation characteristics
US20240107113A1 (en) Parameter Selection for Media Playback
WO2023160219A1 (fr) Procédé d'entraînement de modèle d'ajout de lumière, procédé de traitement d'image, et dispositif associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22860841

Country of ref document: EP

Kind code of ref document: A1