WO2024076101A1 - Image processing method based on artificial intelligence and electronic device configured to support the method

Image processing method based on artificial intelligence and electronic device configured to support the method

Info

Publication number
WO2024076101A1
WO2024076101A1 (PCT/KR2023/015018)
Authority
WO
WIPO (PCT)
Prior art keywords
image
video
electronic device
network
signal processor
Prior art date
Application number
PCT/KR2023/015018
Other languages
English (en)
Korean (ko)
Inventor
송진우
Original Assignee
삼성전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020220184871A (published as KR20240047283A)
Application filed by 삼성전자 주식회사
Publication of WO2024076101A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 20/00 Machine learning
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 3/00 Geometric image transformations in the plane of the image
            • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
          • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N 23/60 Control of cameras or camera modules
              • H04N 23/62 Control of parameters via user interfaces
              • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
            • H04N 23/80 Camera processing pipelines; Components thereof
              • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
          • H04N 5/00 Details of television systems
            • H04N 5/14 Picture signal circuitry for video frequency region
          • H04N 9/00 Details of colour television systems
            • H04N 9/64 Circuits for processing colour signals

Definitions

  • An embodiment of the present disclosure provides an image processing method that can support image processing of a video using an artificial intelligence (AI) method when shooting (or recording) a video, and an electronic device that supports the same.
  • AI-based image processing technologies have been proposed that can produce full color images even with minimal lighting (e.g., dark environments such as nighttime).
  • AI-based image processing technology may refer to processing technology that allows high-quality images to be obtained even in a minimal lighting environment.
  • AI-based image processing technology is currently applied mainly to still images.
  • For video, an electronic device typically provides output by processing video signals with a general (non-AI) video processing algorithm.
  • A conventional image signal processor is built around a fixed image processing algorithm: its pipeline is predefined and its structure is simplified. Conventional image processing methods may therefore be limited, with little progress in the demosaic and/or noise reduction stages that most strongly affect the resolution of the final image.
  • existing image processing algorithms may have lower performance compared to recent AI-based image processing algorithms.
  • AI-based image processing algorithms consume too much current to be used in scenarios where video is captured (e.g., recorded), so general image processing algorithms have been applied to video to date.
  • an image processing method capable of supporting image processing of a video using an artificial intelligence (AI) method when shooting a video and an electronic device supporting the same are provided.
  • An embodiment of the present disclosure provides an image signal processor (ISP) that can improve resolution and noise reduction through real-time demosaicing during artificial intelligence-based video shooting (e.g., recording).
  • An electronic device may include a display, a camera, a processor operatively connected to the display and the camera, and an image signal processor (ISP) included in the processor.
  • the image signal processor may operate to receive input data corresponding to the sensor output size of the sensor through the camera while a user is capturing a video.
  • the image signal processor may operate to determine a parameter corresponding to a setting or setting change in which the video is captured.
  • the image signal processor may operate to perform image processing that converts the input data into a full color image based on the parameters and varies the output size of the full color image to match the video output size.
  • the image signal processor may operate to output image data according to image processing.
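  • As a concrete illustration of the flow described in the items above, the following Python sketch wires the stages together: receive sensor-sized input, determine parameters from the capture settings, convert to full color while matching the video output size, and output the result. The helpers select_parameters and ai_network are illustrative stubs under stated assumptions, not the parameter sets or network of this disclosure.

```python
import numpy as np

def select_parameters(settings: dict) -> dict:
    # Illustrative stub: a real ISP would swap in a pre-trained
    # parameter set matching the capture setting or setting change.
    return {"mode": settings.get("mode", "default")}

def ai_network(frame: np.ndarray, params: dict, out_size: tuple) -> np.ndarray:
    # Stand-in for the AI network: fake a "full color" image by
    # stacking channels, then vary the output size by striding.
    full_color = np.stack([frame, frame, frame], axis=-1).astype(np.float32)
    sy = max(frame.shape[0] // out_size[0], 1)
    sx = max(frame.shape[1] // out_size[1], 1)
    return full_color[::sy, ::sx][:out_size[0], :out_size[1]]

def isp_process_frame(sensor_frame, capture_settings, video_size):
    """Receive input data at the sensor output size, determine
    parameters for the capture settings, convert to a full color
    image sized to the video output, and output the image data."""
    params = select_parameters(capture_settings)
    return ai_network(sensor_frame, params, out_size=video_size)

# Example: a 12 MP sensor frame recorded as 1080p video output.
frame = np.zeros((3000, 4000), dtype=np.uint16)
print(isp_process_frame(frame, {"mode": "video"}, (1080, 1920)).shape)
# -> (1080, 1920, 3)
```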
  • An image processing device of an electronic device may include a camera, an image signal processor (ISP) operatively connected to the camera, and a memory operatively connected to the image signal processor.
  • the memory may store instructions that, when executed, cause the image signal processor to: receive input data corresponding to a sensor output size of a sensor through the camera while video capture is performed by a user; determine parameters corresponding to the settings or setting changes under which the video is captured; perform image processing that converts the input data into a full color image based on the parameters and changes the output size of the full color image to match the video output size; and output image data according to the image processing.
  • a method of operating an electronic device may include performing, by an image signal processor (ISP), an operation of receiving input data corresponding to the sensor output size of a sensor through a camera while a user is capturing a video.
  • the operating method may include performing an operation to determine a parameter corresponding to a setting or setting change in which the video is captured.
  • the operating method may include performing an operation of converting the input data into a full color image based on the parameters and performing image processing to vary the output size of the full color image to match the video output size.
  • the operation method may include performing an operation of outputting image data according to image processing.
  • various embodiments of the present disclosure may include a computer-readable recording medium on which a program for executing the method on a processor is recorded.
  • according to an embodiment, a non-transitory computer-readable storage medium (or computer program product) may store one or more programs that, when executed by a processor of an electronic device, perform: an operation of receiving input data corresponding to the sensor output size of a sensor through a camera while a user performs video capture; an operation of determining a parameter corresponding to a setting or setting change under which the video is captured; an operation of performing image processing that converts the input data into a full color image based on the parameter and changes the output size of the full color image to match the video output size; and an operation of outputting image data according to the image processing.
  • the image signal processor generates a full color image based on parameter replacement in the AI network and simultaneously performs resizing, so that current consumption can be reduced when shooting (e.g., recording) video with artificial intelligence.
  • image quality is improved by performing the image processing calculations with artificial intelligence, while power consumption is minimized through the reduced amount of calculation.
  • by reducing the power consumed by artificial intelligence-based image processing, it is possible to use the AI network of the image signal processor even during high-complexity video shooting (e.g., recording).
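  • The saving is easy to see in rough numbers. The figures below are illustrative assumptions, not values from this disclosure: if the network produced a full color image at full sensor size and resized it afterwards, every layer would pay the sensor-size cost, whereas fusing the resize into the network lets later layers run at the much smaller video size.

```python
# Illustrative pixel counts for a hypothetical 12 MP sensor
# recording 1080p video.
sensor_px = 4000 * 3000   # 12,000,000 pixels at sensor output size
video_px = 1920 * 1080    #  2,073,600 pixels at video output size

print(round(sensor_px / video_px, 1))  # ~5.8x fewer pixels per fused layer
```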
  • FIG. 1 is a block diagram of an electronic device in a network environment according to various embodiments.
  • FIG. 2 is a block diagram illustrating a camera module according to various embodiments.
  • FIG. 3 is a diagram schematically showing the configuration of an electronic device according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram schematically showing the configuration of an image signal processor according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram illustrating an example of operations of a measurement unit and an alignment unit in an image signal processor according to an embodiment of the present disclosure.
  • FIG. 6 is a diagram illustrating an example of the operation of a preprocessor in an image signal processor according to an embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating an example of a full color image in an image signal processor according to an embodiment of the present disclosure.
  • FIG. 8 is a diagram illustrating an example of the output of an image signal processor according to an embodiment of the present disclosure.
  • FIG. 9 is a diagram illustrating an example of a network cache of an image signal processor according to an embodiment of the present disclosure.
  • FIG. 10 is a diagram illustrating an example of an operation of an image signal processor according to an embodiment of the present disclosure.
  • FIGS. 11A, 11B, and 11C are diagrams illustrating an example of a scaling operation in an image signal processor according to an embodiment of the present disclosure.
  • FIG. 12 is a flowchart illustrating a method of operating an electronic device according to an embodiment of the present disclosure.
  • FIG. 1 is a block diagram of an electronic device 101 in a network environment 100 according to various embodiments.
  • the electronic device 101 may communicate with the electronic device 102 through a first network 198 (e.g., a short-range wireless communication network), or with at least one of the electronic device 104 or the server 108 through a second network 199 (e.g., a long-range wireless communication network). According to one embodiment, the electronic device 101 may communicate with the electronic device 104 through the server 108.
  • the electronic device 101 may include a processor 120, a memory 130, an input module 150, a sound output module 155, a display module 160, an audio module 170, a sensor module 176, an interface 177, a connection terminal 178, a haptic module 179, a camera module 180, a power management module 188, a battery 189, a communication module 190, a subscriber identification module 196, or an antenna module 197.
  • at least one of these components (e.g., the connection terminal 178) may be omitted, or one or more other components may be added to the electronic device 101.
  • some of these components (e.g., the sensor module 176, the camera module 180, or the antenna module 197) may be integrated into one component (e.g., the display module 160).
  • the processor 120 may, for example, execute software (e.g., the program 140) to control at least one other component (e.g., a hardware or software component) of the electronic device 101 connected to the processor 120, and perform various data processing or computations. According to one embodiment, as at least part of the data processing or computation, the processor 120 may store commands or data received from another component (e.g., the sensor module 176 or the communication module 190) in the volatile memory 132, process the commands or data stored in the volatile memory 132, and store the resulting data in the non-volatile memory 134.
  • the processor 120 may include a main processor 121 (e.g., a central processing unit (CPU) or an application processor (AP)) and an auxiliary processor 123 (e.g., a graphics processing unit (GPU), a neural processing unit (NPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that can operate independently of or together with the main processor.
  • when the electronic device 101 includes both a main processor 121 and an auxiliary processor 123, the auxiliary processor 123 may be set to use less power than the main processor 121 or to be specialized for a designated function.
  • the auxiliary processor 123 may be implemented separately from the main processor 121 or as part of it.
  • the auxiliary processor 123 may, for example, control at least some of the functions or states related to at least one of the components of the electronic device 101 (e.g., the display module 160, the sensor module 176, or the communication module 190), on behalf of the main processor 121 while the main processor 121 is in an inactive (e.g., sleep) state, or together with the main processor 121 while it is in an active (e.g., application execution) state.
  • according to one embodiment, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as part of another functionally related component (e.g., the camera module 180 or the communication module 190).
  • the auxiliary processor 123 may include a hardware structure specialized for processing artificial intelligence models.
  • Artificial intelligence models can be created through machine learning. For example, such learning may be performed in the electronic device 101 itself on which the artificial intelligence model is performed, or may be performed through a separate server (e.g., server 108).
  • Learning algorithms may include, for example, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but are not limited thereto.
  • An artificial intelligence model may include multiple artificial neural network layers.
  • An artificial neural network may be one of a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), or deep Q-networks, or a combination of two or more of the above, but is not limited to these examples.
  • artificial intelligence models may additionally or alternatively include software structures.
  • the memory 130 may store various data used by at least one component (eg, the processor 120 or the sensor module 176) of the electronic device 101. Data may include, for example, input data or output data for software (e.g., program 140) and instructions related thereto.
  • Memory 130 may include volatile memory 132 or non-volatile memory 134.
  • the program 140 may be stored as software in the memory 130 and may include, for example, an operating system (OS) 142, middleware 144, or applications 146.
  • the input module 150 may receive commands or data to be used in a component of the electronic device 101 (e.g., the processor 120) from outside the electronic device 101 (e.g., a user).
  • the input module 150 may include, for example, a microphone, mouse, keyboard, keys (eg, buttons), or digital pen (eg, stylus pen).
  • the sound output module 155 may output sound signals to the outside of the electronic device 101.
  • the sound output module 155 may include, for example, a speaker or a receiver. Speakers can be used for general purposes such as multimedia playback or recording playback.
  • the receiver can be used to receive incoming calls. According to one embodiment, the receiver may be implemented separately from the speaker or as part of it.
  • the display module 160 can visually provide information to the outside of the electronic device 101 (eg, a user).
  • the display module 160 may include, for example, a display, a hologram device, or a projector, and a control circuit for controlling the device.
  • the display module 160 may include a touch sensor configured to detect a touch, or a pressure sensor configured to measure the intensity of force generated by the touch.
  • the audio module 170 can convert sound into an electrical signal or, conversely, convert an electrical signal into sound. According to one embodiment, the audio module 170 may acquire sound through the input module 150, or output sound through the sound output module 155 or an external electronic device (e.g., the electronic device 102, such as a speaker or headphones) directly or wirelessly connected to the electronic device 101.
  • the sensor module 176 may detect the operating state (e.g., power or temperature) of the electronic device 101 or an external environmental state (e.g., a user state) and generate an electrical signal or data value corresponding to the detected state.
  • the sensor module 176 may include, for example, a gesture sensor, a gyro sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
  • the interface 177 may support one or more designated protocols that can be used to connect the electronic device 101 directly or wirelessly with an external electronic device (eg, the electronic device 102).
  • the interface 177 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface.
  • connection terminal 178 may include a connector through which the electronic device 101 can be physically connected to an external electronic device (eg, the electronic device 102).
  • the connection terminal 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (eg, a headphone connector).
  • the haptic module 179 can convert electrical signals into mechanical stimulation (e.g., vibration or movement) or electrical stimulation that the user can perceive through tactile or kinesthetic senses.
  • the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electrical stimulation device.
  • the camera module 180 can capture still images and moving images.
  • the camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes.
  • the power management module 188 can manage power supplied to the electronic device 101.
  • the power management module 188 may be implemented as at least a part of, for example, a power management integrated circuit (PMIC).
  • the battery 189 may supply power to at least one component of the electronic device 101.
  • the battery 189 may include, for example, a non-rechargeable primary battery, a rechargeable secondary battery, or a fuel cell.
  • The communication module 190 may support establishing a direct (e.g., wired) or wireless communication channel between the electronic device 101 and an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108), and communicating through the established channel. The communication module 190 may operate independently of the processor 120 (e.g., an application processor) and may include one or more communication processors that support direct (e.g., wired) or wireless communication.
  • the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (e.g., a local area network (LAN) communication module or a power line communication module).
  • among these communication modules, the corresponding communication module may communicate with the external electronic device 104 through the first network 198 (e.g., a short-range communication network such as Bluetooth, wireless fidelity (WiFi) direct, or infrared data association (IrDA)) or the second network 199 (e.g., a long-range communication network such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., a LAN or wide area network (WAN))).
  • the wireless communication module 192 may identify and authenticate the electronic device 101 within a communication network, such as the first network 198 or the second network 199, using subscriber information (e.g., an International Mobile Subscriber Identifier (IMSI)) stored in the subscriber identification module 196.
  • the wireless communication module 192 may support a 5G network beyond a 4G network and next-generation communication technology, for example, new radio (NR) access technology.
  • NR access technology may support high-speed transmission of high-capacity data (enhanced mobile broadband, eMBB), minimization of terminal power and access by multiple terminals (massive machine type communications, mMTC), or high reliability and low latency (ultra-reliable and low-latency communications, URLLC).
  • the wireless communication module 192 may support high frequency bands (eg, mmWave bands), for example, to achieve high data rates.
  • the wireless communication module 192 may use various technologies to secure performance in a high frequency band, for example, beamforming, massive array multiple-input and multiple-output (massive MIMO), full-dimensional MIMO (FD-MIMO), an array antenna, analog beamforming, or a large-scale antenna.
  • the wireless communication module 192 may support various requirements specified for the electronic device 101, an external electronic device (e.g., the electronic device 104), or a network system (e.g., the second network 199). According to one embodiment, the wireless communication module 192 may support a peak data rate (e.g., 20 Gbps or more) for realizing eMBB, loss coverage (e.g., 164 dB or less) for realizing mMTC, or U-plane latency (e.g., 0.5 ms or less for each of downlink (DL) and uplink (UL), or a round trip of 1 ms or less) for realizing URLLC.
  • the antenna module 197 may transmit or receive signals or power to or from the outside (eg, an external electronic device).
  • the antenna module 197 may include an antenna including a radiator made of a conductor or a conductive pattern formed on a substrate (eg, PCB).
  • the antenna module 197 may include a plurality of antennas (e.g., an array antenna). In this case, at least one antenna suitable for the communication method used in the communication network, such as the first network 198 or the second network 199, may be selected from the plurality of antennas by, for example, the communication module 190. A signal or power may be transmitted or received between the communication module 190 and an external electronic device through the selected at least one antenna.
  • according to some embodiments, a component other than the radiator (e.g., a radio frequency integrated circuit (RFIC)) may be additionally formed as part of the antenna module 197.
  • the antenna module 197 may form a mmWave antenna module.
  • a mmWave antenna module may include a printed circuit board; an RFIC disposed on or adjacent to a first surface (e.g., the bottom surface) of the printed circuit board and capable of supporting a designated high frequency band (e.g., the mmWave band); and a plurality of antennas (e.g., an array antenna) disposed on or adjacent to a second surface (e.g., the top or side surface) of the printed circuit board and capable of transmitting or receiving signals in the designated high frequency band.
  • at least some of the above components may be connected to each other through a communication method between peripheral devices (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)) and may exchange signals (e.g., commands or data) with each other.
  • commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 through the server 108 connected to the second network 199.
  • Each of the external electronic devices 102 or 104 may be of the same or different type as the electronic device 101.
  • all or part of the operations performed in the electronic device 101 may be executed in one or more of the external electronic devices 102, 104, or 108.
  • for example, when the electronic device 101 must perform a function or service automatically, or in response to a request from a user or another device, the electronic device 101 may request one or more external electronic devices to perform at least part of the function or service instead of, or in addition to, executing the function or service on its own.
  • One or more external electronic devices that have received the request may execute at least part of the requested function or service, or an additional function or service related to the request, and transmit the result of the execution to the electronic device 101.
  • the electronic device 101 may process the result as is or additionally and provide it as at least part of a response to the request.
  • to this end, for example, cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used.
  • the electronic device 101 may provide an ultra-low latency service using, for example, distributed computing or mobile edge computing.
  • the external electronic device 104 may include an Internet of Things (IoT) device.
  • Server 108 may be an intelligent server using machine learning and/or neural networks.
  • the external electronic device 104 or server 108 may be included in the second network 199.
  • the electronic device 101 may be applied to intelligent services (e.g., smart home, smart city, smart car, or healthcare) based on 5G communication technology and IoT-related technology.
  • FIG. 2 is a block diagram 200 illustrating a camera module 180 according to various embodiments.
  • the camera module 180 may include a lens assembly 210, a flash 220, an image sensor 230, an image stabilizer 240, a memory 250 (e.g., buffer memory), or an image signal processor 260.
  • the lens assembly 210 may collect light emitted from a subject that is the target of image capture.
  • Lens assembly 210 may include one or more lenses.
  • the camera module 180 may include a plurality of lens assemblies 210.
  • the camera module 180 may form, for example, a dual camera, a 360-degree camera, or a spherical camera.
  • Some of the plurality of lens assemblies 210 may have the same lens properties (e.g., angle of view, focal length, autofocus, f number, or optical zoom), or at least one lens assembly may have one or more lens properties different from those of another lens assembly.
  • the lens assembly 210 may include, for example, a wide-angle lens or a telephoto lens.
  • the flash 220 may emit light used to enhance light emitted or reflected from a subject.
  • the flash 220 may include one or more light emitting diodes (eg, red-green-blue (RGB) LED, white LED, infrared LED, or ultraviolet LED), or a xenon lamp.
  • the image sensor 230 may acquire an image corresponding to the subject by converting light emitted or reflected from the subject and transmitted through the lens assembly 210 into an electrical signal.
  • the image sensor 230 may include one image sensor selected from among image sensors with different properties, such as an RGB sensor, a black-and-white (BW) sensor, an IR sensor, or a UV sensor; a plurality of image sensors having the same properties; or a plurality of image sensors having different properties.
  • Each image sensor included in the image sensor 230 may be implemented using, for example, a charged coupled device (CCD) sensor or a complementary metal oxide semiconductor (CMOS) sensor.
  • in response to movement of the camera module 180 or the electronic device 101 including it, the image stabilizer 240 may move at least one lens included in the lens assembly 210 or the image sensor 230 in a specific direction, or control the operating characteristics of the image sensor 230 (e.g., adjusting the read-out timing). This compensates for at least some of the negative effects of the movement on the image being captured.
  • the image stabilizer 240 may detect such movement of the camera module 180 or the electronic device 101 using a gyro sensor (not shown) or an acceleration sensor (not shown) disposed inside or outside the camera module 180.
  • the image stabilizer 240 may be implemented as, for example, an optical image stabilizer.
  • the memory 250 may at least temporarily store at least a portion of the image acquired through the image sensor 230 for the next image processing task. For example, when image acquisition is delayed due to the shutter, or when multiple images are acquired at high speed, the acquired original image (e.g., a Bayer-patterned image or a high-resolution image) may be stored in the memory 250 while a corresponding copy image (e.g., a low-resolution image) is previewed through the display module 160. Thereafter, when a specified condition is satisfied (e.g., a user input or a system command), at least a portion of the original image stored in the memory 250 may be obtained and processed, for example, by the image signal processor 260. According to one embodiment, the memory 250 may be configured as at least part of the memory 130, or as a separate memory that operates independently.
  • the image signal processor 260 may perform one or more image processes on an image acquired through the image sensor 230 or an image stored in the memory 250.
  • the one or more image processes may include, for example, depth map creation, three-dimensional modeling, panorama creation, feature point extraction, image compositing, or image compensation (e.g., noise reduction, resolution adjustment, brightness adjustment, blurring, sharpening, or softening).
  • additionally or alternatively, the image signal processor 260 may perform control (e.g., exposure time control or read-out timing control) of at least one of the components included in the camera module 180 (e.g., the image sensor 230).
  • Images processed by the image signal processor 260 may be stored back in the memory 250 for further processing, or may be provided to a component outside the camera module 180 (e.g., the memory 130, the display module 160, the electronic device 102, the electronic device 104, or the server 108).
  • the image signal processor 260 may be configured as at least a part of the processor 120, or may be configured as a separate processor that operates independently from the processor 120.
  • when the image signal processor 260 is configured as a separate processor from the processor 120, at least one image processed by the image signal processor 260 may be displayed through the display module 160 as is, or after additional image processing by the processor 120.
  • the electronic device 101 may include a plurality of camera modules 180, each having different properties or functions.
  • at least one of the plurality of camera modules 180 may be a wide-angle camera, and at least another one may be a telephoto camera.
  • at least one of the plurality of camera modules 180 may be a front camera, and at least another one may be a rear camera.
  • Electronic devices may be of various types.
  • Electronic devices may include, for example, portable communication devices (e.g., smartphones), computer devices, portable multimedia devices, portable medical devices, cameras, wearable devices, or home appliances.
  • Electronic devices according to embodiments of this document are not limited to the above-described devices.
  • Terms such as "first", "second", or "1st" and "2nd" may be used simply to distinguish one component from another, and do not limit the components in other respects (e.g., importance or order).
  • When one (e.g., first) component is referred to as "coupled" or "connected" to another (e.g., second) component, with or without the terms "functionally" or "communicatively", it means that the component can be connected to the other component directly (e.g., by wire), wirelessly, or through a third component.
  • The term "module" used in various embodiments of this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit. A module may be an integrally configured part, a minimum unit that performs one or more functions, or a part thereof. For example, according to one embodiment, the module may be implemented in the form of an application-specific integrated circuit (ASIC).
  • Various embodiments of this document may be implemented as software (e.g., the program 140) including one or more instructions stored in a storage medium (e.g., the internal memory 136 or the external memory 138) readable by a machine (e.g., the electronic device 101).
  • For example, a processor (e.g., the processor 120) of the machine (e.g., the electronic device 101) may call at least one of the one or more instructions stored in the storage medium and execute it, allowing the machine to perform at least one function according to the called instruction.
  • the one or more instructions may include code generated by a compiler or code that can be executed by an interpreter.
  • a storage medium that can be read by a device may be provided in the form of a non-transitory storage medium.
  • Here, "non-transitory" only means that the storage medium is a tangible device and does not contain signals (e.g., electromagnetic waves); this term does not distinguish between cases where data is stored semi-permanently in the storage medium and cases where it is stored temporarily.
  • Computer program products are commodities and can be traded between sellers and buyers.
  • the computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)), or distributed online (e.g., downloaded or uploaded) through an application store (e.g., Play Store™) or directly between two user devices (e.g., smartphones).
  • in the case of online distribution, at least a portion of the computer program product may be temporarily stored or temporarily created in a machine-readable storage medium, such as the memory of a manufacturer's server, an application store's server, or a relay server.
  • each component (e.g., module or program) of the above-described components may include a singular or plural entity, and some of the plural entities may be separately disposed in another component.
  • one or more of the components or operations described above may be omitted, or one or more other components or operations may be added.
  • according to various embodiments, multiple components (e.g., modules or programs) may be integrated into one component. In this case, the integrated component may perform one or more functions of each of the plurality of components in the same or similar manner as performed by the corresponding component before the integration.
  • according to various embodiments, operations performed by a module, program, or other component may be executed sequentially, in parallel, repeatedly, or heuristically; one or more of the operations may be executed in a different order or omitted; or one or more other operations may be added.
  • FIG. 3 is a diagram schematically showing the configuration of an electronic device according to an embodiment of the present disclosure.
  • the electronic device 101 may include a display module 160, a camera module 180, a memory 130, and/or a processor 120. According to one embodiment, the electronic device 101 may include all or at least some of the components of the electronic device 101 described with reference to FIG. 1.
  • the display module 160 may have the same or similar configuration as the display module 160 of FIG. 1. According to one embodiment, the display module 160 may include a display and visually provide various information to the outside of the electronic device 101 (e.g., to a user). According to one embodiment, the display module 160 may visually provide various information (e.g., content, images, videos) related to a running application and its use, under the control of the processor 120.
  • the display module 160 may include a touch sensor, a pressure sensor capable of measuring the intensity of a touch, and/or a touch panel (e.g., a digitizer) that detects a magnetic-field-type stylus pen.
  • the display module 160 can detect touch input and/or hovering input (or proximity input) by measuring a change in a signal (e.g., voltage, amount of light, resistance, electromagnetic signal, and/or amount of charge) for a specific position.
  • the display module 160 may include a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or an active-matrix organic light-emitting diode (AMOLED) display.
  • the display module 160 may include a flexible display.
  • the camera module 180 may correspond to the camera module 180 of FIG. 1 or FIG. 2. According to one embodiment, when activated, the camera module 180 may capture a subject and transmit related results (eg, a captured image) to the processor 120 and/or the display module 160.
  • the memory 130 may correspond to the memory 130 of FIG. 1 .
  • the memory 130 may store various data used by the electronic device 101.
  • data may include, for example, input data or output data for an application (e.g., program 140 of FIG. 1) and instructions associated with the application.
  • the data may include camera image data acquired through a camera module.
  • the data may include various learning data acquired based on the user's learning through interaction with the user.
  • data may include various schemas (or algorithms, models, networks, or functions) to support artificial intelligence-based image processing.
  • the memory 130 may store instructions that, when executed, cause the processor 120 to operate.
  • an application may be stored as software (eg, program 140 in FIG. 1) on the memory 130 and may be executable by the processor 120.
  • the application may be a variety of applications that can provide various functions or services (eg, an image capture function based on artificial intelligence) on the electronic device 101.
  • the processor 120 may perform application layer processing functions required by the user of the electronic device 101. According to one embodiment, the processor 120 may provide commands and control of functions for various blocks of the electronic device 101. According to one embodiment, the processor 120 may perform operations or data processing related to control and/or communication of each component of the electronic device 101. For example, the processor 120 may include at least some of the components and/or functions of the processor 120 of FIG. 1. The processor 120 may be operatively connected to the components of the electronic device 101. The processor 120 may load commands or data received from other components of the electronic device 101 into the memory 130, process the commands or data stored in the memory 130, and store the resulting data.
  • the processor 120 may be an application processor (AP).
  • the processor 120 may be a system semiconductor responsible for calculation and multimedia driving functions of the electronic device 101.
  • the processor 120 may be configured in the form of a system-on-chip (SoC), a technology-intensive semiconductor chip that integrates several semiconductor technologies and implements the system blocks in a single chip.
  • as illustrated in FIG. 3, the system blocks of the processor 120 may include a graphics processing unit (GPU) 310, an image signal processor (ISP) 320, a central processing unit (CPU) 330, a neural processing unit (NPU) 340, a digital signal processor (DSP) 350, a modem 360, a connectivity block 370, and/or a security block 380.
  • the GPU 310 may be responsible for graphics processing. According to one embodiment, the GPU 310 may receive instructions from the CPU 330 and perform graphics processing to express the shape, position, color, shading, movement, and/or texture of objects on the display.
  • the ISP 320 may be responsible for processing and correcting images and videos.
  • the ISP 320 may receive raw data (e.g., input data or original data) from the camera module 180 and correct it to create an image in the form preferred by the user.
  • the ISP 320 may correct physical limitations that may occur in the camera module 180, interpolate red, green, blue (R/G/B) values, and remove noise.
  • the ISP 320 may perform post-processing such as adjusting partial brightness of the image and emphasizing detailed parts. For example, the ISP 320 can generate a result preferred by the user by independently tuning and correcting the image quality of the image acquired through the camera module 180.
  • the ISP 320 may support artificial intelligence-based image processing technology to improve image quality, speed up image processing, and reduce current consumption (e.g., low power). For example, the ISP 320 can maintain low power while improving video quality, and for this purpose it can support artificial intelligence-based video shooting. According to one embodiment, the ISP 320 may support artificial intelligence-based image processing for improving video quality in a dark, low-light environment. According to one embodiment, the ISP 320 may support scene segmentation (e.g., image segmentation) technology that recognizes and/or classifies parts of the scene being shot, in conjunction with the NPU 340. For example, the ISP 320 may apply different processing parameters to objects such as the sky, bushes, and/or skin. According to one embodiment, the ISP 320 may detect and display a human face when shooting an image through an artificial intelligence function, or use the coordinates and information of the face to adjust the brightness, focus, and/or color of the image.
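  • The per-object parameter idea can be sketched as a mask-driven lookup. This is a minimal illustration assuming a segmentation mask with hypothetical class ids and gains; the actual per-class tuning of the ISP 320 is not specified here.

```python
import numpy as np

# Hypothetical class ids and per-region brightness gains.
SKY, BUSH, SKIN = 0, 1, 2
GAIN = {SKY: 1.10, BUSH: 0.95, SKIN: 1.05}

def apply_region_tuning(image: np.ndarray, seg_mask: np.ndarray) -> np.ndarray:
    """Apply a different (illustrative) parameter to each segmented
    region, e.g., sky, bushes, and skin."""
    out = image.astype(np.float32)
    for cls, gain in GAIN.items():
        out[seg_mask == cls] *= gain  # boolean mask selects that region
    return np.clip(out, 0, 255).astype(np.uint8)
```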
  • the configuration and detailed operation of the ISP 320 of the processor 120 will be described with reference to the drawings described later.
  • the CPU 330 may play a role corresponding to the processor 120.
  • the CPU 330 may decode user commands and perform arithmetic and logical operations and/or data processing.
  • the CPU 330 may be responsible for the functions of memory, interpretation, calculation, and control.
  • the CPU 330 may control the overall functions of the electronic device 101.
  • the CPU 330 can execute all software (eg, applications) of the electronic device 101 on an operating system (OS) and control hardware devices.
  • the NPU 340 may be responsible for processing optimized for an artificial intelligence deep learning algorithm.
  • the NPU 340 is a processor optimized for deep learning algorithm calculations (e.g., artificial intelligence calculations), and can process big data quickly and efficiently like a human neural network.
  • the NPU 340 can be mainly used for artificial intelligence calculations.
  • the NPU 340 may recognize objects, environments, and/or people in the background when taking an image through the camera module 180 and automatically adjust the focus, automatically switch the shooting mode to food mode when taking a picture of food, and/or erase only unnecessary subjects from the captured results.
  • the electronic device 101 may support integrated machine learning processing through interaction among processors such as the GPU 310, ISP 320, CPU 330, and NPU 340.
  • the DSP 350 may represent an integrated circuit that helps quickly process digital signals. According to one embodiment, the DSP 350 may perform a high-speed processing function by converting an analog signal into a digital signal.
  • the modem 360 may perform a role that allows the electronic device 101 to use various communication functions.
  • the modem 360 can support communications such as phone calls and data transmission and reception by exchanging signals with a base station.
  • the modem 360 may include an integrated modem that supports communication technologies from 2G to 5G and beyond (e.g., a cellular modem, LTE modem, 5G modem, 5G-Advanced modem, or 6G modem).
  • the modem 360 may include an artificial intelligence modem to which an artificial intelligence algorithm is applied.
  • connectivity 370 may support wireless data transmission based on IEEE 802.11.
  • connectivity 370 may support communication services based on IEEE 802.11 (eg, Wi-Fi) and/or 802.15 (eg, Bluetooth, ZigBee, UWB).
  • the connectivity 370 can support communication services for an unspecified number of people in a localized area, such as indoors, using an unlicensed band.
  • the security 380 may provide an independent security execution environment between data or services stored in the electronic device 101.
  • the security block 380 may play a role in preventing hacking, through software and hardware security, during user authentication when providing services such as biometrics, mobile ID, and/or payment on the electronic device 101.
  • the security block 380 may provide an independent security execution environment based on device security, strengthening the security of the electronic device 101 and of user information such as mobile ID, payment, and car key data on the electronic device 101.
  • the processor 120 may include processing circuitry and/or executable program elements.
  • the processor 120 (e.g., the ISP 320) may perform an operation of receiving raw data (e.g., input data or original data) through the camera module 180 while a video is captured by a user.
  • the processor 120 may perform an operation to determine parameters corresponding to settings or settings changes in which the video is captured.
  • the processor 120 (e.g., the ISP 320) may perform an operation of converting the raw data into a full color image based on the parameters and performing image processing to vary the output size of the full color image to match the video output size.
  • the processor 120 (e.g., the ISP 320) may perform an operation of outputting image data according to the image processing.
  • operations performed by the processor 120 may be implemented as a recording medium (or computer program product).
  • the recording medium may include a non-transitory computer-readable recording medium on which a program for executing various operations performed by the processor 120 is recorded.
  • Embodiments described in this disclosure may be implemented in a recording medium readable by a computer or similar device using software, hardware, or a combination thereof.
  • the operations described in one embodiment may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and/or other electrical units for performing functions.
  • the recording medium may include a non-transitory computer-readable recording medium on which a program is recorded for executing operations including an image signal processor (ISP) receiving raw data through a camera module while video capture is performed by a user.
  • FIG. 4 is a diagram schematically showing the configuration of an image signal processor according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram illustrating an example of operations of a measurement unit and an alignment unit in an image signal processor according to an embodiment of the present disclosure.
  • FIG. 6 is a diagram illustrating an example of the operation of a preprocessor in an image signal processor according to an embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating an example of a full color image in an image signal processor according to an embodiment of the present disclosure.
  • FIGS. 4 to 7 may show an example of an operation in which an image acquired through the camera module 180 is received by the ISP 320 and artificial intelligence-based image processing is performed.
  • the electronic device 101 may include a camera module 180 (e.g., the camera module 180 of FIG. 1 or FIG. 2), an image signal processor 320 (e.g., the ISP 320 of FIG. 3), and a memory 480 (e.g., the memory 130 of FIG. 1 or FIG. 3, or the memory 250 of FIG. 2).
  • the camera module 180 may include an image sensor 410, a receiver unit 420, and a measurement unit 430.
  • the camera module 180 may include a lens (eg, the lens assembly 210 of FIG. 2), although not shown.
  • ISP 320 may include an alignment unit 440, a preprocessor 450, an AI network 460, and a post-processor 470.
  • the memory 480 may be a memory included in the electronic device 101 (e.g., the memory 130 of FIG. 1 or FIG. 3) and/or a memory included in the camera module 180 (e.g., the memory 250 of FIG. 2).
  • the image sensor 410 may receive raw data through the lens of the camera module 180 (e.g., the lens assembly 210 of FIG. 2).
  • the image sensor 410 may receive CFA data (or a CFA image) according to the color filter array (CFA) pattern (e.g., a Bayer-pattern color filter arrangement) of the image sensor 410.
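  • For reference, CFA data can be unpacked into its color planes with plain slicing. The sketch below assumes an RGGB Bayer layout; other CFA patterns only change the row/column offsets.

```python
import numpy as np

def split_bayer_rggb(cfa: np.ndarray):
    """Split raw CFA data into four color planes (RGGB layout assumed)."""
    r  = cfa[0::2, 0::2]   # red samples
    g1 = cfa[0::2, 1::2]   # green samples on red rows
    g2 = cfa[1::2, 0::2]   # green samples on blue rows
    b  = cfa[1::2, 1::2]   # blue samples
    return r, g1, g2, b
```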
  • the receiver unit 420 may receive raw data from the image sensor 410 in real time. According to one embodiment, the receiver unit 420 may transmit the raw data received in real time to the memory 480. According to one embodiment, the receiver unit 420 may directly transmit raw data to the memory 480 (e.g., direct memory access, DMA) according to the output characteristics of the image sensor 410. For example, the receiver unit 420 may directly transmit (e.g., share) data between the image sensor 410 and the memory 480 without intervention of the CPU (e.g., the CPU 330 of FIG. 3).
  • the measurement unit 430 may receive raw data from the image sensor 410 and extract measurement data based on the received raw data.
  • the measurement data may include motion data and 3A (eg, Auto exposure, Auto white balance, Auto focus) data.
  • the measurement data may include tone correction histogram data to be used in the post-processing unit 470.
  • the measurement unit 430 may operate flexibly according to the CFA of the image sensor 410. According to one embodiment, the measurement unit 430 may perform measurement according to the CFA pattern of the image sensor 410. For example, when generating a luminance image for calculating a histogram, the measurement unit 430 may change the coefficients for combining each input color channel into luminance. According to one embodiment, the measurement unit 430 may perform measurement for each channel, and may perform measurement according to the changed CFA when the CFA of the image sensor 410 changes. According to one embodiment, the measurement unit 430 may determine the movement of the image through at least one frame (e.g., 1 frame or N frames) from the image sensor 410.
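  • A measurement of this kind might look like the following sketch: a luminance image is built from the CFA planes with pattern-dependent coefficients, then reduced to a histogram for tone correction. The coefficient values and the 10-bit range are assumptions for illustration, not values from this disclosure.

```python
import numpy as np

def luminance_from_cfa(cfa: np.ndarray, coeffs=(0.25, 0.5, 0.25)) -> np.ndarray:
    """Combine CFA samples into a half-resolution luminance image;
    the per-channel coefficients are placeholders that would be
    changed to match the sensor's actual CFA pattern."""
    r = cfa[0::2, 0::2].astype(np.float32)
    g = (cfa[0::2, 1::2].astype(np.float32) + cfa[1::2, 0::2]) / 2.0
    b = cfa[1::2, 1::2].astype(np.float32)
    cr, cg, cb = coeffs
    return cr * r + cg * g + cb * b

def luma_histogram(cfa: np.ndarray, bins: int = 256) -> np.ndarray:
    """Histogram of the measured luminance, e.g., as tone-correction
    measurement data for the post-processing unit (10-bit raw assumed)."""
    y = luminance_from_cfa(cfa)
    hist, _ = np.histogram(y, bins=bins, range=(0.0, 1023.0))
    return hist
```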
  • the ISP 320 (e.g., an AI front end) may perform image processing such as demosaicing, shading correction, color correction, noise removal, and sharpness adjustment based on the raw data and measurement data.
  • the ISP 320 may serve as an AI front end that converts raw data captured by the camera module 180 into full color using an artificial intelligence method (e.g., an AI network structure such as UNET, an artificial neural network (ANN), a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), an autoencoder (AE), and/or a generative adversarial network (GAN)).
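  • To make the "AI front end" idea concrete, here is a deliberately tiny UNET-flavored network that maps four packed CFA planes to an RGB image at sensor resolution. It is purely illustrative (PyTorch, arbitrary channel counts), not the network architecture of this disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDemosaicNet(nn.Module):
    """Minimal sketch: packed CFA planes in, full color image out."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(4, 32, 3, padding=1)      # 4 CFA planes in
        self.mid = nn.Conv2d(32, 32, 3, padding=1)
        self.dec = nn.Conv2d(32, 3 * 4, 3, padding=1)  # 3 RGB x (2x2) subpixels
        self.up = nn.PixelShuffle(2)                   # back to sensor size

    def forward(self, cfa_planes):                     # (N, 4, H/2, W/2)
        x = F.relu(self.enc(cfa_planes))
        x = F.relu(self.mid(x))
        return self.up(self.dec(x))                    # (N, 3, H, W)
```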
  • the ISP 320 may include an
  • the alignment unit 440 may acquire measurement data such as image motion and/or brightness from the measurement unit 430 and, using the acquired measurement data, perform motion compensation to fit the N-1 frame of the image to the N frame.
  • the N-1 frame and the N frame may represent raw data (e.g., CFA data (or frames)) sequentially output from the image sensor 410.
  • N frame may represent the currently output frame
  • the N-1 frame may represent the frame immediately preceding the N frame.
  • the operation of the alignment unit 440 is illustrated in FIG. 5 .
  • the alignment unit 440 may receive measurement data (e.g., motion data) from the measurement unit 430 and perform motion compensation to align the N-1 frame 510 with the N frame 520.
  • N-1 frames 510 may represent an example where there is motion
  • N frames 520 may represent an example where there is no motion.
  • motion in each frame may cause distortion (e.g., image distortion) due to read-out timing differences (e.g., differences in reading speed) of the image read-out in the camera module 180 and the characteristics of the image sensor 410 (e.g., a rolling shutter type image sensor).
  • the alignment unit 440 may perform an operation (e.g., a warping operation) on the N-1 frame 510 to match the N-1 frame 510 with the N frame 520, as motion compensation for the distortion.
  • the alignment unit 440 may measure and compensate for motion between images and match them so that at least one image (e.g., a single frame or multiple frames) can be input at a time.
  • the alignment unit 440 can match multiple images and provide them to the AI network 460, thereby helping the AI network 460 increase image resolution and reduce noise during demosaicing.
  • the measurement unit 430 may measure the movement of the image for each frame and provide the measurement to the alignment unit 440 so that the alignment unit 440 can match multiple images.
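  • A minimal sketch of this alignment step, assuming the measurement unit supplies a 2x3 affine motion matrix for the N-1 frame; the function name and the OpenCV-based warp are illustrative assumptions, not the patent's implementation:

```python
import cv2
import numpy as np

def align_previous_frame(prev_frame: np.ndarray,
                         motion_matrix: np.ndarray) -> np.ndarray:
    """Warp the N-1 frame so that it lines up with the N frame."""
    h, w = prev_frame.shape[:2]
    # Apply the measured motion as an affine warp over the full frame.
    return cv2.warpAffine(prev_frame, motion_matrix, (w, h),
                          flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_REPLICATE)
```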
  • the preprocessor 450 may apply per-pixel arithmetic operations to the raw data and then transmit the result to the AI network 460.
  • the preprocessing unit 450 is located in front of the AI network 460 and can optimize the preprocessing operations performed in the existing ISP 320 into a form suitable for the AI network 460.
  • the operation of the preprocessor 450 according to one embodiment is illustrated in FIG. 6.
  • the preprocessor 450 may perform an operation consisting of arithmetic operations for each pixel on the N frame 520 and the N-1 frame 510.
  • the preprocessor 450 may perform the four arithmetic operations based on, for example, pedestal 610, white balance (WB) 620, lens shading correction (LSC) 630, and gamma 640.
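  • A minimal per-pixel sketch of this chain (pedestal, WB, LSC, gamma), assuming scalar and per-pixel gain inputs; the names and values are illustrative, not taken from the patent:

```python
import numpy as np

def preprocess_cfa(raw: np.ndarray, pedestal: float, wb_gains: np.ndarray,
                   lsc_gains: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Pedestal -> white balance -> lens shading correction -> gamma."""
    x = raw.astype(np.float32) - pedestal    # 610: subtract black level
    x = np.clip(x, 0.0, None)
    x *= wb_gains                            # 620: per-pixel WB gain map
    x *= lsc_gains                           # 630: lens shading gain map
    x /= x.max() if x.max() > 0 else 1.0     # normalize to [0, 1]
    return x ** (1.0 / gamma)                # 640: gamma encoding
```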
  • the preprocessor 450 may sequentially input at least two frames (e.g., N-1 frame 510 and N frame 520) to the AI network 460.
  • when at least two frames are sequentially input, the AI network 460 may perform tone correction by rearranging the order of each pixel.
  • the AI network 460 may obtain image quality gains by performing global tone mapping (GTM) 650 and local tone mapping (LTM) 660 on at least two sequentially input frames.
  • the preprocessor 450 may adjust coefficients for each color channel for the luminance image.
  • the coefficient (Y) for generating a luminance image can be expressed as in the example of <Equation 1> below.
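  • As a hedged illustration of the form such an equation may take, the conventional Rec. 601 luma weighting is assumed below; this is an assumption for readability, not the coefficients of the patent:

$$Y = c_R R + c_G G + c_B B, \qquad \text{e.g., } Y = 0.299R + 0.587G + 0.114B$$

where the coefficients $c_R$, $c_G$, and $c_B$ may be changed according to the CFA of the image sensor 410.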
  • ⁇ Equation 1> represents an example of calculating a coefficient for generating a luminance image, but is not limited thereto.
  • the AI network 460 may include an AI accelerator such as a microprocessor (e.g., a microprocessor unit (MPU)). According to one embodiment, the AI network 460 may perform demosaicing to convert the CFA pattern of the image sensor 410 into full color form using a designated artificial intelligence algorithm. According to one embodiment, the AI network 460 may perform an operation of interpolating and filling empty parts of pixels using peripheral pixels (e.g., color-filled pixels) in the raw data, based on a designated artificial intelligence algorithm.
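  • For orientation only, a classical (non-AI) debayering call is sketched below to show the same input/output relationship (a Bayer CFA frame in, a full-color image out); the AI network 460 learns this conversion rather than using a fixed interpolation such as this one:

```python
import cv2
import numpy as np

cfa = (np.random.rand(1080, 1920) * 255).astype(np.uint8)  # mock Bayer frame
rgb = cv2.cvtColor(cfa, cv2.COLOR_BayerBG2BGR)             # 1080 x 1920 x 3
```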
  • the AI network 460 may perform demosaicing (e.g., a function of converting to full color) and, at approximately the same time, scaling of the current video input from the preprocessor 450.
  • the AI network 460 may demosaic the raw data (e.g., a CFA image) of the image sensor 410 processed in the preprocessor 450 into full color and then transmit the result to the post-processor 470.
  • the AI network 460 may output one full-color image from one image, or from N images matched by the measurement unit 430 and the alignment unit 440, using parameters learned through AI (e.g., AI parameters).
  • the parameters may include information learned according to the CFA of the image sensor 410 (e.g., by sensor output size).
  • the AI network 460 may change network layer information, weight parameters, and/or bias parameters to match the changed mode, thereby obtaining the desired output (e.g., video output size).
  • the output of AI network 460 may represent a full-color image in FHD or UHD.
  • the output of AI network 460 according to one embodiment is illustrated in FIG. 7 .
  • the full-color image 701, 703, or 705 of the AI network 460 may be adjusted to the video resolution (e.g., HD, FHD, UHD, or 8K) currently set by the user without a separate scaling process.
  • element 701 may represent an example of a full-color image from the AI network 460 for the output of a 4x4 sensor
  • element 703 may represent an example of a full-color image from the AI network 460 for the output of a 2x2 sensor
  • element 705 may represent an example of a full-color image from the AI network 460 for the output of a Bayer sensor.
  • the output (e.g. video output size) of the AI network 460 may be a 3M full color image.
  • the output of the ISP 320 (e.g., the AI network 460) and its operation are described with reference to the drawings below.
  • the post-processing unit 470 may perform tone correction on the output of the AI network 460 (e.g., demosaic data or full color data). For example, the post-processing unit 470 can adjust the color of the image.
  • the post-processing unit 470 may perform real-time tone correction, such as GTM and/or LTM, on the full color image output from the AI network 460 using the measurement data, as illustrated in FIG. 6.
  • the measurement unit 430 may provide tone-related data such as an image histogram necessary for tone correction of the post-processing unit 470 to the post-processing unit 470.
  • the memory 480 may store the output of the receiver unit 420 and/or the post-processing unit 470.
  • FIG. 8 is a diagram illustrating an example of the output of an image signal processor according to an embodiment of the present disclosure.
  • the full color image output from the AI network 460 of the ISP 320 may be output at the video resolution (e.g., HD, FHD, UHD, or 8K) set by the user and/or according to AI parameters that change automatically based on situation awareness in the camera module 180.
  • the output size of the AI network 460 may change depending on the user's video resolution setting and/or AI parameter change.
  • the AI network 460 may scale the output image according to the image size of various scenarios when shooting (e.g., recording) a video.
  • the output of the AI network 460 of the ISP 320 may be changed in response to the CFA type of the image sensor 410 or the sensor output size that varies depending on the lens.
  • the AI network 460 may use the measurement data of the measurement unit 430 (e.g., exposure value (EV), brightness value (BV), and/or zoom ratio) to change the parameters to suit the situation. For example, the AI network 460 can use the exposure value (EV) or brightness value (BV) to classify the illuminance at which an image is captured, and change the parameters to suit the classified illuminance. According to one embodiment, when the AI network 460 determines that the illuminance is low according to a low exposure value (EV), it can use AI parameters that strongly reduce noise. According to one embodiment, the AI network 460 can automatically switch to AI parameters that enhance details when the scene is determined to be outdoors based on a high exposure value (EV). According to one embodiment, the AI network 460 may process demosaicing based on AI parameters appropriate to the situation and output an image with the corresponding resolution (e.g., FHD, UHD, or 8K) (or video output size).
  • the AI network 460 of the ISP 320 can process demosaicing and scaling at once, to a size that matches the output size of the selected video shooting scenario (e.g., setting), regardless of the input image size.
  • the AI network 460 may be trained, based on artificial intelligence, to output 3M full-color images for FHD settings and 9M full-color images for UHD settings, and may operate using the learned training data.
  • FIG. 9 is a diagram illustrating an example of a network cache of an image signal processor according to an embodiment of the present disclosure.
  • the AI network 460 of the ISP 320 can switch parameters according to sensors of various CFA patterns (e.g., Bayer, 2x2, 3x3, 4x4, or RGBW). For example, in recent years, sensors with various patterns in addition to sensors with Bayer-type patterns have been developed. According to one embodiment, it may be difficult for a general ISP to support all of the various patterns.
  • the ISP 320 according to an embodiment of the present disclosure operates based on artificial intelligence using the AI network 460 and can respond flexibly to various sensor patterns through varied learning.
  • the AI network 460 may include its own memory (e.g., network cache 900) that stores various types of parameters so that they can be quickly and easily switched.
  • network cache 900 may be included within AI network 460 or ISP 320.
  • the network cache 900 may previously cache parameters frequently used when shooting (eg, recording) a video.
  • the AI network 460 may load both first parameters (e.g., parameters for low light) and second parameters (e.g., parameters for outdoor use) into the network cache 900, and then supply them to the AI network 460 at high speed according to the measurement data.
  • the AI network 460 may cache in the network cache 900 various parameters corresponding to (A) video resolution (e.g., FHD, UHD, and 8K), (B) image condition (or 3A stats) (e.g., outdoor, indoor, and low light), and/or (C) CFA pattern (e.g., Bayer, 2x2, 3x3, and 4x4), and may quickly switch to the parameters corresponding to the measurement data when shooting (e.g., recording) a video.
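  • A minimal sketch of this caching scheme, assuming parameter sets keyed by (resolution, condition, CFA) and an exposure-value threshold for classifying illuminance; the keys, threshold, and class names are illustrative assumptions:

```python
from typing import Dict, Tuple

ParamKey = Tuple[str, str, str]              # (resolution, condition, cfa)

class NetworkCache:
    """Pre-caches AI parameter sets so they can be switched quickly."""

    def __init__(self) -> None:
        self._cache: Dict[ParamKey, dict] = {}

    def preload(self, key: ParamKey, params: dict) -> None:
        # Cache parameters frequently used while recording video.
        self._cache[key] = params

    def lookup(self, resolution: str, ev: float, cfa: str) -> dict:
        # Classify illuminance from the exposure value (threshold assumed).
        condition = "lowlight" if ev < 6.0 else "outdoor"
        return self._cache[(resolution, condition, cfa)]

cache = NetworkCache()
cache.preload(("FHD", "lowlight", "Bayer"), {"weights": "...", "bias": "..."})
cache.preload(("FHD", "outdoor", "Bayer"), {"weights": "...", "bias": "..."})
params = cache.lookup("FHD", ev=4.5, cfa="Bayer")  # -> low-light parameters
```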
  • FIG. 10 is a diagram illustrating an example of an operation of an image signal processor according to an embodiment of the present disclosure.
  • FIG. 10 may show an example of the computational area of the AI network 460 during digital zoom when shooting (eg, recording) a video.
  • VDIS: video digital image stabilizer
  • SAT: scene aligned transform
  • VDIS may represent a method of reducing shaking by changing the crop area of the image according to the shaking of the camera module 180.
  • SAT may represent a method of switching seamlessly (without a sense of discontinuity) by changing the crop area of the image when switching between camera modules 180 with different angles of view.
  • both the VDIS method and the SAT method crop the image; to support this, the output of the AI network 460 may be computed over the area (e.g., Image Margin Area 1030) that adds an image margin 1020 to the basic video output resolution (e.g., Output Image Size 1010).
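  • As an assumed numerical illustration (the margin ratio is not specified here): with an FHD output image size 1010 of 1920x1080 and a 10% margin per axis reserved for VDIS/SAT cropping, the image margin area 1030 computed by the AI network 460 would be about 2112x1188 pixels, from which the stabilized 1920x1080 crop is taken.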
  • an example of output calculated based on the area 1030 including the image margin 1020 is shown in FIGS. 11A to 11C.
  • FIGS. 11A, 11B, and 11C are diagrams illustrating an example of a scaling operation in an image signal processor according to an embodiment of the present disclosure.
  • FIGS. 11A, 11B, and 11C may represent an example of a scaling operation in the encoding (e.g., pooling) and decoding (e.g., up-convolution) process of the AI network 460.
  • the AI network 460 may change the output images 1103, 1105, and 1107 to a resolution (e.g., 8K, UHD, or FHD) suitable for the current shooting environment (or situation) based on parameter adjustment.
  • the AI network 460 can adjust the size of the image based on AI parameter changes and determine the output size according to the user scenario.
  • the AI network 460 may adjust the size of the output image based on an encoder-decoder based model (e.g., UNET) such as an autoencoder (AE).
  • the AI network 460 may have a network structure divided into an encoding part (e.g., encoder 1110) and a decoding part (e.g., decoder 1120).
  • the AI network 460 has an encoder 1110 (or contracting path) and a decoder 1120 (or expanding path), which may be symmetrical to each other.
  • the AI network 460 may perform an operation of concatenating features obtained from each layer of the encoding step to each layer of the decoding step.
  • the direct connection between each layer of the encoder 1110 and each layer of the decoder 1120 may be referred to as a skip connection.
  • the AI network 460 may include a U-shaped neural network structure in which the layers are arranged so that skip connections run in parallel and the left and right sides are symmetrical about the center, as illustrated in FIGS. 11A, 11B, and 11C.
  • the part connecting the encoder 1110 and the decoder 1120 is called a bridge 1130.
  • the bottom portion of the U-shaped structure may represent a bridge 1130.
  • a rectangular box in FIGS. 11A, 11B, and 11C may represent a layer block 1140.
  • the vertical direction of the layer block 1140 may represent the spatial dimensions of the feature map, and the horizontal direction may represent the number of channels. For example, if the vertical direction is 256x256 and the horizontal direction is 128, the image of the corresponding layer has a size of 256x256x128. For example, an input image of 512x512x3 represents an image with 3 RGB channels and a spatial size of 512x512.
  • the encoder 1110 may down-scale (or down-sample) each layer and proceed to the next step of the encoder 1110 (e.g., in the direction of the downward arrow).
  • the decoder 1120 may perform up-convolution for each layer and proceed to the next step of the decoder 1120 (e.g., in the direction of the upward arrow).
  • the right portion shown for each layer in FIGS. 11A, 11B, and 11C is an example of the output images 1103, 1105, and 1107 of the AI network 460.
  • the AI network 460 may provide output images 1103, 1105, and 1107 from each layer.
  • the images in the middle of encoding and decoding (e.g., the layer blocks 1140 of the encoder 1110 and the decoder 1120) may not be meaningful as images.
  • the intermediate images may contain important information regarding the detail and/or resolution of the output image. For example, in the AI network 460, the closer a layer is to the input end (e.g., the top layer block of the encoder 1110 in FIG. 11A) or the output end (e.g., the top layer block of the decoder 1120 in FIG. 11A), the more detail and resolution information its image may carry.
  • the AI network 460 may have an output terminal capable of outputting an image for each layer (e.g., supporting image output per layer), and the output location (e.g., layer) may be adaptively changed according to the user's video resolution setting and/or the situation measured in the camera module 180.
  • the AI network 460 may use the first layer in FIG. 11A, the second layer in FIG. 11B, or the second layer in FIG. 11C, based on parameters related to the input image (e.g., the CFA 1101 of the image sensor 410), to output the corresponding video (e.g., the 8K video 1103, FHD video 1105, or UHD video 1107).
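  • A minimal U-shaped sketch with skip connections and one output head per decoder depth, showing how a full-color image could be read out at different layers; the channel counts, depth, and 4-channel packed-CFA input are illustrative assumptions, not the patent's network:

```python
import torch
import torch.nn as nn

def block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.enc1 = block(4, 32)                    # encoder (contracting path)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = block(32, 64)
        self.bridge = block(64, 64)                 # bridge at the bottom
        self.up = nn.ConvTranspose2d(64, 64, 2, stride=2)  # up-convolution
        self.dec1 = block(64 + 32, 32)              # skip-connection concat
        self.head_lo = nn.Conv2d(64, 3, 1)          # low-res full-color head
        self.head_hi = nn.Conv2d(32, 3, 1)          # high-res full-color head

    def forward(self, x: torch.Tensor, scale: str = "hi") -> torch.Tensor:
        e1 = self.enc1(x)                           # H x W features
        b = self.bridge(self.enc2(self.pool(e1)))   # H/2 x W/2 features
        if scale == "lo":
            return self.head_lo(b)                  # read out a deeper layer
        d1 = self.dec1(torch.cat([self.up(b), e1], dim=1))
        return self.head_hi(d1)                     # read out the top layer
```

  • For example, under these assumptions, TinyUNet()(torch.rand(1, 4, 64, 64), scale="lo") yields a 3-channel image at half the input size, while scale="hi" yields one at the full input size.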
  • the AI network 460 can be trained with different input/output sizes, and by generating and learning from reference files resized with a complex (high-quality) resizing algorithm, the resolution of the AI network 460 can be improved while computation and image downscaling are handled at once.
  • a full-color image can be generated by changing the parameters of one fixed network (e.g., AI network 460) and resizing can be performed at the same time, thereby reducing current consumption and computation amount.
  • the AI network 460 can be used even for high-complexity video shooting (e.g., recording) by maximizing image quality through AI computation while minimizing power consumption by reducing the amount of computation.
  • the AI network 460 performs demosaicing and scaling at the same time based on learned training data, thereby reducing complexity while maintaining image quality performance and reducing current consumption. According to one embodiment, as the computation on the video is performed based on the AI network 460, no additional scaling is needed in the computation of the ISP 320 that processes the video, so the computation steps can be simplified.
  • the ISP 320 can be applied to various CFA patterns (e.g., Bayer, 2x2, 4x4, RGBW, and/or CMYK) of various image sensors 410 through learning of the AI network 460, without hardware changes.
  • the electronic device 101 may include a display (e.g., the display module 160 of FIG. 1 or 3), a camera (e.g., the camera module 180 of FIGS. 1 to 3), a processor operatively connected to the display and the camera (e.g., the processor 120 of FIG. 1 or 3), and an image signal processor (ISP) included in the processor (e.g., the ISP 320 of FIG. 3 or 4).
  • the image signal processor 320 may operate to receive, through the camera, input data (e.g., raw data) corresponding to the sensor output size of the sensor while the user performs video shooting. According to one embodiment, the image signal processor 320 may operate to determine parameters corresponding to the settings or settings change under which the video is captured. According to one embodiment, the image signal processor 320 may operate to perform image processing that converts the input data into a full color image based on the parameters and varies the output size of the full color image to match the video output size. According to one embodiment, the image signal processor 320 may operate to output image data according to the image processing.
  • the camera 180 may include an image sensor (e.g., the image sensor 410 of FIG. 4) configured to receive the input data through the lens of the camera, and a measurement unit (e.g., the measurement unit 430 of FIG. 4) configured to receive the input data from the image sensor and extract measurement data based on the received input data.
  • the measurement unit 430 may operate to perform measurement according to a color filter array (CFA) pattern of the image sensor and to determine the movement of the image through at least one frame from the image sensor.
  • the image signal processor 320 may include an alignment unit (e.g., the alignment unit 440 of FIG. 4) configured to match the input data from the image sensor with the measurement data from the measurement unit; a preprocessor (e.g., the preprocessor 450 of FIG. 4) that processes per-pixel arithmetic operations on the input data; an AI network (e.g., the AI network 460 of FIG. 4) that converts the input data into the full color image based on artificial intelligence (AI) while the video is being captured and processes scaling (e.g., variable output size) corresponding to the video output size; and a post-processing unit (e.g., the post-processing unit 470 of FIG. 4) that performs real-time tone correction (tone mapping) on the full color image output from the AI network using the measurement data.
  • the image signal processor 320 may operate to detect, while the video is being captured, a change in the video mode currently set for video capture or the video mode set during video capture, and to determine parameters learned in the AI network in response to the detected result.
  • the image signal processor 320 may operate to detect a video mode automatically changed by the camera while the video is being captured, and to determine parameters learned in the AI network in response to the detected result.
  • the image signal processor 320 may operate to determine the parameter based on at least one of the CFA pattern of the image sensor, lens characteristics, measurement data of the measurement unit, sensor output size, and/or video output size.
  • the parameters can be changed in real time.
  • the image signal processor 320 may operate to change parameters for the AI network to parameters with different input/output sizes in real time in response to the settings or changes to the settings.
  • the image signal processor 320 may operate to process demosaicing, which converts the input data into a full-color image based on the parameters determined through the AI network, and scaling, which varies the output size of the full-color image to match the current video output size.
  • the parameters may include at least one of network layer information, weight parameters, and/or bias parameters.
  • An image processing device for an electronic device 101 may include a camera (e.g., the camera module 180 of FIGS. 1 to 3), an image signal processor (ISP) (e.g., the ISP 320 of FIG. 3 or 4) operatively connected to the camera, and a memory (e.g., the memory 130 of FIG. 1 or 3, or the memory 480 of FIG. 4) operatively connected to the image signal processor.
  • the memories 130 and 480 may store instructions that, when executed, cause the image signal processor 320 to receive, through the camera, input data corresponding to the sensor output size of the sensor while the user performs video capture.
  • the memories 130 and 480 may store instructions that, when executed, allow the image signal processor 320 to determine parameters corresponding to the settings or settings change under which the video is captured.
  • the memories 130 and 480 may store instructions that, when executed, cause the image signal processor 320 to perform image processing that converts the input data into a full color image based on the parameters and varies the output size of the full color image to match the video output size.
  • the memories 130 and 480 may store instructions that, when executed, cause the image signal processor 320 to output image data according to image processing.
  • Operations performed in the electronic device 101 may be executed by a processor 120 (e.g., the ISP 320) including various processing circuitry and/or executable program elements of the electronic device 101. According to one embodiment, the operations performed by the electronic device 101 may be executed by instructions that are stored in the memory 130 and, when executed, cause the processor 120 (e.g., the ISP 320) to operate.
  • FIG. 12 is a flowchart illustrating a method of operating an electronic device according to an embodiment of the present disclosure.
  • FIG. 12 may show an example of supporting image processing of a video using an AI method when capturing (e.g., recording) a video in the electronic device 101 according to an embodiment.
  • an image processing method for a video may be performed, for example, according to the flowchart shown in FIG. 12.
  • the flowchart shown in FIG. 12 is merely a flowchart according to one embodiment of image processing of the electronic device 101; the order of at least some operations may be changed, some operations may be performed in parallel or as independent operations, or other operations may be performed complementarily to at least some operations.
  • operations 1201 to 1211 may be performed by at least one processor 120 (e.g., the ISP 320) of the electronic device 101.
  • the operation described in FIG. 12 may be performed, for example, in combination with the operations described in FIGS. 3 to 11, or as a detailed operation of some of the described operations.
  • an operation method performed by the electronic device 101 may include an operation 1201 of performing video capture, an operation 1203 of receiving raw data through the camera module 180 while shooting the video, an operation 1205 of identifying the settings or settings change under which the video is captured, an operation 1207 of determining parameters corresponding to the settings or settings change, an operation 1209 of performing image processing on the raw data based on the parameters, and an operation 1211 of outputting image data according to the image processing.
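  • A hedged pseudocode sketch of this flow; all object and method names are illustrative placeholders rather than an API defined by the patent:

```python
def record_video(camera, ai_network, display, memory):
    """Hedged pseudocode for the flow of FIG. 12; names are placeholders."""
    camera.start_capture()                               # operation 1201
    while camera.is_recording():
        raw = camera.read_raw_frame()                    # operation 1203
        settings = camera.current_video_settings()       # operation 1205
        params = ai_network.select_parameters(settings)  # operation 1207
        frame = ai_network.process(raw, params)          # operation 1209:
                                                         # demosaic + scaling
        memory.write(frame)                              # operation 1211
        display.show(frame)                              # operation 1211
```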
  • the processor 120 (eg, ISP 320) of the electronic device 101 may perform an operation of capturing a video.
  • the processor 120 may execute an application related to video shooting based on user input.
  • the processor 120 may execute (or activate) the camera module 180 based on application execution.
  • the processor 120 may control the display module 160 to display an image (eg, a preview image) acquired through the camera module 180.
  • the processor 120 may start video capture based on detecting a user input that performs video capture (eg, recording).
  • the processor 120 may receive raw data through the camera module 180 while capturing a video.
  • the ISP 320 of the processor 120 may receive unprocessed raw data transmitted from the camera module 180 (e.g., the image sensor 410) while performing video shooting.
  • the image sensor 410 may acquire CFA data (or a CFA image) according to the CFA pattern of the image sensor 410 and transmit the acquired CFA data to the ISP 320 (e.g., to the alignment unit 440 of the ISP 320).
  • the measurement unit 430 may extract measurement data based on raw data and transmit the measurement data to the ISP 320.
  • the measurement data may include motion data and 3A (e.g., auto exposure, auto white balance, auto focus) data.
  • the measurement unit 430 may determine the movement of the image based on one or several frames (e.g., N frames) input from the image sensor 410.
  • the ISP 320 may acquire raw data of the image sensor 410 and measurement data of the measurement unit 430, respectively.
  • the processor 120 may perform an operation to identify settings in which a video is captured or settings change.
  • the ISP 320 of the processor 120 may detect a change in the video mode (e.g., a video resolution such as HD, FHD, UHD, or 8K) currently set for video capture, or in the video mode set during video capture.
  • the ISP 320 may detect a video mode automatically changed by the camera module 180.
  • the processor 120 may perform an operation of determining a parameter corresponding to a setting or setting change.
  • the ISP 320 of the processor 120 may determine the parameter based on at least one of the CFA pattern of the image sensor 410, lens characteristics (e.g., micro lens), measurement data (e.g., motion data) of the measurement unit 430, sensor output size, and/or video output size.
  • the ISP 320 may switch parameters to be used in the AI network 460 of the ISP 320 in real time in response to setting or changing settings.
  • the ISP 320 can change the parameters for the AI network 460 to parameters with different input/output sizes in real time based on the measurement data of the measurement unit 430.
  • the ISP 320 may change to parameters with different input/output sizes in real time in response to the user's change of video mode while shooting a video.
  • the processor 120 may perform image processing on the raw data based on the parameters.
  • the ISP 320 of the processor 120 may simultaneously process demosaicing (e.g., full color image) and resizing of raw data through the AI network 460 based on parameters.
  • the ISP 320 may demosaic the raw data (e.g., a CFA image) into full color through the AI network 460 and, using learned parameters (e.g., AI parameters) corresponding to the settings or settings change, output one full-color image from at least one image (e.g., one image or multiple frames) matched by the measurement unit 430 and the alignment unit 440.
  • the ISP 320 may change network layer information, weight parameters, and/or bias parameters through the AI network 460 to match the settings or settings change, and may resize and output the image accordingly (e.g., to the video output size).
  • the processor 120 may perform an operation of outputting image data (eg, output data) according to image processing.
  • the ISP 320 of the processor 120 may output image data to the memory 130 (eg, the memory 480 of FIG. 4) and/or the display module 160.
  • An operation method performed by the electronic device 101 may include an operation in which an image signal processor (ISP) (e.g., the ISP 320 of FIG. 3 or 4) receives, through a camera (e.g., the camera module 180 of FIGS. 1 to 3), input data (e.g., raw data) corresponding to the sensor output size of the sensor while a user is capturing a video.
  • the operating method may include determining a parameter corresponding to a setting or setting change in which the video is captured.
  • the operating method may include an operation of performing image processing that converts the input data into a full color image based on the parameters and varies the output size of the full color image to match the video output size.
  • the operating method may include outputting image data according to image processing.
  • the operation of receiving the input data may include an operation of receiving input data from an image sensor of the camera and an operation of receiving measurement data extracted by a measurement unit of the camera based on the input data.
  • the operation of performing the image processing may include an operation of matching input data of the image sensor and measurement data of the measurement unit by the image signal processor (eg, alignment unit).
  • the operation of performing the image processing may include processing an arithmetic operation for each pixel of the input data by the image signal processor (eg, a preprocessor).
  • the operation of performing the image processing may include an operation of converting, by the image signal processor (e.g., the AI network), the input data into the full color image based on artificial intelligence (AI) while the video is being captured, and scaling it to correspond to the video output size.
  • the operation of performing the image processing may include an operation of performing, by the image signal processor (e.g., the post-processing unit), real-time tone correction (tone mapping) on the full color image output from the AI network using the measurement data.
  • the operation of determining the parameter may include an operation of detecting, while the video is being captured, a change in the video mode currently set for video capture or the video mode set during video capture, and an operation of determining parameters learned in the AI network in response to the detected result.
  • the operation of determining the parameter may include an operation of detecting a video mode automatically changed by the camera while shooting the video, and an operation of determining parameters learned in the AI network in response to the detected result.
  • the operation of determining the parameter may include an operation of determining the parameter based on at least one of the CFA pattern of the image sensor, lens characteristics, measurement data of the measurement unit, sensor output size, and/or video output size.
  • the operation of determining the parameter may include an operation of changing the parameter for the AI network to a parameter with a different input/output size in real time in response to the setting or change in the setting.
  • the operation of performing the image processing may include an operation of processing demosaicing, which converts the input data into a full color image based on the parameters determined through the AI network, and an operation of processing scaling, which varies the output size of the full color image to match the current video output size.
  • the parameters can be changed in real time.
  • the parameters may include at least one of network layer information, weight parameters, and/or bias parameters.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

One embodiment of the present disclosure relates to an image processing method and an electronic device configured to support the method, the method enabling image processing of a video to be supported by means of artificial intelligence when capturing (or recording) the video. The electronic device may comprise a display, a camera, a processor, and an image signal processor included in the processor. The image signal processor may perform operations of: receiving, via the camera, input data (e.g., raw data) corresponding to a sensor output size while a user captures a video; determining parameters corresponding to the settings under which the video is captured or to a change in the settings; performing, based on the parameters, image processing for converting the input data into a full color image; varying the output size of the full color image so as to correspond to a video output size; and outputting image data according to the image processing.
PCT/KR2023/015018 2022-10-04 2023-09-27 Procédé de traitement d'images sur la base de l'intelligence artificielle et dispositif électronique conçu pour prendre en charge le procédé WO2024076101A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20220126608 2022-10-04
KR10-2022-0126608 2022-10-04
KR10-2022-0184871 2022-12-26
KR1020220184871A KR20240047283A (ko) 2022-10-04 2022-12-26 인공지능 기반의 영상 처리 방법 및 이를 지원하는 전자 장치

Publications (1)

Publication Number Publication Date
WO2024076101A1 true WO2024076101A1 (fr) 2024-04-11

Family

ID=90608702

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/015018 WO2024076101A1 (fr) 2022-10-04 2023-09-27 Procédé de traitement d'images sur la base de l'intelligence artificielle et dispositif électronique conçu pour prendre en charge le procédé

Country Status (1)

Country Link
WO (1) WO2024076101A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130116796A (ko) * 2012-04-16 2013-10-24 삼성전자주식회사 카메라의 이미지 처리 장치 및 방법
KR20190021756A (ko) * 2017-08-23 2019-03-06 엘지디스플레이 주식회사 영상 처리 방법 및 이를 이용한 표시 장치
KR20200143815A (ko) * 2019-06-17 2020-12-28 주식회사 피앤오 인공지능 카메라 시스템, 인공지능 카메라 시스템에서의 영상 변환 방법, 및 컴퓨터 판독 가능 매체
KR20210139450A (ko) * 2019-03-25 2021-11-22 후아웨이 테크놀러지 컴퍼니 리미티드 이미지 디스플레이 방법 및 디바이스
KR20220127642A (ko) * 2021-03-11 2022-09-20 삼성전자주식회사 전자 장치 및 그 제어 방법


Similar Documents

Publication Publication Date Title
WO2021086040A1 (fr) Procédé pour la fourniture de prévisualisation et dispositif électronique pour l'affichage de prévisualisation
WO2022039424A1 (fr) Procédé de stabilisation d'images et dispositif électronique associé
WO2021133025A1 (fr) Dispositif électronique comprenant un capteur d'image et son procédé de fonctionnement
WO2019156428A1 (fr) Dispositif électronique et procédé de correction d'images à l'aide d'un dispositif électronique externe
WO2022102972A1 (fr) Dispositif électronique comprenant un capteur d'image et son procédé de fonctionnement
WO2022108235A1 (fr) Procédé, appareil et support de stockage pour obtenir un obturateur lent
WO2022030838A1 (fr) Dispositif électronique et procédé de commande d'image de prévisualisation
WO2022149654A1 (fr) Dispositif électronique pour réaliser une stabilisation d'image, et son procédé de fonctionnement
WO2022196993A1 (fr) Dispositif électronique et procédé de capture d'image au moyen d'un angle de vue d'un module d'appareil de prise de vues
WO2022092706A1 (fr) Procédé de prise de photographie à l'aide d'une pluralité de caméras, et dispositif associé
WO2024076101A1 (fr) Procédé de traitement d'images sur la base de l'intelligence artificielle et dispositif électronique conçu pour prendre en charge le procédé
WO2021261737A1 (fr) Dispositif électronique comprenant un capteur d'image, et procédé de commande de celui-ci
WO2021215795A1 (fr) Filtre couleur pour dispositif électronique, et dispositif électronique le comportant
WO2021162263A1 (fr) Procédé de génération d'image et dispositif électronique associé
WO2024106746A1 (fr) Dispositif électronique et procédé d'augmentation de la résolution d'une image de bokeh numérique
WO2024085673A1 (fr) Dispositif électronique pour obtenir de multiples images d'exposition et son procédé de fonctionnement
WO2024085501A1 (fr) Procédé d'amélioration de qualité d'image basé sur l'apprentissage utilisant une image provenant d'un capteur d'image, et dispositif électronique prenant en charge celui-ci
WO2022240186A1 (fr) Procédé de correction de distorsion d'image et dispositif électronique associé
WO2022231270A1 (fr) Dispositif électronique et son procédé de traitement d'image
WO2024085487A1 (fr) Dispositif électronique, procédé et support de stockage lisible par ordinateur non transitoire destinés au changement de réglage de caméra
WO2024111924A1 (fr) Procédé de fourniture d'image et dispositif électronique le prenant en charge
WO2022092607A1 (fr) Dispositif électronique comportant un capteur d'image et procédé de fonctionnement de celui-ci
WO2024080767A1 (fr) Dispositif électronique d'acquisition d'image à l'aide d'une caméra, et son procédé de fonctionnement
WO2022025574A1 (fr) Dispositif électronique comprenant un capteur d'image et un processeur de signal d'image, et son procédé
WO2023033396A1 (fr) Dispositif électronique pour traiter une entrée de prise de vue continue, et son procédé de fonctionnement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23875165

Country of ref document: EP

Kind code of ref document: A1