WO2024085501A1 - Learning-based image quality improvement method using an image from an image sensor, and electronic device supporting same - Google Patents

Learning-based image quality improvement method using an image from an image sensor, and electronic device supporting same

Info

Publication number
WO2024085501A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
zoom
data
processor
learning
Prior art date
Application number
PCT/KR2023/015023
Other languages
English (en)
Korean (ko)
Inventor
한대중
한창수
박재형
Original Assignee
삼성전자 주식회사 (Samsung Electronics Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020220185438A (published as KR20240054134A)
Application filed by 삼성전자 주식회사
Publication of WO2024085501A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/69 - Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • H04N 23/84 - Camera processing pipelines; Components thereof for processing colour signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/46 - Extracting pixel data from image sensors by controlling scanning circuits, by combining or binning pixels

Definitions

  • Embodiments of the present disclosure provide a method for supporting learning-based image quality improvement using images from an image sensor, and an electronic device supporting the same.
  • an electronic device may use a plurality of cameras (e.g., a main camera, a wide-angle camera, and a zoom camera) to provide an experience comparable to that of a digital camera with interchangeable lenses (e.g., a digital single-lens reflex (DSLR) camera).
  • an electronic device may include a zoom camera having a fixed zoom magnification (e.g., a magnification different from that of an approximately x1 main camera), so that a user can easily take zoomed photographs.
  • an electronic device may employ a plurality of zoom cameras to provide a high-magnification zoom function, thereby improving camera zoom performance and providing high-quality images.
  • an electronic device may provide a zoom function through image processing.
  • a region of interest (ROI) of the image corresponding to the desired zoom ratio is cropped and then image-processed (e.g., upscaled), so that an image equivalent to a zoomed image can be obtained.
  • This zoom function can be called digital crop zoom.
  • images acquired using a digital crop-based zoom function may experience image quality deterioration due to interpolation between pixels.
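As a minimal illustration of the digital crop zoom just described (not the patented pipeline; the file name and zoom factor are placeholders), assuming an OpenCV/NumPy environment:

```python
import cv2

def digital_crop_zoom(image, zoom: float):
    """Crop the center ROI for the requested zoom factor, then upscale
    it back to the original resolution with bicubic interpolation."""
    h, w = image.shape[:2]
    crop_w, crop_h = int(w / zoom), int(h / zoom)  # ROI shrinks as zoom grows
    x0, y0 = (w - crop_w) // 2, (h - crop_h) // 2
    roi = image[y0:y0 + crop_h, x0:x0 + crop_w]
    # The upscale estimates missing pixel values by interpolation,
    # which is the source of the quality deterioration noted above.
    return cv2.resize(roi, (w, h), interpolation=cv2.INTER_CUBIC)

zoomed = digital_crop_zoom(cv2.imread("scene.jpg"), zoom=3.0)
```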
  • an electronic device can further improve zoom performance and provide higher quality images by applying a hybrid zoom structure (e.g., an optical-digital crop zoom structure).
  • a camera in an electronic device may have a fixed-magnification optical lens design and cannot physically acquire zoom images at magnifications other than the designed one. Accordingly, the electronic device can acquire an image at the fixed magnification or higher by digitally cropping an image acquired through the camera (e.g., a zoom camera).
  • the electronic device can provide a high-magnification zoom function through a hybrid zoom structure.
  • users' needs for higher-magnification zoom performance and improved image quality have increased, and various research on image processing technology to meet these needs is in progress.
  • a method for supporting learning-based image quality improvement using images of an image sensor and an electronic device supporting the same are provided.
  • a method is provided for improving the image quality of a deep-learning-based zoom image using low-resolution and high-resolution images of an image sensor (e.g., a multi-pixel sensor (MPS)).
  • the binning output and remosaic output of a multi-pixel sensor (MPS) in a hybrid zoom structure (e.g., an optical-digital crop zoom structure) may be used for super-resolution (SR) learning data (see the sketch below).
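For intuition, a NumPy sketch of these two read-out modes, assuming a Quad-Bayer (tetra-cell) layout in which each 2x2 group of pixels shares one color; real MPS read-out happens in sensor hardware, so this is illustrative only:

```python
import numpy as np

def binning_2x2(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 same-color group: a low-resolution,
    high-sensitivity Bayer image at 1/4 the pixel count."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def remosaic_quad_to_bayer(raw: np.ndarray) -> np.ndarray:
    """Shuffle a Quad-Bayer mosaic into a standard Bayer mosaic,
    keeping the full sensor resolution (simplified pixel remapping)."""
    h, w = raw.shape
    out = np.empty_like(raw)
    t = raw.reshape(h // 4, 4, w // 4, 4)   # view as 4x4 color tiles
    o = out.reshape(h // 4, 4, w // 4, 4)
    for i in (0, 1):
        for j in (0, 1):
            o[:, 2 * i, :, 2 * j] = t[:, i, :, j]                   # G (top-left block)
            o[:, 2 * i, :, 2 * j + 1] = t[:, i, :, j + 2]           # R block
            o[:, 2 * i + 1, :, 2 * j] = t[:, i + 2, :, j]           # B block
            o[:, 2 * i + 1, :, 2 * j + 1] = t[:, i + 2, :, j + 2]   # G (bottom-right block)
    return out
```

The binning output (low resolution) and the remosaic output (full resolution) form a natural low/high-quality image pair from the same sensor, which is what makes them usable as SR learning material.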
  • An electronic device may include a display, a communication circuit, a camera module including a plurality of cameras, and at least one processor operatively connected to the display, the communication circuit, and the camera module.
  • the at least one processor may operate to acquire image data through the camera module.
  • the at least one processor may operate to perform binning processing based on the image data and transmit the image data to a server through the communication circuit.
  • the at least one processor may operate to obtain a binning image based on the binning process.
  • the at least one processor according to an embodiment may operate to obtain training data based on the binning image.
  • the at least one processor may operate to obtain ground truth data generated based on the image data from the server.
  • the at least one processor according to an embodiment may operate to perform learning for the zoom magnification based on the training data and the ground truth data.
  • the at least one processor according to an embodiment may operate to generate and map a parameter corresponding to the zoom magnification.
  • a method of operating an electronic device may include an operation of acquiring image data through a camera module of the electronic device.
  • the operating method may include an operation of performing binning processing based on the image data and transmitting the image data to a server through a communication circuit.
  • the operating method may include an operation of acquiring a binning image based on the binning processing.
  • the operating method may include an operation of obtaining training data based on the binning image.
  • the operating method may include an operation of acquiring ground truth data, generated based on the image data, from the server.
  • the operating method may include an operation of performing learning for the zoom magnification based on the training data and the ground truth data.
  • the operating method may include an operation of generating and mapping a parameter corresponding to the zoom magnification.
  • various embodiments of the present disclosure may include a computer-readable recording medium on which a program for executing the method on a processor is recorded.
  • a non-transitory computer-readable storage medium (or computer program product) storing one or more programs may be provided.
  • the one or more programs, when executed by a processor of an electronic device, may include instructions for: acquiring image data through a camera module of the electronic device; performing binning processing based on the image data; transmitting the image data to a server through a communication circuit; acquiring a binning image based on the binning processing; obtaining training data based on the binning image; acquiring ground truth data generated based on the image data from the server; performing learning for a zoom magnification based on the training data and the ground truth data; and generating and mapping a parameter corresponding to the zoom magnification.
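Putting the claimed operations in order, a hypothetical sketch of the device-side flow; every helper here (capture, upload, fetch, train) is an illustrative placeholder supplied by the caller, not an actual API, and `binning_2x2` is the toy function from the earlier sketch:

```python
from typing import Any, Callable, Dict, List
import numpy as np

def build_zoom_parameters(
    capture_raw: Callable[[float], np.ndarray],       # image data from the camera module
    upload: Callable[[np.ndarray], None],             # send the image data to the server
    fetch_ground_truth: Callable[[], np.ndarray],     # ground truth generated server-side
    train: Callable[[np.ndarray, np.ndarray], Any],   # learning for one zoom magnification
    zoom_factors: List[float],
) -> Dict[float, Any]:
    """For each zoom magnification: bin the raw image (training input),
    obtain server-generated ground truth, learn, and map the learned
    parameters to that zoom factor."""
    params: Dict[float, Any] = {}
    for zoom in zoom_factors:
        raw = capture_raw(zoom)
        upload(raw)                        # server builds the ground truth from the full raw
        binned = binning_2x2(raw)          # binning image -> training data
        params[zoom] = train(binned, fetch_ground_truth())
    return params
```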
  • a zoom function applicable to both high-specification and general-specification electronic devices and a method for improving image quality of a zoomed image can be provided.
  • high resolution can be implemented through resolution adjustment based on pre-learned training data.
  • the electronic device may perform digital cropping on a zoom image acquired through a camera (e.g., a zoom camera) and use training data corresponding to the zoom image to produce an image at a magnification higher than the fixed magnification of the zoom camera, thereby improving the image quality of the zoom camera.
  • the image quality of zoomed images can be improved by using low-resolution and high-resolution images of a multi-pixel sensor (MPS), meeting the user's needs for high-magnification zoom performance and image quality in scenarios where the zoom function is used.
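The text above does not tie the learning to one particular network, so to make it concrete, here is a minimal SRCNN-style sketch in PyTorch; the architecture, hyperparameters, and random stand-in tensors are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """Minimal SRCNN-style network mapping a low-resolution (binning)
    image toward its high-resolution (ground truth) counterpart."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# One illustrative training step with random stand-in tensors.
model, loss_fn = TinySR(), nn.L1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
low_res = torch.rand(4, 3, 128, 128)       # stand-in for binning images
ground_truth = torch.rand(4, 3, 128, 128)  # stand-in for server ground truth
opt.zero_grad()
loss = loss_fn(model(low_res), ground_truth)
loss.backward()
opt.step()
```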
  • FIG. 1 is a block diagram of an electronic device in a network environment according to various embodiments.
  • FIG. 2 is a block diagram illustrating a camera module according to various embodiments.
  • FIG. 3 is a diagram schematically showing the configuration of an electronic device according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram illustrating an example of a hybrid zoom scenario in an electronic device according to an embodiment.
  • FIG. 5 is a diagram illustrating an example of a hybrid zoom scenario in an electronic device according to an embodiment.
  • Figure 6 is a diagram for explaining a learning system for learning data and an example of its operation according to an embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating an example of a binning operation of a multi-pixel sensor according to an embodiment of the present disclosure.
  • FIG. 8 is a diagram illustrating an example of a remosaic operation of a multi-pixel sensor according to an embodiment of the present disclosure.
  • Figure 9 may show an example of a multi-pixel sensor-based deep-learning zoom scenario according to an embodiment of the present disclosure.
  • Figure 10 may show an example of a multi-pixel sensor-based deep-learning zoom scenario according to an embodiment of the present disclosure.
  • FIG. 11 is a diagram for explaining an example of an operation of an electronic device according to an embodiment of the present disclosure.
  • FIG. 12 is a flowchart illustrating a method of operating an electronic device according to an embodiment of the present disclosure.
  • FIG. 13 is a flowchart illustrating a method of operating an electronic device according to an embodiment of the present disclosure.
  • FIG. 1 is a block diagram of an electronic device 101 in a network environment 100 according to various embodiments.
  • the electronic device 101 may communicate with the electronic device 102 through a first network 198 (e.g., a short-range wireless communication network), or with at least one of the electronic device 104 or the server 108 through a second network 199 (e.g., a long-distance wireless communication network). According to one embodiment, the electronic device 101 may communicate with the electronic device 104 through the server 108.
  • the electronic device 101 may include a processor 120, a memory 130, an input module 150, a sound output module 155, a display module 160, an audio module 170, a sensor module 176, an interface 177, a connection terminal 178, a haptic module 179, a camera module 180, a power management module 188, a battery 189, a communication module 190, a subscriber identification module 196, or an antenna module 197.
  • in some embodiments, at least one of these components (e.g., the connection terminal 178) may be omitted, or one or more other components may be added to the electronic device 101.
  • in some embodiments, some of these components (e.g., the sensor module 176, the camera module 180, or the antenna module 197) may be integrated into one component (e.g., the display module 160).
  • the processor 120 may, for example, execute software (e.g., the program 140) to control at least one other component (e.g., a hardware or software component) of the electronic device 101 connected to the processor 120, and may perform various data processing or computations. According to one embodiment, as at least part of the data processing or computation, the processor 120 may store commands or data received from another component (e.g., the sensor module 176 or the communication module 190) in a volatile memory 132, process the commands or data stored in the volatile memory 132, and store the resulting data in a non-volatile memory 134.
  • the processor 120 may include a main processor 121 (e.g., a central processing unit (CPU) or an application processor (AP)), or an auxiliary processor 123 (e.g., a graphics processing unit (GPU), a neural processing unit (NPU), an image signal processor (ISP), a sensor hub processor, or a communication processor) that can operate independently of, or together with, the main processor 121.
  • the auxiliary processor 123 may be configured to use lower power than the main processor 121 or to be specialized for a designated function, and may be implemented separately from the main processor 121 or as part of it.
  • the auxiliary processor 123 may, for example, control at least some of the functions or states related to at least one of the components of the electronic device 101 (e.g., the display module 160, the sensor module 176, or the communication module 190), on behalf of the main processor 121 while the main processor 121 is in an inactive (e.g., sleep) state, or together with the main processor 121 while it is in an active (e.g., application execution) state.
  • according to one embodiment, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as part of another functionally related component (e.g., the camera module 180 or the communication module 190).
  • the auxiliary processor 123 may include a hardware structure specialized for processing artificial intelligence models.
  • Artificial intelligence models may be created through machine learning. For example, such learning may be performed in the electronic device 101 itself, in which the artificial intelligence model operates, or may be performed through a separate server (e.g., the server 108).
  • Learning algorithms may include, for example, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but are not limited thereto.
  • An artificial intelligence model may include multiple artificial neural network layers.
  • Artificial neural networks may include a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), a deep Q-network, or a combination of two or more of the above, but are not limited to the examples described above.
  • artificial intelligence models may additionally or alternatively include software structures.
  • the memory 130 may store various data used by at least one component (eg, the processor 120 or the sensor module 176) of the electronic device 101. Data may include, for example, input data or output data for software (e.g., program 140) and instructions related thereto.
  • Memory 130 may include volatile memory 132 or non-volatile memory 134.
  • the program 140 may be stored as software in the memory 130 and may include, for example, an operating system (OS) 142, middleware 144, or an application 146.
  • the input module 150 may receive commands or data to be used in a component of the electronic device 101 (e.g., the processor 120) from outside the electronic device 101 (e.g., a user).
  • the input module 150 may include, for example, a microphone, mouse, keyboard, keys (eg, buttons), or digital pen (eg, stylus pen).
  • the sound output module 155 may output sound signals to the outside of the electronic device 101.
  • the sound output module 155 may include, for example, a speaker or a receiver. Speakers can be used for general purposes such as multimedia playback or recording playback.
  • the receiver can be used to receive incoming calls. According to one embodiment, the receiver may be implemented separately from the speaker or as part of it.
  • the display module 160 can visually provide information to the outside of the electronic device 101 (eg, a user).
  • the display module 160 may include, for example, a display, a hologram device, or a projector, and a control circuit for controlling the device.
  • the display module 160 may include a touch sensor configured to detect a touch, or a pressure sensor configured to measure the intensity of force generated by the touch.
  • the audio module 170 may convert sound into an electrical signal or, conversely, convert an electrical signal into sound. According to one embodiment, the audio module 170 may acquire sound through the input module 150, or output sound through the sound output module 155 or an external electronic device (e.g., the electronic device 102, such as a speaker or headphones) directly or wirelessly connected to the electronic device 101.
  • the sensor module 176 may detect an operating state (e.g., power or temperature) of the electronic device 101 or an external environmental state (e.g., a user state), and generate an electrical signal or data value corresponding to the detected state.
  • the sensor module 176 may include, for example, a gesture sensor, a gyro sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or a light sensor.
  • the interface 177 may support one or more designated protocols that can be used to connect the electronic device 101 directly or wirelessly with an external electronic device (eg, the electronic device 102).
  • the interface 177 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface.
  • connection terminal 178 may include a connector through which the electronic device 101 can be physically connected to an external electronic device (eg, the electronic device 102).
  • the connection terminal 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (eg, a headphone connector).
  • the haptic module 179 can convert electrical signals into mechanical stimulation (e.g., vibration or movement) or electrical stimulation that the user can perceive through tactile or kinesthetic senses.
  • the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electrical stimulation device.
  • the camera module 180 can capture still images and moving images.
  • the camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes.
  • the power management module 188 can manage power supplied to the electronic device 101.
  • the power management module 188 may be implemented as at least a part of, for example, a power management integrated circuit (PMIC).
  • the battery 189 may supply power to at least one component of the electronic device 101.
  • the battery 189 may include, for example, a non-rechargeable primary battery, a rechargeable secondary battery, or a fuel cell.
  • the communication module 190 may support establishment of a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 101 and an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108), and communication through the established channel. The communication module 190 may operate independently of the processor 120 (e.g., an application processor) and may include one or more communication processors that support direct (e.g., wired) communication or wireless communication.
  • the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (e.g., a local area network (LAN) communication module or a power line communication module).
  • the corresponding communication module may communicate with the external electronic device 104 through a first network 198 (e.g., a short-range communication network such as Bluetooth, wireless fidelity (WiFi) direct, or infrared data association (IrDA)) or a second network 199 (e.g., a long-distance communication network such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., a LAN or wide area network (WAN))).
  • the wireless communication module 192 may use subscriber information (e.g., an International Mobile Subscriber Identifier (IMSI)) stored in the subscriber identification module 196 to identify or authenticate the electronic device 101 within a communication network such as the first network 198 or the second network 199.
  • the wireless communication module 192 may support a 5G network following 4G networks and next-generation communication technology, for example, new radio (NR) access technology.
  • NR access technology may support high-speed transmission of high-capacity data (enhanced mobile broadband (eMBB)), minimization of terminal power and access by multiple terminals (massive machine-type communications (mMTC)), or high reliability and low latency (ultra-reliable and low-latency communications (URLLC)).
  • the wireless communication module 192 may support high frequency bands (eg, mmWave bands), for example, to achieve high data rates.
  • the wireless communication module 192 may use various technologies to secure performance in high-frequency bands, for example, beamforming, massive array multiple-input and multiple-output (massive MIMO), and full-dimensional MIMO (FD-MIMO).
  • the wireless communication module 192 may support various requirements specified for the electronic device 101, an external electronic device (e.g., the electronic device 104), or a network system (e.g., the second network 199). According to one embodiment, the wireless communication module 192 may support a peak data rate (e.g., 20 Gbps or more) for realizing eMBB, loss coverage (e.g., 164 dB or less) for realizing mMTC, or U-plane latency for realizing URLLC.
  • the antenna module 197 may transmit or receive signals or power to or from the outside (eg, an external electronic device).
  • the antenna module 197 may include an antenna including a radiator made of a conductor or a conductive pattern formed on a substrate (eg, PCB).
  • the antenna module 197 may include a plurality of antennas (e.g., an array antenna). In this case, at least one antenna suitable for a communication method used in a communication network, such as the first network 198 or the second network 199, may be selected from the plurality of antennas by, for example, the communication module 190. A signal or power may be transmitted or received between the communication module 190 and an external electronic device through the selected at least one antenna.
  • according to some embodiments, other components (e.g., a radio frequency integrated circuit (RFIC)) may be additionally formed as part of the antenna module 197.
  • the antenna module 197 may form a mmWave antenna module.
  • a mmWave antenna module may include a printed circuit board, an RFIC disposed on or adjacent to a first side (e.g., the bottom side) of the printed circuit board and capable of supporting a designated high-frequency band (e.g., a mmWave band), and a plurality of antennas (e.g., array antennas) disposed on or adjacent to a second side (e.g., the top or a side) of the printed circuit board and capable of transmitting or receiving signals in the designated high-frequency band.
  • at least some of the above components may be connected to each other through a communication method between peripheral devices (e.g., a bus, general-purpose input and output (GPIO), a serial peripheral interface (SPI), or a mobile industry processor interface (MIPI)) and may exchange signals (e.g., commands or data) with each other.
  • according to one embodiment, commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 through the server 108 connected to the second network 199.
  • Each of the external electronic devices 102 or 104 may be of the same or different type as the electronic device 101.
  • all or part of the operations performed in the electronic device 101 may be executed in one or more of the external electronic devices 102, 104, or 108.
  • instead of, or in addition to, executing the function or service on its own, the electronic device 101 may request one or more external electronic devices to perform at least part of the function or service.
  • One or more external electronic devices that have received the request may execute at least part of the requested function or service, or an additional function or service related to the request, and transmit the result of the execution to the electronic device 101.
  • the electronic device 101 may process the result as is or additionally and provide it as at least part of a response to the request.
  • To this end, for example, cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used.
  • the electronic device 101 may provide an ultra-low latency service using, for example, distributed computing or mobile edge computing.
  • the external electronic device 104 may include an Internet of Things (IoT) device.
  • Server 108 may be an intelligent server using machine learning and/or neural networks.
  • the external electronic device 104 or server 108 may be included in the second network 199.
  • the electronic device 101 may be applied to intelligent services (e.g., smart home, smart city, smart car, or healthcare) based on 5G communication technology and IoT-related technology.
  • FIG. 2 is a block diagram 200 illustrating a camera module 180 according to various embodiments.
  • the camera module 180 may include a lens assembly 210, a flash 220, an image sensor 230, an image stabilizer 240, a memory 250 (e.g., a buffer memory), or an image signal processor 260.
  • the lens assembly 210 may collect light emitted from a subject that is the target of image capture.
  • Lens assembly 210 may include one or more lenses.
  • the camera module 180 may include a plurality of lens assemblies 210.
  • the camera module 180 may form, for example, a dual camera, a 360-degree camera, or a spherical camera.
  • some of the plurality of lens assemblies 210 may have the same lens properties (e.g., angle of view, focal length, autofocus, f-number, or optical zoom), or at least one lens assembly may have one or more lens properties different from those of the other lens assemblies.
  • the lens assembly 210 may include, for example, a wide-angle lens or a telephoto lens.
  • the flash 220 may emit light used to enhance light emitted or reflected from a subject.
  • the flash 220 may include one or more light emitting diodes (eg, red-green-blue (RGB) LED, white LED, infrared LED, or ultraviolet LED), or a xenon lamp.
  • the image sensor 230 may acquire an image corresponding to the subject by converting light emitted or reflected from the subject and transmitted through the lens assembly 210 into an electrical signal.
  • the image sensor 230 may include one image sensor selected from among image sensors with different properties, such as an RGB sensor, a black-and-white (BW) sensor, an IR sensor, or a UV sensor; a plurality of image sensors having the same properties; or a plurality of image sensors having different properties.
  • Each image sensor included in the image sensor 230 may be implemented using, for example, a charged coupled device (CCD) sensor or a complementary metal oxide semiconductor (CMOS) sensor.
  • the image stabilizer 240 may move at least one lens included in the lens assembly 210, or the image sensor 230, in a specific direction, or control operating characteristics of the image sensor 230 (e.g., adjust read-out timing), in response to movement of the camera module 180 or the electronic device 101 including it. This makes it possible to compensate for at least some of the negative effects of the movement on the image being captured.
  • the image stabilizer 240 may detect such movement of the camera module 180 or the electronic device 101 using a gyro sensor (not shown) or an acceleration sensor (not shown) disposed inside or outside the camera module 180.
  • the image stabilizer 240 may be implemented as, for example, an optical image stabilizer.
  • the memory 250 may at least temporarily store at least a portion of the image acquired through the image sensor 230 for a subsequent image processing task. For example, when image acquisition is delayed due to the shutter, or when multiple images are acquired at high speed, the acquired original image (e.g., a Bayer-patterned image or a high-resolution image) may be stored in the memory 250, and a corresponding copy image (e.g., a low-resolution image) may be previewed through the display module 160. Thereafter, when a specified condition is satisfied (e.g., a user input or a system command), at least a portion of the original image stored in the memory 250 may be obtained and processed, for example, by the image signal processor 260. According to one embodiment, the memory 250 may be configured as at least part of the memory 130 or as a separate memory that operates independently.
  • the image signal processor 260 may perform one or more image processes on an image acquired through the image sensor 230 or an image stored in the memory 250.
  • the one or more image processes may include, for example, depth map generation, 3D modeling, panorama generation, feature point extraction, image compositing, or image compensation.
  • Image compensation may include, for example, noise reduction, resolution adjustment, brightness adjustment, blurring, sharpening, and/or softening.
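As a toy example of such compensation steps, a sketch using OpenCV; the parameter values are arbitrary assumptions, and real ISP pipelines are far more elaborate:

```python
import cv2
import numpy as np

def compensate(image: np.ndarray) -> np.ndarray:
    """Toy image-compensation pass: noise reduction, then sharpening."""
    denoised = cv2.fastNlMeansDenoisingColored(image, None, 5, 5, 7, 21)
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float32)  # simple sharpening kernel
    return cv2.filter2D(denoised, -1, kernel)
```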
  • the image signal processor 260 may perform control (e.g., exposure time control or read-out timing control) of at least one of the components included in the camera module 180 (e.g., the image sensor 230). Images processed by the image signal processor 260 may be stored back in the memory 250 for further processing, or may be provided to components outside the camera module 180 (e.g., the memory 130, the display module 160, the electronic device 102, the electronic device 104, or the server 108).
  • the image signal processor 260 may be configured as at least a part of the processor 120, or may be configured as a separate processor that operates independently from the processor 120.
  • when the image signal processor 260 is configured as a separate processor from the processor 120, at least one image processed by the image signal processor 260 may be displayed through the display module 160 as is, or after additional image processing by the processor 120.
  • the electronic device 101 may include a plurality of camera modules 180, each having different properties or functions.
  • at least one of the plurality of camera modules 180 may be a wide-angle camera, and at least another one may be a telephoto camera.
  • at least one of the plurality of camera modules 180 may be a front camera, and at least another one may be a rear camera.
  • Electronic devices may be of various types.
  • Electronic devices may include, for example, portable communication devices (e.g., smartphones), computer devices, portable multimedia devices, portable medical devices, cameras, wearable devices, or home appliances.
  • Electronic devices according to embodiments of this document are not limited to the above-described devices.
  • terms such as first, second, or 1st and 2nd may be used simply to distinguish one component from another, and do not limit the components in other respects (e.g., importance or order).
  • when one (e.g., a first) component is referred to as "coupled" or "connected" to another (e.g., a second) component, with or without the terms "functionally" or "communicatively", it means that the component can be connected to the other component directly (e.g., by wire), wirelessly, or through a third component.
  • the term "module" used in various embodiments of this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit. A module may be an integrally formed part, or a minimum unit of the part or a portion thereof, that performs one or more functions. For example, according to one embodiment, the module may be implemented in the form of an application-specific integrated circuit (ASIC).
  • Various embodiments of this document may be implemented as software (e.g., the program 140) including one or more instructions stored in a storage medium (e.g., the built-in memory 136 or the external memory 138) readable by a machine (e.g., the electronic device 101). For example, a processor (e.g., the processor 120) of the machine may call at least one of the one or more stored instructions from the storage medium and execute it.
  • the one or more instructions may include code generated by a compiler or code that can be executed by an interpreter.
  • a storage medium that can be read by a device may be provided in the form of a non-transitory storage medium.
  • 'non-transitory' only means that the storage medium is a tangible device and does not include signals (e.g., electromagnetic waves); this term does not distinguish between cases where data is stored semi-permanently in the storage medium and cases where it is stored temporarily.
  • Computer program products are commodities and can be traded between sellers and buyers.
  • the computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)), or distributed (e.g., downloaded or uploaded) online, either through an application store (e.g., Play Store™) or directly between two user devices (e.g., smartphones).
  • at least a portion of the computer program product may be at least temporarily stored or temporarily created in a machine-readable storage medium, such as the memory of a manufacturer's server, an application store's server, or a relay server.
  • each of the above-described components (e.g., a module or a program) may include a single entity or a plurality of entities, and some of the plurality of entities may be separately disposed in another component.
  • one or more of the components or operations described above may be omitted, or one or more other components or operations may be added.
  • according to various embodiments, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, the integrated component may perform one or more functions of each of the plurality of components identically or similarly to how they were performed by the corresponding component before the integration.
  • operations performed by a module, program, or other component may be executed sequentially, in parallel, iteratively, or heuristically; one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.
  • FIG. 3 is a diagram schematically showing the configuration of an electronic device according to an embodiment of the present disclosure.
  • the electronic device 101 may include a display module 160, a camera module 180, a memory 130, and/or a processor 120. According to one embodiment, the electronic device 101 may include all or at least some of the components of the electronic device 101 as described in the description with reference to FIG. 1 .
  • the display module 160 may include the same or similar configuration as the display module 160 of FIG. 1.
  • the display module 160 may include a display, and may visually provide various information to the outside of the electronic device 101 (eg, a user) through the display.
  • under the control of the processor 120, the display module 160 may visually provide various information (e.g., content, images, videos, or preview images) related to a running application and its use.
  • the display module 160 may include a touch sensor, a pressure sensor capable of measuring the intensity of a touch, and/or a touch panel (e.g., a digitizer) that detects a magnetic-field-type stylus pen.
  • the display module 160 may detect a touch input and/or a hovering input (or proximity input) by measuring a change in a signal (e.g., voltage, amount of light, resistance, electromagnetic signal, and/or amount of charge).
  • the display module 160 may include a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or an active-matrix organic light-emitting diode (AMOLED) display.
  • the display module 160 may include a flexible display.
  • the camera module 180 may correspond to the camera module 180 of FIG. 1 or FIG. 2. According to one embodiment, when activated, the camera module 180 may photograph a subject and transmit a related result (e.g., a captured image) to the processor 120 and/or the display module 160.
  • the camera module 180 may photograph an external subject (or object) and generate image data.
  • the camera module 180 may include an image sensor (eg, the image sensor 230 of FIG. 2).
  • the image sensor 230 may include a multi-pixel sensor (MPS).
  • the camera module 180 may convert the optical signal of the subject into an electrical signal using the image sensor 230.
  • the image sensor 230 may include a pixel array in which a plurality of pixels are two-dimensionally arranged. For example, one color among a plurality of reference colors may be assigned to each of the plurality of pixels.
  • the plurality of reference colors may include red, green, blue (RGB), or red, green, blue, white (RGBW).
  • the camera module 180 may generate image data using the image sensor 230 (eg, a multi-pixel sensor).
  • image data may be referred to variously as an image, non-Bayer image, image frame, and frame data.
  • the image data may be provided as input data to the processor 120 (e.g., an image signal processor (ISP) 320 and/or a neural processing unit (NPU) 340) or stored in the memory 130. In one embodiment, image data stored in the memory 130 may be provided to the processor 120.
  • the memory 130 may correspond to the memory 130 of FIG. 1 .
  • the memory 130 may store various data used by the electronic device 101.
  • data may include, for example, input data or output data for an application (eg, program 140 of FIG. 1) and a command related to the application.
  • the data may include image data acquired through the camera module 180 and data related to various images processed based on the image data (e.g., non-Bayer images, Bayer images, and/or remosaic images).
  • the data may include various training data and parameters acquired based on learning through interaction with the user.
  • data may include various schemas (or algorithms, models, networks, or functions) to support artificial intelligence (AI)-based image processing.
  • a schema to support artificial intelligence (AI)-based image processing may include a neural network.
  • the neural network may include an Artificial Neural Network (ANN), a Convolution Neural Network (CNN), a Region with Convolution Neural Network (R-CNN), a Region Proposal Network (RPN), a Recurrent Neural Network (RNN), an S-DNN, a Long Short-Term Memory (LSTM) network, a Deep Belief Network (DBN), or a Restricted Boltzmann Machine (RBM).
  • the type of neural network model is not limited to the above-described examples.
  • the memory 130 may store instructions that cause the processor 120 to operate when executed.
  • an application may be stored as software (eg, program 140 in FIG. 1) on the memory 130 and may be executable by the processor 120.
  • the application may be a variety of applications that can provide various functions or services (eg, an image capture function based on artificial intelligence (AI)) on the electronic device 101.
  • the processor 120 may perform application layer processing functions required by the user of the electronic device 101. According to one embodiment, the processor 120 may provide commands and control of functions for various blocks of the electronic device 101. According to one embodiment, the processor 120 may perform operations or data processing related to control and/or communication of each component of the electronic device 101. For example, the processor 120 may include at least some of the components and/or functions of the processor 120 of FIG. 1. According to one embodiment, the processor 120 may be operatively connected to the components of the electronic device 101. According to one embodiment, the processor 120 may load commands or data received from other components of the electronic device 101 into the memory 130, process the commands or data stored in the memory 130, and store the resulting data.
  • the processor 120 may be an application processor (AP).
  • the processor 120 may be a system semiconductor responsible for calculation and multimedia driving functions of the electronic device 101.
  • the processor 120 may be configured in the form of a system-on-chip (SoC), a technology-intensive semiconductor chip that integrates several semiconductor technologies and implements system blocks in one chip.
  • as illustrated in FIG. 3, the system blocks of the processor 120 may include a graphics processing unit (GPU) 310, an image signal processor (ISP) 320, a central processing unit (CPU) 330, a neural processing unit (NPU) 340, a digital signal processor (DSP) 350, a modem 360, a connectivity block 370, and/or a security block 380.
  • the processor 120 may control overall operations related to generating training data using a neural network for image capture.
  • the processor 120 may execute an application and perform a neural network-based task (eg, a learning data generation task) required according to the execution of the application.
  • the processor 120 may control the overall operation of improving zoom image quality based on deep learning using low-resolution and high-resolution images from the image sensor 230 (e.g., a multi-pixel sensor).
  • the GPU 310 may be responsible for graphics processing. According to one embodiment, the GPU 310 may receive instructions from the CPU 330 and perform graphics processing to express the shape, position, color, shading, movement, and/or texture of objects on the display.
  • the ISP 320 may be responsible for image processing and correction of images and videos. According to one embodiment, the ISP 320 may correct raw data transmitted from the image sensor (e.g., the image sensor 230 of FIG. 2) of the camera module 180, creating images in the form preferred by the user. According to one embodiment, the ISP 320 may correct for physical limitations that can occur in the camera module 180, interpolate R/G/B (red, green, blue) values, and remove noise. According to one embodiment, the ISP 320 may perform post-processing, such as adjusting the partial brightness of the image and emphasizing details. For example, the ISP 320 can generate a result preferred by the user by independently tuning and correcting the image quality of the image acquired through the camera module 180.
  • the ISP 320 may support artificial intelligence-based image processing technology to improve the quality of zoom images, speed up image processing, and reduce current consumption (e.g., for low power).
  • the ISP 320 can maintain low power while improving image quality, and for this purpose, it can support artificial intelligence-based video shooting.
  • the ISP 320 may support artificial intelligence-based image processing related to improving the quality of high-magnification zoom images.
  • the ISP 320 may support scene segmentation (e.g., image segmentation) technology that recognizes and/or classifies parts of the scene being shot in conjunction with the NPU 340.
  • the ISP 320 may include a function of applying different processing parameters to objects such as the sky, bushes, and/or skin.
  • the ISP 320 may include a function to apply and process different parameters according to the zoom factor specified for image capture.
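A toy illustration of mapping parameters to zoom factors, as the item above describes; the parameter names, values, and weight-file names are invented for the example:

```python
from typing import Dict

# Hypothetical tuning table: zoom magnification -> ISP/SR parameters.
ZOOM_PARAMS: Dict[float, dict] = {
    1.0:  {"sharpen": 0.2, "denoise": 0.1, "sr_weights": "sr_x1.bin"},
    3.0:  {"sharpen": 0.4, "denoise": 0.2, "sr_weights": "sr_x3.bin"},
    10.0: {"sharpen": 0.6, "denoise": 0.4, "sr_weights": "sr_x10.bin"},
}

def params_for_zoom(zoom: float) -> dict:
    """Select the parameter set mapped to the nearest learned zoom factor."""
    nearest = min(ZOOM_PARAMS, key=lambda z: abs(z - zoom))
    return ZOOM_PARAMS[nearest]

print(params_for_zoom(6.0))  # nearest learned factor is x3 here
```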
  • through an artificial intelligence function, the ISP 320 may detect and display a human face when capturing an image, or use the coordinates and information of the face to adjust the brightness, focus, and/or color of the image.
  • the CPU 330 may play a role corresponding to the processor 120.
  • the CPU 330 may decode user commands and perform arithmetic and logical operations and/or data processing.
  • the CPU 330 may be responsible for the functions of memory, interpretation, calculation, and control.
  • the CPU 330 may control the overall functions of the electronic device 101.
  • the CPU 330 can execute all software (eg, applications) of the electronic device 101 on an operating system (OS) and control hardware devices.
  • the CPU 330 may include one processor core (single core) or may include a plurality of processor cores (multi-core). According to one embodiment, the CPU 330 executes an application and controls the overall operation of the processor 120 to perform neural network-based tasks required by execution of the application.
  • the NPU 340 may be responsible for processing optimized for deep-learning algorithms of artificial intelligence.
  • the NPU 340 is a processor optimized for deep-learning algorithm calculations (e.g., artificial intelligence calculations), and can process big data quickly and efficiently like a human neural network.
  • the NPU 340 can be mainly used for artificial intelligence calculations.
  • the NPU 340 may recognize objects, the environment, and/or people in the background when taking an image through the camera module 180 and automatically adjust the focus, automatically switch the shooting mode to food mode when taking a picture of food, and/or erase only unnecessary subjects from the captured results.
  • the electronic device 101 may support integrated machine learning processing through interaction among all processors, such as the GPU 310, the ISP 320, the CPU 330, and the NPU 340.
  • the DSP 350 may represent an integrated circuit that helps quickly process digital signals. According to one embodiment, the DSP 350 may perform a high-speed processing function by converting an analog signal into a digital signal.
  • the modem 360 may perform a role that allows the electronic device 101 to use various communication functions.
  • the modem 360 can support communications such as phone calls and data transmission and reception by exchanging signals with a base station.
  • the modem 360 may include an integrated modem that supports communication technologies from 2G to 5G, such as LTE (e.g., a cellular modem, an LTE modem, a 5G modem, a 5G-Advanced modem, or a 6G modem).
  • the modem 360 may include an artificial intelligence modem to which an artificial intelligence algorithm is applied.
  • connectivity 370 may support wireless data transmission based on IEEE 802.11.
  • connectivity 370 may support communication services based on IEEE 802.11 (e.g., Wi-Fi) and/or 802.15 (e.g., Bluetooth, ZigBee, UWB).
  • the connectivity 370 can support communication services for an unspecified number of people in a localized area, such as indoors, using an unlicensed band.
  • the security 380 may provide an independent security execution environment between data or services stored in the electronic device 101.
  • the security block 380 may help prevent hacking, through software and hardware security, during user authentication when providing services such as biometrics, mobile ID, and/or payment on the electronic device 101.
  • the security block 380 may provide an independent security execution environment based on device security to strengthen the security of the electronic device 101 and of user information such as mobile ID, payment, and car keys in the electronic device 101.
  • the processor 120 may include processing circuitry and/or executable program elements. According to one embodiment, based on the processing circuitry and/or executable program elements, the processor 120 may control (or process) operations related to improving high-magnification zoom performance and supporting image processing of high-magnification zoom images.
  • the processor 120 may perform an operation of acquiring image data (e.g., a non-Bayer image) through the camera module 180 (e.g., a zoom camera). According to one embodiment, the processor 120 may perform binning processing based on the image data and transmit the image data to a server (e.g., the server 108 of FIG. 1 or the server 630 of FIG. 6) through a communication circuit (e.g., the wireless communication module 192 of FIG. 1). According to one embodiment, the processor may perform an operation of acquiring a binning image based on the binning processing. According to one embodiment, the processor 120 may perform an operation of obtaining training data based on the binning image.
  • the processor 120 may perform an operation of acquiring ground truth data, generated based on the image data, from the server. According to one embodiment, the processor 120 may perform an operation of learning for the zoom magnification based on the training data and the ground truth data. According to one embodiment, the processor 120 may generate and map a parameter corresponding to the zoom magnification.
  • operations performed by the processor 120 may be implemented as a recording medium (or computer program product).
  • the recording medium may include a non-transitory computer-readable recording medium on which a program for executing various operations performed by the processor 120 is recorded.
  • Embodiments described in this disclosure may be implemented in a recording medium readable by a computer or similar device using software, hardware, or a combination thereof.
  • the operations described in an embodiment may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and/or other electrical units for performing functions.
  • a recording medium (or computer program product) may include a computer-readable recording medium recording a program for executing an operation of acquiring image data through a camera module of the electronic device, an operation of performing binning processing based on the image data and transmitting the image data to a server through a communication circuit, an operation of acquiring a binning image based on the binning processing, an operation of obtaining learning data based on the binning image, an operation of acquiring ground truth data generated based on the image data from the server, an operation of learning about a zoom magnification based on the learning data and the actual measurement data, and an operation of generating and mapping a parameter corresponding to the zoom magnification.
  • FIGS. 4 and 5 are diagrams illustrating an example of a hybrid zoom scenario in an electronic device according to an embodiment.
  • the electronic device 101 may provide a high-magnification zoom function through a combination of a zoom function by the camera module 180 (e.g., a zoom camera) and a digital crop zoom function.
  • FIG. 4 may represent an example of a conventional optical-digital hybrid zoom scenario.
  • FIG. 4 may show an example in which the electronic device 101 includes a camera module 180 including a first camera (e.g., a main camera with x1 magnification) and a second camera (e.g., a zoom camera with x3 magnification), and provides a zoom function (e.g., zooming) with a magnification of x1 to x10 through the camera module 180.
  • FIG. 5 may represent an example of a multi-pixel sensor (MPS)-based optical-digital hybrid zoom scenario.
  • FIG. 5 may show an example in which the electronic device 101 includes a camera module 180 including a multi-pixel sensor, a first camera (e.g., a main camera with x1 magnification), and a second camera (e.g., a zoom camera with x3 magnification), and provides a zoom function (e.g., zooming) with a magnification of x1 to x10 through the camera module 180.
  • Element (A) may represent examples of various magnifications representing zoom intervals (or ranges).
  • Element (B) may represent an example of a color filter array (CFA) pattern (e.g., a color filter shape such as a Bayer pattern) that represents the format of a sensor (e.g., an image sensor or a multi-pixel sensor).
  • Element (C) can represent an example of resolution corresponding to each magnification.
  • Element (D) may represent an example of an image view with final zoom applied.
  • In FIGS. 4 and 5, zoom x1 may represent the fixed zoom magnification of the first camera (e.g., main camera), and zoom x3 may represent the fixed zoom magnification of the second camera (e.g., zoom camera).
  • zoom magnifications (e.g., x2, x6, x10) other than those of zoom x1 and zoom x3 may provide a zoom function through a digital crop of an area of the image from the sensor (e.g., image sensor) of the first camera or the second camera, followed by upscale image processing (e.g., bi-linear interpolation, bi-cubic interpolation, and/or image fusion).
  • In upscale image processing, interpolation between pixels is performed, and pixel values estimated through inter-pixel interpolation may result in image quality deterioration.
  • As the zoom magnification increases, the size of the digital crop's ROI (region of interest) in the image decreases and the amount of upscaling required to restore the original image size increases. Therefore, as the zoom magnification increases, image quality deterioration may occur more significantly.
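  • As an illustration, the following is a minimal sketch of such digital crop zoom, assuming OpenCV is available for the interpolation step; the function name and defaults are hypothetical. The higher the requested magnification, the smaller the cropped ROI and the larger the share of interpolated (estimated) pixel values:

```python
import cv2  # assumption: OpenCV is available for the interpolation step

def digital_crop_zoom(image, zoom, base_zoom=1.0):
    """Crop the center ROI for the requested zoom magnification, then
    upscale back to the original size with bi-linear interpolation."""
    h, w = image.shape[:2]
    factor = zoom / base_zoom              # e.g., x6 on a x3 camera -> 2.0
    roi_h, roi_w = int(h / factor), int(w / factor)
    y0, x0 = (h - roi_h) // 2, (w - roi_w) // 2
    roi = image[y0:y0 + roi_h, x0:x0 + roi_w]
    # the higher the zoom, the smaller the ROI and the more pixel values
    # must be estimated by interpolation, which degrades image quality
    return cv2.resize(roi, (w, h), interpolation=cv2.INTER_LINEAR)
```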
  • FIG. 5 may show an example of improving image quality deterioration that occurs as the zoom magnification increases using Remosaic high-resolution output based on a multi-pixel sensor.
  • the difference between the zoom scenarios of FIGS. 5 and 4 is that, instead of the typical sensor (e.g., Bayer sensor) applied to the first camera (e.g., main camera) with x1 magnification, the sensor applied to the second camera (e.g., zoom camera) with x3 magnification is a multi-pixel sensor (non-Bayer pattern) in which a combination of NxN pixels (N is a natural number of 2 or more, e.g., 2x2, 3x3, or 4x4) shares the same color filter.
  • In an optical-digital hybrid zoom operation using an x3 magnification camera (e.g., zoom camera), zoom magnifications of x3.1 to x5.9 can apply digital crop zoom for each zoom magnification using the binning output (e.g., 12MP low-resolution output) of the multi-pixel sensor of the x3 zoom camera, and zoom magnifications of x6 and above can apply digital crop zoom for each zoom magnification using the Remosaic output (e.g., 48MP high-resolution output) of the zoom camera's multi-pixel sensor.
  • In this optical-digital hybrid zoom scenario, when the image is cropped within an x2 crop range based on the multi-pixel sensor, the resolution of the zoom-magnification image is secured through cropping of the Remosaic output, so upscaling to create the original image size may not be necessary.
  • For example, by performing a crop zoom on the Remosaic output (e.g., the x6 zoom magnification scenario), restoring the image size may not be necessary: upscaling from 3MP to 12MP is not required for the x6 zoom magnification image. Likewise, applying digital crop at x10 zoom magnification applies upscaling from 4.3MP to 12MP through the multi-pixel-sensor-based Remosaic, instead of upscaling from 1.07MP to 12MP, so there may be less image quality deterioration compared to the existing method.
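  • These figures follow from simple area arithmetic; the short sketch below (helper name hypothetical) reproduces them for the 12MP Binning output and the 48MP Remosaic output of an x3 zoom camera:

```python
def cropped_megapixels(sensor_mp, zoom, optical_zoom=3.0):
    """Effective resolution of the center ROI after digital crop at
    `zoom`, for a sensor behind an `optical_zoom` lens."""
    crop = zoom / optical_zoom
    return sensor_mp / (crop * crop)

# 12MP Binning output vs 48MP Remosaic output of the same x3 zoom camera
print(cropped_megapixels(12, 6))   # ~3.0MP  -> 3MP -> 12MP upscale needed
print(cropped_megapixels(12, 10))  # ~1.08MP -> ~1.07MP -> 12MP upscale
print(cropped_megapixels(48, 6))   # ~12.0MP -> no upscale needed at x6
print(cropped_megapixels(48, 10))  # ~4.32MP -> milder ~4.3MP -> 12MP upscale
```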
  • Hereinafter, the binning is referred to as 'Binning', the remosaic as 'Remosaic', and the ground truth as 'GT'.
  • learned training data is applied to the network (e.g., a neural network) of the electronic device 101 to improve the image quality of zoom images, so that high-magnification zoom images can be provided without interruption not only for still images but also for preview images and videos.
  • Figure 6 is a diagram for explaining a learning system for learning data and an example of its operation according to an embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating an example of a binning operation of a multi-pixel sensor according to an embodiment of the present disclosure.
  • FIG. 8 is a diagram illustrating an example of a remosaic operation of a multi-pixel sensor according to an embodiment of the present disclosure.
  • FIG. 6 may show an example of a structure (eg, network system) that generates learning data for an optical-digital hybrid zoom function in the electronic device 101 and its operation.
  • In an embodiment, a low resolution (LR) image and a high resolution (HR) image generated from the same output (e.g., non-Bayer image) of the multi-pixel sensor 610 are used, and an example of providing learning data based on SR deep-learning can be shown. For example, in the example of FIG. 6, a binning image generated from the non-Bayer image (e.g., CFA output) and a Remosaic image (e.g., high-resolution image) may be used for learning in the network 620 (e.g., SR Network).
  • a network system (e.g., optical-digital hybrid zoom structure) according to an embodiment may include an electronic device 101 and a server 630 (e.g., the server 108 of FIG. 1).
  • the electronic device 101 may include a camera module 180 (e.g., the camera module 180 of FIG. 1 or 2) and a processor 120 (e.g., the processor 120 of FIG. 1 or 3).
  • the electronic device 101 may include memory (eg, memory 130 of FIG. 1 or FIG. 3 ).
  • server 630 may include a machine learning cloud or network server.
  • the server 630 may include another electronic device that exists outside the electronic device 101 and interacts with the electronic device 101 to generate an artificial intelligence model through machine learning (e.g., deep learning).
  • the camera module 180 may include an image sensor 610.
  • the camera module 180 may include a lens (eg, the lens assembly 210 of FIG. 2), although not shown.
  • the camera module 180 may represent a zoom camera with a specified magnification, and the image sensor 610 may represent a multi-pixel sensor (MPS) with a multi-pixel structure included in the zoom camera. Hereinafter, the image sensor 610 will be described as the multi-pixel sensor 610.
  • the multi-pixel sensor 610 may represent an image sensor (e.g., non-Bayer sensor) with a non-Bayer pattern in which the combination of pixels sharing the same color filter is a combination of a plurality of NxN pixels (N is a natural number of 2 or more), such as 2x2, 3x3, or 4x4.
  • the multi-pixel sensor 610 can provide high resolution by maximizing the number of pixels and can have the effect of increasing pixel size by providing binning between pixels.
  • the multi-pixel sensor 610 is capable of high-resolution and high-sensitivity sensing.
  • examples of operation of the multi-pixel sensor 610 are shown in FIGS. 7 and 8.
  • FIGS. 7 and 8 may show an example of the output of the 2x2 48MP multi-pixel sensor 610.
  • FIG. 7 may show an example of output (e.g., 48MP -> 12MP) according to the binning operation of the 2x2 multi-pixel sensor 610.
  • for example, FIG. 7 may show an example in which four neighboring pixels of the same color among the plurality of pixels are binned into one pixel (e.g., block-unit pixel), producing an output with a resolution of 12MP and an improved sensitivity ratio (e.g., about twice the brightness).
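  • A minimal sketch of this 2x2 same-color binning, assuming the quad-Bayer raw frame is held in a NumPy array and the function name is hypothetical:

```python
import numpy as np

def bin_2x2(raw):
    """Average each 2x2 same-color block of a quad-Bayer raw frame into
    one pixel (e.g., 48MP non-Bayer -> 12MP Bayer), improving sensitivity."""
    h, w = raw.shape  # h and w are assumed even
    blocks = raw.reshape(h // 2, 2, w // 2, 2).astype(np.float32)
    return blocks.mean(axis=(1, 3))  # four-pixel average per output pixel
```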
  • FIG. 8 may show an example of output (e.g., 48MP Non-Bayer -> 48MP Bayer) according to the remosaic operation of the 2x2 multi-pixel sensor 610.
  • Figure 8 shows an example of implementing 48MP high-resolution output by converting the output of a 2x2 48MP multi-pixel sensor by applying the Remosaic algorithm to make it a general Bayer pattern.
  • a high-resolution Bayer output can be obtained from the non-Bayer output of the multi-pixel sensor 610 through the Remosaic algorithm.
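  • The following is a toy sketch of the idea only, not the sensor's actual Remosaic algorithm: assuming a 2x2 quad-Bayer layout, each output position takes the nearest raw pixel of the color its target Bayer position requires. All names and the brute-force search are illustrative; real Remosaic algorithms also correct the artifacts discussed below:

```python
import numpy as np

def quad_bayer_color(y, x):
    """Color at (y, x) in a 2x2 quad-Bayer CFA (4x4 repeating cell)."""
    return 'RGGB'[((y // 2) % 2) * 2 + (x // 2) % 2]

def bayer_color(y, x):
    """Color at (y, x) in a standard RGGB Bayer CFA (2x2 repeating cell)."""
    return 'RGGB'[(y % 2) * 2 + (x % 2)]

def naive_remosaic(raw):
    """Rearrange a quad-Bayer raw frame into Bayer order by taking, for
    each output position, the nearest raw pixel of the required color."""
    h, w = raw.shape
    out = np.empty_like(raw)
    for y in range(h):
        for x in range(w):
            want = bayer_color(y, x)
            best, best_d = raw[y, x], None
            for dy in range(-2, 3):          # small brute-force search
                for dx in range(-2, 3):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and quad_bayer_color(ny, nx) == want):
                        d = dy * dy + dx * dx
                        if best_d is None or d < best_d:
                            best_d, best = d, raw[ny, nx]
            out[y, x] = best
    return out
```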
  • the multi-pixel sensor 610 may generate image artifacts in the process of acquiring high-resolution Bayer output, and an algorithm to improve and supplement this may be included in the Remosaic algorithm.
  • such artifacts may include, for example, false colors, color noise and pattern noise, texture errors, and/or maze noise.
  • in the case of image processing using the existing multi-pixel sensor 610, as the Remosaic algorithm is implemented inside the electronic device 101, processing speed and current consumption at a level usable by the user may be required. Therefore, in the case of the existing multi-pixel sensor 610, a speed-oriented Remosaic algorithm can be applied, prioritizing speed over improvement of some artifacts.
  • the Remosaic algorithm can be implemented in the server 630 located outside the electronic device 101.
  • deep-learning is operated separately through the server 630, so it may not be limited by the processing speed and current consumption required for the operation of the Remosaic algorithm.
  • the Remosaic processing for the multi-pixel sensor 610 according to an embodiment of the present disclosure can apply an image-quality-oriented Remosaic algorithm (e.g., premium Remosaic algorithm) for high resolution, rather than the Remosaic algorithm that prioritizes speed, so that improvement of artifacts can be prioritized.
  • the multi-pixel sensor 610 can acquire improved high-resolution images (e.g., GT, ground truth).
  • the output of the multi-pixel sensor 610 may include three modes.
  • for example, the output of the multi-pixel sensor 610 may include a non-Bayer output in the form of multi-pixels, a binning output whose sensitivity ratio is improved by merging the combination of pixels sharing each color filter with a pixel average, and a high-resolution output converted to the general Bayer format from the non-Bayer output through the Remosaic algorithm.
  • each output of the multi-pixel sensor 610 may be selected, for example, by switching the mode of the multi-pixel sensor 610.
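  • For illustration, these three outputs could be modeled as modes as in the following sketch; the enum and its member names are assumptions, not the sensor's actual interface:

```python
from enum import Enum

class SensorMode(Enum):
    NON_BAYER = "non_bayer"  # raw multi-pixel (CFA) output
    BINNING = "binning"      # pixel-average merged, low-resolution output
    REMOSAIC = "remosaic"    # Bayer-converted, high-resolution output
```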
  • In an embodiment of the present disclosure, the multi-pixel sensor 610 can use only the non-Bayer output (e.g., a non-Bayer image 615), so there may be no delay according to mode switching of the multi-pixel sensor 610, and the logic, power, and/or processing time required by the Remosaic algorithm may not be needed.
  • the output of the multi-pixel sensor 610 (e.g., non-Bayer image 615) may be input as learning data to the processor 120 of the electronic device 101, and may be transmitted to the server 630 to generate ground truth data (e.g., GT).
  • the image used in the binning process of the electronic device 101 and in the remosaic process of the server 630 may be the same non-Bayer image 615 output from the multi-pixel sensor 610.
  • the electronic device 101 may perform binning processing through the processor 120 (eg, the ISP 320 in FIG. 3) (block 601).
  • the electronic device 101 can create a binning image 625 with an excellent sensitivity ratio through binning processing based on pixel average merge of the output of the multi-pixel sensor 610 (e.g., a 2x2, 3x3, or 4x4 non-Bayer image 615).
  • the processor 120 may pass the binning image 625, converted to low resolution by the binning process, to the ISP 320 without delay through real-time processing of the output of the multi-pixel sensor 610.
  • the multi-pixel sensor 610 may receive raw data from the lens of the camera module 180 (eg, the lens assembly 210 of FIG. 2).
  • the multi-pixel sensor 610 can receive CFA data (or a CFA image) in real time according to the color filter array (CFA) pattern (e.g., color filter shape) of the multi-pixel sensor 610.
  • the multi-pixel sensor 610 may transmit received raw data to the processor 120 in real time according to the output characteristics (eg, non-Bayer pattern) of the multi-pixel sensor 610.
  • when the processor 120 receives the non-Bayer image 615 of the multi-pixel sensor 610, the processor 120 can transmit the non-Bayer image 615 in real time to the server 630 connected to the electronic device 101.
  • the processor 120 may perform binning processing based on the non-Bayer image 615 of the multi-pixel sensor 610. According to one embodiment, the processor 120 may output a binning image 625 based on the binning processing. In one embodiment, the binning image 625 can be used both for zoom images and as a deep-learning low-resolution training image. According to one embodiment, in the learning operation, the binning image 625 is image-processed through the ISP 320, as illustrated in FIG. 6, and can be used as an input image (e.g., learning data 635) for the network 620.
  • when performing a zoom function in a shooting operation by the user, the binning image 625 is image-processed through the ISP 320 and can be used as an input image to generate an output image enlarged to the zoom magnification (e.g., about x3.1 to x5.9 magnification) corresponding to the user input.
  • the processor 120 may perform image processing (e.g., image signal processing) on the binning image 625 through the ISP 320 (block 603).
  • the processor 120 can perform image processing such as demosaic, shading correction, color correction, noise removal, and sharpness adjustment on the binning image 625 through the ISP 320.
  • the processor 120 may output data resulting from image processing as learning data 635.
  • the processor 120 may transmit the binning image 625 to the network 620 (e.g., SR network or neural network) as training data 635 after image processing through the ISP 320.
  • the server 630 may receive the non-Bayer image 615, which is the output of the multi-pixel sensor 610, transmitted from the electronic device 101 (block 605), and may generate, through Remosaic processing of the received non-Bayer image 615 using the Remosaic algorithm (block 607), a Remosaic image 655 converted to high-resolution Bayer (e.g., a deep-learning high-resolution learning image).
  • the server 630 may transmit the Remosaic image 655 to the electronic device 101 (eg, the network 620 of the electronic device 101) in real time.
  • the server 630 can convert the non-Bayer-based output from the multi-pixel sensor 610 (e.g., the 2x2, 3x3, or 4x4 non-Bayer image 615) to high-resolution Bayer using the Remosaic algorithm applied to the multi-pixels.
  • the server 630 may receive the non-Bayer image 615 of the multi-pixel sensor 610 of the electronic device 101 from the electronic device 101, and may store the received non-Bayer image 615 in the memory 640 of the server 630 using a raw dump interface (RDI).
  • the server 630 can convert the non-Bayer image 615 of the multi-pixel sensor 610 stored in the memory 640 into a high-resolution Bayer image using a designated Remosaic algorithm based on software or hardware. According to one embodiment, the server 630 may generate the Remosaic image 655 based on the high-resolution Bayer transform. According to one embodiment, the server 630 may transmit the Remosaic image 655 to the network 620 of the electronic device 101 in real time. According to one embodiment, the Remosaic image 655 can be used as a deep-learning high-performance ground truth (GT).
  • the Remosaic algorithm used in the server 630 may be a Remosaic algorithm focusing on image quality.
  • the Remosaic algorithm that prioritizes image quality may be an algorithm for obtaining a high-quality image (e.g., Remosaic image 655) by Remosaic.
  • the Remosaic image 655 generated by the Remosaic algorithm may be used as actual measurement data for the network 620 of the electronic device 101.
  • the Remosaic image 655 can be used as a GT image for SR deep-learning, and the electronic device 101 can perform a high-magnification zoom function by using the Binning image 625 as an input in a zoom scenario.
  • the network 620 may receive from the server 630, through a communication circuit (e.g., the wireless communication module 192 of FIG. 1), a high-definition image (e.g., Remosaic image 655) obtained by remosaicing the image data (e.g., non-Bayer image 615) of the multi-pixel sensor 610 based on the image-quality-first Remosaic algorithm.
  • the network 620 may perform an operation of learning about the zoom magnification based on a pair of the training data 635 and the ground truth data (e.g., Remosaic image 655) received from the server 630.
  • the network 620 can perform SR deep-learning-based learning using a pair of the learning data 635 (e.g., converted binning image) and the ground truth data (e.g., Remosaic image 655). For example, the network 620 may learn using pairs of the training data 635 and the ground truth data 655 generated from the same input (e.g., the non-Bayer image 615 of the multi-pixel sensor 610) in a specified zoom mode.
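  • A minimal sketch of such paired SR learning, assuming PyTorch is available; the tiny network and training step are illustrative stand-ins for the network 620, not the actual on-device model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """Toy convolutional SR network standing in for the network 620."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)  # input is assumed pre-upscaled to GT size

def train_step(model, optimizer, lr_batch, gt_batch):
    """One learning step on a (Binning LR, Remosaic GT) image pair."""
    optimizer.zero_grad()
    loss = F.l1_loss(model(lr_batch), gt_batch)  # minimize loss against GT
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinySRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```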
  • the network 620 may generate and map a parameter corresponding to the zoom factor.
  • the network 620 may generate parameters to be used (or applicable) in a specified zoom mode according to the learning results.
  • the processor 120 may map parameters based on learning results to a zoom magnification of a designated zoom mode. For example, the processor 120 may manage SR network parameters for each zoom ratio to be used in the network 620 using a lookup table.
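  • For example, the lookup table could be as simple as the following sketch, where the keys, file names, and nearest-key selection policy are assumptions for illustration:

```python
# hypothetical lookup table: zoom magnification -> learned parameter set
zoom_params = {
    6.0: "sr_net_x6.pt",    # parameters learned for x6 SR zoom
    12.0: "sr_net_x12.pt",  # parameters learned for x12 SR zoom
}

def params_for_zoom(zoom):
    """Pick the parameter set learned for the nearest supported magnification."""
    nearest = min(zoom_params, key=lambda k: abs(k - zoom))
    return zoom_params[nearest]
```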
  • the network 620 may be a neural network based on SR deep-learning learning to improve zoom image quality.
  • methods using deep learning have shown superior image quality results in improving low-light and SR image quality.
  • In an embodiment, a low-resolution-based binning image (e.g., learning data) and a high-resolution-based Remosaic image (e.g., ground truth data) are used as a pair, and parameters may be provided differently for each zoom magnification according to learning.
  • the electronic device 101 can use the multi-pixel sensor 610 of the camera module 180 (e.g., zoom camera) to produce a low-resolution (LR) binning image and a high-resolution (HR) Remosaic image from the same sensor output.
  • the electronic device 101 according to an embodiment of the present disclosure can solve the problem of securing matched images for the training data and the GT required for deep-learning during learning operations (e.g., operating as a learning device).
  • In actual use (e.g., operating as a device that performs a zoom function when shooting an image), the electronic device 101 can resolve the issues resulting from Remosaic processing time in the electronic device 101 and from mode switching of the multi-pixel sensor 610.
  • the electronic device 101 according to an embodiment of the present disclosure can provide a high-resolution image according to Remosaic using the network 620.
  • the electronic device 101 can use the network 620 to provide a zoom (e.g., SR zoom) with super resolution (e.g., UHD (3840 x 2160)) not only for still images but also for preview images and videos, which otherwise have issues depending on processing time and mode switching.
  • SR zoom refers to zoom that converts a low resolution (LR) image into a high resolution (HR) image, and may refer to a zoom method accompanied by improved image quality.
  • the present disclosure can support image quality improvement of SR Zoom, which converts low-resolution images into high-resolution images.
  • SR zoom may represent zoom to which SR deep-learning learning is applied to a binning image cropped based on the zoom magnification.
  • securing learning data and GT images may be an important factor in SR performance, and learning data and GT images at substantially the same time point must be secured for learning.
  • Target performance can be secured by minimizing the loss function between the learning-data results and the GT, which represents the improvement goal.
  • the training data and GT images required for SR deep-learning are, for example, images obtained at substantially the same time in the electronic device 101: binning processing can be performed through the processor 120 of the electronic device 101, and the Remosaic image can be converted through the server 630.
  • the electronic device 101 obtains a pair of low-resolution learning data and a high-resolution GT image at substantially the same time t, and the resulting parameters (e.g., network parameters) of SR deep-learning obtained using this pair can be reflected in the network 620 of the electronic device 101.
  • the electronic device 101 may define a magnification that requires improvement in image quality for digital crop zoom.
  • the electronic device 101 may define a zoom magnification at which to improve, through SR deep-learning, the image quality deterioration caused by the upscaling applied to create the original image size after cropping the image.
  • the improvable zoom factor may follow the NxN condition of the multi-pixel sensor 610. For example, for a 2x2 multi-pixel sensor, x2 magnification can be defined as the target zoom magnification for improving image quality; for a 3x3 sensor, x3 magnification; and for a 4x4 sensor, x4 magnification. However, the embodiments of the present disclosure are not limited thereto, and various zoom magnifications other than those described above may be defined.
  • the electronic device 101 may obtain a low-resolution image for learning data in a zoom section where SR deep-learning will be applied through binning processing and then cropping according to the zoom magnification.
  • the input for deep learning can be either a cropped image or an upscaled image after cropping, depending on the configuration of the network 620.
  • the electronic device 101 can process the input of the network 620 of the electronic device 101 without delay by applying an upscale image processing algorithm (e.g., bi-linear interpolation, bi-cubic interpolation, and/or image fusion) within the processor 120 (e.g., ISP 320).
  • a high-resolution image for GT in SR deep-learning may use a non-Bayer image of the same multi-pixel sensor 610 used in the binning process of the electronic device 101.
  • the server 630 may receive a non-Bayer image of the multi-pixel sensor 610 of the electronic device 101, and may store the non-Bayer image received from the electronic device 101 in the memory 640 through the RDI 605 of the server 630.
  • the server 630 may acquire a high-resolution, high-quality Remosaic image (e.g., a high-resolution image) from the non-Bayer image stored in the memory 640 using a designated hardware or software Remosaic algorithm.
  • the applied Remosaic algorithm may be the Remosaic algorithm that prioritizes image quality.
  • In the case of the image-quality-first Remosaic algorithm, processing time and current consumption for obtaining the GT may increase with the amount of computation. However, in one embodiment of the present disclosure, the processing for obtaining the GT is performed by the server 630, and since the GT in the electronic device 101 is used only for SR deep-learning, issues such as processing time and current consumption to obtain the GT may not occur in the network 620 of the electronic device 101.
  • the electronic device 101 can learn through SR deep-learning using a pair of low-resolution learning data and high-resolution GT obtained through the above-described operation.
  • depending on the configuration of the network 620, the deep-learning can be trained by selecting, as the input of the low-resolution training data, either a cropped image or an upscale image with upscaling applied after cropping.
  • the final SR deep-learning result may be output as parameters of the network 620 applied to the electronic device 101.
  • the input of the network 620 of the electronic device 101 may use a binning image using a non-Bayer image of the multi-pixel sensor 610. Therefore, in the present disclosure, the non-Bayer image of the multi-pixel sensor 610 may be used as the output of a zoom camera to which the multi-pixel sensor 610 is applied (e.g., a zoom camera with x3 magnification).
  • the electronic device 101 can provide an x3 zoom image and digital crop zoom images from x3.1 to x5.9 zoom using a binning image generated through image processing with the ISP 320 on the non-Bayer image of the multi-pixel sensor 610.
  • At x6 zoom, the electronic device 101 can provide a zoom image without deterioration by upscaling the binning image, generated through image processing with the ISP 320 on the non-Bayer image of the multi-pixel sensor 610, based on the parameters specified through SR deep-learning using the network 620.
  • a high-quality zoom image can be provided using only a binning image without mode switching and remosaic processing of the multi-pixel sensor 610 applied to the zoom camera.
  • In general, the multi-pixel sensor 610 uses a binning image as a basis and can output a high-resolution image using a Remosaic image according to mode switching in a scenario that requires high resolution. However, Remosaic images are difficult to process in real time, and their noise may be inferior in low light. In contrast, in an embodiment of the present disclosure, the multi-pixel sensor 610 operates to use only binning images, and a high-resolution image is output using the results learned with the Remosaic GT from the binning images.
  • According to an embodiment, the electronic device 101 can operate to output a non-Bayer image through the multi-pixel sensor 610 during a learning operation (e.g., operating as a learning device). According to an embodiment, in practical use (e.g., operating as a device that performs a zoom function when capturing an image), the electronic device 101 can operate so that binning image output is possible through the multi-pixel sensor 610.
  • According to an embodiment, in a zoom section (e.g., the x6 to x12 zoom section) at or above a given zoom magnification (e.g., x6 magnification), SR digital crop zoom based on the SR deep-learning output (e.g., the example in FIG. 9) and high-magnification SR zoom based on the SR deep-learning output (e.g., the example in FIG. 10) can be optionally provided. Examples of this are shown in FIGS. 9 and 10.
  • Figure 9 may show an example of a multi-pixel sensor-based deep-learning zoom scenario according to an embodiment of the present disclosure.
  • Figure 10 may show an example of a multi-pixel sensor-based deep-learning zoom scenario according to an embodiment of the present disclosure.
  • Figure 9 may show an example of an SR digital crop zoom operation based on SR deep-learning output.
  • FIG. 9 may represent an example of an SR first-order deep-learning zoom scenario up to x12.
  • SR digital crop zoom can obtain a zoom image without image quality deterioration even when upscaling through a network 620 to which SR deep-learning is applied.
  • digital crop zoom can be applied at a magnification higher than SR zoom (e.g., x6 zoom with SR deep-learning applied using the multi-pixel sensor 610 of the x3 magnification camera).
  • SR zoom may represent zoom to which SR deep-learning learning is applied to a binning image cropped based on the zoom magnification.
  • the image to which SR digital crop zoom is applied may represent an image to which digital crop zoom is applied to the SR zoom output image independently of SR deep-learning learning.
  • For example, when digital crop zoom with upscaling up to a certain magnification (e.g., up to x2, i.e., the range of x6.1 to x12 magnification) is applied based on the x6 zoom result, images with less image quality deterioration can be obtained, and additional processing time may not be required thanks to image quality improvement through image processing within the ISP 320.
  • According to one embodiment, at zoom magnifications where image quality deterioration occurs due to SR digital crop zoom (e.g., x12.1 or higher), image quality can be further improved by extending SR deep-learning and using the network 620 with secondary SR deep-learning at such magnifications.
  • Figure 10 may show an example of a high-magnification SR zoom operation based on SR deep-learning output.
  • FIG. 10 may show an example of a zoom scenario of SR first and second deep-learning at x12 magnification or more.
  • FIG. 10 may show an example of supporting image quality improvement at zoom magnifications (e.g., x12.1 or higher) at which image quality deterioration occurs with SR digital crop zoom using the binning image of the ISP 320 (e.g., the SR zoom applied at x6 in FIG. 9).
  • the electronic device 101 may configure learning data for high-magnification SR zoom based on secondary deep learning in the learning operation, and can operate to support high-magnification SR zoom during image capture using the learning data about secondary deep learning learned in advance. For example, in the example of FIG. 10, the zoom can be processed based on learning data corresponding to the zoom magnification.
  • Secondary SR deep-learning can be shown, for example, operating based on a 4x4 multi-pixel sensor 610 (e.g., a 200MP sensor), using the outputs of 12MP 1/4 binning, 50MP 1/2 binning, and 200MP Remosaic.
  • the electronic device 101 can use a binning image processed by 4x4 binning to 12MP for zoom at x3 magnification, and can use images cropped from the binning image through digital crop zoom processing for zoom from x3.1 to x5.9.
  • According to one embodiment, when zooming at x6 magnification, the electronic device 101 can operate using data learned as a pair of the learning data learned for x6 magnification (e.g., binning crop low-resolution learning data) and the ground truth data corresponding to the learning data (e.g., 2x2 binning 50MP high-resolution learning data) (e.g., SR first deep learning is learned).
  • the electronic device 101 may use 4x4 binning data as input to the network 620 when learning.
  • According to one embodiment, digital crop zoom using the SR zoom output is applied from x6.1 to x11.9 magnification, and at high-magnification zoom of x12 or more, the SR-zoom-based digital crop zoom output is used as low-resolution learning data for deep-learning together with the 200MP Remosaic image (e.g., SR second deep learning is learned).
  • the electronic device 101 obtains the parameters of the network 620 using the learning data of SR secondary deep learning, and by applying them to the network 620 of the electronic device 101, can provide a high-quality zoom image at high magnification.
  • the network 620 of the electronic device 101 may operate flexibly according to the zoom magnification by changing parameters through first and second SR deep-learning learning.
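  • A sketch of how such a two-stage cascade could be wired; all helper functions, parameter sets, and magnification thresholds here are illustrative assumptions:

```python
def high_magnification_zoom(binned, zoom, sr_apply, crop_upscale,
                            params_x6, params_x12):
    """Sketch of the first/second SR deep-learning cascade of FIGS. 9 and 10.
    sr_apply(img, params): run the SR network with the given parameters.
    crop_upscale(img, factor): plain digital crop zoom."""
    out = sr_apply(binned, params_x6)        # SR first deep-learning at x6
    out = crop_upscale(out, zoom / 6.0)      # SR digital crop zoom (x6.1-x12)
    if zoom > 12.0:
        out = sr_apply(out, params_x12)      # SR second deep-learning (x12.1+)
    return out
```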
  • FIG. 11 is a diagram for explaining an example of an operation of an electronic device according to an embodiment of the present disclosure.
  • FIG. 11 may show an example of a hybrid zoom operation in the electronic device 101.
  • the example of FIG. 11 may show an example of providing a zoom function by applying parameters according to the zoom magnification to learning data based on SR deep-learning using low-resolution images and high-resolution images.
  • a high magnification zoom function with improved image quality is explained using the network 620 (eg, SR Network) of the electronic device 101.
  • the electronic device 101 may include a camera module 180 (e.g., the camera module 180 of FIG. 1 or 2) and a processor 120 (e.g., the processor 120 of FIG. 1 or 3).
  • the electronic device 101 may include memory (eg, memory 130 of FIG. 1 or FIG. 3 ).
  • the components of the electronic device 101 shown in FIG. 11 may correspond to the components of the electronic device 101 described in the description with reference to FIG. 6, and detailed descriptions are omitted.
  • the camera module 180 may represent a zoom camera with a specified magnification and may include a multi-pixel sensor 610 having a multi-pixel structure.
  • the multi-pixel sensor 610 may represent an image sensor with a non-Bayer pattern.
  • the electronic device 101 may perform a zoom operation based on user input while capturing an image.
  • the electronic device 101 may acquire a non-Bayer image 1110 (eg, image data) of the multi-pixel sensor 610 during a zoom operation.
  • the output of the multi-pixel sensor 610 may include a non-Bayer image in the form of multi-pixels.
  • the multi-pixel sensor 610 may receive raw data from the lens of the camera module 180 (e.g., the lens assembly 210 of FIG. 2). According to one embodiment, the multi-pixel sensor 610 can output CFA data (e.g., non-Bayer image 1110) according to the color filter array (CFA) pattern (e.g., non-Bayer pattern) of the multi-pixel sensor 610.
  • the output (eg, non-Bayer image 1110) of the multi-pixel sensor 610 may be input to the processor 120 of the electronic device 101.
  • the processor 120 may perform binning processing on the received non-Bayer image 1110 (block 1101).
  • the electronic device 101 can convert the non-Bayer-based output (e.g., a non-Bayer image 1110 of a 2x2, 3x3, or 4x4 non-Bayer pattern) output from the multi-pixel sensor 610 into a binning image 1120 (e.g., a low-resolution Bayer image) through binning processing.
  • the binning image 1120 may represent a low-resolution image.
  • the output of the multi-pixel sensor 610 can also be subjected to binning processing through a processor inside the sensor; in this case, the sensor's output can be directly input to the ISP 320 without separate binning processing.
  • the processor 120 may perform image signal processing on the binning image 1120 through the ISP 320 (block 1103).
  • the processor 120 may perform image processing such as demosaic, shading correction, color correction, noise removal, and sharpness adjustment on the binning image 1120 through the ISP 320.
  • the processor 120 may output result data 1130 according to image processing.
  • the processor 120 may output result data 1130 obtained through image processing of the binning image 1120 according to the corresponding zoom factor. According to one embodiment, the processor 120 zooms the result data 1130 according to the applied zoom section and can display the zoomed image on a display (e.g., the display module 160 of FIG. 1 or FIG. 3).
  • when the zoom factor applied from the x3 magnification zoom camera is x3, the processor 120 may apply x3 zoom based on the result data 1130 and output it.
  • when the zoom factor applied in the x3 magnification zoom camera is a zoom of x3.1 to x5.9 magnification, the processor 120 can digitally crop and upscale the result data 1130 to match the corresponding zoom factor and output it.
  • when zooming at or above a specified zoom magnification (e.g., a zoom of x6 magnification or greater), the processor 120 can apply the corresponding zoom to the result data 1130 and output it using the parameters matched to the learning data corresponding to the result data 1130.
  • at a zoom of the specified magnification or higher, the processor 120 identifies, from the network 620, the parameters matched to the learning data corresponding to the result data 1130, changes the parameters set for the current shooting to the parameters based on the learning data, and can digitally crop and upscale the result data 1130 according to the zoom magnification based on the changed parameters and output it.
  • the network 620 of the electronic device 101 may manage parameters set for each zoom magnification as a lookup table.
  • the network 620 may be a neural network based on SR deep-learning learning to improve zoom image quality.
  • the image quality of the zoom camera of the electronic device 101 can be improved.
  • zoom image quality can be improved at a higher zoom ratio other than the fixed zoom ratio of the zoom camera.
  • zoom image quality and usage scenarios using the multi-pixel sensor 610 can be improved.
  • the electronic device 101 may not require logic (e.g., Remosaic algorithm) for Remosaic conversion of the multi-pixel sensor 610 of the zoom camera of the electronic device 101, and binning of the Remosaic output of the multi-pixel sensor 610 or mode switching for Remosaic may be unnecessary.
  • the electronic device 101 can secure synchronized images by pairing a low-resolution image (e.g., binning low-resolution training data) generated in the electronic device 101 with a high-resolution image (or actual measurement data) (e.g., Remosaic high-resolution GT image) generated in the server 630, both from the same output of the multi-pixel sensor 610.
  • the electronic device 101 can use pairs of low-resolution images and high-resolution images for SR deep-learning to adaptively change the parameters of the network 620 of the electronic device 101 according to the corresponding zoom magnification, and SR zoom can be expanded by changing parameters.
  • According to an embodiment, the electronic device 101 may include a display (e.g., the display module 160 of FIG. 1 or FIG. 3), a communication circuit (e.g., the wireless communication module 192 of FIG. 1), a camera module including a plurality of cameras (e.g., the camera module 180 of FIGS. 1 to 3), and at least one processor (e.g., the processor 120 of FIGS. 1, 3, 6, or 9) operably connected to the display 160, the communication circuit 192, and the camera module 180.
  • the at least one processor 120 may operate to acquire image data through the camera module. According to one embodiment, the at least one processor 120 may operate to perform binning processing based on the image data and to transmit the image data to a server (e.g., the server 108, 630 of FIG. 1 or FIG. 6) through the communication circuit. According to one embodiment, the at least one processor 120 may operate to obtain a binning image based on the binning processing. According to one embodiment, the at least one processor 120 may operate to obtain training data based on the binning image. According to one embodiment, the at least one processor 120 may operate to obtain ground truth data generated based on the image data from the server. According to one embodiment, the at least one processor 120 may operate to perform learning about the zoom magnification based on the learning data and the actual measurement data. According to one embodiment, the at least one processor 120 may operate to generate and map a parameter corresponding to the zoom magnification.
  • the camera module 180 may include a main camera with a first magnification and an image sensor for output from the main camera.
  • the plurality of cameras may include at least one zoom camera with a second magnification greater than the first magnification, and a multi-pixel sensor (MPS) for output of the at least one zoom camera.
  • the image data may include a non-Bayer image.
  • the learning data may include image-processed data of the binning image converted to a low-resolution Bayer based on the non-Bayer image.
  • the ground truth data may include a remosaic image converted into a high-resolution Bayer image by the server based on the non-Bayer image.
  • the at least one processor 120 may execute an application related to a learning operation based on a user input, execute a zoom camera with a specified magnification among the plurality of cameras based on execution of the application, and obtain the image data from the multi-pixel sensor of the zoom camera.
  • the at least one processor 120 may perform pixel average merge on the non-Bayer image output from the multi-pixel sensor through the binning processing, and may transmit the non-Bayer image to the server through the communication circuit in parallel with the binning processing operation.
  • the at least one processor 120 may include a neural network. According to one embodiment, the at least one processor 120 may perform learning about the zoom magnification based on the learning data and the actual measurement data using the neural network.
  • the at least one processor 120 performs deep-learning through the neural network using a pair of the training data and the ground truth data generated from the same input in a designated zoom mode. (deep-learning)-based learning can be performed.
  • the at least one processor 120 may generate parameters for the neural network to use in a specified zoom mode according to the learning results, map the parameters to the zoom magnification of the specified zoom mode, and manage the parameters for each zoom magnification as a lookup table.
  • the at least one processor 120 may operate to execute an application related to image capture based on user input. According to one embodiment, the at least one processor 120 may operate to receive a user input for executing zoom while performing image capture based on execution of the application. According to one embodiment, the at least one processor 120 may operate to execute a zoom camera with a specified magnification among the plurality of cameras based on the user input. According to one embodiment, the at least one processor 120 may operate to obtain image data from a multi-pixel sensor of the zoom camera.
  • the at least one processor 120 may operate to perform binning processing based on the image data. According to one embodiment, the at least one processor 120 may operate to obtain a binning image based on the binning process. According to one embodiment, the at least one processor 120 may operate to obtain result data based on the binning image. According to one embodiment, when the zoom factor based on the user input corresponds to the first magnification, the at least one processor 120 may operate to perform a first zoom process on the result data corresponding to the zoom factor. .
  • According to one embodiment, when the zoom magnification based on the user input corresponds to a second magnification greater than the first magnification, the at least one processor 120 may operate to perform a second zoom process on the result data based on the parameter corresponding to the second magnification.
  • According to one embodiment, the at least one processor 120 may determine a parameter to be used in the neural network for the second magnification based on the corresponding learning data, change the parameter set for the second magnification to the determined parameter, and digitally crop and upscale the result data according to the zoom magnification based on the changed parameter and output it.
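  • A sketch of this first/second zoom-process branching, assuming a PyTorch-style network; the helper names, the x6 threshold default, and the parameter-loading policy are illustrative assumptions:

```python
import torch

def process_zoom(result_data, zoom, network, param_path_for_zoom,
                 crop_upscale, designated_zoom=6.0, optical_zoom=3.0):
    """Branch between the first and second zoom processes by magnification.
    `param_path_for_zoom` and `crop_upscale` are hypothetical helpers."""
    if zoom < designated_zoom:
        # first zoom process: plain digital crop + upscale (e.g., x3.1-x5.9)
        return crop_upscale(result_data, zoom / optical_zoom)
    # second zoom process: switch the network parameters for this
    # magnification, then crop/upscale the result through the SR network
    network.load_state_dict(torch.load(param_path_for_zoom(zoom)))
    lr = crop_upscale(result_data, zoom / optical_zoom)
    with torch.no_grad():
        return network(lr)
```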
  • Operations performed in the electronic device 101 may be executed by the processor 120, which includes various processing circuitry and/or executable program elements of the electronic device 101. According to one embodiment, operations performed by the electronic device 101 may be stored in the memory 130 as instructions that, when executed, cause the processor 120 to operate.
  • FIG. 12 is a flowchart illustrating a method of operating an electronic device according to an embodiment of the present disclosure.
  • FIG. 12 may show an example of supporting, in the electronic device 101 according to one embodiment, a learning method for the parameters to be applied for each zoom magnification according to the user's zooming when capturing an image.
  • the learning method may be performed, for example, according to the flowchart shown in FIG. 12.
  • the flowchart shown in FIG. 12 is merely a flowchart according to an embodiment of the learning method of the electronic device 101, and the order of at least some operations may be changed, performed in parallel, performed as independent operations, or at least some of the operations. Other operations may be performed complementary to at least some operations.
  • operations 1201 to 1215 may be performed by at least one processor 120 of the electronic device 101.
  • the operation described in FIG. 12 is, for example, performed heuristically in combination with the operations described in FIGS. 3 to 11, or is performed as a detailed operation of some of the described operations. It can be performed heuristically.
  • an operation method (e.g., a learning method to support SR zoom) performed by the electronic device 101 according to an embodiment may include an operation 1201 of acquiring image data of the camera module 180, an operation 1203 of performing binning processing based on the image data and transmitting the image data to the server 630, an operation 1205 of acquiring a binning image, an operation 1207 of performing image processing on the binning image, an operation 1209 of acquiring learning data, an operation 1211 of acquiring ground truth data from the server 630, an operation 1213 of performing learning about the zoom magnification based on the learning data and the ground truth data, and an operation 1215 of generating and mapping a parameter corresponding to the zoom magnification.
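  • Read as code, the flow of operations 1201 to 1215 might look like the following sketch; every object and helper here is a hypothetical stand-in rather than an actual API of the electronic device 101:

```python
def learning_flow(camera, isp, network, server, bin_image, train):
    """Hypothetical orchestration of operations 1201-1215; every object
    and helper is an illustrative stand-in, not an actual device API."""
    raw = camera.get_non_bayer_image()          # 1201: acquire image data
    server.send(raw)                            # 1203: transmit to server
    binned = bin_image(raw)                     # 1203/1205: binning image
    learning_data = isp.process(binned)         # 1207/1209: learning data
    gt = server.receive_ground_truth()          # 1211: Remosaic GT
    params = train(network, learning_data, gt)  # 1213: learn for the zoom
    return {camera.zoom_magnification: params}  # 1215: map parameter to zoom
```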
  • the processor 120 of the electronic device 101 may perform an operation to obtain image data of the camera module 180.
  • the processor 120 may execute an application (eg, a camera application) related to a learning operation based on user input.
  • the processor 120 may execute (or activate) the camera module 180 based on application execution.
  • the processor 120 may control the display module 160 to display an image (eg, a preview image) acquired through the camera module 180.
  • the processor 120 may start capturing an image based on detecting a user input for executing image capturing.
  • the processor 120 may receive a user input for zooming (e.g., zooming in) while performing image capture, and may execute the zoom function of the camera module 180 (e.g., activate the zoom camera) based on the user input. For example, the processor 120 may perform a zoom operation based on the user input while capturing an image. According to one embodiment, the processor 120 may execute a zoom camera corresponding to a specified magnification among the plurality of cameras of the camera module 180 and obtain image data from the multi-pixel sensor 610 of the zoom camera.
  • the processor 120 may acquire image data of the multi-pixel sensor 610 during a zoom operation.
  • the image data may include non-Bayer images.
  • the output of the multi-pixel sensor 610 may include a non-Bayer image based on a non-Bayer pattern of multi-pixels.
  • the multi-pixel sensor 610 may receive unprocessed raw data from a lens (eg, micro lens) of the camera module 180 while capturing an image.
  • the multi-pixel sensor 610 can output CFA data (e.g., non-Bayer image) according to the color filter array (CFA) pattern (e.g., non-Bayer pattern) of the multi-pixel sensor 610.
  • the processor 120 may perform binning processing based on image data and transmit the image data to the server 630.
  • the processor 120 may perform binning processing on the received non-Bayer image.
  • the processor 120 can perform pixel average merge through binning processing on the non-Bayer-based output (e.g., a non-Bayer image of a 2x2, 3x3, or 4x4 non-Bayer pattern) output from the multi-pixel sensor 610.
  • the processor 120 can transmit the image data received from the multi-pixel sensor 610 to the designated server 630 through a communication circuit (e.g., the wireless communication module 192 of FIG. 1), in parallel with (or heuristically with) the binning processing operation.
  • the processor 120 may perform an operation to obtain a binning image.
  • the processor 120 may acquire a binning image (e.g., a low-resolution Bayer image) with an excellent sensitivity ratio based on pixel average merge through binning processing.
  • the binning image may represent a low resolution image.
  • the processor 120 may perform image processing on the binning image.
  • the processor 120 may perform image signal processing on the binning image through the ISP 320.
  • the processor 120 may perform image processing such as demosaicing, shading correction, color correction, noise removal, and sharpness adjustment on the binning image.
  • the processor 120 may perform an operation to obtain training data.
  • the processor 120 may output learning data according to image processing based on the ISP 320.
  • the processor 120 may perform an operation to obtain actual measurement data from the server 630. For example, the processor 120 may receive, through a communication circuit (e.g., the wireless communication module 192 of FIG. 1), a high-quality image (e.g., Remosaic image) generated by the server 630 based on the image data (e.g., non-Bayer image).
  • the processor 120 may perform an operation of learning about the zoom magnification based on training data and actual measurement data.
  • the processor 120 can perform deep-learning-based learning through the network 620 (e.g., a neural network).
  • the processor 120 can train the network 620 using a pair of training data and ground truth data generated from the same input (e.g., a non-Bayer image of the multi-pixel sensor 610) in a designated zoom mode.
  • the processor 120 may generate and map a parameter corresponding to the zoom factor.
  • the processor 120 may generate parameters that the network 620 will use (or be applicable to) in a designated zoom mode according to learning results.
  • the processor 120 may map parameters based on learning results to a zoom magnification of a designated zoom mode. For example, the processor 120 may manage parameters for each zoom magnification to be used in the network 620 using a lookup table.
  • FIG. 13 is a flowchart illustrating a method of operating an electronic device according to an embodiment of the present disclosure.
  • FIG. 13 may show an example of supporting, in the electronic device 101 according to one embodiment, SR zoom by changing parameters for each zoom magnification according to the user's zooming when capturing an image.
  • a method of supporting SR zoom when capturing an image may be performed, for example, according to the flowchart shown in FIG. 13.
  • the flowchart shown in FIG. 13 is merely a flowchart according to an embodiment of the SR zoom operation of the electronic device 101, and the order of at least some operations may be changed, performed in parallel, performed as independent operations, or at least Some other operations may be performed complementary to at least some of the operations.
  • operations 1301 to 1319 may be performed by at least one processor 120 of the electronic device 101.
  • the operation described in FIG. 13 is, for example, performed heuristically in combination with the operations described in FIGS. 3 to 11, or heuristically performed as a detailed operation of some of the described operations. It can be.
  • an operation method (e.g., an image processing method to support SR zoom) performed by the electronic device 101 according to an embodiment may include an operation 1301 of acquiring image data of the camera module 180, an operation 1303 of performing binning processing based on the image data, an operation 1305 of acquiring a binning image, an operation 1307 of performing image processing on the binning image, an operation 1309 of acquiring result data, an operation 1311 of determining whether the zoom operation is in a designated zoom section, an operation 1313 of performing, if the zoom operation is not in the designated zoom section, a first zoom process on the result data according to the zoom magnification, an operation 1315 of determining, if the zoom operation is in the designated zoom section, a parameter corresponding to the zoom magnification, an operation 1317 of performing a second zoom process based on the parameter, and an operation 1319 of outputting an image according to the zoom processing.
  • the processor 120 of the electronic device 101 may perform an operation to obtain image data of the camera module 180.
  • the processor 120 may execute an application related to image capture (eg, a camera application) based on user input.
  • the processor 120 may execute (or activate) the camera module 180 based on application execution.
  • the processor 120 may control the display module 160 to display an image (eg, a preview image) acquired through the camera module 180.
  • the processor 120 may start capturing an image based on detecting a user input for executing image capturing.
  • the processor 120 may receive a user input for zooming (e.g., zooming in) while performing image capture, and may execute the zoom function of the camera module 180 (e.g., activate the zoom camera) based on the user input. For example, the processor 120 may perform a zoom operation based on the user input while capturing an image. According to one embodiment, the processor 120 may execute a zoom camera corresponding to a specified magnification among the plurality of cameras of the camera module 180 and obtain image data from the multi-pixel sensor 610 of the zoom camera.
  • the processor 120 may acquire image data of the multi-pixel sensor 610 during a zoom operation.
  • the image data may include non-Bayer images.
  • the output of the multi-pixel sensor 610 may include a non-Bayer image based on a non-Bayer pattern of multi-pixels.
  • the multi-pixel sensor 610 may receive unprocessed raw data from a lens (eg, micro lens) of the camera module 180 while capturing an image.
  • the multi-pixel sensor 610 may output CFA data (e.g., a non-Bayer image) according to the color filter array (CFA) pattern (e.g., non-Bayer pattern) of the multi-pixel sensor 610.
  • the processor 120 may perform binning processing based on the image data (e.g., non-Bayer image). According to one embodiment, the processor 120 may perform binning processing on the received non-Bayer image. According to one embodiment, the processor 120 may perform pixel average merge, through binning processing, on the non-Bayer-based output (e.g., a non-Bayer image of a 2x2, 3x3, or 4x4 non-Bayer pattern) output from the multi-pixel sensor 610.
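  • As a minimal sketch of pixel average merge, assuming a 2x2-cell (quad) non-Bayer pattern; the function name bin_quad_cells is illustrative:

    import numpy as np

    def bin_quad_cells(cfa: np.ndarray, cell: int = 2) -> np.ndarray:
        """Average each cell x cell same-color block of a multi-pixel CFA.

        For a 2x2 (quad) non-Bayer pattern this yields a Bayer image at half
        the resolution, with improved sensitivity from the averaging.
        """
        h, w = cfa.shape
        blocks = cfa[: h - h % cell, : w - w % cell].reshape(
            h // cell, cell, w // cell, cell
        )
        return blocks.mean(axis=(1, 3))

    raw = np.arange(64, dtype=np.float32).reshape(8, 8)
    print(bin_quad_cells(raw).shape)  # (4, 4): low-resolution Bayer image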
  • the processor 120 may perform an operation to obtain a binning image.
  • the processor 120 may acquire a binning image (e.g., a low-resolution Bayer image) with an improved sensitivity ratio based on the pixel average merge performed through binning processing.
  • the binning image may represent a low resolution image.
  • the processor 120 may perform image processing on the binning image.
  • the processor 120 may perform image signal processing on the binning image through the ISP 320.
  • the processor 120 may perform image processing such as demosaicing, shading correction, color correction, noise removal, and sharpness adjustment on the binning image.
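  • A toy stand-in for such an image-signal-processing chain is sketched below; it is not the disclosed ISP 320, the OpenCV calls are one of many possible choices, and the input is assumed to be an 8-bit Bayer image:

    import cv2
    import numpy as np

    def simple_isp(bayer: np.ndarray) -> np.ndarray:
        """Illustrative chain: demosaic, simple color gain, denoise, sharpen."""
        rgb = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)               # demosaicing
        rgb = cv2.convertScaleAbs(rgb, alpha=1.1, beta=0)              # crude color/shading gain
        rgb = cv2.fastNlMeansDenoisingColored(rgb, None, 5, 5, 7, 21)  # noise removal
        blur = cv2.GaussianBlur(rgb, (0, 0), sigmaX=1.5)
        return cv2.addWeighted(rgb, 1.5, blur, -0.5, 0)                # unsharp-mask sharpening

    out = simple_isp(np.random.randint(0, 256, (64, 64), dtype=np.uint8))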
  • the processor 120 may perform an operation to obtain result data.
  • the processor 120 may output result data according to image processing based on the ISP 320.
  • the processor 120 may perform an operation to determine whether the zoom operation is in a designated zoom section.
  • the processor 120 may determine whether the zoom magnification for the zoom operation based on the user input corresponds to a first magnification (e.g., a zoom magnification in a fixed range, such as x3 or x3.1 to x5.9) or to a second magnification greater than the first magnification (e.g., a zoom magnification, such as an SR zoom magnification, at or above a specified high magnification such as x6).
  • the designated zoom section may represent, for example, a section including a second magnification that is greater than the first magnification.
  • in operation 1313, the processor 120 may perform first zoom processing on the result data according to the zoom magnification.
  • the processor 120 may apply x3 zoom based on the result data and output the result.
  • when the zoom magnification applied from the x3 zoom camera is x3.1 to x5.9, the processor 120 may digitally crop the result data, upscale it to fit the corresponding zoom magnification, and output it. A minimal sketch of this crop-and-upscale step follows.
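  • The sketch assumes the x3 camera as the base optical magnification; crop_upscale is an illustrative name, not from the disclosure:

    import cv2
    import numpy as np

    def crop_upscale(image: np.ndarray, zoom: float, base: float = 3.0) -> np.ndarray:
        """First zoom processing: center-crop by base/zoom, then upscale back."""
        h, w = image.shape[:2]
        scale = base / zoom                  # e.g., x4.5 on the x3 camera -> 2/3 crop
        ch, cw = int(h * scale), int(w * scale)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        crop = image[y0 : y0 + ch, x0 : x0 + cw]
        return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)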
  • the processor 120 may perform an operation of determining a parameter corresponding to the zoom magnification in operation 1315.
  • according to one embodiment, when the zoom magnification (e.g., SR zoom magnification) exceeds the specified high magnification (e.g., x6), the processor 120 may extract learning data corresponding to the result data from the network 620 and determine the parameters to be used in the network 620 (e.g., neural network) based on the learning data. According to one embodiment, the processor 120 may change the settings of the parameters to be used in the network 620 in real time.
  • the processor 120 may perform a second zooming operation based on the parameter.
  • the processor 120 may apply the corresponding zoom using the determined parameters and output the result.
  • according to one embodiment, the processor 120 may change the parameters currently set for capturing to the parameters based on the learning data, digitally crop the result data according to the zoom magnification based on the changed parameters, upscale it, and output it. A minimal sketch of this parameter swap follows.
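  • Illustrative only, assuming a PyTorch super-resolution model trained per magnification; sr_zoom, the tensor layout, and the HxWx3 input assumption are hypothetical choices, not the disclosed implementation:

    import numpy as np
    import torch

    def sr_zoom(result: np.ndarray, zoom: float, param_path: str,
                model: torch.nn.Module) -> torch.Tensor:
        """Second zoom processing: swap in the parameters mapped to this
        magnification, then run the result data through the network."""
        state = torch.load(param_path, map_location="cpu")
        model.load_state_dict(state)   # change parameter settings in real time
        model.eval()
        with torch.no_grad():
            x = torch.from_numpy(result).permute(2, 0, 1)[None].float() / 255.0
            return model(x)            # network crops/upscales per its training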
  • the processor 120 may perform an operation of outputting an image according to zoom processing.
  • the processor 120 may operate to output an image according to the first zoom processing operation of operation 1313 or the second zoom processing operation of operation 1317.
  • the processor 120 may output result data obtained through image processing of a binning image according to the corresponding zoom factor.
  • the processor 120 may zoom the result data according to the applied zoom section and display the zoomed image on a display (e.g., the display module 160 of FIG. 1 or FIG. 3).
  • An operation method performed in the electronic device 101 may include an operation of acquiring image data through a camera module (e.g., the camera module 180 in FIGS. 1 to 3) of the electronic device 101. The operation method may include an operation of performing binning processing based on the image data and transmitting the image data to a server (e.g., the server 108, 630 of FIG. 1 or FIG. 6) through a communication circuit (e.g., the wireless communication module 192 of FIG. 1). The operation method may include an operation of acquiring a binning image based on the binning processing. The operation method may include an operation of acquiring learning data based on the binning image.
  • the operating method may include performing an operation of acquiring actual measurement data generated based on the image data from the server.
  • the operation method may include performing an operation to learn about the zoom magnification based on the learning data and the actual measurement data.
  • the operating method may include performing an operation of generating and mapping a parameter corresponding to the zoom magnification.
  • the image data may include a non-Bayer image.
  • the learning data may include image-processed data of the binning image, which is converted into a low-resolution Bayer image based on the non-Bayer image.
  • the ground truth data may include a remosaic image converted into a high-resolution Bayer image by the server based on the non-Bayer image.
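  • A naive, illustrative remosaic is sketched below: it merely permutes quad-pattern pixels into standard Bayer order within each 4x4 block, whereas a production remosaic would also interpolate; all names are hypothetical, and the image sides are assumed to be multiples of 4:

    import numpy as np

    BAYER = np.array([["G", "R"], ["B", "G"]])

    def quad_color(y: int, x: int) -> str:
        # Color at (y, x) in a quad pattern built from 2x2 same-color cells.
        return BAYER[(y // 2) % 2, (x // 2) % 2]

    def naive_remosaic(cfa: np.ndarray) -> np.ndarray:
        """Rearrange quad-pattern pixels into Bayer order, per 4x4 block."""
        out = np.empty_like(cfa)
        h, w = cfa.shape
        for by in range(0, h, 4):
            for bx in range(0, w, 4):
                used = set()
                for y in range(4):
                    for x in range(4):
                        want = BAYER[y % 2, x % 2]
                        # nearest unused same-color source pixel in this block
                        _, sy, sx = min(
                            (abs(sy - y) + abs(sx - x), sy, sx)
                            for sy in range(4) for sx in range(4)
                            if quad_color(sy, sx) == want and (sy, sx) not in used
                        )
                        used.add((sy, sx))
                        out[by + y, bx + x] = cfa[by + sy, bx + sx]
        return out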
  • the operation of acquiring the image data may include an operation of executing an application related to a learning operation based on a user input, an operation of executing a zoom camera of a specified magnification among the plurality of cameras based on execution of the application, and an operation of obtaining the image data from a multi-pixel sensor (MPS) of the zoom camera.
  • the binning processing and transmitting operations may include an operation of performing pixel average merge, through binning processing, on the non-Bayer image output from the multi-pixel sensor, and an operation of transmitting the non-Bayer image to the server through the communication circuit.
  • the operation of performing the learning may include an operation of performing learning about the zoom magnification based on the learning data and the actual measurement data using a neural network.
  • the operation of performing the learning may include an operation of performing deep-learning-based learning through the neural network using a pair of the learning data and the ground truth data generated from the same input in a designated zoom mode. A minimal sketch of such paired training follows.
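  • Illustrative only, assuming PyTorch and an L1 reconstruction loss (the loss choice is an assumption, not from the disclosure); pairs is assumed to yield batched (learning data, ground truth) tensors produced from the same sensor input:

    import torch
    from torch import nn, optim

    def train_zoom_mode(model: nn.Module, pairs, epochs: int = 10,
                        learning_rate: float = 1e-4) -> dict:
        """Learn one designated zoom mode from paired data, then return the
        parameters to be mapped to that zoom magnification."""
        opt = optim.Adam(model.parameters(), lr=learning_rate)
        loss_fn = nn.L1Loss()
        for _ in range(epochs):
            for learn_img, gt_img in pairs:   # learning data, ground truth
                opt.zero_grad()
                loss = loss_fn(model(learn_img), gt_img)
                loss.backward()
                opt.step()
        return model.state_dict()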
  • the mapping operation may include an operation of generating parameters to be used by the neural network in the designated zoom mode according to a learning result, an operation of mapping the parameters to the zoom magnification of the designated zoom mode, and an operation of managing the parameters for each zoom magnification in a lookup table.
  • the operating method may include executing an application related to image capture based on a user input.
  • the operation method may include performing an operation of receiving a user input for executing zoom while performing image capture based on execution of the application.
  • the operating method may include executing a zoom camera with a specified magnification among the plurality of cameras based on the user input.
  • the operating method may include performing an operation of acquiring image data from a multi-pixel sensor of the zoom camera.
  • the operating method may include performing binning processing based on the image data.
  • the operating method may include performing an operation of acquiring a binning image based on the binning process.
  • the operation method may include performing an operation to obtain result data based on the binning image.
  • the operation method may include performing a first zoom process on the result data corresponding to the zoom ratio when the zoom ratio based on the user input corresponds to the first magnification.
  • the operation method may include performing second zoom processing on the result data based on a parameter corresponding to a second magnification when the zoom magnification based on the user input corresponds to the second magnification greater than the first magnification.
  • the operation of performing the second zoom processing may include an operation of determining a parameter to be used in a neural network based on learning data corresponding to the second magnification, an operation of changing the parameter set for the second magnification to the determined parameter, and an operation of digitally cropping the result data and then upscaling and outputting it according to the zoom magnification based on the changed parameter.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the present disclosure relates to a method capable of supporting learning-based image quality improvement using an image from an image sensor, and an electronic device supporting the same. The electronic device may comprise: a display; a communication circuit; a camera module comprising a plurality of cameras; and a processor. The processor may acquire image data through the camera module. The processor may perform binning processing based on the image data and transmit the image data to a server through the communication circuit. The processor may acquire a binning image based on the binning processing. The processor may acquire learning data based on the binning image. The processor may acquire, from the server, actual measurement data generated based on the image data. The processor may perform learning about a zoom magnification based on the learning data and the actual measurement data. The processor may generate and map a parameter corresponding to the zoom magnification.
PCT/KR2023/015023 2022-10-18 2023-09-27 Learning-based image quality improvement method using an image from an image sensor, and electronic device supporting the same WO2024085501A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2022-0133753 2022-10-18
KR20220133753 2022-10-18
KR1020220185438A KR20240054134A (ko) 2022-12-27 Learning-based image quality improvement method using an image of an image sensor, and electronic device supporting the same
KR10-2022-0185438 2022-12-27

Publications (1)

Publication Number Publication Date
WO2024085501A1 true WO2024085501A1 (fr) 2024-04-25

Family

ID=90737907

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/015023 WO2024085501A1 (fr) Learning-based image quality improvement method using an image from an image sensor, and electronic device supporting the same

Country Status (1)

Country Link
WO (1) WO2024085501A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190110965A * 2019-09-11 2019-10-01 LG Electronics Inc. Method and apparatus for enhancing image resolution
KR20200025107A * 2018-08-29 2020-03-10 Samsung Electronics Co., Ltd. Image sensor, electronic device including the image sensor, and image zoom processing method
KR20210130972A * 2020-04-23 2021-11-02 Samsung Electronics Co., Ltd. Color filter of an electronic device, and the electronic device
KR20220038638A * 2021-02-02 2022-03-29 LG Innotek Co., Ltd. Image sensor, camera module, and optical device including the camera module
KR102437193B1 * 2020-07-31 2022-08-30 Dongguk University Industry-Academic Cooperation Foundation Parallel deep neural network apparatus and method trained on images resized according to a plurality of magnifications


Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23880080

Country of ref document: EP

Kind code of ref document: A1