WO2023128349A1 - Super-resolution imaging method using cooperative learning - Google Patents

Super-resolution imaging method using cooperative learning

Info

Publication number
WO2023128349A1
WO2023128349A1 · PCT/KR2022/019576
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
model
generating
learning
Prior art date
Application number
PCT/KR2022/019576
Other languages
English (en)
Korean (ko)
Inventor
이상윤
윤광진
Original Assignee
주식회사 에스아이에이
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 에스아이에이
Publication of WO2023128349A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046 Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks

Definitions

  • the present disclosure relates to a super-resolution imaging method, and more specifically, to a super-resolution imaging method utilizing a model learned based on multiple source images.
  • Korean Patent Registration No. 10-2337412 (2021.12.06) discloses a deep learning-based super-resolution imaging method.
  • the present disclosure is to solve the above-described problems of the prior art, and an object of the present disclosure is to perform super-resolution imaging by utilizing a model learned based on multiple source images.
  • a deep learning-based super-resolution imaging method performed by a computing device including at least one processor is disclosed.
  • the method is an imaging method performed by a computing device including at least one processor 110, and includes generating, using an imaging model, an image having a high resolution in comparison with an input image.
  • the method may include generating, using a first model, a plurality of second images having a low resolution in comparison with at least one first image, based on the at least one first image.
  • the method may also include generating, using a second model, a plurality of third images having a high resolution in comparison with the second images, based on the plurality of second images. In this case, the second model may be trained based on a comparison using the plurality of third images.
  • the first model includes a plurality of first sub-models. The plurality of first sub-models output the plurality of second images based on the at least one first image, and may be different from each other.
  • generating the plurality of second images may include adding noise to the at least one first image.
  • the method may further include generating the plurality of second images based on the at least one first image to which the noise is added.
  • adding noise to the at least one first image may include generating a single-channel noise sample; generating a multi-channel noise sample having a plurality of channels by calculating the single-channel noise sample with a plurality of scaling factors and concatenating the resulting values; and concatenating the multi-channel noise sample with a feature map of the at least one first image.
  • the single channel noise sample may include Gaussian noise.
  • the scaling factor may be learned based on at least one of the at least one first image and the plurality of third images.
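  • as an illustration of the noise-insertion steps above, the following is a minimal PyTorch-style sketch, not the patent's actual implementation: the module name, channel counts, and tensor shapes are assumptions. A single-channel Gaussian noise sample is multiplied by learnable scaling factors, the results are concatenated into a multi-channel noise sample, and that sample is concatenated with the feature map of the first image.

```python
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Sketch of the described noise insertion (hypothetical module name)."""

    def __init__(self, noise_channels: int):
        super().__init__()
        # Learnable scaling factors, one per noise channel; making them an
        # nn.Parameter lets them be adjusted by backpropagation, consistent
        # with the statement that the scaling factor may be learned.
        self.scale = nn.Parameter(torch.ones(noise_channels))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, _, h, w = feat.shape
        # Single-channel Gaussian noise sample.
        noise = torch.randn(b, 1, h, w, device=feat.device)
        # Calculate the noise sample with each scaling factor and
        # concatenate the results into a multi-channel noise sample.
        multi = torch.cat([s * noise for s in self.scale], dim=1)
        # Concatenate the multi-channel noise sample with the feature map.
        return torch.cat([feat, multi], dim=1)

feat = torch.randn(2, 64, 32, 32)             # hypothetical feature map
out = NoiseInjection(noise_channels=4)(feat)  # -> shape (2, 68, 32, 32)
```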
  • the second model may be trained based on the first learning method, the second learning method, or the third learning method.
  • the second model may include a plurality of second sub-models.
  • each of the plurality of second sub-models may output a third image set including the plurality of third images based on the plurality of second images.
  • the plurality of second sub-models may be different from each other.
  • the first learning method may include generating a first loss value based on comparing at least one of the plurality of third images with the at least one first image, and training the second model based on the first loss value.
  • the second learning method may include generating a third image set including a plurality of third images based on the second images using the plurality of second sub-models; comparing elements included in the third image set with each other and outputting a second loss value; and training the second model based on the second loss value.
  • the generating of the third image set may include generating a third image_ji by using a second sub-model_i, which is an i-th second sub-model, based on a second image_j, which is a j-th image among the plurality of second images, and generating a third image_jj by using a second sub-model_j, which is a j-th second sub-model, based on the second image_j. The outputting of the second loss value includes comparing the third image_ji with the third image_jj and outputting the second loss value.
  • i and j may be natural numbers having the same maximum value.
  • the third learning method may ensemble elements included in the third image sets generated by the plurality of second sub-models to output pseudo labels.
  • the method may include outputting a third loss value based on the pseudo label and a third image, and training the second model based on the third loss value.
  • i and j are described as consecutive letters for convenience, but may not represent consecutive values in the embodiment. That is, j may not be i+1.
  • a computer program is disclosed according to an embodiment of the present disclosure for realizing the above object.
  • when the computer program is executed on one or more processors, it performs operations for generating a high-resolution image using an imaging model: an operation of generating, using a first model, a plurality of second images having a low resolution in comparison with at least one first image, based on the at least one first image, and an operation of generating, using a second model, a plurality of third images having a high resolution in comparison with the plurality of second images, based on the plurality of second images.
  • the second model may be learned based on comparison using the plurality of third images.
  • An apparatus for realizing the above object is disclosed, the apparatus comprising: a processor including one or more cores; a network unit; and a memory.
  • the processor may generate, using the imaging model, an image having a high resolution in comparison with the input image.
  • the processor may generate, using the first model, a plurality of second images having a low resolution in comparison with the at least one first image, based on the at least one first image.
  • the processor may generate, using the second model, a plurality of third images having a high resolution in comparison with the plurality of second images, based on the plurality of second images.
  • the second model may be learned based on comparison using the plurality of third images.
  • the present disclosure can improve super-resolution imaging performance by utilizing a model learned based on multiple source images.
  • FIG. 1 is a schematic diagram of a computing device for imaging according to an embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram illustrating a network function according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram briefly illustrating an imaging method and a first learning method performed by a computing device including at least one processor on which embodiments of the present disclosure may be implemented.
  • FIG. 4 is a schematic diagram of a method of inserting noise into a first image according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a method for learning a second model through a collaborative learning method, which is a second learning method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a method for learning a second model based on an ensemble, which is a third learning method according to an embodiment of the present disclosure.
  • FIG. 7 is a flowchart illustrating a process of performing imaging according to an embodiment of the present disclosure.
  • FIG. 8 is a simplified and general schematic diagram of an exemplary computing environment in which embodiments of the present disclosure may be implemented.
  • a component may be, but is not limited to, a procedure, processor, object, thread of execution, program, and/or computer running on a processor.
  • an application running on a computing device and a computing device may be components.
  • One or more components may reside within a processor and/or thread of execution.
  • a component can be localized within a single computer.
  • a component may be distributed between two or more computers. Also, these components can execute from various computer readable media having various data structures stored thereon.
  • Components may communicate via local and/or remote processes, for example via signals having one or more packets of data (e.g., data and/or signals from one component interacting with another component in a local system or a distributed system, and/or data transmitted to other systems over a network such as the Internet).
  • the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless otherwise specified or clear from the context, “X employs A or B” is intended to mean any of the natural inclusive permutations: X employs A; X employs B; or X employs both A and B. Also, the term “and/or” as used herein should be understood to refer to and include all possible combinations of one or more of the listed related items.
  • Skilled artisans will recognize that the various illustrative logical blocks, components, modules, circuits, means, logics, and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, configurations, means, logics, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of this disclosure.
  • the terms network function, artificial neural network, and neural network may be used interchangeably.
  • a super resolution image can be understood as an image reconstructed at high resolution while maintaining content based on a relatively low resolution image.
  • the content may be an area determined to be significant when semantically segmenting the data. Examples include mountains, rivers, sky, dogs, cats, and trees.
  • the series of processes for generating the super-resolution image is referred to as performing super-resolution imaging.
  • a low-resolution image used in a series of processes for performing super-resolution imaging is referred to as a low resolution (LR) image.
  • a relatively higher resolution image than LR is referred to as a high resolution (HR) image.
  • the image reconstructed at high resolution by super-resolution imaging is referred to as a super resolution (SR) image.
  • the SR model may be understood as a model for reconstructing an LR image into an SR image.
  • a degradation generator (DG) model may be understood as a model for reconstructing an HR image into an LR image.
  • the present disclosure may utilize various types of images including at least one first image, a plurality of second images, a plurality of third images, and the like.
  • the at least one first image may include at least one HR image.
  • the plurality of second images may include multiple source images composed of LR images including different contents.
  • the plurality of third images may include multiple source images composed of SR images including different contents.
  • the present disclosure can improve the above disadvantages and improve the performance of super-resolution imaging.
  • FIG. 1 is a schematic diagram of a computing device for performing a super-resolution imaging method according to an embodiment of the present disclosure.
  • the configuration of the computing device 100 shown in FIG. 1 is only a simplified example.
  • the computing device 100 may include other components for performing a computing environment of the computing device 100, and only some of the disclosed components may constitute the computing device 100.
  • the computing device 100 may include a processor 110 , a memory 130 , and a network unit 150 .
  • the processor 110 may include one or more cores, and may include processors for data analysis and deep learning, such as a central processing unit (CPU), a general purpose graphics processing unit (GPGPU), and a tensor processing unit (TPU) of the computing device.
  • the processor 110 may read a computer program stored in the memory 130 and process data for machine learning according to an embodiment of the present disclosure. According to an embodiment of the present disclosure, the processor 110 may perform an operation for learning a neural network.
  • the processor 110 is used for neural network learning, such as processing input data for learning in deep learning (DL), extracting features from input data, calculating errors, and updating neural network weights using backpropagation. calculations can be performed.
  • At least one of the CPU, GPGPU, and TPU of the processor 110 may process learning of the network function.
  • the CPU and GPGPU can process learning of network functions and data classification using network functions.
  • the learning of a network function and data classification using a network function may be processed by using processors of a plurality of computing devices together.
  • a computer program executed in a computing device according to an embodiment of the present disclosure may be a CPU, GPGPU or TPU executable program.
  • the memory 130 may store any type of information generated or determined by the processor 110 and any type of information received by the network unit 150 .
  • the memory 130 may include at least one type of storage medium among flash memory, hard disk, multimedia card micro, card-type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, and optical disk.
  • the computing device 100 may operate in relation to a web storage that performs a storage function of the memory 130 on the Internet.
  • the above description of the memory is only an example, and the present disclosure is not limited thereto.
  • the network unit 150 may use various wired communication systems such as Public Switched Telephone Network (PSTN), x Digital Subscriber Line (xDSL), Rate Adaptive DSL (RADSL), Multi Rate DSL (MDSL), Very High Speed DSL (VDSL), Universal Asymmetric DSL (UADSL), High Bit Rate DSL (HDSL), and Local Area Network (LAN).
  • the network unit 150 presented in this specification may use various wireless communication systems such as Code Division Multi Access (CDMA), Time Division Multi Access (TDMA), Frequency Division Multi Access (FDMA), Orthogonal Frequency Division Multi Access (OFDMA), Single Carrier-FDMA (SC-FDMA), and other systems.
  • the network unit 150 may be configured regardless of its communication mode, such as wired and wireless, and may be configured with various communication networks such as a personal area network (PAN) and a wide area network (WAN).
  • the network may be the known World Wide Web (WWW), or may use a wireless transmission technology used for short-range communication, such as Infrared Data Association (IrDA) or Bluetooth.
  • FIG. 2 is a schematic diagram illustrating a network function according to an embodiment of the present disclosure.
  • a neural network may consist of a set of interconnected computational units, which may generally be referred to as nodes. These nodes may also be referred to as neurons.
  • a neural network includes one or more nodes. Nodes (or neurons) constituting neural networks may be interconnected by one or more links.
  • one or more nodes connected through a link may form a relative relationship of an input node and an output node.
  • the concept of an input node and an output node is relative, and any node in an output node relationship with one node may have an input node relationship with another node, and vice versa.
  • an input node to output node relationship may be created around a link. More than one output node can be connected to one input node through a link, and vice versa.
  • the value of data of the output node may be determined based on data input to the input node.
  • a link interconnecting an input node and an output node may have a weight.
  • the weight may be variable, and may be changed by a user or an algorithm in order for the neural network to perform a desired function. For example, when one or more input nodes are interconnected to one output node by respective links, the output node value may be determined based on the values input to the input nodes connected to the output node and the weights set on the links corresponding to the respective input nodes.
  • one or more nodes are interconnected through one or more links to form an input node and output node relationship in the neural network.
  • Characteristics of the neural network may be determined according to the number of nodes and links in the neural network, an association between the nodes and links, and a weight value assigned to each link. For example, when there are two neural networks having the same number of nodes and links and different weight values of the links, the two neural networks may be recognized as different from each other.
  • a neural network may be composed of a set of one or more nodes.
  • a subset of nodes constituting a neural network may constitute a layer.
  • Some of the nodes constituting the neural network may form one layer based on distances from the first input node.
  • a set of nodes having a distance of n from the first input node may constitute the n-th layer.
  • the distance from the first input node may be defined by the minimum number of links that must be passed through to reach the corresponding node from the first input node.
  • the definition of such a layer is arbitrary for explanation, and the order of a layer in a neural network may be defined in a method different from the above.
  • a layer of nodes may be defined by a distance from a final output node.
  • An initial input node may refer to one or more nodes to which data is directly input without going through a link in relation to other nodes among nodes in the neural network.
  • it may mean nodes that do not have other input nodes connected by a link.
  • the final output node may refer to one or more nodes that do not have an output node in relation to other nodes among nodes in the neural network.
  • the hidden node may refer to nodes constituting the neural network other than the first input node and the last output node.
  • the number of nodes in the input layer may be the same as the number of nodes in the output layer, and the number of nodes may decrease and then increase again progressing from the input layer through the hidden layers.
  • the neural network according to another embodiment of the present disclosure may be a neural network in which the number of nodes in the input layer is less than the number of nodes in the output layer, and the number of nodes decreases progressing from the input layer to the hidden layers.
  • the neural network according to another embodiment of the present disclosure may be a neural network in which the number of nodes in the input layer is greater than the number of nodes in the output layer, and the number of nodes increases progressing from the input layer to the hidden layers.
  • a neural network according to another embodiment of the present disclosure may be a neural network in the form of a combination of the aforementioned neural networks.
  • a deep neural network may refer to a neural network including a plurality of hidden layers in addition to an input layer and an output layer.
  • Deep neural networks can reveal latent structures in data. In other words, it can identify the latent structure of a photo, text, video, sound, or music (e.g., what objects are in the photo, what the content and emotion of the text are, what the content and emotion of the audio are, etc.).
  • Deep neural networks include convolutional neural networks (CNNs), recurrent neural networks (RNNs), auto encoders, generative adversarial networks (GANs), restricted Boltzmann machines (RBMs), deep belief networks (DBNs), Q networks, U networks, Siamese networks, and the like.
  • the network function may include an autoencoder.
  • An autoencoder may be a type of artificial neural network for outputting output data similar to input data.
  • An auto-encoder may include at least one hidden layer, and an odd number of hidden layers may be disposed between the input and output layers. The number of nodes in each layer may be reduced from the input layer to an intermediate layer called the bottleneck layer (encoding), and then expanded from the bottleneck layer to the output layer (symmetrical to the input layer) in step with the reduction.
  • Autoencoders can perform non-linear dimensionality reduction. The number of nodes in the input and output layers may correspond to the dimensionality of the input data after preprocessing.
  • the number of hidden-layer nodes included in the encoder may decrease as the distance from the input layer increases. If the number of nodes in the bottleneck layer (the layer with the fewest nodes, located between the encoder and decoder) is too small, a sufficient amount of information may not be conveyed, so at least a certain number of nodes (e.g., more than half of the input layer) may be maintained.
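  • a minimal sketch of such a symmetric autoencoder, assuming a simple fully connected design; the layer sizes are illustrative assumptions, with the bottleneck kept reasonably large so enough information is conveyed:

```python
import torch.nn as nn

# Node counts shrink from the input layer to the bottleneck (encoding)
# and expand symmetrically back to the output layer (decoding).
autoencoder = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # encoder
    nn.Linear(256, 64), nn.ReLU(),   # bottleneck layer (fewest nodes)
    nn.Linear(64, 256), nn.ReLU(),   # decoder mirrors the encoder
    nn.Linear(256, 784),             # output dimension equals input dimension
)
```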
  • the neural network may be trained using at least one of supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Learning of the neural network may be a process of applying knowledge for the neural network to perform a specific operation to the neural network.
  • a neural network can be trained in a way that minimizes output errors.
  • training is a process in which the learning data is repeatedly input into the neural network, the error between the neural network's output for the training data and the target is calculated, and that error is backpropagated from the output layer of the neural network toward the input layer in the direction of reducing it, updating the weight of each node of the neural network.
  • in the case of supervised learning, training data labeled with the correct answer is used (i.e., labeled training data), whereas in the case of unsupervised learning, the correct answer may not be labeled in each item of training data.
  • for example, training data for supervised learning of data classification may be data in which each item of training data is labeled with a category.
  • Labeled training data is input to a neural network, and an error may be calculated by comparing an output (category) of the neural network and a label of the training data.
  • an error may be calculated by comparing input learning data with a neural network output. The calculated error is back-propagated in a reverse direction (ie, from the output layer to the input layer) in the neural network, and the connection weight of each node of each layer of the neural network may be updated according to the back-propagation. The amount of change in the connection weight of each updated node may be determined according to a learning rate.
  • the neural network's computation of input data and backpropagation of errors can constitute a learning cycle (epoch).
  • the learning rate may be applied differently according to the number of iterations of the learning cycle of the neural network. For example, a high learning rate may be used in the early stage of neural network training to increase efficiency by allowing the neural network to quickly obtain a certain level of performance, and a low learning rate may be used in the late stage to increase accuracy.
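  • the training procedure described above (repeatedly inputting training data, computing the error against the target, backpropagating from the output layer toward the input layer, and decaying the learning rate over the learning cycles) can be sketched as follows; the model, data, and schedule values are placeholder assumptions:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # placeholder network
data = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(5)]
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Higher learning rate early for fast progress, decayed later for accuracy.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):  # each epoch is one learning cycle
    for x, target in data:
        loss = nn.functional.mse_loss(model(x), target)  # output-vs-target error
        optimizer.zero_grad()
        loss.backward()    # backpropagate the error toward the input layer
        optimizer.step()   # update the weight of each node
    scheduler.step()       # decay the learning rate
```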
  • training data can be a subset of real data (i.e., data to be processed using the trained neural network); therefore, there may be learning cycles in which the error on the training data decreases while the error on the real data increases.
  • Overfitting is a phenomenon in which errors for actual data increase due to excessive learning on training data. For example, a phenomenon in which a neural network that has learned a cat by showing a yellow cat does not recognize that it is a cat when it sees a cat other than yellow may be a type of overfitting. Overfitting can act as a cause of increasing the error of machine learning algorithms.
  • Various optimization methods can be used to prevent such overfitting, for example increasing the training data, regularization, dropout (inactivating some nodes in the network during learning), and using a batch normalization layer.
  • a computer readable medium storing a data structure is disclosed.
  • Data structure can refer to the organization, management, and storage of data that enables efficient access and modification of data.
  • Data structure may refer to the organization of data to solve a specific problem (eg, data retrieval, data storage, data modification in the shortest time).
  • a data structure may be defined as a physical or logical relationship between data elements designed to support a specific data processing function.
  • a logical relationship between data elements may include a connection relationship between user-defined data elements.
  • a physical relationship between data elements may include an actual relationship between data elements physically stored in a computer-readable storage medium (eg, a persistent storage device).
  • the data structure may specifically include a set of data, a relationship between data, and a function or command applicable to the data.
  • a computing device can perform calculations while using minimal resources of the computing device. Specifically, the computing device can increase the efficiency of operation, reading, insertion, deletion, comparison, exchange, and search through an effectively designed data structure.
  • the data structure can be divided into a linear data structure and a non-linear data structure according to the shape of the data structure.
  • a linear data structure may be a structure in which only one data is connected after one data.
  • Linear data structures may include lists, stacks, queues, and decks.
  • a list may refer to a series of data sets in which order exists internally.
  • the list may include a linked list.
  • a linked list may be a data structure in which data are connected in such a way that each data is connected in a single line with a pointer. In a linked list, a pointer can contain information about connection to the next or previous data.
  • a linked list can be expressed as a singly linked list, a doubly linked list, or a circular linked list depending on the form.
  • a stack can be a data enumeration structure that allows limited access to data.
  • a stack can be a linear data structure in which data can be processed (eg, inserted or deleted) at only one end of the data structure.
  • the data stored in the stack may follow a LIFO (Last In, First Out) data structure.
  • a queue is a data listing structure that allows limited access to data; unlike a stack, it is a data structure in which data stored earlier comes out earlier (FIFO, First In, First Out).
  • a deck can be a data structure that can handle data from either end of the data structure.
  • the nonlinear data structure may be a structure in which a plurality of data are connected after one data.
  • the non-linear data structure may include a graph data structure.
  • a graph data structure can be defined as a vertex and an edge, and an edge can include a line connecting two different vertices.
  • a graph data structure may include a tree data structure.
  • the tree data structure may be a data structure in which one path connects two different vertices among a plurality of vertices included in the tree. That is, it may be a data structure that does not form a loop in a graph data structure.
  • the data structure may include a neural network.
  • the data structure including the neural network may be stored in a computer readable medium.
  • the data structure including the neural network may include preprocessed data for processing by the neural network, data input to the neural network, weights of the neural network, hyperparameters of the neural network, data obtained from the neural network, an activation function associated with each node or layer of the neural network, a loss function for learning of the neural network, or any combination thereof.
  • the data structure comprising the neural network may include any other information that determines the characteristics of the neural network.
  • the data structure may include all types of data used or generated in the computational process of the neural network, but is not limited to the above.
  • a computer readable medium may include a computer readable recording medium and/or a computer readable transmission medium.
  • a neural network may consist of a set of interconnected computational units, which may generally be referred to as nodes. These nodes may also be referred to as neurons.
  • a neural network includes one or more nodes.
  • the data structure may include data input to the neural network.
  • a data structure including data input to the neural network may be stored in a computer readable medium.
  • Data input to the neural network may include training data input during a neural network learning process and/or input data input to a neural network that has been trained.
  • Data input to the neural network may include pre-processed data and/or data subject to pre-processing.
  • Pre-processing may include a data processing process for inputting data to a neural network.
  • the data structure may include data subject to pre-processing and data generated by pre-processing.
  • the data structure may include the weights of the neural network.
  • the terms weight and parameter may be used with the same meaning.
  • a data structure including weights of a neural network may be stored in a computer readable medium.
  • a neural network may include a plurality of weights.
  • the weight may be variable, and may be changed by a user or an algorithm in order for the neural network to perform a desired function. For example, when one or more input nodes are interconnected to one output node by respective links, the data value output from the output node may be determined based on the values input to the input nodes connected to the output node and the weights set on the links corresponding to the respective input nodes.
  • the weights may include weights that are varied during neural network training and/or weights for which neural network training has been completed.
  • the variable weight in the neural network learning process may include a weight at the time the learning cycle starts and/or a variable weight during the learning cycle.
  • the weights for which neural network learning has been completed may include weights for which learning cycles have been completed.
  • the data structure including the weights of the neural network may include a data structure including weights that are variable during the neural network learning process and/or weights for which neural network learning is completed. Therefore, it is assumed that the above-described weights and/or combinations of weights are included in the data structure including the weights of the neural network.
  • the foregoing data structure is only an example, and the present disclosure is not limited thereto.
  • the data structure including the weights of the neural network may be stored in a computer readable storage medium (eg, a memory or a hard disk) after going through a serialization process.
  • Serialization can be the process of converting a data structure into a form that can be stored on the same or another computing device and later reconstructed and used.
  • the computing device may transmit and receive data through a network by serializing the data structure.
  • the data structure including the weights of the serialized neural network may be reconstructed on the same computing device or another computing device through deserialization.
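  • for example, in PyTorch the data structure holding a neural network's weights can be serialized to a storage medium and later deserialized on the same or another computing device (a minimal sketch; the file name is arbitrary):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
# Serialize the weight data structure to a computer readable storage medium.
torch.save(model.state_dict(), "weights.pt")

# Deserialize (reconstruct) the weights, possibly on another computing device.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("weights.pt"))
```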
  • the data structure including the weights of the neural network is not limited to serialization.
  • the data structure including the weights of the neural network may be a data structure for increasing the efficiency of operation while minimizing the resources of the computing device (for example, a B-Tree, a Trie, an m-way search tree, an AVL tree, or a Red-Black tree).
  • the data structure may include hyper-parameters of the neural network.
  • the data structure including the hyperparameters of the neural network may be stored in a computer readable medium.
  • a hyperparameter may be a variable that can be changed by a user. Hyperparameters include, for example, the learning rate, the cost function, the number of learning cycle iterations, weight initialization (e.g., setting the range of weight values to be targeted for weight initialization), and the number of hidden units (e.g., the number of hidden layers and the number of nodes in each hidden layer).
  • FIG. 3 is a schematic diagram briefly illustrating an imaging method and a first learning method performed by a computing device including at least one processor according to an embodiment of the present disclosure.
  • the processor 110 of the computing device 100 may generate a plurality of second images 330 based on at least one first image 310 using a first model 320.
  • the processor 110 according to an embodiment of the present disclosure may generate a plurality of third images 350 by using the second model 340 based on the plurality of second images 330 .
  • the processor 110 according to an embodiment of the present disclosure may train the second model 340 by comparing the plurality of third images 350 with the at least one first image 310, so that each third image 350 being compared becomes similar to the at least one first image 310.
  • the first model 320 may include a plurality of different first sub-models.
  • the second model 340 may include a plurality of different second sub-models.
  • the at least one first image 310 may include at least one HR image.
  • the plurality of second images 330 may include as many LR images as the number of combinations of the at least one first image 310 and the plurality of first sub-models of the first model 320.
  • the plurality of third images 350 may include as many SR images as the number of combinations of the plurality of second images 330 and the plurality of second sub-models of the second model 340. Referring to FIG. 3, the processor 110 of the computing device 100 according to an embodiment of the present disclosure may train the second model 340 by comparing the plurality of third images 350 generated from the second model 340 with the at least one first image 310.
  • the processor 110 compares the at least one first image 310 and the plurality of third images 350 to generate a first loss value 360 in order to train the second model.
  • a method of learning the second model 340 based on the generated first loss value 360 is referred to as a 'first learning method'.
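  • a minimal sketch of the first learning method: sub-models of the first model degrade an HR first image into several LR second images, sub-models of the second model reconstruct third images, and the first loss value compares each third image with the first image. The architectures, the x2 scale factor, and the L1 distance are placeholder assumptions, not the patent's actual design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_dg():  # degradation generator: HR -> LR (x2 downscale assumed)
    return nn.Conv2d(3, 3, 3, stride=2, padding=1)

def make_sr():  # super-resolution model: LR -> SR (x2 upscale assumed)
    return nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(3, 3, 3, padding=1))

first_model = [make_dg() for _ in range(2)]   # plural first sub-models (would differ in practice)
second_model = [make_sr() for _ in range(2)]  # plural second sub-models

hr = torch.randn(1, 3, 64, 64)                # at least one first image
lr_images = [dg(hr) for dg in first_model]    # plurality of second images
sr_images = [sr(lr) for sr in second_model for lr in lr_images]  # third images

# First loss value: compare each third image with the first image.
first_loss = sum(F.l1_loss(sr, hr) for sr in sr_images)
first_loss.backward()  # train the second model based on the first loss value
```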
  • FIG. 4 is a schematic diagram of a method of inserting noise into a first image according to an embodiment of the present disclosure.
  • first images including the same content but having different noise may be generated.
  • a data set may be configured by generating a plurality of first images including the same content but having different noises. Since the processor 110 can thereby generate realistic LR images that are otherwise difficult to obtain, the noise present in LR images can be effectively imitated when the second model is trained based on those LR images.
  • the processor 110 may add noise to at least one first image 421 .
  • the plurality of second images may be generated based on the at least one first image 415 to which the noise is added.
  • a single channel noise sample 410 may be generated.
  • a multi-channel noise sample 413 having a plurality of channels may be generated by performing an operation 411 on the single-channel noise sample with a plurality of scaling factors and concatenating the resulting values.
  • the feature map of the first image 415 including noise may be generated by concatenating the multi-channel noise sample 413 with the feature map of the first image 421 .
  • a series of noise insertion processes 430 may be repeated a predetermined number of times.
  • the single-channel noise may include Gaussian noise.
  • the scaling factor may be learned based on at least one of the first image and the third image.
  • the learning method may extract noise based on at least one of the at least one first image or the plurality of third images, and adjust the scaling factor to generate noise similar to the extracted noise.
  • FIG. 5 is a schematic diagram of a method for learning a second model through a collaborative learning method according to an embodiment of the present disclosure.
  • the processor 110 may generate third image sets having different contents using a plurality of second images 510 and a second model 520 including a plurality of second sub-models.
  • the processor 110 may generate third image sets 530 and 531 for each of the plurality of second images 510 by using the plurality of second sub-models.
  • third image sets may be generated in proportion to the number of second images 510 .
  • different contents are classified as 'x' and 'y'.
  • the processor 110 may generate second loss values based on comparisons between elements included in each third image set, and the second model 520 may be trained based on the generated second loss values.
  • for example, the processor 110 may input a specific second image_x into the second model 520 to output a third image set_x 530, and input another second image_y into the second model 520 to output a third image set_y 531. Second loss values may be generated based on the operation of comparing elements included in the third image set_x 530 with each other and the operation of comparing elements included in the third image set_y 531 with each other.
  • the learning method of the second model 520 is referred to as a 'second learning method'.
  • the processor 110 may generate a plurality of third image sets based on the plurality of second images 510 and the second model 520 including the plurality of second sub-models, compare the elements included in each of the plurality of third image sets with each other to output the second loss values, and train the second model 520 based on the second loss values.
  • the processor 110 may generate a third image set for each of the plurality of second images 510 using the plurality of second sub-models.
  • the processor 110 may perform: (1) an operation of generating a third image_ji by inputting the second image_j, which is the j-th image among the plurality of second images 510, into the second sub-model_i, which is the i-th second sub-model; (2) an operation of generating a third image_jj by inputting the second image_j into the second sub-model_j, which is the j-th second sub-model; and (3) an operation of generating a third image_jk by inputting the second image_j into the second sub-model_k, which is the k-th second sub-model. Based on these operations, a third image set for the second image_j may be generated.
  • i, j, and k may be different natural numbers.
  • the elements included in the third image set for the second image_j may be compared with each other to output second loss values, and the second model 520 may be trained based on these second loss values.
  • the third image generated by the second sub-model whose index matches that of the second image may serve as a reference in calculating the second loss values.
  • for example, the third image_jj may serve as the reference, and the second loss values may be output by comparing the third image_jj with each of the remaining third images, i.e., the third image_ji and the third image_jk, as in the sketch below.
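  • a sketch of this collaborative comparison, assuming the index-matched output (the third image_jj) serves as the reference and an L1 distance is used; detaching the reference so that only the other sub-models are pulled toward it is an additional assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

sub_models = [nn.Conv2d(3, 3, 3, padding=1) for _ in range(3)]  # second sub-models (placeholders)
second_images = [torch.randn(1, 3, 32, 32) for _ in range(3)]   # second image_j for each j

second_loss = 0.0
for j, img_j in enumerate(second_images):
    # Third image set for second image_j: one output per second sub-model,
    # so outputs[i] corresponds to the third image_ji.
    outputs = [m(img_j) for m in sub_models]
    reference = outputs[j].detach()  # third image_jj as the reference
    for i, out in enumerate(outputs):
        if i != j:
            # Compare third image_ji with third image_jj (second loss value).
            second_loss = second_loss + F.l1_loss(out, reference)
second_loss.backward()  # train the second model based on the second loss values
```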
  • the second learning method may receive at least one HR image, generate a plurality of LR images corresponding to the at least one HR image based on a plurality of DG models, and output a plurality of SR images using a plurality of SR models based on the plurality of LR images; the SR models may then be trained by comparing the plurality of SR images corresponding to the input HR image with one another.
  • this process is performed for each HR image or LR image, regardless of the number of HR images or LR images.
  • FIG. 6 is a schematic diagram of a method in which the processor 110 according to an embodiment of the present disclosure generates a pseudo label 650 by performing an ensemble 640 based on third images, and trains the second model 620 based on the pseudo label 650.
  • the ensemble 640 of the present disclosure refers to the processor 110 integrating a plurality of entities through a predetermined operation.
  • the pseudo label 650 of the present disclosure means a label generated by ensembling 640 a plurality of entities.
  • hereinafter, a process of generating third image sets 630 and 631 using the second model 620 based on a plurality of second images 610 will be described.
  • third image sets may be generated in proportion to the number of second images 610.
  • the contents are divided into 'x' and 'y' for explanation.
  • the processor 110 may generate third image sets 630 and 631 for each of the plurality of second images 610 using the plurality of second sub-models.
  • the processor 110 may ensemble the elements included in each of the third image sets 630 and 631 to output the pseudo label 650.
  • for example, the processor 110 may input a specific second image_x into the second model 620 to output a third image set_x 630, and input another second image_y into the second model 620 to output a third image set_y 631.
  • the processor 110 may ensemble 640 each of the third image set_x 630 and the third image set_y 631 to generate a pseudo label 650 corresponding to each set. Also, the processor 110 may generate a third loss value 660 based on each pseudo label 650 and the third images used to generate that pseudo label.
  • the method of generating the third loss value 660 as in the above embodiment is referred to as a 'third learning method'.
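  • a sketch of the third learning method, assuming the ensemble 640 is a simple average over the third image set (the integration is described only as a predetermined operation, so averaging is an assumption) and an L1 distance for the third loss value:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

sub_models = [nn.Conv2d(3, 3, 3, padding=1) for _ in range(3)]  # second sub-models (placeholders)
second_image = torch.randn(1, 3, 32, 32)                        # e.g., second image_x

third_image_set = [m(second_image) for m in sub_models]
# Ensemble the elements of the third image set into a pseudo label.
pseudo_label = torch.stack(third_image_set).mean(dim=0).detach()

# Third loss value: compare each third image with the pseudo label.
third_loss = sum(F.l1_loss(img, pseudo_label) for img in third_image_set)
third_loss.backward()  # train the second model based on the third loss value
```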
  • the plurality of second images may be images generated based on the first model. Also, the plurality of second images may be images included in a separate data set and not generated based on the first model.
  • the processor 110 may generate a 'fourth loss value' based on the first loss value, the second loss value, and the third loss value calculated above.
  • the second model may be learned based on the fourth loss value.
  • Equation 1 represents a method, which can be performed by the processor 110, of generating the fourth loss value based on the first loss value, the second loss value, and the third loss value:

    L_4 = λ_1 · L_1 + λ_2 · L_2 + λ_3 · L_3        [Equation 1]

  • λ_1, λ_2, and λ_3 can each be defined as a weight settable by the user: the first term represents the product of the weight λ_1 and the first loss value L_1, the second term represents the product of the weight λ_2 and the second loss value L_2, and the third term represents the product of the weight λ_3 and the third loss value L_3.
  • Equation 2 represents how the processor 110 generates the second loss value. For an output ŷ_ji, instead of calculating the loss value against the ground-truth first image, a loss value (the second loss value) may be calculated against ŷ_jj:

    L_2 = Σ_j Σ_{i≠j} d(ŷ_ji, ŷ_jj)        [Equation 2]

  Here ŷ_ji means the label calculated by inputting the j-th second image into the i-th second sub-model, and d(·,·) denotes a distance measure between images.
  • referring to Equation 2, it can be understood that, instead of simply learning based on the first image and the second image, the predicted value calculated by each of the second sub-models can also be used as a label.
  • a learned SR model tends to be trained to predict the average of all possible high-resolution images.
  • the embodiment of the present invention according to Equation 2 simplifies the averaging process that the processor 110 must learn, so learning performance can be improved.
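  • putting the pieces together, the fourth loss value of Equation 1 is the weighted sum of the three loss values; a minimal sketch with user-settable weights (the numeric values are illustrative only):

```python
import torch

# Suppose the three loss values have been computed as in the sketches above.
first_loss = torch.tensor(0.8, requires_grad=True)
second_loss = torch.tensor(0.5, requires_grad=True)
third_loss = torch.tensor(0.3, requires_grad=True)

# User-settable weights (illustrative values).
w1, w2, w3 = 1.0, 0.1, 0.1

# Equation 1: the fourth loss value is the weighted sum of the three losses.
fourth_loss = w1 * first_loss + w2 * second_loss + w3 * third_loss
fourth_loss.backward()  # train the second model on the combined loss
```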
  • a super-resolution imaging method performed by the processor 110 is disclosed.
  • when an SR model is trained based on LR images generated by conventional methods, there is a disadvantage in that the performance of the SR model is degraded because it is difficult to secure natural LR images.
  • in the present disclosure, conventional learning, collaborative learning, ensemble learning, and noise-injection learning are performed based on multi-source images, and an embodiment of performing a super-resolution imaging method using the model trained through these learnings is disclosed.
  • the processor 110 may generate, using a first model, a plurality of second images having a low resolution in comparison with at least one first image, based on the at least one first image.
  • the first model includes a plurality of first sub-models, and the plurality of first sub-models may output the plurality of second images based on at least one first image.
  • the plurality of first sub-models may be different from each other.
  • the generating of the plurality of second images may further include adding noise to the at least one first image and generating the plurality of second images based on the at least one first image to which the noise is added.
  • the adding of noise to the at least one first image may include generating a single-channel noise sample, calculating the single-channel noise sample with a plurality of scaling factors, and concatenating the resulting values to obtain a plurality of channels. It may include generating a multi-channel noise sample having , and concatenating the multi-channel noise sample with a feature map of the at least one first image.
  • the single-channel noise sample may include Gaussian noise.
  • the scaling factor may be learned based on at least one of the at least one first image and the plurality of third images.
  • the processor 110 may generate, using a second model, a plurality of third images having a high resolution in comparison with the plurality of second images, based on the plurality of second images.
  • the second model may include a plurality of second sub-models. Each of the plurality of second sub-models outputs a third image set including the plurality of third images based on the plurality of second images, and the plurality of second sub-models may be different from each other.
  • FIG. 8 is a simplified and general schematic diagram of an exemplary computing environment in which embodiments of the present disclosure may be implemented.
  • program modules include routines, programs, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • it will be appreciated that the methods of the present disclosure may be implemented with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, and mainframe computers, as well as personal computers, handheld computing devices, microprocessor-based or programmable consumer electronics, and the like (each of which may operate in connection with one or more associated devices).
  • the described embodiments of the present disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote memory storage devices.
  • Computers typically include a variety of computer readable media.
  • Computer readable media can be any medium that can be accessed by a computer, and includes volatile and nonvolatile media, transitory and non-transitory media, and removable and non-removable media.
  • Computer readable media may include computer readable storage media and computer readable transmission media.
  • Computer readable storage media are volatile and nonvolatile media, transitory and non-transitory, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer readable storage media may include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital video disk (DVD) or other optical disk storage devices, magnetic cassettes, magnetic tape, magnetic disk storage devices or other magnetic storage devices, or any other medium that can be accessed by a computer and used to store the desired information.
  • a computer readable transmission medium typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes all information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed so as to encode information within the signal.
  • computer readable transmission media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above are also intended to be included within the scope of computer readable transmission media.
  • System bus 1108 couples system components, including but not limited to system memory 1106 , to processing unit 1104 .
  • Processing unit 1104 may be any of a variety of commercially available processors. Dual processor and other multiprocessor architectures may also be used as the processing unit 1104.
  • System bus 1108 may be any of several types of bus structures that may additionally be interconnected to a memory bus, a peripheral bus, and a local bus using any of a variety of commercial bus architectures.
  • System memory 1106 includes read only memory (ROM) 1110 and random access memory (RAM) 1112 .
  • a basic input/output system (BIOS) is stored in non-volatile memory 1110, such as ROM, EPROM, or EEPROM, and contains the basic routines that help transfer information between components within the computer 1102, such as during startup.
  • RAM 1112 may also include high-speed RAM, such as static RAM, for caching data.
  • the computer 1102 may also include an internal hard disk drive (HDD) 1114 (e.g., EIDE, SATA), a magnetic floppy disk drive (FDD) 1116, and an optical disk drive 1120 (e.g., for reading a CD-ROM disk) - the internal hard disk drive 1114 may also be configured for external use within a suitable chassis (not shown).
  • the hard disk drive 1114, magnetic disk drive 1116, and optical disk drive 1120 are connected to the system bus 1108 by a hard disk drive interface 1124, magnetic disk drive interface 1126, and optical drive interface 1128, respectively.
  • the interface 1124 for external drive implementation includes at least one or both of USB (Universal Serial Bus) and IEEE 1394 interface technologies.
  • drives and their associated computer readable media provide non-volatile storage of data, data structures, computer executable instructions, and the like.
  • the drives and media accommodate the storage of any data in a suitable digital format.
  • although the description of computer readable media above refers to HDDs, removable magnetic disks, and removable optical media such as CDs or DVDs, it will be appreciated by those skilled in the art that other tangible computer readable media, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the exemplary operating environment, and that any such media may contain computer executable instructions for performing the methods of the present disclosure.
  • a number of program modules may be stored in the drives and RAM 1112, including an operating system 1130, one or more application programs 1132, other program modules 1134, and program data 1136. All or portions of the operating system, applications, modules, and/or data may also be cached in RAM 1112. It will be appreciated that the present disclosure may be implemented in a variety of commercially available operating systems or combinations of operating systems.
  • a user may enter commands and information into the computer 1102 through one or more wired/wireless input devices, such as a keyboard 1138 and a pointing device such as a mouse 1140.
  • Other input devices may include a microphone, IR remote control, joystick, game pad, stylus pen, touch screen, and the like.
  • these input devices may be connected through an input device interface 1142 that is coupled to the system bus 1108, or by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, and the like.
  • a monitor 1144 or other type of display device is also connected to the system bus 1108 through an interface such as a video adapter 1146.
  • computers typically include other peripheral output devices (not shown) such as speakers, printers, and the like.
  • Computer 1102 may operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1148 via wired and/or wireless communications.
  • Remote computer(s) 1148 may be a workstation, a server computer, a router, a personal computer, a handheld computer, a microprocessor-based entertainment device, a peer device, or other common network node, and generally includes many or all of the components described for the computer 1102; for simplicity, only memory storage device 1150 is shown.
  • the logical connections shown include wired/wireless connections to a local area network (LAN) 1152 and/or a larger network, such as a wide area network (WAN) 1154.
  • LAN and WAN networking environments are common in offices and corporations and facilitate enterprise-wide computer networks, such as intranets, all of which can be connected to worldwide computer networks, such as the Internet.
  • When used in a LAN networking environment, the computer 1102 is connected to the local network 1152 through a wired and/or wireless communication network interface or adapter 1156. The adapter 1156 may facilitate wired or wireless communication to the LAN 1152, which may also include a wireless access point installed therein for communicating with the wireless adapter 1156.
  • When used in a WAN networking environment, the computer 1102 may include a modem 1158, may be connected to a communicating computing device on the WAN 1154, or may have other means of establishing communications over the WAN 1154, such as via the Internet.
  • the modem 1158, which may be an internal or external and a wired or wireless device, is connected to the system bus 1108 through the serial port interface 1142.
  • in a networked environment, the program modules described for the computer 1102, or portions thereof, may be stored in the remote memory/storage device 1150. It will be appreciated that the network connections shown are exemplary and that other means of establishing a communications link between the computers may be used.
  • Computer 1102 operates to communicate with any wireless devices or entities deployed and operating in wireless communication, e.g., printers, scanners, desktop and/or portable computers, portable data assistants (PDAs), communication satellites, any equipment or location associated with a wirelessly detectable tag, and telephones.
  • the communication may have a predefined structure, as in a conventional network, or may simply be an ad hoc communication between at least two devices.
  • Wi-Fi (Wireless Fidelity) is a wireless technology, similar to that used in cell phones, that allows devices such as computers to transmit and receive data both indoors and outdoors, i.e., anywhere within the coverage of a base station.
  • Wi-Fi networks use a radio technology called IEEE 802.11 (a, b, g, etc.) to provide secure, reliable, and high-speed wireless connections.
  • Wi-Fi can be used to connect computers to each other, to the Internet, and to wired networks (using IEEE 802.3 or Ethernet).
  • Wi-Fi networks can operate in the unlicensed 2.4 and 5 GHz radio bands, for example, at data rates of 11 Mbps (802.11b) or 54 Mbps (802.11a), or in products that include both bands (dual band).
  • Various embodiments presented herein may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques.
  • article of manufacture includes a computer program, carrier, or media accessible from any computer-readable storage device.
  • computer-readable storage media include magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips, etc.), optical disks (e.g., CDs, DVDs, etc.), smart cards, and flash memory devices (e.g., EEPROM, cards, sticks, key drives, etc.), but are not limited thereto.
  • various storage media presented herein include one or more devices and/or other machine-readable media for storing information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)

Abstract

According to one embodiment of the present disclosure, a computer program stored on a computer-readable recording medium is disclosed. When executed on at least one processor, the computer program can generate an image of higher resolution than an input image. The method for generating a high-resolution image comprises the steps of: generating, using a first model and on the basis of at least one first image, a plurality of second images having a resolution lower than at least that of the first image; and generating, using a second model and on the basis of the plurality of second images, a plurality of third images having a resolution higher than that of the plurality of second images, wherein the second model is trained on the basis of a comparison using the plurality of third images.
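For illustration only, the two-model scheme in the abstract can be sketched as a minimal training loop. The sketch below is one plausible reading, not the patent's actual implementation: the `Downscaler`/`Upscaler` architectures, the L1 comparison loss, the joint optimizer, and the stand-in data loader are all assumptions, and real embodiments (e.g., generating several second images per first image, or other comparison losses) would differ.

```python
# Hypothetical sketch of the cooperative super-resolution loop described in the
# abstract. All class names, the loss choice, and the loop are assumptions.
import torch
import torch.nn as nn

class Downscaler(nn.Module):
    """Stand-in for the "first model": produces lower-resolution second images."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, stride=scale, padding=1),  # downsample
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Upscaler(nn.Module):
    """Stand-in for the "second model": produces higher-resolution third images."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=scale, stride=scale),  # upsample
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

first_model, second_model = Downscaler(), Upscaler()
optimizer = torch.optim.Adam(
    list(first_model.parameters()) + list(second_model.parameters()), lr=1e-4
)
loss_fn = nn.L1Loss()

# Stand-in for a real dataset of "first" (input) images.
loader = [torch.rand(4, 3, 64, 64) for _ in range(8)]

for first_images in loader:
    second_images = first_model(first_images)   # lower-resolution "second images"
    third_images = second_model(second_images)  # higher-resolution "third images"
    # "Comparison using the third images": here, an L1 comparison against the
    # original first images; gradients flow through both models cooperatively.
    loss = loss_fn(third_images, first_images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is the coupling: because the loss is computed from the third images but backpropagates through both models, the downscaling model learns to produce low-resolution images that the upscaling model can best restore.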
PCT/KR2022/019576 2021-12-31 2022-12-05 Procédé d'imagerie très haute résolution à l'aide d'un apprentissage coopératif WO2023128349A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210193812A KR102406287B1 (ko) 2021-12-31 2021-12-31 협력 학습을 이용한 초해상도 이미징 방법
KR10-2021-0193812 2021-12-31

Publications (1)

Publication Number Publication Date
WO2023128349A1 true WO2023128349A1 (fr) 2023-07-06

Family

ID=81981720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/019576 WO2023128349A1 (fr) 2021-12-31 2022-12-05 Procédé d'imagerie très haute résolution à l'aide d'un apprentissage coopératif

Country Status (2)

Country Link
KR (1) KR102406287B1 (fr)
WO (1) WO2023128349A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102406287B1 (ko) * 2021-12-31 2022-06-08 주식회사 에스아이에이 협력 학습을 이용한 초해상도 이미징 방법

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018151747A (ja) * 2017-03-10 2018-09-27 株式会社ツバサファクトリー 超解像度処理装置、超解像度処理方法およびコンピュータプログラム
KR20200000541A (ko) * 2018-06-25 2020-01-03 주식회사 수아랩 인공 신경망의 학습 방법
JP2021502644A (ja) * 2017-11-09 2021-01-28 京東方科技集團股▲ふん▼有限公司Boe Technology Group Co.,Ltd. 画像処理方法、処理装置及び処理デバイス
KR20210018668A (ko) * 2019-08-08 2021-02-18 동국대학교 산학협력단 딥러닝 신경 네트워크를 사용하여 다운샘플링을 수행하는 이미지 처리 시스템 및 방법, 영상 스트리밍 서버 시스템
KR102337412B1 (ko) * 2021-03-17 2021-12-09 주식회사 에스아이에이 딥러닝 기반 초해상도 이미징 방법
KR102406287B1 (ko) * 2021-12-31 2022-06-08 주식회사 에스아이에이 협력 학습을 이용한 초해상도 이미징 방법

Also Published As

Publication number Publication date
KR102406287B1 (ko) 2022-06-08

Similar Documents

Publication Publication Date Title
WO2022164230A1 (fr) Procédé de prédiction d'une maladie chronique sur la base d'un signal d'électrocardiogramme
WO2021261825A1 (fr) Dispositif et procédé de génération de données météorologiques reposant sur l'apprentissage automatique
WO2023128349A1 (fr) Procédé d'imagerie très haute résolution à l'aide d'un apprentissage coopératif
WO2022255564A1 (fr) Procédé d'analyse de signal biologique
WO2021040354A1 (fr) Procédé de traitement de données utilisant un réseau de neurones artificiels
KR20210119944A (ko) 신경망을 학습시키는 방법
KR102337412B1 (ko) 딥러닝 기반 초해상도 이미징 방법
WO2024117708A1 (fr) Procédé de conversion d'image faciale à l'aide d'un modèle de diffusion
WO2023027279A1 (fr) Procédé de prédiction de la liaison ou non d'un atome à l'intérieur d'une structure chimique à une kinase
WO2024058465A1 (fr) Procédé d'apprentissage de modèle de réseau neuronal local pour apprentissage fédéré
WO2024080791A1 (fr) Procédé de génération d'ensemble de données
WO2023101417A1 (fr) Procédé permettant de prédire une précipitation sur la base d'un apprentissage profond
WO2023027277A1 (fr) Procédé d'entraînement pour diversité de modèle de réseau neuronal
WO2021251691A1 (fr) Procédé de détection d'objet à base de rpn sans ancrage
KR102515935B1 (ko) 신경망 모델을 위한 학습 데이터 생성 방법
WO2023008811A2 (fr) Procédé de reconstruction d'image de visage masqué à l'aide d'un modèle de réseau neuronal
KR20220129995A (ko) 딥러닝 기반 초해상도 이미징 방법
WO2023027278A1 (fr) Procédé d'apprentissage actif fondé sur un programme d'apprentissage
WO2023027280A1 (fr) Procédé de déduction d'un épitope candidat
WO2023075351A1 (fr) Procédé d'apprentissage d'intelligence artificielle pour robot industriel
KR20220003989A (ko) 피처 셋 정보에 기초한 전이 학습 방법
WO2024143909A1 (fr) Procédé de conversion d'image en étapes en prenant en considération des changements d'angle
KR102649764B1 (ko) 페이스 스왑 이미지 생성 방법
WO2024143907A1 (fr) Procédé pour entraîner un modèle de réseau neuronal pour convertir une image en utilisant des images partielles
WO2023033280A1 (fr) Procédé d'échantillonnage de données pour apprentissage actif

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22916495

Country of ref document: EP

Kind code of ref document: A1