WO2023167658A1 - Image processing with encoder-decoder networks having skip connections - Google Patents

Image processing with encoder-decoder networks having skip connections Download PDF

Info

Publication number
WO2023167658A1
WO2023167658A1 PCT/US2022/018394 US2022018394W WO2023167658A1 WO 2023167658 A1 WO2023167658 A1 WO 2023167658A1 US 2022018394 W US2022018394 W US 2022018394W WO 2023167658 A1 WO2023167658 A1 WO 2023167658A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature map
channels
stage
stages
encoding
Prior art date
Application number
PCT/US2022/018394
Other languages
French (fr)
Inventor
Zibo MENG
Weihua Xiong
Chiu Man HO
Original Assignee
Innopeak Technology, Inc.
Zeku, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Innopeak Technology, Inc., Zeku, Inc. filed Critical Innopeak Technology, Inc.
Priority to PCT/US2022/018394 priority Critical patent/WO2023167658A1/en
Priority to PCT/US2022/018878 priority patent/WO2023167682A1/en
Publication of WO2023167658A1 publication Critical patent/WO2023167658A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • This application relates generally to image processing technology including, but not limited to, methods, systems, and non-transitory computer-readable media for enhancing an image using deep learning techniques involving skip connections.
  • Images captured under low illumination conditions typically have a low signal-to-noise ratio (SNR) and do not have a good perceptual quality. Exposure times are extended to improve image quality, and however, resulting images can become blurry. Denoising techniques have also been explored to remove image noise caused by the low illumination conditions, while image enhancement techniques are developed to improve perceptual quality of digital images. Convolutional neural networks have been applied in image enhancement and noise reduction. For example, a U-net includes an encoder for extracting abstract features from an input image and a decoder for generating an output image from the extracted features. Corresponding layers of the encoder and decoder are connected by skip connections. However, those skip connections introduce heavy memory consumption and an expensive computational cost. It would be beneficial to have an effective and efficient mechanism to improve image quality and remove image noises, e.g., for images captured under the low illumination conditions, while keeping a high utilization rate of computational resources and a low power consumption.
  • SNR signal-to-noise ratio
  • Various embodiments of this application are directed to enhancing image quality using an encoder-decoder network (e.g., U-net) by selectively storing portions (i.e., not all) of feature maps generated by encoding stages for use in decoding stages. For each respective encoding stage, a subset of channels is selected from an encoded feature map generated by the respective encoding stage, and temporarily stored as a transitional feature map that is provided to a corresponding decoding stage. For example, an encoded feature map having a dimension of W C m is reduced to a transitional feature map having a dimension of H ⁇ W ⁇ Cout, where C O ut is less than C m .
  • an encoder-decoder network e.g., U-net
  • C O ut successive channels of the C m channels of the encoded feature map are selected and stored as the transitional feature map.
  • the C m channels of the encoded feature map are divided into Cout groups, and each group has Cm Com channels.
  • the respective Cm Com channels are averaged to generate a respective channel of the transitional feature map.
  • storage and computational costs are reduced (e.g., by a factor of C m IC O ut .
  • Such an image processing process reduces a number of floating point operations per second (FLOPS), thereby making corresponding deep learning models (e.g., the U-net) feasible to be implemented on mobile devices having limited computational and storage resources.
  • FLOPS floating point operations per second
  • an image processing method is implemented at an electronic device having memory.
  • the method includes obtaining an input image having an image resolution and processing the input image by a series of encoding stages and a series of decoding stages to generate an output image.
  • the series of encoding stages include a target encoding stage, and the series of decoding stages include a target decoding stage corresponding to the target encoding stage.
  • the method further includes, at the target encoding stage, generating an encoded feature map having a total number (G Titan) of channels; in accordance with a predefined connection skipping rule, determining an alternative number (Cout) of channels based on the total number of channels of the encoded feature map; and temporarily storing the alternative number of channels of the encoded feature map in the memory, thereby allowing the alternative number of channels to be extracted from the memory and combined with an input feature map during the target decoding stage.
  • the alternative number is less than the total number.
  • an image processing method is implemented at an electronic device having memory.
  • the method includes obtaining an input image having an image resolution and processing the input image by a series of encoding stages and a series of decoding stages to generate an output image.
  • the series of encoding stages include a target encoding stage, and the series of decoding stages include a target decoding stage corresponding to the target encoding stage.
  • the method further includes, at the target decoding stage, obtaining an input feature map that is generated from a total number (C m ) of channels of an encoded feature map outputted by the target encoding stage, extracting from the memory an alternative number (Cout of channels of the encoded feature map, the alternative number less than the total number, combining the alternative number of channels of the encoded feature map and the input feature map to a combined feature map, and converting the combined feature map to a decoded feature map.
  • the decoded feature map is fed to a second decoding stage that immediately follows the target decoding stage or outputted from the series of decoding stages to render the output image.
  • some implementations include an electronic device that includes one or more processors and memory having instructions stored thereon, which when executed by the one or more processors cause the processors to perform any of the above methods.
  • some implementations include a non-transitory computer-readable medium, having instructions stored thereon, which when executed by one or more processors cause the processors to perform any of the above methods.
  • Figure 1 is an example data processing environment having one or more servers communicatively coupled to one or more client devices, in accordance with some embodiments.
  • FIG. 2 is a block diagram illustrating an electronic system configured to process content data (e.g., image data), in accordance with some embodiments.
  • content data e.g., image data
  • Figure 3 is an example data processing environment for training and applying a neural network-based data processing model for processing visual and/or audio data, in accordance with some embodiments.
  • Figure 4A is an example neural network applied to process content data in an NN-based data processing model, in accordance with some embodiments
  • Figure 4B is an example node in the neural network, in accordance with some embodiments.
  • Figure 5 is an example encoder-decoder network in which selected channels of encoded feature maps are temporarily stored at encoding stages, in accordance with some embodiments.
  • Figure 6 is a detailed network structure of each encoding stage and a corresponding decoding stage in an encoder-decoder network (e.g., a U-net), in accordance with some embodiments.
  • Figures 7A-7E are five example channel determination schemes applied by each encoding stage of an encoder-decoder network, in accordance with some embodiments
  • Figure 7F illustrates a channel determination process implemented based on a channel determination scheme applied by each encoding stage of an encoder-decoder network, in accordance with some embodiments.
  • Figure 8A is a flow diagram of an example image processing method, in accordance with some embodiments
  • Figure 8B is a flow diagram of another example image processing method, in accordance with some embodiments.
  • An electronic device employs an encoder-decoder network to perform image denoising and enhancement operations, thereby improving perceptual quality of an image taken under low illumination condition.
  • the input image is processed successively by a set of downsampling stages (i.e., encoding stages) to extract a series of feature maps, as well as to reduce spatial resolutions of these feature maps successively.
  • An encoded feature map outputted by the downsmpling stages is then processed by a bottleneck network followed by a set of upscaling stages (i.e., decoding stages).
  • an input feature map is upscaled and concatenated with the layer of the same resolution from the downsampling stage to effectively preserve the details in the input image.
  • Various embodiments of this application are directed to selectively storing portions (i.e., not all) of feature maps generated by encoding stages for use in decoding stages. For each respective encoding stage, a subset of channels is selected from a respective feature map generated by the respective encoding stage, and temporarily stored as a transitional feature map that is provided to a corresponding decoding stage. For example, an encoded feature map having a dimension of H W C m is reduced to a transitional feature map having a dimension of Hx WxCout, where C O ut is less than C m . In some embodiments, C O ut successive channels of the C m channels of the encoded feature map are selected and stored as the transitional feature map.
  • the C m channels of the input feature map are divided into C O ut map groups, and each group has Cm/C O ut channels.
  • each group of channels one of the respective Cm/C O ut channels is selected as a respective channel of the transitional feature map.
  • the respective Cm/C O ut channels are combined (e.g., averaged) to generate a respective channel of the transitional feature map.
  • FIG. 1 is an example data processing environment 100 having one or more servers 102 communicatively coupled to one or more client devices 104, in accordance with some embodiments.
  • the one or more client devices 104 may be, for example, desktop computers 104A, tablet computers 104B, mobile phones 104C, head-mounted display (HMD) (also called augmented reality (AR) glasses) 104D, or intelligent, multi-sensing, network- connected home devices (e.g., a surveillance camera 104E, a smart television device, a drone).
  • HMD head-mounted display
  • AR augmented reality
  • Each client device 104 can collect data or user inputs, executes user applications, and present outputs on its user interface.
  • the collected data or user inputs can be processed locally at the client device 104 and/or remotely by the server(s) 102.
  • the one or more servers 102 provide system data (e.g., boot files, operating system images, and user applications) to the client devices 104, and in some embodiments, processes the data and user inputs received from the client device(s) 104 when the user applications are executed on the client devices 104.
  • the data processing environment 100 further includes a storage 106 for storing data related to the servers 102, client devices 104, and applications executed on the client devices 104.
  • the one or more servers 102 are configured to enable real-time data communication with the client devices 104 that are remote from each other or from the one or more servers 102.
  • the one or more servers 102 are configured to implement data processing tasks that cannot be or are preferably not completed locally by the client devices 104.
  • the client devices 104 include a game console (e.g., the HMD 104D) that executes an interactive online gaming application.
  • the game console receives a user instruction and sends it to a game server 102 with user data.
  • the game server 102 generates a stream of video data based on the user instruction and user data and providing the stream of video data for display on the game console and other client devices that are engaged in the same game session with the game console.
  • the client devices 104 include a networked surveillance camera 104E and a mobile phone 104C.
  • the networked surveillance camera 104E collects video data and streams the video data to a surveillance camera server 102 in real time. While the video data is optionally pre-processed on the surveillance camera 104E, the surveillance camera server 102 processes the video data to identify motion or audio events in the video data and share information of these events with the mobile phone 104C, thereby allowing a user of the mobile phone 104 to monitor the events occurring near the networked surveillance camera 104E in the real time and remotely.
  • the one or more servers 102, one or more client devices 104, and storage 106 are communicatively coupled to each other via one or more communication networks 108, which are the medium used to provide communications links between these devices and computers connected together within the data processing environment 100.
  • the one or more communication networks 108 may include connections, such as wire, wireless communication links, or fiber optic cables. Examples of the one or more communication networks 108 include local area networks (LAN), wide area networks (WAN) such as the Internet, or a combination thereof.
  • the one or more communication networks 108 are, optionally, implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Long Term Evolution (LTE), Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol.
  • USB Universal Serial Bus
  • FIREWIRE Long Term Evolution
  • LTE Long Term Evolution
  • GSM Global System for Mobile Communications
  • EDGE Enhanced Data GSM Environment
  • CDMA code division multiple access
  • TDMA time division multiple access
  • Bluetooth Wi-Fi
  • Wi-Fi voice over Internet Protocol
  • a connection to the one or more communication networks 108 may be established either directly (e.g., using 3G/4G connectivity to a wireless carrier), or through a network interface 110 (e.g., a router, switch, gateway, hub, or an intelligent, dedicated whole-home control node), or through any combination thereof.
  • the one or more communication networks 108 can represent the Internet of a worldwide collection of networks and gateways that use the Transmission Control Protocol/Intemet Protocol (TCP/IP) suite of protocols to communicate with one another.
  • TCP/IP Transmission Control Protocol/Intemet Protocol
  • At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages.
  • deep learning techniques are applied in the data processing environment 100 to process content data (e.g., video data, visual data, audio data) obtained by an application executed at a client device 104 to identify information contained in the content data, match the content data with other data, categorize the content data, or synthesize related content data.
  • content data e.g., video data, visual data, audio data
  • data processing models e.g., an encoder-decoder network 500 in Figure 5
  • These data processing models are trained with training data before they are applied to process the content data.
  • the mobile phone 104C or HMD 104D obtains the content data (e.g., captures video data via an internal camera) and processes the content data using the data processing models locally.
  • both model training and data processing are implemented locally at each individual client device 104 (e.g., the mobile phone 104C and HMD 104D).
  • the client device 104 obtains the training data from the one or more servers 102 or storage 106 and applies the training data to train the data processing models.
  • both model training and data processing are implemented remotely at a server 102 (e.g., the server 102A) associated with a client device 104 (e.g. the client device 104A and HMD 104D).
  • the server 102A obtains the training data from itself, another server 102 or the storage 106 applies the training data to train the data processing models.
  • the client device 104 obtains the content data, sends the content data to the server 102 A (e.g., in an application) for data processing using the trained data processing models, receives data processing results (e.g., recognized hand gestures) from the server 102A, presents the results on a user interface (e.g., associated with the application), renders virtual objects in a field of view based on the poses, or implements some other functions based on the results.
  • the client device 104 itself implements no or little data processing on the content data prior to sending them to the server 102 A. Additionally, in some embodiments, data processing is implemented locally at a client device 104 (e.g., the client device 104B and HMD 104D), while model training is implemented remotely at a server 102
  • the server 102B obtains the training data from itself, another server 102 or the storage 106 and applies the training data to train the data processing models.
  • the trained data processing models are optionally stored in the server 102B or storage 106.
  • the client device 104 imports the trained data processing models from the server 102B or storage 106, processes the content data using the data processing models, and generates data processing results to be presented on a user interface or used to initiate some functions (e.g., rendering virtual objects based on device poses) locally.
  • a pair of AR glasses 104D are communicatively coupled in the data processing environment 100.
  • the AR glasses 104D includes a camera, a microphone, a speaker, one or more inertial sensors (e.g., gyroscope, accelerometer), and a display.
  • the camera and microphone are configured to capture video and audio data from a scene of the AR glasses 104D, while the one or more inertial sensors are configured to capture inertial sensor data.
  • the camera captures hand gestures of a user wearing the AR glasses 104D, and recognizes the hand gestures locally and in real time using a two-stage hand gesture recognition model.
  • the microphone records ambient sound, including user’s voice commands.
  • both video or static visual data captured by the camera and the inertial sensor data measured by the one or more inertial sensors are applied to determine and predict device poses.
  • the video, static image, audio, or inertial sensor data captured by the AR glasses 104D is processed by the AR glasses 104D, server(s) 102, or both to recognize the device poses.
  • deep learning techniques are applied by the server(s) 102 and AR glasses 104D jointly to recognize and predict the device poses.
  • the device poses are used to control the AR glasses 104D itself or interact with an application (e.g., a gaming application) executed by the AR glasses 104D.
  • the display of the AR glasses 104D displays a user interface, and the recognized or predicted device poses are used to render or interact with user selectable display items (e.g., an avatar) on the user interface.
  • deep learning techniques are applied in the data processing environment 100 to process video data, static image data, or inertial sensor data captured by the AR glasses 104D.
  • 2D or 3D device poses are recognized and predicted based on such video, static image, and/or inertial sensor data using a first data processing model.
  • Visual content is optionally generated using a second data processing model.
  • Training of the first and second data processing models is optionally implemented by the server 102 or AR glasses 104D. Inference of the device poses and visual content is implemented by each of the server 102 and AR glasses 104D independently or by both of the server 102 and AR glasses 104D jointly.
  • FIG 2 is a block diagram illustrating an electronic system 200 configured to process content data (e.g., image data), in accordance with some embodiments.
  • the electronic system 200 includes a server 102, a client device 104 (e.g., AR glasses 104D in Figure 1), a storage 106, or a combination thereof.
  • the electronic system 200 typically, includes one or more processing units (CPUs) 202, one or more network interfaces 204, memory 206, and one or more communication buses 208 for interconnecting these components (sometimes called a chipset).
  • the electronic system 200 includes one or more input devices 210 that facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls.
  • the client device 104 of the electronic system 200 uses a microphone for voice recognition or a camera for gesture recognition to supplement or replace the keyboard.
  • the client device 104 includes one or more optical cameras (e.g., an RGB camera), scanners, or photo sensor units for capturing images, for example, of graphic serial codes printed on the electronic devices.
  • the electronic system 200 also includes one or more output devices 212 that enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays.
  • the client device 104 includes a location detection device, such as a GPS (global positioning system) or other geo-location receiver, for determining the location of the client device 104.
  • Memory 206 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Memory 206, optionally, includes one or more storage devices remotely located from one or more processing units 202. Memory 206, or alternatively the non-volatile memory within memory 206, includes a non-transitory computer readable storage medium. In some embodiments, memory 206, or the non- transitory computer readable storage medium of memory 206, stores the following programs, modules, and data structures, or a subset or superset thereof:
  • Network communication module 216 for connecting each server 102 or client device 104 to other devices (e.g., server 102, client device 104, or storage 106) via one or more network interfaces 204 (wired or wireless) and one or more communication networks 108, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
  • User interface module 218 for enabling presentation of information (e.g., a graphical user interface for application(s) 224, widgets, websites and web pages thereof, and/or games, audio and/or video content, text, etc.) at each client device 104 via one or more output devices 212 (e.g., displays, speakers, etc.);
  • information e.g., a graphical user interface for application(s) 224, widgets, websites and web pages thereof, and/or games, audio and/or video content, text, etc.
  • output devices 212 e.g., displays, speakers, etc.
  • Input processing module 220 for detecting one or more user inputs or interactions from one of the one or more input devices 210 and interpreting the detected input or interaction;
  • Web browser module 222 for navigating, requesting (e.g., via HTTP), and displaying websites and web pages thereof, including a web interface for logging into a user account associated with a client device 104 or another electronic device, controlling the client or electronic device if associated with the user account, and editing and reviewing settings and data that are associated with the user account;
  • One or more user applications 224 for execution by the electronic system 200 e.g., games, social network applications, smart home applications, and/or other web or non-web based applications for controlling another electronic device and reviewing data captured by such devices;
  • Model training module 226 for receiving training data and establishing a data processing model for processing content data (e.g., video, image, audio, or textual data) to be collected or obtained by a client device 104;
  • content data e.g., video, image, audio, or textual data
  • Data processing module 228 for processing content data using data processing models 250, thereby identifying information contained in the content data, matching the content data with other data, categorizing the content data, or synthesizing related content data, where in some embodiments, the data processing module 228 is associated with one of the user applications 224 to process the content data in response to a user instruction received from the user application 224, and in an example, the data processing module 228 is applied to implement image processing processes in Figures 8A and 8B; and • One or more databases 230 for storing at least data including one or more of: o Device settings 232 including common device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.) of the one or more servers 102 or client devices 104; o User account information 234 for the one or more user applications 224, e.g., user names, security questions, account history data, user preferences, and predefined account settings; o Network parameters 236 for the one or more communication networks 108, e.g., IP address
  • the one or more databases 230 are stored in one of the server 102, client device 104, and storage 106 of the electronic system 200 .
  • the one or more databases 230 are distributed in more than one of the server 102, client device 104, and storage 106 of the electronic system 200 .
  • more than one copy of the above data is stored at distinct devices, e.g., two copies of the data processing models 250 are stored at the server 102 and storage 106, respectively.
  • Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above.
  • the above identified modules or programs i.e., sets of instructions
  • memory 206 optionally, stores a subset of the modules and data structures identified above.
  • memory 206 optionally, stores additional modules and data structures not described above.
  • FIG. 3 is an example data processing system 300 for training and applying a neural network based (NN-based) data processing model 240 for processing content data (e.g., video, image, audio, or textual data), in accordance with some embodiments.
  • the data processing system 300 includes a model training module 226 for establishing the data processing model 240 and a data processing module 228 for processing the content data using the data processing model 240.
  • both of the model training module 226 and the data processing module 228 are located on a client device 104 of the data processing system 300, while a training data source 304 distinct form the client device 104 provides training data 306 to the client device 104.
  • the training data source 304 is optionally a server 102 or storage 106.
  • both of the model training module 226 and the data processing module 228 are located on a server 102 of the data processing system 300.
  • the training data source 304 providing the training data 306 is optionally the server 102 itself, another server 102, or the storage 106.
  • the model training module 226 and the data processing module 228 are separately located on a server 102 and client device 104, and the server 102 provides the trained data processing model 240 to the client device 104.
  • the model training module 226 includes one or more data pre-processing modules 308, a model training engine 310, and a loss control module 312.
  • the data processing model 240 is trained according to a type of the content data to be processed.
  • the training data 306 is consistent with the type of the content data, so is a data pre-processing module 308 applied to process the training data 306 consistent with the type of the content data.
  • an image pre-processing module 308A is configured to process image training data 306 to a predefined image format, e.g., extract a region of interest (ROI) in each training image, and crop each training image to a predefined image size.
  • ROI region of interest
  • an audio pre-processing module 308B is configured to process audio training data 306 to a predefined audio format, e.g., converting each training sequence to a frequency domain using a Fourier transform.
  • the model training engine 310 receives pre-processed training data provided by the data pre-processing modules 308, further processes the pre-processed training data using an existing data processing model 240, and generates an output from each training data item.
  • the loss control module 312 can monitor a loss function comparing the output associated with the respective training data item and a ground truth of the respective training data item.
  • the model training engine 310 modifies the data processing model 240 to reduce the loss function, until the loss function satisfies a loss criteria (e.g., a comparison result of the loss function is minimized or reduced below a loss threshold).
  • the modified data processing model 240 is provided to the data processing module 228 to process the content data.
  • the model training module 226 offers supervised learning in which the training data is entirely labelled and includes a desired output for each training data item (also called the ground truth in some situations). Conversely, in some embodiments, the model training module 226 offers unsupervised learning in which the training data are not labelled. The model training module 226 is configured to identify previously undetected patterns in the training data without pre-existing labels and with no or little human supervision. Additionally, in some embodiments, the model training module 226 offers partially supervised learning in which the training data are partially labelled.
  • the data processing module 228 includes a data pre-processing modules 314, a model -based processing module 316, and a data post-processing module 318.
  • the data preprocessing modules 314 pre-processes the content data based on the type of the content data. Functions of the data pre-processing modules 314 are consistent with those of the preprocessing modules 308 and covert the content data to a predefined content format that is acceptable by inputs of the model -based processing module 316. Examples of the content data include one or more of video, image, audio, textual, and other types of data.
  • each image is pre-processed to extract an ROI or cropped to a predefined image size
  • an audio clip is pre-processed to convert to a frequency domain using a Fourier transform.
  • the content data includes two or more types, e.g., video data and textual data.
  • the model -based processing module 316 applies the trained data processing model 240 provided by the model training module 226 to process the pre-processed content data.
  • the model -based processing module 316 can also monitor an error indicator to determine whether the content data has been properly processed in the data processing model 240.
  • the processed content data is further processed by the data postprocessing module 318 to present the processed content data in a preferred format or to provide other related information that can be derived from the processed content data.
  • Figure 4A is an example neural network (NN) 400 applied to process content data in an NN-based data processing model 240, in accordance with some embodiments
  • Figure 4B is an example node 420 in the neural network (NN) 400, in accordance with some embodiments.
  • the data processing model 240 is established based on the neural network 400.
  • a corresponding model-based processing module 316 applies the data processing model 240 including the neural network 400 to process content data that has been converted to a predefined content format.
  • the neural network 400 includes a collection of nodes 420 that are connected by links 412. Each node 420 receives one or more node inputs and applies a propagation function to generate a node output from the one or more node inputs.
  • the node output is provided via one or more links 412 to one or more other nodes 420
  • a weight w associated with each link 412 is applied to the node output.
  • the one or more node inputs are combined based on corresponding weights wi, W2, W3, and W4 according to the propagation function.
  • the propagation function is a product of a non-linear activation function and a linear weighted combination of the one or more node inputs.
  • the collection of nodes 420 is organized into one or more layers in the neural network 400.
  • the one or more layers includes a single layer acting as both an input layer and an output layer.
  • the one or more layers includes an input layer 402 for receiving inputs, an output layer 406 for providing outputs, and zero or more hidden layers 404 (e.g., 404A and 404B) between the input and output layers 402 and 406.
  • a deep neural network has more than one hidden layers 404 between the input and output layers 402 and 406. In the neural network 400, each layer is only connected with its immediately preceding and/or immediately following layer.
  • a layer 402 or 404B is a fully connected layer because each node 420 in the layer 402 or 404B is connected to every node 420 in its immediately following layer.
  • one of the one or more hidden layers 404 includes two or more nodes that are connected to the same node in its immediately following layer for down sampling or pooling the nodes 420 between these two layers.
  • max pooling uses a maximum value of the two or more nodes in the layer 404B for generating the node of the immediately following layer 406 connected to the two or more nodes.
  • a convolutional neural network is applied in a data processing model 240 to process content data (particularly, video and image data).
  • the CNN employs convolution operations and belongs to a class of deep neural networks 400, i.e., a feedforward neural network that only moves data forward from the input layer 402 through the hidden layers to the output layer 406.
  • the one or more hidden layers of the CNN are convolutional layers convolving with a multiplication or dot product.
  • Each node in a convolutional layer receives inputs from a receptive area associated with a previous layer (e.g., five nodes), and the receptive area is smaller than the entire previous layer and may vary based on a location of the convolution layer in the convolutional neural network.
  • Video or image data is pre-processed to a predefined video/image format corresponding to the inputs of the CNN.
  • the pre-processed video or image data is abstracted by each layer of the CNN to a respective feature map.
  • a recurrent neural network is applied in the data processing model 240 to process content data (particularly, textual and audio data). Nodes in successive layers of the RNN follow a temporal sequence, such that the RNN exhibits a temporal dynamic behavior.
  • each node 420 of the RNN has a time-varying real-valued activation.
  • the RNN examples include, but are not limited to, a long short-term memory (LSTM) network, a fully recurrent network, an Elman network, a Jordan network, a Hopfield network, a bidirectional associative memory (BAM network), an echo state network, an independently RNN (IndRNN), a recursive neural network, and a neural history compressor.
  • LSTM long short-term memory
  • BAM bidirectional associative memory
  • an echo state network an independently RNN (IndRNN)
  • a recursive neural network a recursive neural network
  • a neural history compressor examples include, but are not limited to, a long short-term memory (LSTM) network, a fully recurrent network, an Elman network, a Jordan network, a Hopfield network, a bidirectional associative memory (BAM network), an echo state network, an independently RNN (IndRNN), a recursive neural network, and a neural history compressor.
  • the RNN can be used for hand
  • the training process is a process for calibrating all of the weights w, for each layer of the learning model using a training data set which is provided in the input layer 402.
  • the training process typically includes two steps, forward propagation and backward propagation, which are repeated multiple times until a predefined convergence condition is satisfied.
  • forward propagation the set of weights for different layers are applied to the input data and intermediate results from the previous layers.
  • backward propagation a margin of error of the output (e.g., a loss function) is measured, and the weights are adjusted accordingly to decrease the error.
  • the activation function is optionally linear, rectified linear unit, sigmoid, hyperbolic tangent, or of other types.
  • a network bias term b is added to the sum of the weighted outputs from the previous layer before the activation function is applied.
  • the network bias b provides a perturbation that helps the NN 400 avoid over fitting the training data.
  • the result of the training includes the network bias parameter b for each layer.
  • Figure 5 is an example encoder-decoder network (e.g., a U-net) 500 in which selected channels of encoded feature maps are temporarily stored at encoding stages, in accordance with some embodiments.
  • the encoder-decoder network 500 is configured to receive an input image 502 having an image resolution and process the input image 502 to generate an output image 504 having an image quality better than the input image 502 (e.g., having a higher SNR than the input image 502).
  • the encoder-decoder network 500 includes a series of encoding stages 506, a bottleneck network 508 coupled to the series of encoding stages 506, and a series of decoding stages 510 coupled to the bottleneck network 508.
  • the series of decoding stages 510 include the same number of stages as the series of encoding stages 506.
  • the encoder-decoder network 500 has four encoding stages 506A- 506D and four decoding stages 510A-510D.
  • the bottleneck network 508 is coupled between the encoding stages 506 and decoding stages 510.
  • the input image 502 is successively processed by the series of encoding stages 506A-506D, the bottleneck network 508, and the series of decoding stages 510A-510D to generate the output image 504.
  • an original image is divided into a plurality of image tiles, and the input image 502 corresponds to one of the plurality of image tiles.
  • each of the plurality of image tiles is processed using the encoder-decoder network 500, and all of the image tiles in the original image are successively processed using the encoder-decoder network 500.
  • Output images 504 of the plurality of image tiles are collected and combined to one another to reconstruct a final output image corresponding to the original image.
  • the series of encoding stages 506 include an ordered sequence of encoding stages 506, e.g., stages 506A, 506B, 506C, and 506D, and have an encoding scale factor.
  • Each encoding stage 506 generates an encoded feature map 512 having a feature resolution and a number of encoding channels.
  • the feature resolution is scaled down and the number of encoding channels is scaled up according to the encoding scale factor.
  • the encoding scale factor is 2.
  • a first encoded feature map 512A of a first encoding stage 506A has a first feature resolution (e.g., // x W) related to the image resolution and a first number of (e.g., NCH) encoding channels
  • a second encoded feature map 512B of a second encoding stage 506B has a second feature resolution (e.g., V2HXV2W) and a second number of (e.g., NCH) encoding channels.
  • a third encoded feature map 512C of a third encoding stage 506C has a third feature resolution (e.g., V H ⁇ V W) and a third number of (e.g., 4NCH) encoding channels
  • a fourth encoded feature map 512D of a fourth encoding stage 506D has a fourth feature resolution (e.g., VsH ⁇ VsW) and a fourth number of (e.g., 8NCH) encoding channels.
  • the encoded feature map 512 is processed and provided as an input to a next encoding stage 506, except that the encoded feature map 512 of a last encoding stage 506 (e.g., stage 506D in Figure 5) is processed and provided as an input to the bottleneck network 508. Additionally, for each encoding stage 506, in accordance with a predefined connection skipping rule, a subset of feature map 514 is selected from the number of encoding channels of the encoded feature map 512. The subset of feature map 514 is less than the entire encoded feature map 512, i.e., does not include all of a total number (e.g., Cm) of encoding channels of the encoded feature map 512.
  • a total number e.g., Cm
  • the subset of feature map 514 includes an alternative number (e.g., C O ut) of encoding channels of the encoded feature map 512, and the alternative number is less than the total number of encoding channels in the respective encoding stage 506.
  • the subset of feature map 514 is temporarily stored in memory and extracted for further processing by the bottleneck network 508 or any of the decoding stages 510. Stated another way, the alternative number of channels 514 are stored in the memory as skip connections that skip part of the encoder-decoder network 500.
  • the bottleneck network 508 is coupled to the last stage of the encoding stages 506 (e.g., stage 506D in Figure 5), and continues to process the total number of encoding channels of the encoded feature map 512D of the last encoding stage 506D and generate an intermediate feature map 516A (i.e., a first input feature map 516A to be used by a first decoding stage 510A).
  • the bottleneck network 508 includes a first set of 3 *3 CNN and Rectified Linear Unit (ReLU), a second set of 3 *3 CNN and ReLU, a global pooling network, a bilinear upsampling network, and a set of 1 x 1 CNN and ReLU.
  • ReLU Rectified Linear Unit
  • the encoded feature map 512D of the last encoding stage 506D is normalized (e.g., using a pooling layer), and fed to the first set of 3 *3 CNN and ReLU of the bottleneck network 508.
  • the intermediate feature map 516A is outputted by the set of 1 x 1 CNN and ReLU of the bottleneck network 508 and provided to the decoding stages 510.
  • the series of decoding stages 510 include an ordered sequence of decoding stages 510, e.g., stages 510A, 510B, 510D, and 510D, and have a decoding upsampling factor.
  • Each decoding stage 510 generates a decoded feature map 518 having a feature resolution and a number of decoding channels.
  • the feature resolution is scaled up and the number of decoding channels is scaled down according to the decoding upsampling factor.
  • the decoding upsampling factor is 2.
  • a first decoded feature map 518A of a first decoding stage 510A has a first feature resolution (e.g., ’/sJ/’x’/sIF’) and a first number of (e.g., 8 Vcff ’) decoding channels
  • a second decoded feature map 518B of a second decoding stage 510B has a second feature resolution (e.g., V H’ ⁇ V W’) and a second number of (e.g., 4NCH’) decoding channels.
  • a third decoded feature map 518C of a third decoding stage 510C has a third feature resolution (e.g., UH’xUIF’) and a third number of (e.g., INCH’ decoding channels, and a fourth decoded feature map 518D of a fourth decoding stage 510D has a fourth feature resolution (e.g., J/’x W’) related to a resolution of the output image 504 and a fourth number of (e.g., NCH’ decoding channels.
  • a third feature resolution e.g., UH’xUIF’
  • a third number of e.g., INCH’ decoding channels
  • a fourth decoded feature map 518D of a fourth decoding stage 510D has a fourth feature resolution (e.g., J/’x W’) related to a resolution of the output image 504 and a fourth number of (e.g., NCH’ decoding channels.
  • the decoded feature map 518 is processed and provided as an input to a next encoding stage 506, except that the decoded feature map 518 of a last encoding stage 506 (e.g., stage 506D in Figure 5) is processed to generate the output image 504.
  • the decoded feature map 518D of the last encoding stage 506 is processed by a 1 x 1 CNN 522 and combined with the input image 502 to generate the output image 504, thereby reducing a noise level of the input image 502 via the entire encoderdecoder network 500.
  • each respective decoding stage 510 extracts a subset of feature map 514 that is selected from a total number of encoding channels of an encoded feature map 512 of a corresponding encoding stage 506.
  • the subset of feature map 514 is combined with an input feature map 516 of the respective decoding stage 510 using a set of neural networks.
  • Each respective decoding stage 510 and the corresponding encoding stage 506 are symmetric with respect to the bottleneck network 508, i.e., separated from the bottleneck network 508 by the same number of decoding or encoding stages 510 or 506.
  • the subset of feature map 514 is purged from the memory after it is used by the respective decoding stage 510.
  • each encoding stage 506 is uniquely associated with a respective decoding stage 510 in the context of sharing the alternative number of channels 514.
  • Each respective encoding stage 506 and the corresponding decoding stage 510 are symmetric with respect to the bottleneck network 508, i.e., separated from the bottleneck network 508 by the same number of decoding or encoding stages, respectively.
  • the encoding stage 506A, 506B, 506C, and 506D provides a first alternative number of channels 514A to the fourth decoding stage 510D, and remaining encoding stages 506B, 506C, and 506D provide the respective alternative numbers of channels 514B, 514C, and 514D to remaining decoding stages 510C, 510B, and 510A, respectively.
  • a computational capability of an electronic device configured to process the input image 502 determines the feature resolutions and numbers of decoding channels of the decoding stages 510.
  • a mobile device 104 having a limited computational capability has a relatively smaller feature resolution and smaller number of decoding channel for each decoding stage 510 compared with a server 102 having a larger computational compatibility.
  • the feature resolutions and numbers of decoding channels of the decoding stages 510 partially set limits on the alternative number of channels of the subset of feature map 514 obtained from each corresponding encoding stage 506.
  • the feature resolution and numbers of decoding channel of the first decoding stage 510A also set limits on the feature resolution and numbers of channels of the intermediate feature map 516A generated by the bottleneck network 508.
  • the encoder-decoder network 500 can be adaptively configured to operate on electronic devices having limited capabilities, e.g., by adopting a scalable size determined according to a computational capability and conserving a storage resource by way of storing the subset of feature map 514 rather than the entire feature map 512 of each encoding stage.
  • the predefined connection skipping rule is applied to select or determine the alternative number of channels 514 from the total number of channels of the encoded feature map 512.
  • the alternative number and the total number have a predefined ratio.
  • the predefined ratio is determined based on at least the hardware capability of the electronic device and the image resolution of the input image 502.
  • Figure 6 is a detailed network structure 600 of each encoding stage 506 and a corresponding decoding stage 510 in an encoder-decoder network (e.g., a U-net), in accordance with some embodiments.
  • the encoder-decoder network obtains an input image 502 and processes the input image 502 by a series of encoding stages 506 and a series of decoding stages 510 to generate an output image 504.
  • Each encoding stage 506 corresponds to a corresponding decoding stage 510.
  • Each encoding stage 506 generates an encoded feature map 512 having a total number (G Ton) of channels.
  • an alternative number (C O ut) of channels are selected from the total number of channels of the encoded feature map 512.
  • the alternative number is less than the total number.
  • the alternative number of channels 514 of the encoded feature map 512 are stored in memory, thereby allowing the alternative number of channels 514 to be extracted from the memory and combined with an input feature map 516 of the corresponding decoding stage 510.
  • each decoding stage 510 obtains an input feature map 516 that is generated from a total number of channels of an encoded feature map outputted by the corresponding encoding stage 506.
  • An alternative number of channels 514 of the encoded feature map 512 are extracted from the memory, and the alternative number is less than the total number.
  • the alternative number of channels 514 of the encoded feature map 512 and the input feature map 516 are combined to a combined feature map 520, which is further converted to a decoded feature map 518.
  • the decoded feature map 518 is fed to a next decoding stage 510 that immediately follows each respective decoding stage or outputted from the series of decoding stages 510 to render the output image 504.
  • the alternative number of channels 514 selected from the encoded feature map 512 are stored in the memory as skip connections that skip part of the encoder-decoder network 500.
  • each encoding stage 506 includes a first set of 3 ⁇ 3 CNN and ReLU 602, a second set of 3 x3 CNN and ReLU 604, and a max pooling module 606.
  • the first and second sets of CNN and ReLU 602 and 604 generate the encoded feature map 512 of the respective encoding stage 506 jointly, and the max pooling module 606 applies a pooling operation to downsample the encoded feature map 512 to a pooled feature map 608.
  • the first encoding stage 506A generates the first encoded feature map 512A from the input image 502, and the remaining encoding stage 506B, 506C or 506D receives a pooled feature map 608A, 608B or 608C (also called a respective start feature map) of a preceding encoding stage 506 A, 506B or 506C and generates a corresponding encoded feature map 512B, 512C or 512D, respectively.
  • the fourth encoding stage 506D is the last encoding stage in which the fourth encoded feature map 512D is further converted to a fourth pooled feature map 608D, and the fourth pooled feature map 608D is provided to the bottleneck network 508.
  • each decoding stage 510 applies a set of 1x1 CNN and ReLU 612 on the alternative number of channels 514 of the encoded feature map 512 to obtain a modified subset of feature map.
  • the encoding stage 506 applies another set of 1 x 1 CNN and ReLU 614 and a bilinear upsampling operation 616 successively on an input feature map 516 to obtain a modified input feature map.
  • the modified subset of feature map and the modified input feature map are combined (e.g., concatenated) to provide the combined feature map 520.
  • a dimension of each of the alternative number of channels 514 is cropped to match a dimension of the input feature map 516.
  • the decoding stage 510 then applies a first set of 3x3 CNN and ReLU 618 and a second set of 3x3 CNN and ReLU 620 successively on the combined feature map 520 to generate the decoded feature map 518.
  • the first decoding stage 510A receives an intermediate feature map 516A outputted by the bottleneck network 508 as a first input feature map 516A.
  • the remaining decoding stage 510B, 510C or 510D receives a decoded feature map 518A, 518B or 518C outputted by a preceding decoding stage 510A, 510B or 510C as a respective input feature map 516 and generates a corresponding decoded feature map 518B, 518C or 518D, respectively.
  • the fourth decoding stage 510D is the last decoding stage in which the fourth decoded feature map 518D is processed by a pooling module (e.g., a U1 CNN 522) and combined with the input image 502 to generate the output image 504.
  • the output image 504 has one or more performance characteristics better than the input image 502, e.g., has a lower noise level and a high SNR.
  • Figures 7A-7E are five example channel determination schemes 700, 710, 720, 730, and 740 applied by each encoding stage 506 of an encoder-decoder network 500 (e.g., a U-net), in accordance with some embodiments.
  • the encoder-decoder network 500 obtains an input image 502 and processes the input image 502 by a series of encoding stages 506 and a series of decoding stages 510 to generate an output image 504.
  • Each encoding stage 506 corresponds to a corresponding decoding stage 510.
  • Each encoding stage 506 generates an encoded feature map 512 having a total number (G perennial) of channels.
  • an alternative number (C O ut) of channels are selected or determined from the total number of channels of the encoded feature map 512.
  • the alternative number of channels 514 of the encoded feature map 512 are stored in memory, and subsequently extracted and combined with an input feature map 516 of the corresponding decoding stage 510.
  • the alternative number is less than the total number.
  • the alternative number of channels 514 are stored in the memory as skip connections that skip part of the encoder-decoder network 500.
  • the alternative number of channels 514 are a subset of successive channels within the total number of channels of the encoded feature map 512.
  • the alternative number of channels 514 include a first channel 702 leading the encoded feature map 512.
  • the alternative number of channels 514 include a last channel 704 concluding the encoded feature map 512.
  • the alternative number of channels 514 do not include the first channel 702 or the last channel 704 of the encoded feature map 512.
  • the alternative number of channels 514 are successive channels within the total number of channels of the encoded feature map 512, and is spaced with the same number of channels from a first channel and a last channel of the total number of channels 512.
  • the alternative number of channels 514 are not successive channels within the total number of channels of the encoded feature map 512.
  • the alternative number of channels 514 are evenly distributed among the total number of channels of the encoded feature map 512, and each of the alternative number of channels 514 is selected from a fixed channel of every third number of successive channels.
  • each of the alternative number of channels 514 is the first channel of every 4 successive channels of the total number of channels of the encoded feature map 512.
  • the total number of channels of the encoded feature map 512 is divided into the alternative number of non-overlapping channel groups 706.
  • Each channel group 706 includes a respective subset of successive channels of the encoded feature map 512.
  • the subset of successive channels of the encoded feature map 512 are averaged to provide a respective channel of the alternative number of channels 514 stored in the memory for use in decoding. Such an averaging operation is therefore used to generate the alternative number of channels 514, and avoids relying on a 1 x 1 convolutional layer to generate the alternative number of channels 514.
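The selection and averaging schemes described in the preceding items can be sketched as follows; the scheme names, the PyTorch framing, and the assumption that Cin is a multiple of Cout are this sketch's own, not taken from the source.

```python
import torch

def select_skip_channels(encoded: torch.Tensor, c_out: int, scheme: str = "leading") -> torch.Tensor:
    """Reduce an encoded feature map of shape (N, C_in, H, W) to C_out channels.

    Illustrative implementations of the selection schemes described above.
    Assumes C_in is a multiple of C_out for the strided and averaging schemes.
    """
    c_in = encoded.shape[1]
    if scheme == "leading":            # first C_out channels (includes the first channel 702)
        return encoded[:, :c_out]
    if scheme == "trailing":           # last C_out channels (includes the last channel 704)
        return encoded[:, c_in - c_out:]
    if scheme == "centered":           # successive channels spaced equally from both ends
        start = (c_in - c_out) // 2
        return encoded[:, start:start + c_out]
    if scheme == "strided":            # a fixed channel out of every C_in // C_out channels
        stride = c_in // c_out
        return encoded[:, ::stride][:, :c_out]
    if scheme == "group_average":      # average each group of C_in // C_out successive channels
        group = c_in // c_out
        return encoded[:, :c_out * group].reshape(
            encoded.shape[0], c_out, group, *encoded.shape[2:]).mean(dim=2)
    raise ValueError(f"unknown scheme: {scheme}")
```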
  • a computational capability of an electronic device is measured by a number of floating point operations per second (FLOPS).
  • the encoded feature map 512 has a resolution of H x W x Cin.
  • the 1 x 1 convolutional layer consumes H x W x Cin x Cout FLOPs.
  • each decoding stage 510 does not use the 1 x 1 convolutional layer to process these skip connections including the alternative number of channels 514, which helps reduce the computational resource needed for decoding.
  • an input image 502 has an image resolution of 3000x4000 pixels and is processed by the U-net.
  • the U-net has a single encoding stage 506 and a single decoding stage 510 that is connected to the encoding stage 506 by a single skip connection.
  • the encoded feature map 512 has a resolution of 1500x2000 and 32 channels, and the subset of channels 514 has a resolution of 1500x2000 and 8 channels. That is, the total number Cin is equal to 32, and the alternative number Cout is 8.
  • Direct storage of the alternative number of channels of the encoding stage 506 of the scheme 700, 710, 720, or 730 uses 0 FLOPs, and the averaging operation of the scheme 750 uses 0.096 GFLOPs, while the 1 x 1 convolutional layer uses 1.56 GFLOPs. More computational resources can be conserved when multiple encoding stages 506 with skip connections are involved.
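A back-of-envelope version of this comparison for the example above is sketched below; the exact totals depend on the counting convention (for instance, whether a multiply-accumulate counts as one or two operations), so the figures are approximate.

```python
# Rough operation counts for the 1500x2000, 32 -> 8 channel example above.
H, W, C_in, C_out = 1500, 2000, 32, 8

# Direct storage of selected channels: no arithmetic at all.
flops_direct = 0

# Group averaging: roughly one add per input element (plus a few divides).
flops_average = H * W * C_in               # ~0.096e9 operations

# 1x1 convolution: one multiply-accumulate per input element per output channel.
macs_conv = H * W * C_in * C_out           # ~0.768e9 MACs
flops_conv = 2 * macs_conv                 # ~1.54e9 if a MAC counts as 2 FLOPs

print(flops_direct, flops_average / 1e9, flops_conv / 1e9)
```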
  • a peak signal-to-noise ratio (PSNR) is applied to measure effectiveness of using the 1 x 1 convolutional layer or the alternative number of channels 514 to form a skip connection in the encoder-decoder network 500.
  • Application of the 1 x 1 convolutional layer corresponds to a first PSNR equal to 39.351.
  • A skip connection having the alternative number of channels 514 corresponds to a second PSNR equal to 39.445, which is comparable with that of applying the 1 x 1 convolutional layer while requiring no or very few FLOPs.
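For reference, PSNR can be computed as in the following sketch, which assumes the compared images share a common peak value; the max_value parameter and function name are illustrative, not taken from the source.

```python
import torch

def psnr(output: torch.Tensor, reference: torch.Tensor, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, assuming both images use the same value range."""
    mse = torch.mean((output - reference) ** 2)
    return float(10.0 * torch.log10(max_value ** 2 / mse))
```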
  • Figure 7F illustrates a channel determination process 760 implemented based on a channel determination scheme 750 applied by each encoding stage 506 of an encoder-decoder network 500, in accordance with some embodiments.
  • for each of the total number of channels of the encoded feature map 512, a respective inner product value of the respective channel is determined, and the alternative number of (e.g., Cout) channels 514 having the smallest inner product values are selected from the total number of channels.
  • an L1 or L2 norm value is calculated for each of the total number of channels of the encoded feature map 512, and the alternative number of (e.g., Cout) channels 514 having the largest L1 or L2 norm values are selected from the total number of channels.
  • a Euclidean norm value is calculated for each of the total number of channels of the encoded feature map 512, and the alternative number of (e.g., Cout) channels 514 having the largest Euclidean norm values are selected from the total number of channels.
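A sketch of this norm-based selection is shown below; the inner-product variant would rank its scores with largest=False instead, and since the source does not spell out what each channel's inner product is taken against, that detail is left out here. The function and parameter names are illustrative.

```python
import torch

def select_by_norm(encoded: torch.Tensor, c_out: int, p: int = 1) -> torch.Tensor:
    """Keep the C_out channels of (N, C_in, H, W) with the largest L1 (p=1) or L2 (p=2) norms."""
    # Per-channel norm computed over the batch and spatial dimensions: shape (C_in,).
    scores = encoded.abs().pow(p).sum(dim=(0, 2, 3)).pow(1.0 / p)
    _, keep = torch.topk(scores, k=c_out, largest=True)
    keep, _ = torch.sort(keep)      # preserve the original channel order
    return encoded[:, keep]
```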
  • all of the series of encoding stages 506 (e.g., 506A- 506D in Figure 5) comply with the same predefined connection skipping rule, and apply the same channel determination scheme (e.g., any of the schemes 700, 710, 720, 730, 740, and 750).
  • each of the series of encoding stages 506 complies with a respective predefined connection skipping rule and has a respective channel determination scheme, independently of remaining encoding stages in the series of encoding stages.
  • the first, second, third, and fourth encoding stages 506A, 506B, 506C, and 506D apply the channel determination schemes 700, 720, 730, and 740, respectively.
  • two or more encoding stages 506 apply the same channel determination scheme that is distinct from that of one or more remaining encoding stages.
  • Figure 8A is a flow diagram of an example image processing method 800, in accordance with some embodiments.
  • Figure 8B is a flow diagram of another example image processing method 850, in accordance with some embodiments.
  • the methods 800 and 850 are described as being implemented by an electronic device (e.g., a mobile phone 104C, AR glasses 104D, smart television device, or drone).
  • Methods 800 and 850 are, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of the computer system.
  • Each of the operations shown in Figures 8A and 8B may correspond to instructions stored in a computer memory or non-transitory computer readable storage medium (e.g., memory 206 in Figure 2).
  • the computer readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices.
  • the instructions stored on the computer readable storage medium may include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted by one or more processors. Some operations in methods 800 and 850 may be combined and/or the order of some operations may be changed.
  • the electronic device has memory.
  • the electronic device obtains (802) an input image 502 having an image resolution and processes (804) the input image 502 by a series of encoding stages 506 and a series of decoding stages 510 to generate an output image 504.
  • the series of encoding stages 506 (806) include a target encoding stage 506, and the series of decoding stages 510 include a target decoding stage 510 corresponding to the target encoding stage 506.
  • At the target encoding stage 506, which can be any of the encoding stages 506, the electronic device generates (808) an encoded feature map 512 having a total number (Cin) of channels.
  • the electronic device determines (810) an alternative number (Cout) of channels 514 based on the total number of channels of the encoded feature map 512, and the alternative number (Cout) is less than the total number (Cin).
  • the electronic device temporarily stores (812) the alternative number of channels 514 of the encoded feature map 512 in the memory, thereby allowing the alternative number of channels 514 to be extracted from the memory and combined with an input feature map 516 of the target decoding stage.
  • the operations performed at the target encoding stage 506 are performed at each and every encoding stage 506.
  • the electronic device extracts from the memory the alternative number of channels 514 of the encoded feature map 512, obtains the input feature map 516, and combines the alternative number of channels 514 of the encoded feature map 512 and the input feature map 516 to a combined feature map 520.
  • the combined feature map 520 is converted to a decoded feature map 518, and the decoded feature map 518 is fed to a next decoding stage 510 that immediately follows the target decoding stage 510 or outputted from the series of decoding stages 510.
  • the electronic device crops a dimension of each of the alternative number of channels 514 to match a dimension of the input feature map 516.
  • the input feature map 516 is provided by a bottleneck network or a last decoding stage 510 that immediately precedes the target decoding stage.
  • a 1 x 1 convolutional layer and a rectified linear unit (ReLU) 612 are applied on the alternative number of channels 514 of the encoded feature map 512 to obtain a modified subset of the encoded feature map 512.
  • a distinct 1 x 1 convolutional layer, a distinct ReLU 614, and a bilinear upsampling operation 616 are successively applied on the input feature map 516 to obtain a modified input feature map 516.
  • the modified subset of the encoded feature map 512 and the modified input feature map 516 are concatenated to the combined feature map 520.
  • the electronic device successively applies a 3x3 convolutional layer and a ReLU on the combined feature map 520 twice (618 and 620) to obtain the decoded feature map 518.
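Putting the preceding items together, a decoding stage 510 might look like the following sketch. The channel counts, the padding that lets the two 3x3 convolutions preserve resolution, and the use of resizing in place of the explicit cropping step are assumptions made for illustration, not details stated in the source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecodeStage(nn.Module):
    """Illustrative decoding stage 510: fuse the stored skip channels with the
    upsampled input feature map, then apply two 3x3 conv + ReLU blocks."""
    def __init__(self, skip_channels: int, in_channels: int, out_channels: int):
        super().__init__()
        self.skip_proj = nn.Conv2d(skip_channels, out_channels, kernel_size=1)  # 612
        self.in_proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)      # 614
        self.conv1 = nn.Conv2d(2 * out_channels, out_channels, 3, padding=1)    # 618
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1)        # 620

    def forward(self, skip: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        skip = F.relu(self.skip_proj(skip))
        x = F.relu(self.in_proj(x))
        # Bilinear upsampling 616; resizing to the skip's spatial size stands in
        # for the cropping step described above.
        x = F.interpolate(x, size=skip.shape[2:], mode="bilinear", align_corners=False)
        combined = torch.cat([skip, x], dim=1)      # combined feature map 520
        x = F.relu(self.conv1(combined))
        return F.relu(self.conv2(x))                # decoded feature map 518
```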
  • In accordance with a determination that the target decoding stage 510 is the last stage of the series of decoding stages 510, the electronic device combines the decoded feature map 518 and the input image 502 to obtain an output image 504 that has one or more performance characteristics better than the input image 502. In accordance with a determination that the target decoding stage 510 is not the last stage of the series of decoding stages 510, the electronic device provides the decoded feature map 518 as a distinct input feature map 516 to the next decoding stage 510 that immediately follows the target decoding stage.
  • In accordance with a determination that the target encoding stage 506 is the last stage of the series of encoding stages 506, the electronic device provides the encoded feature map 512 to a bottleneck network coupled between the series of encoding stages 506 and the series of decoding stages 510. In accordance with a determination that the target encoding stage 506 is not the last stage of the series of encoding stages 506, the electronic device applies a max pooling operation on the encoded feature map 512 before feeding the encoded feature map 512 to a second encoding stage 506 that immediately follows the target encoding stage 506.
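The encoding path as a whole can then be sketched as a loop that stores only the reduced skip channels and either max-pools into the next stage or hands the last encoded feature map to the bottleneck network. The stage blocks, the leading-channel reduction scheme, and the parameter names below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def encode_with_reduced_skips(image: torch.Tensor, stages: nn.ModuleList, c_out: int):
    """Run the encoding path, keeping only c_out channels per stage as the skip.

    `stages` is assumed to be a list of blocks each implementing two 3x3 conv +
    ReLU layers (602/604); channel reduction here uses the leading-channel scheme.
    """
    skips = []
    x = image
    for i, stage in enumerate(stages):
        encoded = stage(x)                            # encoded feature map 512
        skips.append(encoded[:, :c_out].clone())      # store only Cout channels (514)
        if i < len(stages) - 1:
            x = F.max_pool2d(encoded, kernel_size=2)  # pooled start feature map 608
        else:
            x = encoded                               # handed to the bottleneck network 508
    return x, skips
```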
  • the electronic device obtains (852) an input image and processes (854) the input image by a series of encoding stages 506 and a series of decoding stages 510 to generate an output image.
  • the series of encoding stages 506 include (856) a target encoding stage, and the series of decoding stages 510 include a target decoding stage 510 corresponding to the target encoding stage.
  • the electronic device obtains (858) an input feature map 516 that is generated from a total number of channels of an encoded feature map 512 outputted by the target encoding stage, extracts (860) from the memory an alternative number of channels 514 of the encoded feature map 512, and combines (862) the alternative number of channels 514 of the encoded feature map 512 and the input feature map 516 to a combined feature map 520.
  • the alternative number is less than the total number.
  • the electronic device converts (864) the combined feature map 520 to a decoded feature map 518, wherein the decoded feature map 518 is fed to a next decoding stage 510 that immediately follows the target decoding stage 510 or outputted from the series of decoding stages 510 to render the output image.
  • the operations performed at the target decoding stage 510 are performed at each and every decoding stage 510.
  • At the target encoding stage, the electronic device generates the encoded feature map 512 having the total number of channels.
  • the alternative number of channels 514 are determined from the total number of channels of the encoded feature map 512.
  • the electronic device temporarily stores the alternative number of channels 514 of the encoded feature map 512 in the memory, thereby allowing the alternative number of channels 514 to be extracted from the memory and combined with the input feature map 516 during the target decoding stage.
  • In accordance with a determination that the target encoding stage 506 is the last stage of the series of encoding stages 506, the electronic device provides the encoded feature map 512 to a bottleneck network coupled between the series of encoding stages 506 and the series of decoding stages 510. In accordance with a determination that the target encoding stage 506 is not the last stage of the series of encoding stages 506, the electronic device applies a max pooling operation 606 on the encoded feature map 512 before feeding the encoded feature map 512 as a distinct start feature map 608 to the distinct encoding stage 506 that immediately follows the target encoding stage.
  • the electronic device receives a start feature map (e.g., a pooled feature map 608) and successively applies a 3x3 convolutional layer and a ReLU on the start feature map twice (602 and 604) to generate the encoded feature map 512.
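A stage block of the kind assumed in the earlier encoding-path sketch could be built as follows; the padding of 1, so that the stated feature resolutions are preserved within a stage, is an assumption of this sketch.

```python
import torch.nn as nn

def encoder_block(in_channels: int, out_channels: int) -> nn.Sequential:
    """Two 3x3 conv + ReLU layers (602 and 604); padding=1 assumed so that
    spatial resolution is unchanged within the stage."""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )
```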
  • the electronic device crops a dimension of each of the alternative number of channels 514 to match a dimension of the input feature map 516.
  • the input feature map 516 is provided by a bottleneck network or a last decoding stage 510 that immediately precedes the target decoding stage.
  • the electronic device applies a 1 x 1 convolutional layer and a rectified linear unit (ReLU) 612 on the alternative number of channels 514 of the encoded feature map 512 to obtain a modified subset of the encoded feature map 512, successively applies a distinct 1x1 convolutional layer, a distinct ReLU 614, and a bilinear upsampling operation 616 on the input feature map 516 to obtain a modified input feature map 516, and concatenates the modified subset of the encoded feature map 512 and the modified input feature map 516 to the combined feature map 520.
  • the electronic device converts the combined feature map 520 to a decoded feature map 518 by successively applying a 3x3 convolutional layer and a ReLU on the combined feature map 520 twice (618 and 620) to obtain the decoded feature map 518.
  • In accordance with a determination that the target decoding stage 510 is the last stage of the series of decoding stages 510, the electronic device combines (866) the decoded feature map 518 and the input image 502 to obtain an output image 504 that has one or more performance characteristics better than the input image 502. In accordance with a determination that the target decoding stage 510 is not the last stage of the series of decoding stages 510, the electronic device provides (868) the decoded feature map 518 as a distinct input feature map 516 to the next decoding stage 510 that immediately follows the target decoding stage.
  • the alternative number of channels 514 and the total number of channels have a predefined ratio that is determined based on a hardware capability of the electronic device and the image resolution of the input image 502.
  • the alternative number of channels 514 are (814) successive channels within the total number of channels of the encoded feature map 512, and include one of a first channel and a last channel of the total number of channels. In some embodiments, in accordance with the predefined connection skipping rule, the alternative number of channels 514 are (816) successive channels within the total number of channels of the encoded feature map 512, and are spaced with the same number of channels from a first channel and a last channel of the total number of channels.
  • the alternative number of channels 514 are (818) evenly distributed among the total number of channels of the encoded feature map 512, and each of the alternative number of channels 514 is selected from a fixed channel of every third number of successive channels.
  • the total number of channels of the encoded feature map 512 is divided (820) into the alternative number of channel groups, and each of the alternative number of channels 514 is an average of successive channels in a distinct channel group.
  • for each of the total number of channels of the encoded feature map 512, the electronic device determines (822) a respective inner product, L1 norm, or Euclidean norm value of the respective channel and selects the alternative number of channels 514 having the smallest inner product, largest L1 norm, or largest Euclidean norm values among the total number of channels.
  • all of the series of encoding stages 506 comply with the same predefined connection skipping rule. Conversely, in some embodiments, each of the series of encoding stages 506 complies with a respective predefined connection skipping rule, independently of remaining encoding stages 506 in the series of encoding stages 506.
  • the series of encoding stages 506 has the same number of stages as the series of decoding stages 510.
  • the target encoding stage 506 has a first position in the series of encoding stages 506, and the target decoding stage 510 has a second position in the series of decoding stages 510. The second position matches the first position.
  • the target encoding stage 506 is the first encoding stage 506A, and corresponds to the target decoding stage 510 that is the fourth decoding stage 510D.
  • the electronic device divides an original image into a plurality of image tiles, and the plurality of image tiles includes an image tile corresponding to the input image 502.
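A minimal sketch of this tiling strategy is shown below; the tile size, the use of non-overlapping tiles, and the assumption that the network accepts the resulting tile dimensions are illustrative choices rather than details from the source.

```python
import torch

def process_in_tiles(model, image: torch.Tensor, tile: int = 512) -> torch.Tensor:
    """Process a large (N, C, H, W) image tile by tile and stitch the outputs."""
    _, _, h, w = image.shape
    output = torch.zeros_like(image)
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            patch = image[:, :, top:top + tile, left:left + tile]           # input image 502
            output[:, :, top:top + tile, left:left + tile] = model(patch)   # output image 504
    return output
```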
  • the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting” or “in accordance with a determination that,” depending on the context.
  • the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected,” depending on the context.
  • stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages can be implemented in hardware, firmware, software or any combination thereof.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This application is directed to processing an input image by a series of encoding stages and a series of decoding stages. At each encoding stage, an electronic device generates a respective encoded feature map having a total number of channels, selects an alternative number of channels from the total number of channels, and stores the alternative number of channels in memory. The alternative number is less than the total number. At a corresponding decoding stage, the electronic device obtains a respective input feature map generated from the total number of channels of the respective encoded feature map, extracts from the memory the alternative number of channels of the respective encoded feature map, combines the extracted alternative number of channels and the respective input feature map to a combined feature map that is further converted for use in a subsequent decoding stage or for rendering an output image.

Description

Image Processing with Encoder-Decoder Networks Having Skip Connections
TECHNICAL FIELD
[0001] This application relates generally to image processing technology including, but not limited to, methods, systems, and non-transitory computer-readable media for enhancing an image using deep learning techniques involving skip connections.
BACKGROUND
[0002] Images captured under low illumination conditions typically have a low signal-to-noise ratio (SNR) and do not have a good perceptual quality. Exposure times are extended to improve image quality, and however, resulting images can become blurry. Denoising techniques have also been explored to remove image noise caused by the low illumination conditions, while image enhancement techniques are developed to improve perceptual quality of digital images. Convolutional neural networks have been applied in image enhancement and noise reduction. For example, a U-net includes an encoder for extracting abstract features from an input image and a decoder for generating an output image from the extracted features. Corresponding layers of the encoder and decoder are connected by skip connections. However, those skip connections introduce heavy memory consumption and an expensive computational cost. It would be beneficial to have an effective and efficient mechanism to improve image quality and remove image noises, e.g., for images captured under the low illumination conditions, while keeping a high utilization rate of computational resources and a low power consumption.
SUMMARY
[0003] Various embodiments of this application are directed to enhancing image quality using an encoder-decoder network (e.g., U-net) by selectively storing portions (i.e., not all) of feature maps generated by encoding stages for use in decoding stages. For each respective encoding stage, a subset of channels is selected from an encoded feature map generated by the respective encoding stage, and temporarily stored as a transitional feature map that is provided to a corresponding decoding stage. For example, an encoded feature map having a dimension of
H x W x Cin is reduced to a transitional feature map having a dimension of H x W x Cout, where Cout is less than Cin. In some embodiments, Cout successive channels of the Cin channels of the encoded feature map are selected and stored as the transitional feature map. Alternatively, the Cin channels of the encoded feature map are divided into Cout groups, and each group has Cin/Cout channels. For each group of channels, the respective Cin/Cout channels are averaged to generate a respective channel of the transitional feature map. When the transitional feature map is stored and provided to the decoding stage, storage and computational costs are reduced (e.g., by a factor of Cin/Cout). Such an image processing process reduces a number of floating point operations per second (FLOPS), thereby making corresponding deep learning models (e.g., the U-net) feasible to be implemented on mobile devices having limited computational and storage resources.
[0004] In one aspect, an image processing method is implemented at an electronic device having memory. The method includes obtaining an input image having an image resolution and processing the input image by a series of encoding stages and a series of decoding stages to generate an output image. The series of encoding stages include a target encoding stage, and the series of decoding stages include a target decoding stage corresponding to the target encoding stage. The method further includes, at the target encoding stage, generating an encoded feature map having a total number (Cin) of channels; in accordance with a predefined connection skipping rule, determining an alternative number (Cout) of channels based on the total number of channels of the encoded feature map; and temporarily storing the alternative number of channels of the encoded feature map in the memory, thereby allowing the alternative number of channels to be extracted from the memory and combined with an input feature map during the target decoding stage. The alternative number is less than the total number.
[0005] In another aspect, an image processing method is implemented at an electronic device having memory. The method includes obtaining an input image having an image resolution and processing the input image by a series of encoding stages and a series of decoding stages to generate an output image. The series of encoding stages include a target encoding stage, and the series of decoding stages include a target decoding stage corresponding to the target encoding stage. The method further includes, at the target decoding stage, obtaining an input feature map that is generated from a total number (Cin) of channels of an encoded feature map outputted by the target encoding stage, extracting from the memory an alternative number (Cout) of channels of the encoded feature map, the alternative number less than the total number, combining the alternative number of channels of the encoded feature map and the input feature map to a combined feature map, and converting the combined feature map to a decoded feature map. The decoded feature map is fed to a second decoding stage that immediately follows the target decoding stage or outputted from the series of decoding stages to render the output image.
[0006] In another aspect, some implementations include an electronic device that includes one or more processors and memory having instructions stored thereon, which when executed by the one or more processors cause the processors to perform any of the above methods.
[0007] In yet another aspect, some implementations include a non-transitory computer-readable medium, having instructions stored thereon, which when executed by one or more processors cause the processors to perform any of the above methods.
[0008] These illustrative embodiments and implementations are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof.
Additional embodiments are discussed in the Detailed Description, and further description is provided there.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] For a better understanding of the various described implementations, reference should be made to the Detailed Description below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
[0010] Figure 1 is an example data processing environment having one or more servers communicatively coupled to one or more client devices, in accordance with some embodiments.
[0011] Figure 2 is a block diagram illustrating an electronic system configured to process content data (e.g., image data), in accordance with some embodiments.
[0012] Figure 3 is an example data processing environment for training and applying a neural network-based data processing model for processing visual and/or audio data, in accordance with some embodiments.
[0013] Figure 4A is an example neural network applied to process content data in an NN-based data processing model, in accordance with some embodiments, and Figure 4B is an example node in the neural network, in accordance with some embodiments.
[0014] Figure 5 is an example encoder-decoder network in which selected channels of encoded feature maps are temporarily stored at encoding stages, in accordance with some embodiments.
[0015] Figure 6 is a detailed network structure of each encoding stage and a corresponding decoding stage in an encoder-decoder network (e.g., a U-net), in accordance with some embodiments.
[0016] Figures 7A-7E are five example channel determination schemes applied by each encoding stage of an encoder-decoder network, in accordance with some embodiments, and Figure 7F illustrates a channel determination process implemented based on a channel determination scheme applied by each encoding stage of an encoder-decoder network, in accordance with some embodiments.
[0017] Figure 8A is a flow diagram of an example image processing method, in accordance with some embodiments, and Figure 8B is a flow diagram of another example image processing method, in accordance with some embodiments.
[0018] Like reference numerals refer to corresponding parts throughout the several views of the drawings.
DETAILED DESCRIPTION
[0019] Reference will now be made in detail to specific embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous non-limiting specific details are set forth in order to assist in understanding the subject matter presented herein. But it will be apparent to one of ordinary skill in the art that various alternatives may be used without departing from the scope of claims and the subject matter may be practiced without these specific details. For example, it will be apparent to one of ordinary skill in the art that the subject matter presented herein can be implemented on many types of electronic devices with digital video capabilities.
[0020] An electronic device employs an encoder-decoder network to perform image denoising and enhancement operations, thereby improving perceptual quality of an image taken under a low illumination condition. For example, given an input image I, a U-net is employed to predict a noise map ΔI of the input image based on an equation, ΔI = f(I; w), where f denotes the U-net and w is a set of learnable parameters of the U-net. A denoised output Î is a sum of the input image and the predicted noise map, i.e., Î = I + ΔI. In the U-net, the input image is processed successively by a set of downsampling stages (i.e., encoding stages) to extract a series of feature maps, as well as to reduce spatial resolutions of these feature maps successively. An encoded feature map outputted by the downsampling stages is then processed by a bottleneck network followed by a set of upscaling stages (i.e., decoding stages). In some embodiments, in each upscaling stage, an input feature map is upscaled and concatenated with the layer of the same resolution from the downsampling stage to effectively preserve the details in the input image.
[0021] Various embodiments of this application are directed to selectively storing portions (i.e., not all) of feature maps generated by encoding stages for use in decoding stages. For each respective encoding stage, a subset of channels is selected from a respective feature map generated by the respective encoding stage, and temporarily stored as a transitional feature map that is provided to a corresponding decoding stage. For example, an encoded feature map having a dimension of H x W x Cin is reduced to a transitional feature map having a dimension of H x W x Cout, where Cout is less than Cin. In some embodiments, Cout successive channels of the Cin channels of the encoded feature map are selected and stored as the transitional feature map. Alternatively, in some embodiments, the Cin channels of the input feature map are divided into Cout groups, and each group has Cin/Cout channels. Optionally, for each group of channels, one of the respective Cin/Cout channels is selected as a respective channel of the transitional feature map. Optionally, for each group of channels, the respective Cin/Cout channels are combined (e.g., averaged) to generate a respective channel of the transitional feature map. When the transitional feature map is stored and provided to the decoding stage, an image processing process reduces memory and computational usage, thereby making the encoder-decoder network feasible to be implemented on mobile devices having limited computational and storage resources.
[0022] Figure 1 is an example data processing environment 100 having one or more servers 102 communicatively coupled to one or more client devices 104, in accordance with some embodiments. The one or more client devices 104 may be, for example, desktop computers 104A, tablet computers 104B, mobile phones 104C, head-mounted display (HMD) (also called augmented reality (AR) glasses) 104D, or intelligent, multi-sensing, network- connected home devices (e.g., a surveillance camera 104E, a smart television device, a drone). Each client device 104 can collect data or user inputs, executes user applications, and present outputs on its user interface. The collected data or user inputs can be processed locally at the client device 104 and/or remotely by the server(s) 102. The one or more servers 102 provide system data (e.g., boot files, operating system images, and user applications) to the client devices 104, and in some embodiments, processes the data and user inputs received from the client device(s) 104 when the user applications are executed on the client devices 104. In some embodiments, the data processing environment 100 further includes a storage 106 for storing data related to the servers 102, client devices 104, and applications executed on the client devices 104. [0023] The one or more servers 102 are configured to enable real-time data communication with the client devices 104 that are remote from each other or from the one or more servers 102. Further, in some embodiments, the one or more servers 102 are configured to implement data processing tasks that cannot be or are preferably not completed locally by the client devices 104. For example, the client devices 104 include a game console (e.g., the HMD 104D) that executes an interactive online gaming application. The game console receives a user instruction and sends it to a game server 102 with user data. The game server 102 generates a stream of video data based on the user instruction and user data and providing the stream of video data for display on the game console and other client devices that are engaged in the same game session with the game console. In another example, the client devices 104 include a networked surveillance camera 104E and a mobile phone 104C. The networked surveillance camera 104E collects video data and streams the video data to a surveillance camera server 102 in real time. While the video data is optionally pre-processed on the surveillance camera 104E, the surveillance camera server 102 processes the video data to identify motion or audio events in the video data and share information of these events with the mobile phone 104C, thereby allowing a user of the mobile phone 104 to monitor the events occurring near the networked surveillance camera 104E in the real time and remotely. [0024] The one or more servers 102, one or more client devices 104, and storage 106 are communicatively coupled to each other via one or more communication networks 108, which are the medium used to provide communications links between these devices and computers connected together within the data processing environment 100. The one or more communication networks 108 may include connections, such as wire, wireless communication links, or fiber optic cables. Examples of the one or more communication networks 108 include local area networks (LAN), wide area networks (WAN) such as the Internet, or a combination thereof. 
The one or more communication networks 108 are, optionally, implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Long Term Evolution (LTE), Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol. A connection to the one or more communication networks 108 may be established either directly (e.g., using 3G/4G connectivity to a wireless carrier), or through a network interface 110 (e.g., a router, switch, gateway, hub, or an intelligent, dedicated whole-home control node), or through any combination thereof. As such, the one or more communication networks 108 can represent the Internet of a worldwide collection of networks and gateways that use the Transmission Control Protocol/Intemet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages.
[0025] In some embodiments, deep learning techniques are applied in the data processing environment 100 to process content data (e.g., video data, visual data, audio data) obtained by an application executed at a client device 104 to identify information contained in the content data, match the content data with other data, categorize the content data, or synthesize related content data. In these deep learning techniques, data processing models (e.g., an encoder-decoder network 500 in Figure 5) are created based on one or more neural networks to process the content data. These data processing models are trained with training data before they are applied to process the content data. Subsequently to model training, the mobile phone 104C or HMD 104D obtains the content data (e.g., captures video data via an internal camera) and processes the content data using the data processing models locally. [0026] In some embodiments, both model training and data processing are implemented locally at each individual client device 104 (e.g., the mobile phone 104C and HMD 104D). The client device 104 obtains the training data from the one or more servers 102 or storage 106 and applies the training data to train the data processing models.
Alternatively, in some embodiments, both model training and data processing are implemented remotely at a server 102 (e.g., the server 102A) associated with a client device 104 (e.g. the client device 104A and HMD 104D). The server 102A obtains the training data from itself, another server 102 or the storage 106 applies the training data to train the data processing models. The client device 104 obtains the content data, sends the content data to the server 102 A (e.g., in an application) for data processing using the trained data processing models, receives data processing results (e.g., recognized hand gestures) from the server 102A, presents the results on a user interface (e.g., associated with the application), renders virtual objects in a field of view based on the poses, or implements some other functions based on the results. The client device 104 itself implements no or little data processing on the content data prior to sending them to the server 102 A. Additionally, in some embodiments, data processing is implemented locally at a client device 104 (e.g., the client device 104B and HMD 104D), while model training is implemented remotely at a server 102
(e.g., the server 102B) associated with the client device 104. The server 102B obtains the training data from itself, another server 102 or the storage 106 and applies the training data to train the data processing models. The trained data processing models are optionally stored in the server 102B or storage 106. The client device 104 imports the trained data processing models from the server 102B or storage 106, processes the content data using the data processing models, and generates data processing results to be presented on a user interface or used to initiate some functions (e.g., rendering virtual objects based on device poses) locally.
[0027] In some embodiments, a pair of AR glasses 104D (also called an HMD) are communicatively coupled in the data processing environment 100. The AR glasses 104D includes a camera, a microphone, a speaker, one or more inertial sensors (e.g., gyroscope, accelerometer), and a display. The camera and microphone are configured to capture video and audio data from a scene of the AR glasses 104D, while the one or more inertial sensors are configured to capture inertial sensor data. In some situations, the camera captures hand gestures of a user wearing the AR glasses 104D, and recognizes the hand gestures locally and in real time using a two-stage hand gesture recognition model. In some situations, the microphone records ambient sound, including user’s voice commands. In some situations, both video or static visual data captured by the camera and the inertial sensor data measured by the one or more inertial sensors are applied to determine and predict device poses. The video, static image, audio, or inertial sensor data captured by the AR glasses 104D is processed by the AR glasses 104D, server(s) 102, or both to recognize the device poses. Optionally, deep learning techniques are applied by the server(s) 102 and AR glasses 104D jointly to recognize and predict the device poses. The device poses are used to control the AR glasses 104D itself or interact with an application (e.g., a gaming application) executed by the AR glasses 104D. In some embodiments, the display of the AR glasses 104D displays a user interface, and the recognized or predicted device poses are used to render or interact with user selectable display items (e.g., an avatar) on the user interface.
[0028] As explained above, in some embodiments, deep learning techniques are applied in the data processing environment 100 to process video data, static image data, or inertial sensor data captured by the AR glasses 104D. 2D or 3D device poses are recognized and predicted based on such video, static image, and/or inertial sensor data using a first data processing model. Visual content is optionally generated using a second data processing model. Training of the first and second data processing models is optionally implemented by the server 102 or AR glasses 104D. Inference of the device poses and visual content is implemented by each of the server 102 and AR glasses 104D independently or by both of the server 102 and AR glasses 104D jointly.
[0029] Figure 2 is a block diagram illustrating an electronic system 200 configured to process content data (e.g., image data), in accordance with some embodiments. The electronic system 200 includes a server 102, a client device 104 (e.g., AR glasses 104D in Figure 1), a storage 106, or a combination thereof. The electronic system 200 , typically, includes one or more processing units (CPUs) 202, one or more network interfaces 204, memory 206, and one or more communication buses 208 for interconnecting these components (sometimes called a chipset). The electronic system 200 includes one or more input devices 210 that facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls. Furthermore, in some embodiments, the client device 104 of the electronic system 200 uses a microphone for voice recognition or a camera for gesture recognition to supplement or replace the keyboard. In some embodiments, the client device 104 includes one or more optical cameras (e.g., an RGB camera), scanners, or photo sensor units for capturing images, for example, of graphic serial codes printed on the electronic devices. The electronic system 200 also includes one or more output devices 212 that enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays. Optionally, the client device 104 includes a location detection device, such as a GPS (global positioning system) or other geo-location receiver, for determining the location of the client device 104.
[0030] Memory 206 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Memory 206, optionally, includes one or more storage devices remotely located from one or more processing units 202. Memory 206, or alternatively the non-volatile memory within memory 206, includes a non-transitory computer readable storage medium. In some embodiments, memory 206, or the non- transitory computer readable storage medium of memory 206, stores the following programs, modules, and data structures, or a subset or superset thereof:
• Operating system 214 including procedures for handling various basic system services and for performing hardware dependent tasks; • Network communication module 216 for connecting each server 102 or client device 104 to other devices (e.g., server 102, client device 104, or storage 106) via one or more network interfaces 204 (wired or wireless) and one or more communication networks 108, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
• User interface module 218 for enabling presentation of information (e.g., a graphical user interface for application(s) 224, widgets, websites and web pages thereof, and/or games, audio and/or video content, text, etc.) at each client device 104 via one or more output devices 212 (e.g., displays, speakers, etc.);
• Input processing module 220 for detecting one or more user inputs or interactions from one of the one or more input devices 210 and interpreting the detected input or interaction;
• Web browser module 222 for navigating, requesting (e.g., via HTTP), and displaying websites and web pages thereof, including a web interface for logging into a user account associated with a client device 104 or another electronic device, controlling the client or electronic device if associated with the user account, and editing and reviewing settings and data that are associated with the user account;
• One or more user applications 224 for execution by the electronic system 200 (e.g., games, social network applications, smart home applications, and/or other web or non-web based applications for controlling another electronic device and reviewing data captured by such devices);
• Model training module 226 for receiving training data and establishing a data processing model for processing content data (e.g., video, image, audio, or textual data) to be collected or obtained by a client device 104;
• Data processing module 228 for processing content data using data processing models 250, thereby identifying information contained in the content data, matching the content data with other data, categorizing the content data, or synthesizing related content data, where in some embodiments, the data processing module 228 is associated with one of the user applications 224 to process the content data in response to a user instruction received from the user application 224, and in an example, the data processing module 228 is applied to implement image processing processes in Figures 8A and 8B; and • One or more databases 230 for storing at least data including one or more of: o Device settings 232 including common device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.) of the one or more servers 102 or client devices 104; o User account information 234 for the one or more user applications 224, e.g., user names, security questions, account history data, user preferences, and predefined account settings; o Network parameters 236 for the one or more communication networks 108, e.g., IP address, subnet mask, default gateway, DNS server and host name; o Training data 238 for training one or more data processing models 250; o Data processing model(s) 240 for processing content data (e.g., video, image, audio, or textual data) using deep learning techniques, where the data processing models 240 includes an encoder-decoder network 500 (e.g., a U- net) in Figure 5; and o Content data and results 254 that are obtained by and outputted to the client device 104 of the electronic system 200 , respectively, where the content data is processed by the data processing models 250 locally at the client device 104 or remotely at the server 102 to provide the associated results to be presented on client device 104.
[0031] Optionally, the one or more databases 230 are stored in one of the server 102, client device 104, and storage 106 of the electronic system 200 . Optionally, the one or more databases 230 are distributed in more than one of the server 102, client device 104, and storage 106 of the electronic system 200 . In some embodiments, more than one copy of the above data is stored at distinct devices, e.g., two copies of the data processing models 250 are stored at the server 102 and storage 106, respectively.
[0032] Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory 206, optionally, stores a subset of the modules and data structures identified above. Furthermore, memory 206, optionally, stores additional modules and data structures not described above. [0033] Figure 3 is an example data processing system 300 for training and applying a neural network based (NN-based) data processing model 240 for processing content data (e.g., video, image, audio, or textual data), in accordance with some embodiments. The data processing system 300 includes a model training module 226 for establishing the data processing model 240 and a data processing module 228 for processing the content data using the data processing model 240. In some embodiments, both of the model training module 226 and the data processing module 228 are located on a client device 104 of the data processing system 300, while a training data source 304 distinct form the client device 104 provides training data 306 to the client device 104. The training data source 304 is optionally a server 102 or storage 106. Alternatively, in some embodiments, both of the model training module 226 and the data processing module 228 are located on a server 102 of the data processing system 300. The training data source 304 providing the training data 306 is optionally the server 102 itself, another server 102, or the storage 106. Additionally, in some embodiments, the model training module 226 and the data processing module 228 are separately located on a server 102 and client device 104, and the server 102 provides the trained data processing model 240 to the client device 104.
[0034] The model training module 226 includes one or more data pre-processing modules 308, a model training engine 310, and a loss control module 312. The data processing model 240 is trained according to a type of the content data to be processed. The training data 306 is consistent with the type of the content data, so is a data pre-processing module 308 applied to process the training data 306 consistent with the type of the content data. For example, an image pre-processing module 308A is configured to process image training data 306 to a predefined image format, e.g., extract a region of interest (ROI) in each training image, and crop each training image to a predefined image size. Alternatively, an audio pre-processing module 308B is configured to process audio training data 306 to a predefined audio format, e.g., converting each training sequence to a frequency domain using a Fourier transform. The model training engine 310 receives pre-processed training data provided by the data pre-processing modules 308, further processes the pre-processed training data using an existing data processing model 240, and generates an output from each training data item. During this course, the loss control module 312 can monitor a loss function comparing the output associated with the respective training data item and a ground truth of the respective training data item. The model training engine 310 modifies the data processing model 240 to reduce the loss function, until the loss function satisfies a loss criteria (e.g., a comparison result of the loss function is minimized or reduced below a loss threshold). The modified data processing model 240 is provided to the data processing module 228 to process the content data.
[0035] In some embodiments, the model training module 226 offers supervised learning in which the training data is entirely labelled and includes a desired output for each training data item (also called the ground truth in some situations). Conversely, in some embodiments, the model training module 226 offers unsupervised learning in which the training data are not labelled. The model training module 226 is configured to identify previously undetected patterns in the training data without pre-existing labels and with no or little human supervision. Additionally, in some embodiments, the model training module 226 offers partially supervised learning in which the training data are partially labelled.
[0036] The data processing module 228 includes a data pre-processing modules 314, a model -based processing module 316, and a data post-processing module 318. The data preprocessing modules 314 pre-processes the content data based on the type of the content data. Functions of the data pre-processing modules 314 are consistent with those of the preprocessing modules 308 and covert the content data to a predefined content format that is acceptable by inputs of the model -based processing module 316. Examples of the content data include one or more of video, image, audio, textual, and other types of data. For example, each image is pre-processed to extract an ROI or cropped to a predefined image size, and an audio clip is pre-processed to convert to a frequency domain using a Fourier transform. In some situations, the content data includes two or more types, e.g., video data and textual data. The model -based processing module 316 applies the trained data processing model 240 provided by the model training module 226 to process the pre-processed content data. The model -based processing module 316 can also monitor an error indicator to determine whether the content data has been properly processed in the data processing model 240. In some embodiments, the processed content data is further processed by the data postprocessing module 318 to present the processed content data in a preferred format or to provide other related information that can be derived from the processed content data.
[0037] Figure 4A is an example neural network (NN) 400 applied to process content data in an NN-based data processing model 240, in accordance with some embodiments, and Figure 4B is an example node 420 in the neural network (NN) 400, in accordance with some embodiments. The data processing model 240 is established based on the neural network 400. A corresponding model-based processing module 316 applies the data processing model 240 including the neural network 400 to process content data that has been converted to a predefined content format. The neural network 400 includes a collection of nodes 420 that are connected by links 412. Each node 420 receives one or more node inputs and applies a propagation function to generate a node output from the one or more node inputs. As the node output is provided via one or more links 412 to one or more other nodes 420, a weight w associated with each link 412 is applied to the node output. Likewise, the one or more node inputs are combined based on corresponding weights wi, W2, W3, and W4 according to the propagation function. In an example, the propagation function is a product of a non-linear activation function and a linear weighted combination of the one or more node inputs.
[0038] The collection of nodes 420 is organized into one or more layers in the neural network 400. Optionally, the one or more layers includes a single layer acting as both an input layer and an output layer. Optionally, the one or more layers includes an input layer 402 for receiving inputs, an output layer 406 for providing outputs, and zero or more hidden layers 404 (e.g., 404A and 404B) between the input and output layers 402 and 406. A deep neural network has more than one hidden layers 404 between the input and output layers 402 and 406. In the neural network 400, each layer is only connected with its immediately preceding and/or immediately following layer. In some embodiments, a layer 402 or 404B is a fully connected layer because each node 420 in the layer 402 or 404B is connected to every node 420 in its immediately following layer. In some embodiments, one of the one or more hidden layers 404 includes two or more nodes that are connected to the same node in its immediately following layer for down sampling or pooling the nodes 420 between these two layers. Particularly, max pooling uses a maximum value of the two or more nodes in the layer 404B for generating the node of the immediately following layer 406 connected to the two or more nodes.
[0039] In some embodiments, a convolutional neural network (CNN) is applied in a data processing model 240 to process content data (particularly, video and image data). The CNN employs convolution operations and belongs to a class of deep neural networks 400, i.e., a feedforward neural network that only moves data forward from the input layer 402 through the hidden layers to the output layer 406. The one or more hidden layers of the CNN are convolutional layers convolving with a multiplication or dot product. Each node in a convolutional layer receives inputs from a receptive area associated with a previous layer (e.g., five nodes), and the receptive area is smaller than the entire previous layer and may vary based on a location of the convolution layer in the convolutional neural network. Video or image data is pre-processed to a predefined video/image format corresponding to the inputs of the CNN. The pre-processed video or image data is abstracted by each layer of the CNN to a respective feature map. By these means, video and image data can be processed by the CNN for video and image recognition, classification, analysis, imprinting, or synthesis. [0040] Alternatively and additionally, in some embodiments, a recurrent neural network (RNN) is applied in the data processing model 240 to process content data (particularly, textual and audio data). Nodes in successive layers of the RNN follow a temporal sequence, such that the RNN exhibits a temporal dynamic behavior. In an example, each node 420 of the RNN has a time-varying real-valued activation. Examples of the RNN include, but are not limited to, a long short-term memory (LSTM) network, a fully recurrent network, an Elman network, a Jordan network, a Hopfield network, a bidirectional associative memory (BAM network), an echo state network, an independently RNN (IndRNN), a recursive neural network, and a neural history compressor. In some embodiments, the RNN can be used for handwriting or speech recognition. It is noted that in some embodiments, two or more types of content data are processed by the data processing module 228, and two or more types of neural networks (e.g., both CNN and RNN) are applied to process the content data jointly.
[0041] The training process is a process for calibrating all of the weights w for each layer of the learning model using a training data set which is provided in the input layer 402. The training process typically includes two steps, forward propagation and backward propagation, which are repeated multiple times until a predefined convergence condition is satisfied. In the forward propagation, the set of weights for different layers are applied to the input data and intermediate results from the previous layers. In the backward propagation, a margin of error of the output (e.g., a loss function) is measured, and the weights are adjusted accordingly to decrease the error. The activation function is optionally linear, rectified linear unit, sigmoid, hyperbolic tangent, or of other types. In some embodiments, a network bias term b is added to the sum of the weighted outputs from the previous layer before the activation function is applied. The network bias b provides a perturbation that helps the NN 400 avoid overfitting the training data. The result of the training includes the network bias parameter b for each layer.
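A minimal sketch of one forward/backward iteration is shown below, assuming a PyTorch-style model and optimizer; the L1 loss is only a placeholder for the loss function mentioned above, not a choice made by the disclosure.

```python
import torch

def training_step(model, optimizer, batch_input, batch_target):
    """One iteration of forward propagation followed by backward propagation."""
    optimizer.zero_grad()
    prediction = model(batch_input)                               # forward propagation
    loss = torch.nn.functional.l1_loss(prediction, batch_target)  # margin of error of the output
    loss.backward()                                               # backward propagation
    optimizer.step()                                              # adjust weights to decrease the error
    return loss.item()
```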
[0042] Figure 5 is an example encoder-decoder network (e.g., a U-net) 500 in which selected channels of encoded feature maps are temporarily stored at encoding stages, in accordance with some embodiments. The encoder-decoder network 500 is configured to receive an input image 502 having an image resolution and process the input image 502 to generate an output image 504 having an image quality better than the input image 502 (e.g., having a higher SNR than the input image 502). The encoder-decoder network 500 includes a series of encoding stages 506, a bottleneck network 508 coupled to the series of encoding stages 506, and a series of decoding stages 510 coupled to the bottleneck network 508. The series of decoding stages 510 include the same number of stages as the series of encoding stages 506. In an example, the encoder-decoder network 500 has four encoding stages 506A- 506D and four decoding stages 510A-510D. The bottleneck network 508 is coupled between the encoding stages 506 and decoding stages 510. The input image 502 is successively processed by the series of encoding stages 506A-506D, the bottleneck network 508, and the series of decoding stages 510A-510D to generate the output image 504.
[0043] In some embodiments, an original image is divided into a plurality of image tiles, and the input image 502 corresponds to one of the plurality of image tiles. Stated another way, in some embodiments, each of the plurality of image tiles is processed using the encoder-decoder network 500, and all of the image tiles in the original image are successively processed using the encoder-decoder network 500. Output images 504 of the plurality of image tiles are collected and combined with one another to reconstruct a final output image corresponding to the original image.
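For illustration, a tiling loop of this kind might look like the following sketch; the tile size, the `enhance` callable (standing in for the encoder-decoder network 500), and the absence of tile overlap are assumptions of the sketch, not requirements of the method.

```python
import torch

def process_by_tiles(original, enhance, tile=512):
    """Divide an image tensor (C, H, W) into tiles, enhance each tile, and reassemble the output."""
    c, h, w = original.shape
    output = torch.zeros_like(original)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = original[:, y:y + tile, x:x + tile]
            output[:, y:y + tile, x:x + tile] = enhance(patch)  # e.g., the encoder-decoder network
    return output
```

A practical implementation would typically overlap or pad the tiles to avoid visible seams; that refinement is omitted here.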
[0044] The series of encoding stages 506 include an ordered sequence of encoding stages 506, e.g., stages 506A, 506B, 506C, and 506D, and have an encoding scale factor. Each encoding stage 506 generates an encoded feature map 512 having a feature resolution and a number of encoding channels. Among the encoding stages 506A-506D, the feature resolution is scaled down and the number of encoding channels is scaled up according to the encoding scale factor. In an example, the encoding scale factor is 2. A first encoded feature map 512A of a first encoding stage 506A has a first feature resolution (e.g., H×W) related to the image resolution and a first number of (e.g., NCH) encoding channels, and a second encoded feature map 512B of a second encoding stage 506B has a second feature resolution (e.g., ½H×½W) and a second number of (e.g., 2NCH) encoding channels. A third encoded feature map 512C of a third encoding stage 506C has a third feature resolution (e.g., ¼H×¼W) and a third number of (e.g., 4NCH) encoding channels, and a fourth encoded feature map 512D of a fourth encoding stage 506D has a fourth feature resolution (e.g., ⅛H×⅛W) and a fourth number of (e.g., 8NCH) encoding channels.
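The scale-factor-of-2 progression above can be reproduced with a few lines; the concrete numbers in the comment are only an example instantiation of H, W, and NCH.

```python
def encoder_shapes(h, w, n_ch, num_stages=4, scale=2):
    """Per-stage encoded feature map shapes: resolution divides by the scale factor,
    channel count multiplies by it."""
    return [(h // scale**s, w // scale**s, n_ch * scale**s) for s in range(num_stages)]

# encoder_shapes(1024, 1024, 32)
# -> [(1024, 1024, 32), (512, 512, 64), (256, 256, 128), (128, 128, 256)]
```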
[0045] For each encoding stage 506, the encoded feature map 512 is processed and provided as an input to a next encoding stage 506, except that the encoded feature map 512 of a last encoding stage 506 (e.g., stage 506D in Figure 5) is processed and provided as an input to the bottleneck network 508. Additionally, for each encoding stage 506, in accordance with a predefined connection skipping rule, a subset of feature map 514 is selected from the number of encoding channels of the encoded feature map 512. The subset of feature map 514 is less than the entire encoded feature map 512, i.e., does not include all of a total number (e.g., Cin) of encoding channels of the encoded feature map 512. Rather, the subset of feature map 514 includes an alternative number (e.g., Cout) of encoding channels of the encoded feature map 512, and the alternative number is less than the total number of encoding channels in the respective encoding stage 506. The subset of feature map 514 is temporarily stored in memory and extracted for further processing by the bottleneck network 508 or any of the decoding stages 510. Stated another way, the alternative number of channels 514 are stored in the memory as skip connections that skip part of the encoder-decoder network 500.
[0046] The bottleneck network 508 is coupled to the last stage of the encoding stages 506 (e.g., stage 506D in Figure 5), and continues to process the total number of encoding channels of the encoded feature map 512D of the last encoding stage 506D and generate an intermediate feature map 516A (i.e., a first input feature map 516A to be used by a first decoding stage 510A). In an example, referring to Figure 5, the bottleneck network 508 includes a first set of 3×3 CNN and Rectified Linear Unit (ReLU), a second set of 3×3 CNN and ReLU, a global pooling network, a bilinear upsampling network, and a set of 1×1 CNN and ReLU. The encoded feature map 512D of the last encoding stage 506D is normalized (e.g., using a pooling layer), and fed to the first set of 3×3 CNN and ReLU of the bottleneck network 508. The intermediate feature map 516A is outputted by the set of 1×1 CNN and ReLU of the bottleneck network 508 and provided to the decoding stages 510.
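As a minimal sketch of this per-stage skip storage (taking the leading Cout channels purely as one example of the predefined connection skipping rule; other selection schemes are illustrated with Figures 7A-7F):

```python
import torch

def store_skip(encoded, c_out, skip_buffer, stage_idx):
    """Temporarily store only c_out of the C_in channels of the encoded feature map
    as a skip connection; the full map still flows on to the next encoding stage."""
    skip_buffer[stage_idx] = encoded[:, :c_out].clone()  # memory scales with c_out, not C_in
    return encoded

# Usage sketch: skips = {}; encoded = store_skip(encoded, c_out=8, skip_buffer=skips, stage_idx=0)
```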
[0047] The series of decoding stages 510 include an ordered sequence of decoding stages 510, e.g., stages 510A, 510B, 510C, and 510D, and have a decoding upsampling factor. Each decoding stage 510 generates a decoded feature map 518 having a feature resolution and a number of decoding channels. Among the decoding stages 510A-510D, the feature resolution is scaled up and the number of decoding channels is scaled down according to the decoding upsampling factor. In an example, the decoding upsampling factor is 2. A first decoded feature map 518A of a first decoding stage 510A has a first feature resolution (e.g., ⅛H′×⅛W′) and a first number of (e.g., 8NCH′) decoding channels, and a second decoded feature map 518B of a second decoding stage 510B has a second feature resolution (e.g., ¼H′×¼W′) and a second number of (e.g., 4NCH′) decoding channels. A third decoded feature map 518C of a third decoding stage 510C has a third feature resolution (e.g., ½H′×½W′) and a third number of (e.g., 2NCH′) decoding channels, and a fourth decoded feature map 518D of a fourth decoding stage 510D has a fourth feature resolution (e.g., H′×W′) related to a resolution of the output image 504 and a fourth number of (e.g., NCH′) decoding channels.
[0048] For each decoding stage 510, the decoded feature map 518 is processed and provided as an input to a next decoding stage 510, except that the decoded feature map 518 of a last decoding stage 510 (e.g., stage 510D in Figure 5) is processed to generate the output image 504. For example, the decoded feature map 518D of the last decoding stage 510D is processed by a 1×1 CNN 522 and combined with the input image 502 to generate the output image 504, thereby reducing a noise level of the input image 502 via the entire encoder-decoder network 500. Additionally, each respective decoding stage 510 extracts a subset of feature map 514 that is selected from a total number of encoding channels of an encoded feature map 512 of a corresponding encoding stage 506. The subset of feature map 514 is combined with an input feature map 516 of the respective decoding stage 510 using a set of neural networks. Each respective decoding stage 510 and the corresponding encoding stage 506 are symmetric with respect to the bottleneck network 508, i.e., separated from the bottleneck network 508 by the same number of decoding or encoding stages 510 or 506. In some embodiments, the subset of feature map 514 is purged from the memory after it is used by the respective decoding stage 510.
[0049] It is noted that each encoding stage 506 is uniquely associated with a respective decoding stage 510 in the context of sharing the alternative number of channels 514. Each respective encoding stage 506 and the corresponding decoding stage 510 are symmetric with respect to the bottleneck network 508, i.e., separated from the bottleneck network 508 by the same number of decoding or encoding stages, respectively. Specifically, in this example, the first encoding stage 506A provides a first alternative number of channels 514A to the fourth decoding stage 510D, and the remaining encoding stages 506B, 506C, and 506D provide the respective alternative numbers of channels 514B, 514C, and 514D to the remaining decoding stages 510C, 510B, and 510A, respectively.
[0050] In various embodiments of this application, a computational capability of an electronic device configured to process the input image 502 determines the feature resolutions and numbers of decoding channels of the decoding stages 510. For example, a mobile device 104 having a limited computational capability has a relatively smaller feature resolution and smaller number of decoding channels for each decoding stage 510 compared with a server 102 having a larger computational capability. The feature resolutions and numbers of decoding channels of the decoding stages 510 partially set limits on the alternative number of channels of the subset of feature map 514 obtained from each corresponding encoding stage 506. The feature resolution and number of decoding channels of the first decoding stage 510A also set limits on the feature resolution and number of channels of the intermediate feature map 516A generated by the bottleneck network 508. By these means, the encoder-decoder network 500 can be adaptively configured to operate on electronic devices having limited capabilities, e.g., by adopting a scalable size determined according to a computational capability and conserving a storage resource by way of storing the subset of feature map 514 rather than the entire feature map 512 of each encoding stage.
[0051] From another perspective, in some embodiments, the predefined connection skipping rule is applied to select or determine the alternative number of channels 514 from the total number of channels of the encoded feature map 512. The alternative number and the total number have a predefined ratio. The predefined ratio is determined based on at least the hardware capability of the electronic device and the image resolution of the input image 502.
[0052] Figure 6 is a detailed network structure 600 of each encoding stage 506 and a corresponding decoding stage 510 in an encoder-decoder network (e.g., a U-net), in accordance with some embodiments. As explained above, the encoder-decoder network obtains an input image 502 and processes the input image 502 by a series of encoding stages 506 and a series of decoding stages 510 to generate an output image 504. Each encoding stage 506 corresponds to a corresponding decoding stage 510. Each encoding stage 506 generates an encoded feature map 512 having a total number (Cin) of channels. In accordance with a predefined connection skipping rule, an alternative number (Cout) of channels are selected from the total number of channels of the encoded feature map 512. The alternative number is less than the total number. The alternative number of channels 514 of the encoded feature map 512 are stored in memory, thereby allowing the alternative number of channels 514 to be extracted from the memory and combined with an input feature map 516 of the corresponding decoding stage 510.
[0053] Conversely, each decoding stage 510 obtains an input feature map 516 that is generated from a total number of channels of an encoded feature map outputted by the corresponding encoding stage 506. An alternative number of channels 514 of the encoded feature map 512 are extracted from the memory, and the alternative number is less than the total number. The alternative number of channels 514 of the encoded feature map 512 and the input feature map 516 are combined to a combined feature map 520, which is further converted to a decoded feature map 518. The decoded feature map 518 is fed to a next decoding stage 510 that immediately follows each respective decoding stage or outputted from the series of decoding stages 510 to render the output image 504. As such, the alternative number of channels 514 selected from the encoded feature map 512 are stored in the memory as skip connections that skip part of the encoder-decoder network 500.
[0054] In some embodiments, each encoding stage 506 includes a first set of 3×3 CNN and ReLU 602, a second set of 3×3 CNN and ReLU 604, and a max pooling module 606. The first and second sets of CNN and ReLU 602 and 604 generate the encoded feature map 512 of the respective encoding stage 506 jointly, and the max pooling module 606 applies a pooling operation to downsample the encoded feature map 512 to a pooled feature map 608. The first encoding stage 506A generates the first encoded feature map 512A from the input image 502, and the remaining encoding stage 506B, 506C, or 506D receives a pooled feature map 608A, 608B, or 608C (also called a respective start feature map) of a preceding encoding stage 506A, 506B, or 506C and generates a corresponding encoded feature map 512B, 512C, or 512D, respectively. The fourth encoding stage 506D is the last encoding stage in which the fourth encoded feature map 512D is further converted to a fourth pooled feature map 608D, and the fourth pooled feature map 608D is provided to the bottleneck network 508.
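A PyTorch-style sketch of one such encoding stage is shown below; the padding, in-place ReLU, and channel arguments are assumptions of the sketch rather than specifics of the disclosure.

```python
import torch.nn as nn

class EncodingStage(nn.Module):
    """Two 3x3 conv + ReLU blocks (602, 604) produce the encoded feature map;
    max pooling (606) downsamples it into the pooled (start) feature map for the next stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x):
        encoded = self.features(x)   # encoded feature map; the skip subset is taken from this
        pooled = self.pool(encoded)  # pooled feature map fed to the next encoding stage
        return encoded, pooled
```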
[0055] In some embodiments, each decoding stage 510 applies a set of 1×1 CNN and ReLU 612 on the alternative number of channels 514 of the encoded feature map 512 to obtain a modified subset of feature map. The decoding stage 510 applies another set of 1×1 CNN and ReLU 614 and a bilinear upsampling operation 616 successively on an input feature map 516 to obtain a modified input feature map. The modified subset of feature map and the modified input feature map are combined (e.g., concatenated) to provide the combined feature map 520. In some embodiments, a dimension of each of the alternative number of channels 514 is cropped to match a dimension of the input feature map 516. The decoding stage 510 then applies a first set of 3×3 CNN and ReLU 618 and a second set of 3×3 CNN and ReLU 620 successively on the combined feature map 520 to generate the decoded feature map 518. The first decoding stage 510A receives an intermediate feature map 516A outputted by the bottleneck network 508 as a first input feature map 516A. The remaining decoding stage 510B, 510C, or 510D receives a decoded feature map 518A, 518B, or 518C outputted by a preceding decoding stage 510A, 510B, or 510C as a respective input feature map 516 and generates a corresponding decoded feature map 518B, 518C, or 518D, respectively. The fourth decoding stage 510D is the last decoding stage in which the fourth decoded feature map 518D is processed by a 1×1 CNN 522 and combined with the input image 502 to generate the output image 504. The output image 504 has one or more performance characteristics better than the input image 502, e.g., has a lower noise level and a higher SNR.
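A corresponding sketch of one decoding stage follows; the channel arguments are assumptions, and size matching is handled here by bilinearly resizing the projected input feature map to the skip's spatial size rather than by explicit cropping, for simplicity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecodingStage(nn.Module):
    """Project the stored skip channels (612) and the input feature map (614 + 616),
    concatenate them, and refine with two 3x3 conv + ReLU blocks (618, 620)."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.skip_proj = nn.Sequential(nn.Conv2d(skip_ch, skip_ch, kernel_size=1), nn.ReLU(inplace=True))
        self.in_proj = nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=1), nn.ReLU(inplace=True))
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        skip = self.skip_proj(skip)                    # 1x1 CNN + ReLU on the stored channels
        x = self.in_proj(x)                            # 1x1 CNN + ReLU on the input feature map
        x = F.interpolate(x, size=skip.shape[-2:],     # bilinear upsampling
                          mode='bilinear', align_corners=False)
        combined = torch.cat([skip, x], dim=1)         # combined feature map
        return self.refine(combined)                   # decoded feature map
```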
[0056] Figures 7A-7E are five example channel determination schemes 700, 710, 720, 730, and 740 applied by each encoding stage 506 of an encoder-decoder network 500 (e.g., a U-net), in accordance with some embodiments. The encoder-decoder network 500 obtains an input image 502 and processes the input image 502 by a series of encoding stages 506 and a series of decoding stages 510 to generate an output image 504. Each encoding stage 506 corresponds to a corresponding decoding stage 510. Each encoding stage 506 generates an encoded feature map 512 having a total number (Cin) of channels. In accordance with a predefined connection skipping rule, an alternative number (Cout) of channels are selected or determined from the total number of channels of the encoded feature map 512. The alternative number of channels 514 of the encoded feature map 512 are stored in memory, and subsequently extracted and combined with an input feature map 516 of the corresponding decoding stage 510. The alternative number is less than the total number. The alternative number of channels 514 are stored in the memory as skip connections that skip part of the encoder-decoder network 500.
[0057] In some embodiments, in accordance with the predefined connection skipping rule, the alternative number of channels 514 are a subset of successive channels within the total number of channels of the encoded feature map 512. For example, referring to Figure 7A, the alternative number of channels 514 include a first channel 702 leading the encoded feature map 512. Alternatively, referring to Figure 7B, the alternative number of channels 514 include a last channel 704 concluding the encoded feature map 512. Alternatively, referring to Figure 7C, in some embodiments, the alternative number of channels 514 do not include the first channel 702 or the last channel 704 of the encoded feature map 512. Specifically, in an example, in accordance with the predefined connection skipping rule, the alternative number of channels 514 are successive channels within the total number of channels of the encoded feature map 512, and are spaced by the same number of channels from a first channel and a last channel of the total number of channels of the encoded feature map 512.
[0058] Conversely, in some embodiments, in accordance with the predefined connection skipping rule, the alternative number of channels 514 are not successive channels within the total number of channels of the encoded feature map 512. For example, referring to Figure 7D, the alternative number of channels 514 are evenly distributed among the total number of channels of the encoded feature map 512, and each of the alternative number of channels 514 is selected from a fixed channel of every third number of successive channels. In this example, each of the alternative number of channels 514 is the first channel of every 4 successive channels of the total number of channels of the encoded feature map 512. In some embodiments, referring to Figure 7E, the total number of channels of the encoded feature map 512 is divided into the alternative number of non-overlapping channel groups 706. Each channel group 706 includes a respective subset of successive channels of the encoded feature map 512. For each channel group, the subset of successive channels of the encoded feature map 512 are averaged to provide a respective channel of the alternative number of channels 514 stored in the memory for use in decoding. Such an averaging operation is therefore used to generate the alternative number of channels 514, and avoids relying on a 1×1 convolutional layer to generate the alternative number of channels 514.
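The schemes of Figures 7A-7E can be sketched as simple channel-indexing operations; the scheme names and the assumption that Cin is divisible by Cout are choices made for this sketch.

```python
import torch

def select_channels(encoded, c_out, scheme='first'):
    """Channel determination schemes for an encoded feature map of shape (N, C_in, H, W)."""
    n, c_in, h, w = encoded.shape
    if scheme == 'first':            # Figure 7A: leading channels
        return encoded[:, :c_out]
    if scheme == 'last':             # Figure 7B: trailing channels
        return encoded[:, c_in - c_out:]
    if scheme == 'middle':           # Figure 7C: centered block of successive channels
        start = (c_in - c_out) // 2
        return encoded[:, start:start + c_out]
    if scheme == 'strided':          # Figure 7D: first channel of every group (e.g., every 4 channels)
        step = c_in // c_out
        return encoded[:, ::step][:, :c_out]
    if scheme == 'group_average':    # Figure 7E: average each group of successive channels
        return encoded.reshape(n, c_out, c_in // c_out, h, w).mean(dim=2)
    raise ValueError(f"unknown scheme: {scheme}")
```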
[0059] A computational capability of an electronic device is measured by a number of floating point operations per second (FLOPS). When the encoded feature map 512 has a resolution of H×W×Cin, a 1×1 convolutional layer consumes H×W×Cin×Cout FLOPs. Given that the alternative number of channels 514 are directly extracted from the memory, each decoding stage 510 does not use the 1×1 convolutional layer to process these skip connections including the alternative number of channels 514, which helps reduce the computational resource needed for decoding. In an example, an input image 502 has an image resolution of 3000×4000 pixels and is processed by the U-net. The U-net has a single encoding stage 506 and a single decoding stage 510 that is connected to the encoding stage 506 by a single skip connection. The encoded feature map 512 has a resolution of 1500×2000 and 32 channels, and the subset of feature map 514 has a resolution of 1500×2000 and 8 channels. Stated another way, the total number Cin is equal to 32, and the alternative number Cout is 8. Direct storage of the alternative number of channels of the encoding stage 506 of the scheme 700, 710, 720, or 730 uses 0 FLOPs, and the averaging operation of the scheme 740 uses 0.096 GFLOPs, while the 1×1 convolutional layer uses 1.56 GFLOPs. More computational resources can be conserved when multiple encoding stages 506 with skip connections are involved.
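As a worked check of these numbers (assuming the roughly 1.56 GFLOPs figure counts each multiply-accumulate of the 1×1 convolution as two operations, which is a common but here assumed convention):

```python
H, W = 1500, 2000        # resolution of the encoded feature map in the example
C_in, C_out = 32, 8      # total and alternative channel counts

conv1x1_ops = 2 * H * W * C_in * C_out   # ~1.54e9 operations, i.e., roughly 1.5 GFLOPs
averaging_ops = H * W * C_in             # 9.6e7 operations = 0.096 GFLOPs for the group averaging
direct_copy_ops = 0                      # schemes 700/710/720/730 copy channels with no arithmetic
```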
[0060] Additionally, a peak signal-to-noise ratio (PSNR) is applied to measure effectiveness of using the 1×1 convolutional layer or the alternative number of channels 514 to form a skip connection in the encoder-decoder network 500. Application of the 1×1 convolutional layer corresponds to a first PSNR equal to 39.351. A skip connection having the alternative number of channels 514 corresponds to a second PSNR equal to 39.445, which is comparable with that of applying the 1×1 convolutional layer while requiring no or very few FLOPs.
[0061] Figure 7F illustrates a channel determination process 760 implemented based on a channel determination scheme 750 applied by each encoding stage 506 of an encoder-decoder network 500, in accordance with some embodiments. In accordance with the predefined connection skipping rule, for each of the total number of (e.g., Cin) channels of the encoded feature map 512, a respective inner product value of the respective channel is determined, and the alternative number of (e.g., Cout) channels 514 having the smallest inner product values are selected from the total number of channels. Alternatively, in some embodiments, an L1 or L2 norm value is calculated for each of the total number of channels of the encoded feature map 512, and the alternative number of (e.g., Cout) channels 514 having the largest L1 or L2 norm values are selected from the total number of channels. Alternatively, in some embodiments, a Euclidean norm value is calculated for each of the total number of channels of the encoded feature map 512, and the alternative number of (e.g., Cout) channels 514 having the largest Euclidean norm values are selected from the total number of channels.
[0062] In some embodiments, all of the series of encoding stages 506 (e.g., 506A-506D in Figure 5) comply with the same predefined connection skipping rule, and apply the same channel determination scheme (e.g., any of the schemes 700, 710, 720, 730, 740, and 750). Alternatively, in some embodiments, each of the series of encoding stages 506 complies with a respective predefined connection skipping rule and has a respective channel determination scheme, independently of remaining encoding stages in the series of encoding stages. For example, the first, second, third, and fourth encoding stages 506A, 506B, 506C, and 506D apply the channel determination schemes 700, 720, 730, and 740, respectively. Further, in some embodiments, two or more encoding stages 506 apply the same channel determination scheme that is distinct from that of one or more remaining encoding stages.
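A sketch of the statistic-based selection of Figure 7F is given below. The disclosure does not specify what each channel's inner product is taken against, so this sketch uses the channel's inner product with itself as a stand-in, and batch-averages each channel before scoring; both choices are assumptions.

```python
import torch

def select_by_statistic(encoded, c_out, metric='l1'):
    """Rank the C_in channels of an (N, C_in, H, W) map by a per-channel statistic and keep c_out."""
    n, c_in, h, w = encoded.shape
    per_channel = encoded.reshape(n, c_in, -1).mean(dim=0)          # (C_in, H*W), batch-averaged
    if metric == 'inner_product':
        scores = (per_channel * per_channel).sum(dim=1)             # self inner product
        idx = scores.argsort()[:c_out]                              # keep the smallest values
    elif metric == 'l1':
        idx = per_channel.abs().sum(dim=1).argsort(descending=True)[:c_out]  # largest L1 norms
    else:  # 'l2' (Euclidean)
        idx = per_channel.norm(dim=1).argsort(descending=True)[:c_out]       # largest L2 norms
    return encoded[:, idx.sort().values]                            # preserve original channel order
```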
[0063] Figure 8A is a flow diagram of an example image processing method 800, in accordance with some embodiments, and Figure 8B is a flow diagram of another example image processing method 850, in accordance with some embodiments. For convenience, the methods 800 and 850 are described as being implemented by an electronic device (e.g., a mobile phone 104C, AR glasses 104D, smart television device, or drone). Methods 800 and 850 are, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of the electronic device. Each of the operations shown in Figures 8A and 8B may correspond to instructions stored in a computer memory or non-transitory computer readable storage medium (e.g., memory 206 in Figure 2). The computer readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The instructions stored on the computer readable storage medium may include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted by one or more processors. Some operations in methods 800 and 850 may be combined and/or the order of some operations may be changed.
[0064] Referring to Figure 8A, the electronic device has memory. The electronic device obtains (802) an input image 502 having an image resolution and processes (804) the input image 502 by a series of encoding stages 506 and a series of decoding stages 510 to generate an output image 504. The series of encoding stages 506 include (806) a target encoding stage 506, and the series of decoding stages 510 include a target decoding stage 510 corresponding to the target encoding stage 506. At the target encoding stage 506, which can be any of the encoding stages 506, the electronic device generates (808) an encoded feature map 512 having a total number (Cin) of channels. In accordance with a predefined connection skipping rule, the electronic device determines (810) an alternative number (Cout) of channels 514 based on the total number of channels of the encoded feature map 512, and the alternative number (Cout) is less than the total number (Cin). The electronic device temporarily stores (812) the alternative number of channels 514 of the encoded feature map 512 in the memory, thereby allowing the alternative number of channels 514 to be extracted from the memory and combined with an input feature map 516 of the target decoding stage. In some embodiments, the operations performed on the target encoding stage 506 are performed on each and every encoding stage 506.
[0065] In some embodiments, at the target decoding stage 510, the electronic device extracts from the memory the alternative number of channels 514 of the encoded feature map 512, obtains the input feature map 516, and combines the alternative number of channels 514 of the encoded feature map 512 and the input feature map 516 to a combined feature map 520. The combined feature map 520 is converted to a decoded feature map 518, and the decoded feature map 518 is fed to a next decoding stage 510 that immediately follows the target decoding stage 510 or outputted from the series of decoding stages 510. Further, in some embodiments, at the target decoding stage 510, the electronic device crops a dimension of each of the alternative number of channels 514 to match a dimension of the input feature map 516. Additionally, in some embodiments, the input feature map 516 is provided by a bottleneck network or a last decoding stage 510 that immediately precedes the target decoding stage. A 1×1 convolutional layer and a rectified linear unit (ReLU) 612 are applied on the alternative number of channels 514 of the encoded feature map 512 to obtain a modified subset of the encoded feature map 512. A distinct 1×1 convolutional layer, a distinct ReLU 614, and a bilinear upsampling operation 616 are successively applied on the input feature map 516 to obtain a modified input feature map 516. The modified subset of the encoded feature map 512 and the modified input feature map 516 are concatenated to the combined feature map 520. Further, in some embodiments, the electronic device successively applies a 3×3 convolutional layer and a ReLU on the combined feature map 520 twice (618 and 620) to obtain the decoded feature map 518.
[0066] In some embodiments, in accordance with a determination that the target decoding stage 510 is the last stage of the series of decoding stages 510, the electronic device combines the decoded feature map 518 and the input image 502 to obtain an output image 504 that has one or more performance characteristics better than the input image 502. In accordance with a determination that the target decoding stage 510 is not the last stage of the series of decoding stages 510, the electronic device provides the decoded feature map 518 as a distinct input feature map 516 to the next decoding stage 510 that immediately follows the target decoding stage.
[0067] In some embodiments, in accordance with a determination that the target encoding stage 506 is the last stage of the series of encoding stages 506, the electronic device provides the encoded feature map 512 to a bottleneck network coupled between the series of encoding stages 506 and the series of decoding stages 510. In accordance with a determination that the target encoding stage 506 is not the last stage of the series of encoding stages 506, the electronic device applies a max pooling operation on the encoded feature map 512 before feeding the encoded feature map 512 to a second encoding stage 506 that immediately follows the target encoding stage 506.
[0068] Referring to Figure 8B, the electronic device obtains (852) an input image and processes (854) the input image by a series of encoding stages 506 and a series of decoding stages 510 to generate an output image. The series of encoding stages 506 include (856) a target encoding stage, and the series of decoding stages 510 include a target decoding stage 510 corresponding to the target encoding stage. At the target decoding stage, the electronic device obtains (858) an input feature map 516 that is generated from a total number of channels of an encoded feature map 512 outputted by the target encoding stage, extracts (860) from the memory an alternative number of channels 514 of the encoded feature map 512, and combines (862) the alternative number of channels 514 of the encoded feature map 512 and the input feature map 516 to a combined feature map 520. The alternative number is less than the total number. The electronic device converts (864) the combined feature map 520 to a decoded feature map 518, wherein the decoded feature map 518 is fed to a next decoding stage 510 that immediately follows the target decoding stage 510 or outputted from the series of decoding stages 510 to render the output image. In some embodiments, the operations performed on the target decoding stage 510 are performed on each and every decoding stage 510.
[0069] In some embodiments, at the target encoding stage, the electronic device generates the encoded feature map 512 having the total number of channels. In accordance with a predefined connection skipping rule, the alternative number of channels 514 are determined from the total number of channels of the encoded feature map 512. The electronic device temporarily stores the alternative number of channels 514 of the encoded feature map 512 in the memory, thereby allowing the alternative number of channels 514 to be extracted from the memory and combined with the input feature map 516 during the target decoding stage. Further, in some embodiments, in accordance with a determination that the target encoding stage 506 is the last stage of the series of encoding stages 506, the electronic device provides the encoded feature map 512 to a bottleneck network coupled between the series of encoding stages 506 and the series of decoding stages 510. In accordance with a determination that the target encoding stage 506 is not the last stage of the series of encoding stages 506, the electronic device applies a max pooling operation 606 on the encoded feature map 512 before feeding the encoded feature map 512 as a distinct start feature map 608 to the distinct encoding stage 506 that immediately follows the target encoding stage. Additionally, in some embodiments, at the target encoding stage, the electronic device receives a start feature map (e.g., a pooled feature map 608) and successively applies a 3×3 convolutional layer and a ReLU on the start feature map twice (602 and 604) to generate the encoded feature map 512.
[0070] In some embodiments, at the target decoding stage, the electronic device crops a dimension of each of the alternative number of channels 514 to match a dimension of the input feature map 516.
[0071] In some embodiments, the input feature map 516 is provided by a bottleneck network or a last decoding stage 510 that immediately precedes the target decoding stage. The electronic device applies a 1 x 1 convolutional layer and a rectified linear unit (ReLU) 612 on the alternative number of channels 514 of the encoded feature map 512 to obtain a modified subset of the encoded feature map 512, successively applies a distinct 1x1 convolutional layer, a distinct ReLU 614, and a bilinear upsampling operation 616 on the input feature map 516 to obtain a modified input feature map 516, and concatenates the modified subset of the encoded feature map 512 and the modified input feature map 516 to the combined feature map 520.
[0072] In some embodiments, the electronic device converts the combined feature map 520 to a decoded feature map 518 by successively applying a 3×3 convolutional layer and a ReLU on the combined feature map 520 twice (618 and 620) to obtain the decoded feature map 518.
[0073] In some embodiments, in accordance with a determination that the target decoding stage 510 is the last stage of the series of decoding stages 510, the electronic device combines (866) the decoded feature map 518 and the input image 502 to obtain an output image 504 that has one or more performance characteristics better than the input image 502. In accordance with a determination that the target decoding stage 510 is not the last stage of the series of decoding stages 510, the electronic device provides (868) the decoded feature map 518 as a distinct input feature map 516 to the next decoding stage 510 that immediately follows the target decoding stage.
[0074] Referring to Figures 8A and 8B, in some embodiments, in accordance with the predefined connection skipping rule, the alternative number of channels 514 and the total number of channels have a predefined ratio that is determined based on a hardware capability of the electronic device and the image resolution of the input image 502.
[0075] In some embodiments, in accordance with the predefined connection skipping rule, the alternative number of channels 514 are (814) successive channels within the total number of channels of the encoded feature map 512, and include one of a first channel and a last channel of the total number of channels. In some embodiments, in accordance with the predefined connection skipping rule, the alternative number of channels 514 are (816) successive channels within the total number of channels of the encoded feature map 512, and are spaced by the same number of channels from a first channel and a last channel of the total number of channels. In some embodiments, in accordance with the predefined connection skipping rule, the alternative number of channels 514 are (818) evenly distributed among the total number of channels of the encoded feature map 512, and each of the alternative number of channels 514 is selected from a fixed channel of every third number of successive channels. In some embodiments, in accordance with the predefined connection skipping rule, the total number of channels of the encoded feature map 512 is divided (820) into the alternative number of channel groups, and each of the alternative number of channels 514 is an average of successive channels in a distinct channel group. In some embodiments, for each of the total number of channels, the electronic device determines (822) a respective inner product, L1 norm, or Euclidean norm value of the respective channel and selects the alternative number of channels 514 having the smallest inner product, largest L1 norm, or largest Euclidean norm values among the total number of channels.
[0076] In some embodiments, all of the series of encoding stages 506 comply with the same predefined connection skipping rule. Conversely, in some embodiments, each of the series of encoding stages 506 complies with a respective predefined connection skipping rule, independently of remaining encoding stages 506 in the series of encoding stages 506.
[0077] In some embodiments, the series of encoding stages 506 has the same number of stages as the series of decoding stages 510. The target encoding stage 506 has a first position in the series of encoding stages 506, and the target decoding stage 510 has a second position in the series of decoding stages 510. The second position matches the first position. For example, the target encoding stage 506 is the first encoding stage 506A, and corresponds to the target decoding stage 510 that is the fourth decoding stage 510D.
[0078] In some embodiments, the electronic device divides an original image into a plurality of image tiles, and the plurality of image tiles includes an image tile corresponding to the input image 502.
[0079] It should be understood that the particular order in which the operations in Figure 8A or 8B have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to enhance images or reduce image noise as described herein. Additionally, it should be noted that details of other processes described above with respect to Figures 5-7 are also applicable in an analogous manner to methods 800 and 850 described above with respect to Figures 8A and 8B. For brevity, these details are not repeated here.
[0080] The terminology used in the description of the various described implementations herein is for the purpose of describing particular implementations only and is not intended to be limiting. As used in the description of the various described implementations and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Additionally, it will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
[0081] As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting” or “in accordance with a determination that,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected,” depending on the context.
[0082] The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain principles of operation and practical applications, to thereby enable others skilled in the art.
[0083] Although various drawings illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages can be implemented in hardware, firmware, software or any combination thereof.

Claims

What is claimed is:
1. An image processing method, implemented at an electronic device having memory, comprising: obtaining an input image; processing the input image by a series of encoding stages and a series of decoding stages to generate an output image, wherein the series of encoding stages include a target encoding stage, and the series of decoding stages include a target decoding stage corresponding to the target encoding stage, including at the target encoding stage: generating an encoded feature map having a total number of channels; in accordance with a predefined connection skipping rule, determining an alternative number of channels based on the total number of channels of the encoded feature map, the alternative number being less than the total number; and temporarily storing the alternative number of channels of the encoded feature map in the memory, thereby allowing the alternative number of channels to be extracted from the memory and combined with an input feature map of the target decoding stage.
2. The method of claim 1, processing the input image further comprising, at the target decoding stage: extracting from the memory the alternative number of channels of the encoded feature map; obtaining the input feature map; combining the alternative number of channels of the encoded feature map and the input feature map to a combined feature map; and converting the combined feature map to a decoded feature map, wherein the decoded feature map is fed to a next decoding stage that immediately follows the target decoding stage or outputted from the series of decoding stages.
3. The method of claim 2, further comprising, at the target decoding stage: cropping a dimension of each of the alternative number of channels to match a dimension of the input feature map.
4. The method of claim 2 or 3, wherein the input feature map is provided by a bottleneck network or a last decoding stage that immediately precedes the target decoding stage, combining the alternative number of channels of the encoded feature map and the input feature map to the combined feature map further comprising: applying a 1×1 convolutional layer and a rectified linear unit (ReLU) on the alternative number of channels of the encoded feature map to obtain a modified subset of the encoded feature map; successively applying a distinct 1×1 convolutional layer, a distinct ReLU, and a bilinear upsampling operation on the input feature map to obtain a modified input feature map; and concatenating the modified subset of the encoded feature map and the modified input feature map to the combined feature map.
5. The method of any of claims 2-4, converting the combined feature map to a decoded feature map further comprising: successively applying a 3x3 convolutional layer and a ReLU on the combined feature map twice to obtain the decoded feature map.
6. The method of claim 5, further comprising: in accordance with a determination that the target decoding stage is the last stage of the series of decoding stages, combining the decoded feature map and the input image to obtain an output image that has one or more performance characteristics better than the input image; in accordance with a determination that the target decoding stage is not the last stage of the series of decoding stages, providing the decoded feature map as a distinct input feature map to the next decoding stage that immediately follows the target decoding stage.
7. The method of any of the preceding claims, wherein in accordance with the predefined connection skipping rule, the alternative number of channels and the total number of channels have a predefined ratio, further comprising: determining the predefined ratio based on a hardware capability of the electronic device and an image resolution of the input image.
8. The method of any of claims 1-7, wherein in accordance with the predefined connection skipping rule, the alternative number of channels are successive channels within the total number of channels of the encoded feature map, and includes one of a first channel and a last channel of the total number of channels.
9. The method of any of claims 1-7, wherein in accordance with the predefined connection skipping rule, the alternative number of channels are successive channels within the total number of channels of the encoded feature map, and is spaced with the same number of channels from a first channel and a last channel of the total number of channels.
10. The method of any of claims 1-7, wherein in accordance with the predefined connection skipping rule, the alternative number of channels are evenly distributed among the total number of channels of the encoded feature map, and each of the alternative number of channels is selected from a fixed channel of every third number of successive channels.
11. The method of any of claims 1-7, wherein in accordance with the predefined connection skipping rule, the total number of channels of the encoded feature map is divided into the alternative number of channel groups, and each of the alternative number of channels is an average of successive channels in a distinct channel group.
12. The method of any of claims 1-7, determining the alternative number of channels based on the total number of channels further comprising, in accordance with the predefined connection skipping rule: for each of the total number of channels, determining a respective inner product value of the respective channel; and selecting the alternative number of channels having the smallest inner product values among the total number of channels.
13. The method of any of claims 1-12, wherein all of the series of encoding stages comply with the same predefined connection skipping rule.
14. The method of any of claims 1-12, wherein each of the series of encoding stages complies with a respective predefined connection skipping rule, independently of remaining encoding stages in the series of encoding stages.
15. The method of any of the preceding claims, wherein: the series of encoding stages has the same number of stages as the series of decoding stages; and the target encoding stage has a first position in the series of encoding stages, and the target decoding stage has a second position in the series of decoding stages, the second position matching the first position.
16. The method of any of the preceding claims, further comprising: in accordance with a determination that the target encoding stage is the last stage of the series of encoding stages, providing the encoded feature map to a bottleneck network coupled between the series of encoding stages and the series of decoding stages; in accordance with a determination that the target encoding stage is not the last stage of the series of encoding stages, applying a max pooling operation on the encoded feature map before feeding the encoded feature map to a second encoding stage that immediately follows the target encoding stage.
17. The method of any of the preceding claims, further comprising: dividing an original image into a plurality of image tiles, the plurality of image tiles including an image tile corresponding to the input image.
18. An electronic device, comprising: one or more processors; and memory having instructions stored thereon, which when executed by the one or more processors cause the processors to perform a method of any of claims 1-17.
19. A non-transitory computer-readable medium, having instructions stored thereon, which when executed by one or more processors cause the processors to perform a method of any of claims 1-17.