US11842265B2 - Processor chip and control methods thereof - Google Patents

Processor chip and control methods thereof

Info

Publication number
US11842265B2
US11842265B2 (application US16/906,130; US202016906130A)
Authority
US
United States
Prior art keywords
processor
processing unit
address information
input content
artificial intelligence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/906,130
Other versions
US20210049450A1 (en)
Inventor
Yongmin TAI
Insang CHO
Wonjae LEE
Chanyoung Hwang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US16/906,130
Publication of US20210049450A1
Application granted
Publication of US11842265B2
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/78 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
    • G06F 21/79 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/14 Protection against unauthorised use of memory or access to memory
    • G06F 12/1458 Protection against unauthorised use of memory or access to memory by checking the subject access rights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/76 Architectures of general purpose stored program computers
    • G06F 15/78 Architectures of general purpose stored program computers comprising a single central processing unit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/10 Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
    • G06F 21/109 Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM] by using specially-adapted hardware at the client
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/51 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems at application loading time, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or source reliability
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/71 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F 21/74 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information operating in dual or compartmented mode, i.e. at least one secure mode
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1052 Security improvement

Definitions

  • The disclosure relates to a processor chip and control methods thereof, for example, to a processor chip performing neural network processing and control methods thereof.
  • Using Neural Network technology such as Deep Learning, performances of Segmentation, Super-Resolution, HDR, or the like are being improved.
  • Technologies for improving image quality using Neural Network technology in electronic apparatuses may first be implemented in hardware through a digital circuit design such as a register-transfer level (RTL) design, or implemented in software using a processor such as a neural processing unit (NPU).
  • The method using a processor may use processors of various types, such as a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an NPU, or the like.
  • An NPU may be specialized in neural network processing and may output results quickly; due to its specialized accelerator, it is superior in performance and efficiency to processors of other types when performing neural network processing.
  • The NPU generally requires control by the CPU, and in particular receives input data and artificial intelligence model information to be applied to the input data from the CPU. Specifically, as shown in FIG. 1, the CPU initializes the NPU, provides the input data and the artificial intelligence model information stored in the memory to the NPU, and operates (triggers) the NPU.
  • The CPU may be overloaded if the NPU control is performed for each frame unit. That is, when the CPU is used for such control, real-time execution may be difficult.
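The conventional per-frame control flow described above can be sketched as follows. This is a minimal behavioral model, not the patent's implementation; all names (`Npu`, `cpu_control_loop`, the example addresses) are invented for illustration. It shows why the CPU's workload grows with the frame rate: the full initialize/provide/trigger sequence repeats for every frame.

```python
class Npu:
    """Hypothetical stand-in for an NPU that must be driven by the CPU."""
    def __init__(self):
        self.initialized = False

    def initialize(self, input_addr, model_addr):
        self.initialized = True
        self.input_addr = input_addr
        self.model_addr = model_addr

    def trigger(self):
        assert self.initialized
        return f"processed frame at {self.input_addr} with model at {self.model_addr}"

def cpu_control_loop(frame_addrs, model_addr):
    """The CPU repeats the full init/provide/trigger sequence per frame."""
    npu = Npu()
    results = []
    for addr in frame_addrs:
        npu.initialize(addr, model_addr)   # per-frame CPU work ...
        results.append(npu.trigger())      # ... which may block real-time use
    return results

print(len(cpu_control_loop([0x1000, 0x2000, 0x3000], 0xA000)))  # 3 frames handled
```

At, say, 60 frames per second, the CPU would execute this sequence 60 times a second on top of its other work, which is the overload the disclosure aims to avoid.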
  • Embodiments of the present disclosure address the above-described necessity, and provide a processor chip for more efficiently controlling a neural processing unit (NPU) and control methods thereof.
  • a processor chip configured to perform neural network processing includes a memory, a first processor configured to perform the neural network processing on data stored in the memory, a second processor and a third processor, wherein the second processor is configured to transmit a control signal to the first processor and the third processor to cause the first processor and the third processor to perform operations.
  • the second processor may be configured to transmit a start signal to the third processor to cause the third processor to provide information on an input content stored in the memory to the first processor, and to transmit an initializing signal to the first processor to cause the first processor to perform neural network processing on an input content based on information on the input content provided from the third processor and artificial intelligence model information stored in the memory.
  • the memory may be configured to include a secure area where access by the second processor is not possible and an unsecure area where access by the second processor is possible, wherein the second processor may be configured to transmit the start signal to the third processor to cause the third processor to identify address information of the input content by accessing the secure area and to provide address information of the input content to the first processor, and to transmit the initializing signal to the first processor to cause the first processor to perform neural network processing on the input content based on address information of the input content provided from the third processor and the artificial intelligence model information stored in the unsecure area.
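The signal flow claimed above can be sketched as follows, under assumed names (the classes, dictionaries, and addresses are illustrative, not from the patent): the second processor sends one initializing signal and one start signal, after which the third processor, which can read the secure area, feeds the input-content address to the first processor without further second-processor involvement.

```python
SECURE = {"input_content": 0x5000}      # second processor cannot read this area
UNSECURE = {"model": 0x9000}            # second processor can read this area

class FirstProcessor:                   # NPU-like neural network processor
    def __init__(self):
        self.model_addr = None
        self.done = []
    def initialize(self, model_addr):   # receives the initializing signal
        self.model_addr = model_addr
    def run(self, input_addr):          # processes content at the given address
        self.done.append((input_addr, self.model_addr))

class ThirdProcessor:                   # may access the secure area
    def on_start(self, first):          # receives the start signal
        addr = SECURE["input_content"]  # identify address info in the secure area
        first.run(addr)                 # provide it to the first processor

class SecondProcessor:                  # OS processor; unsecure area only
    def control(self, first, third):
        first.initialize(UNSECURE["model"])  # initializing signal
        third.on_start(first)                # start signal

first, third = FirstProcessor(), ThirdProcessor()
SecondProcessor().control(first, third)
print(first.done)  # [(20480, 36864)] -> (input address, model address)
```

The design point, as a sketch: the second processor never touches the secure content itself; it only orchestrates, which keeps secure data isolated from the OS-driven processor.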
  • the second processor may be configured to provide address information of one of a plurality of artificial intelligence models stored in the unsecure area to the first processor, and the first processor is configured to obtain the artificial intelligence model information based on address information of the artificial intelligence model provided from the second processor.
  • The second processor may be configured to transmit the start signal to the third processor to cause the third processor to provide address information of one of a plurality of artificial intelligence models stored in the unsecure area, based on the input content, to the first processor, and the first processor may be configured to obtain the artificial intelligence model information based on the address information of the artificial intelligence model provided by the third processor.
  • A communication interface comprising communication circuitry may be further included, and a plurality of frames included in the input content may be sequentially received through the communication interface and stored in the secure area of the memory, and the second processor may be configured to transmit the start signal to the third processor to cause the third processor to provide address information of the frames sequentially stored in the memory to the first processor at predetermined time intervals.
  • The second processor, based on a first application being executed, may be configured to transmit the start signal to the third processor to cause the third processor to provide information on the input content to the first processor.
  • The second processor, based on the first application being terminated, may be configured to transmit an end signal to the third processor to cause the third processor to stop the providing operation, and the third processor may be configured to control the first processor to stop the neural network processing operation, and may provide a signal indicating that the neural network processing operation is stopped to the second processor.
  • The second processor, based on a second application being executed, may be configured to access the unsecure area to identify address information of data corresponding to the second application, to provide the identified address information to the first processor, and to control the first processor to perform neural network processing on the data based on the address information of the data provided from the second processor and the artificial intelligence model information stored in the unsecure area.
  • A display may be further included, and the second processor may be configured to transmit the start signal to the third processor to cause the third processor to identify address information of the input content displayed through the display from among a plurality of input contents and to provide the identified address information to the first processor.
  • The first processor may include a neural processing unit (NPU), the second processor may include a processor configured to operate based on an operating system, and the third processor may include a processor configured to perform predetermined operations.
  • a method of controlling a processor chip including a memory, a first processor performing neural network processing for data stored in the memory, a second processor and a third processor, includes: transmitting, by the second processor, a control signal to the third processor to cause the third processor to perform an operation and transmitting, by the second processor, a control signal to the first processor to cause the first processor to perform an operation.
  • the transmitting a control signal to the third processor may include transmitting a start signal to the third processor to cause the third processor to provide information on an input content stored in the memory to the first processor, and transmitting a control signal to the first processor includes transmitting an initializing signal to the first processor to cause the first processor to perform the neural network processing on the input content based on the information on the input content provided from the third processor and artificial intelligence model information stored in the memory.
  • Transmitting a control signal to the third processor may include transmitting the start signal to the third processor to cause the third processor to access a secure area of the memory to identify address information of the input content and to provide the address information of the input content to the first processor, and transmitting a control signal to the first processor may include transmitting an initializing signal to the first processor to cause the first processor to perform the neural network processing on the input content based on the address information of the input content provided from the third processor and the artificial intelligence model information stored in the unsecure area of the memory.
  • providing address information of one of a plurality of artificial intelligence models stored in the unsecure area to the first processor by the second processor and obtaining the artificial intelligence model information based on the address information of the artificial intelligence model provided from the second processor by the first processor may be further included.
  • Transmitting the control signal to the third processor may include transmitting the start signal to the third processor to cause the third processor to provide address information of one of the plurality of artificial intelligence models stored in the unsecure area, based on the input content, to the first processor, and the control method may further include obtaining, by the first processor, the artificial intelligence model information based on the address information of the artificial intelligence model provided by the third processor.
  • Sequentially receiving a plurality of frames included in the input content and storing them in a secure area of the memory may be further included, and transmitting the control signal to the third processor may include transmitting the start signal to the third processor to cause the third processor to provide address information of the frames sequentially stored in the memory to the first processor at predetermined time intervals.
  • transmitting the control signal to the third processor may include, based on the first application being executed, transmitting the start signal to the third processor to cause the third processor to provide information on the input content to the first processor.
  • Transmitting an end signal from the second processor to the third processor to cause the third processor to terminate the providing operation, terminating the neural network processing operation by the first processor under control of the third processor, and providing a signal indicating that the neural network processing operation is terminated from the third processor to the second processor may be further included.
  • A method of controlling a processor chip including a memory, a first processor, a second processor and a third processor includes: transmitting information on data stored in the memory by the second processor to the first processor based on a first application being executed; transmitting a first initializing signal from the second processor to the first processor to cause the first processor to perform neural network processing on the data based on the information on the data provided by the second processor and first artificial intelligence model information stored in the memory; transmitting a start signal to the third processor from the second processor to cause the third processor to provide information on input content stored in the memory to the first processor based on the first application being terminated and a second application being executed; and transmitting a second initializing signal to the first processor from the second processor to cause the first processor to perform neural network processing on the input content based on the information on the input content provided from the third processor and second artificial intelligence model information stored in the memory.
  • the processor chip may reduce memory use based on performing neural network processing on the original content, improve output quality of content, and perform processing in real-time.
  • FIG. 1 is a diagram illustrating control methods of a neural processing unit (NPU) according to conventional technology.
  • FIG. 2 is a block diagram illustrating an example configuration of a processor chip according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram illustrating an example secure area and an unsecure area of a memory according to an embodiment of the present disclosure.
  • FIGS. 4A and 4B are diagrams illustrating example neural network processing according to an embodiment of the present disclosure.
  • FIGS. 5A and 5B are diagrams illustrating example neural network processing according to another embodiment of the present disclosure.
  • FIGS. 6A, 6B and 6C are diagrams illustrating example operations based on an application according to an embodiment of the present disclosure.
  • FIG. 7 is a flowchart illustrating an example method of controlling a processor chip according to an embodiment of the present disclosure.
  • FIG. 2 is a block diagram illustrating an example configuration of a processor chip 100 according to an embodiment of the present disclosure.
  • The processor chip 100 includes a memory 110, a first processor (e.g., including processing circuitry) 120, a second processor (e.g., including processing circuitry) 130, and a third processor (e.g., including processing circuitry) 140.
  • the processor chip 100 as an apparatus performing neural network processing, may be implemented, for example, and without limitation, in a TV, a smartphone, a tablet PC, a mobile phone, a video phone, an electronic book reader, a desktop PC, a laptop PC, a netbook computer, a work station, a server, a PDA, a portable multimedia player (PMP), an MP3 player, a wearable device, or the like.
  • The processor chip 100 may be configured to perform neural network processing on contents, and may be an apparatus that displays the image-processed content according to the neural network processing through a display. Further, the processor chip 100 may not have a separate display and may be an apparatus performing neural network processing on content. In this example, the processor chip 100 may provide the image-processed content according to the neural network processing to a display apparatus.
  • the memory 110 may be electrically connected with a first processor 120 , a second processor 130 and a third processor 140 , and may store data necessary for the various embodiments of the present disclosure.
  • the memory 110 may be implemented, for example, and without limitation, as an internal memory of a ROM (for example, electrically erasable programmable read-only memory (EEPROM)), a RAM or the like included in each of the first processor 120 , the second processor 130 and the third processor 140 , or implemented as a memory separate from the first processor 120 , the second processor 130 and the third processor 140 .
  • the memory 110 may be implemented in the form of an embedded memory in the processor chip 100 according to the data storage use or in the form of a detachable memory in the processor chip 100 .
  • The data may be stored in the embedded memory in the processor chip 100, and the data for an expansion function of the processor chip 100 may be stored in the detachable memory in the processor chip 100.
  • The embedded memory in the processor chip 100 may, for example, and without limitation, be implemented as at least one of a volatile memory (e.g., dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), etc.) or a non-volatile memory (e.g., one time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, flash ROM, flash memory (e.g., NAND flash or NOR flash, etc.), hard drive, solid state drive (SSD)), or the like, and the detachable memory in the processor chip 100 may be implemented, for example, and without limitation, in the form of a memory card (for example, compact flash (CF), secure digital (SD), micro secure digital (Micro-SD), mini secure digital (Mini-SD), extreme digital (xD), multi-media card (MMC), etc.) or an external memory (for example, USB memory) capable of connecting to a USB port, or the like.
  • the memory 110 may store an artificial intelligence model used for performing neural network processing on data.
  • The artificial intelligence model, as a model for upscaling the resolution of the input content to 8K, may be a model that has learned the relationship between the original image (8K) and a downscaled image (e.g., 4K) of the original image using, for example, and without limitation, a convolution neural network (CNN).
  • A CNN may refer, for example, to a multilayered neural network having a specialized connection structure designed for voice processing, image processing, and the like.
  • the artificial intelligence model may be a model based on various neural networks such as, for example, and without limitation, a recurrent neural network (RNN), deep neural network (DNN), and the like.
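The (original, downscaled) training pairs described above can be sketched in a few lines. This is a pure-Python illustration under assumed names; a real pipeline would use an image library and train an actual CNN on the pairs. Here a 2x average pooling stands in for the downscaling step.

```python
def downscale_2x(img):
    """Average-pool a 2D grid of pixel values by a factor of 2 in each dimension."""
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

# Tiny 4x4 stand-in for an 8K "original" frame
original = [[0, 2, 4, 6],
            [2, 4, 6, 8],
            [4, 6, 8, 10],
            [6, 8, 10, 12]]

# The model would learn the mapping from pair[0] (low-res input) to pair[1] (target)
pair = (downscale_2x(original), original)
print(pair[0])  # [[2.0, 6.0], [6.0, 10.0]]
```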
  • the memory 110 may store data, which may be applied with an artificial intelligence model.
  • the memory 110 may store content, which may be applied with an artificial intelligence model.
  • the memory 110 may include a secure area and an unsecure area.
  • the secure area may be an area that stores data requiring security
  • the unsecure area may be an area that stores data unrelated to security.
  • the artificial intelligence model may be stored in the unsecure area
  • the content to be applied with the artificial intelligence model may be stored in the secure area.
  • the present disclosure is not limited thereto, and the artificial intelligence model may be stored in a secure area if, for example, security is required. Further, the data to be applied with an artificial intelligence model may be stored in the unsecure area if, for example, unrelated to security.
  • the secure area and the unsecure area of the memory 110 may be divided in terms of software.
  • For example, the second processor 130 may be electrically connected with the memory 110, but, in terms of software, may not recognize the secure area corresponding to, for example, and without limitation, 30% of the memory.
  • However, the present disclosure is not limited thereto, and the secure area of the memory 110 may be implemented as a first memory, and the unsecure area of the memory 110 may be implemented as a second memory.
  • the secure area and the unsecure area of the memory 110 may be divided in terms of hardware.
  • the first processor 120 and the third processor 140 to be described hereafter may include various processing circuitry and access the secure area and the unsecure area of the memory 110 , but the second processor 130 may include various processing circuitry and be unable to access the secure area of the memory 110 , and may, for example, only be able to access the unsecure area.
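The access rule above can be sketched as a simple permission check. This is a hypothetical model, not the chip's hardware logic; the address windows and processor identifiers are invented for illustration. Addresses in the unsecure window are open to all three processors, while the secure window is hidden from the second processor.

```python
SECURE_RANGE = range(0x0000, 0x3000)    # assumed secure address window
UNSECURE_RANGE = range(0x3000, 0x8000)  # assumed unsecure address window

def may_access(processor_id, addr):
    """Return True if the named processor may access the given address."""
    if addr in UNSECURE_RANGE:
        return True                     # all processors may use the unsecure area
    if addr in SECURE_RANGE:
        # only the first (NPU) and third processors may touch the secure area
        return processor_id in ("first", "third")
    return False                        # address outside the memory map

print(may_access("second", 0x1000))  # False: the secure area is hidden from it
print(may_access("third", 0x1000))   # True
print(may_access("second", 0x4000))  # True: unsecure area is open to everyone
```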
  • the first processor 120 may be a processor performing neural network processing.
  • The first processor 120 may, for example, include a processor exclusively for neural network processing, such as a neural processing unit (NPU), and may include a plurality of processing elements. Between adjacent processing elements, a unidirectional shift or a bidirectional shift of data may be possible.
  • Each of the processing elements may, for example, and without limitation, generally include a multiplier and an arithmetic logic unit (ALU), and the ALU may include, for example, and without limitation, at least one adder.
  • The processing element may use the multiplier and the ALU to perform the four fundamental arithmetic operations. However, the processing elements are not limited thereto, and may include other structures capable of performing functions such as the four fundamental arithmetic operations and shifts.
  • each of the processing elements may include, for example, a register for storing data.
  • The first processor 120 may include various processing circuitry including, for example, a controller controlling the plurality of processing elements, and the controller may control the plurality of processing elements to perform, in parallel, the processing required in the neural network processing process.
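A processing element as described (a multiplier feeding an ALU with at least one adder, accumulating into a local register) can be sketched as a behavioral model. The class and driver below are illustrative, not the patent's circuit, and the "parallel" controller step is modeled sequentially.

```python
class ProcessingElement:
    """Behavioral sketch of one PE: multiplier + adder accumulating into a register."""
    def __init__(self):
        self.register = 0          # local register for storing data

    def mac(self, a, b):
        product = a * b            # the multiplier
        self.register += product   # the adder inside the ALU accumulates
        return self.register

# Controller driving several PEs (in hardware these would run in parallel)
pes = [ProcessingElement() for _ in range(4)]
weights, inputs = [1, 2, 3, 4], [10, 20, 30, 40]
partial_sums = [pe.mac(w, x) for pe, w, x in zip(pes, weights, inputs)]
print(partial_sums)  # [10, 40, 90, 160]
```

The multiply-accumulate (MAC) shown here is the core primitive of neural network layers, which is why PEs built from a multiplier and an adder suffice for the accelerated processing the disclosure describes.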
  • The first processor 120, under control of the second processor 130 or the third processor 140, may perform neural network processing.
  • the first processor 120 may perform neural network processing on data based on information on data provided by the second processor 130 or the third processor 140 and the artificial intelligence model information stored in the unsecure area.
  • the data may be data stored in the secure area or data stored in the unsecure area.
  • the first processor 120 may access the secure area.
  • the second processor 130 may include various processing circuitry and may generally control the operations of the processor chip 100 , and may, for example, be a processor that operates based on an operating system. However, the second processor 130 may be unable to access the secure area of the memory 110 and may, for example, only access the unsecure area.
  • The second processor 130 may be implemented as various processing circuitry, such as, for example, a microprocessor, a timing controller (TCON), or the like.
  • the second processor is not limited thereto, and may include, for example, and without limitation, one or more of a central processing unit (CPU), Micro Controller Unit (MCU), Micro processing unit (MPU), controller, application processor (AP), communication processor (CP), ARM processor, or the like, or may be defined by the corresponding terms.
  • the second processor 130 may be implemented as a System on Chip (SoC) integrated with a processing algorithm, a large scale integration (LSI), or in the form of a field programmable gate array (FPGA).
  • SoC System on Chip
  • LSI large scale integration
  • FPGA field programmable gate array
  • the second processor 130 may include a general-purpose processor.
  • the second processor 130 may transmit a control signal to the first processor 120 and the third processor 140 to cause the first processor 120 and the third processor 140 to perform operations.
  • the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 to provide information on the input content stored in the memory 110 to the first processor 120 .
  • the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 , capable of accessing the secure area, to identify address information of input content by accessing the secure area, and to provide address information of input content to the first processor 120 .
  • The third processor 140, capable of accessing the secure area, may be controlled to provide address information of the input content stored in the secure area to the first processor 120.
  • The third processor 140, unlike the second processor 130, is a processor that may control a part of the operations of the processor chip 100 as necessary, and may be a processor performing only predetermined operations, but is not limited thereto.
  • The third processor 140 may include, for example, a processor whose operations cannot be changed arbitrarily.
  • The third processor 140 may be a processor exclusively for signal processing, such as a digital signal processor (DSP), a graphics processing unit (GPU), and the like, or may be a processor performing image processing on input content as necessary.
  • the second processor 130 may transmit an initializing signal to the first processor 120 to cause the first processor to perform neural network processing on input content based on information on input content provided from the third processor 140 and artificial intelligence model information stored in the memory.
  • the second processor 130 may transmit an initializing signal to the first processor 120 to cause the first processor 120 to perform neural network processing on input content based on address information of input content provided from the third processor 140 and artificial intelligence model information stored in the unsecure area.
  • The second processor 130 may provide address information of one of the plurality of artificial intelligence models stored in the unsecure area to the first processor 120, and the first processor 120 may obtain the artificial intelligence model information based on the address information of the artificial intelligence model provided from the second processor 130.
  • The second processor 130 may control the third processor 140 to identify address information of the input content by accessing the secure area and to provide the address information of the input content to the first processor 120. Further, the second processor 130 may identify the address information of the artificial intelligence model performing 8K upscaling by accessing the unsecure area and may provide the address information of the artificial intelligence model to the first processor 120. Further, the first processor 120 may perform the neural network processing on the input content based on the address information of the input content provided from the third processor 140 and the address information of the artificial intelligence model.
  • the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 to provide address information on one from the plurality of artificial intelligence models stored in the unsecure area based on the input content to the first processor 120 , and the first processor 120 may obtain the artificial intelligence model information based on the address information of the artificial intelligence model provided from the third processor 140 .
  • the third processor 140 may analyze the input content and identify the type of image processing necessary for the input content.
  • based on the resolution of the input content being lower than the resolution of the display, the third processor 140 may identify address information of the artificial intelligence model performing upscaling, and based on the quality being low despite the resolution of the input content and the resolution of the display being identical, may identify address information of the artificial intelligence model removing noise from the image.
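The selection logic described above can be sketched roughly as follows. The table of model addresses, the function name, and the quality threshold are all hypothetical, purely for illustration; the disclosure does not specify any concrete values.

```python
# Hypothetical addresses of artificial intelligence models in the unsecure
# area; the names and values are illustrative, not from the disclosure.
MODEL_ADDRESSES = {
    "upscale": 0x1000,   # model performing upscaling to the display resolution
    "denoise": 0x2000,   # model removing noise from the image
}

def select_model_address(content_res, display_res, quality, threshold=0.5):
    """Return the address of the model the third processor would pick."""
    cw, ch = content_res
    dw, dh = display_res
    if cw * ch < dw * dh:          # content resolution lower than display
        return MODEL_ADDRESSES["upscale"]
    if quality < threshold:        # same resolution, but quality is low
        return MODEL_ADDRESSES["denoise"]
    return None                    # no neural network processing required
```

For example, 4K content on an 8K display would select the upscaling model, while 8K content of poor quality would select the noise-removal model.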
  • the address information of the artificial intelligence model may be provided by the second processor 130, or may be provided after the third processor 140 analyzes the input content.
  • the artificial intelligence model provided by the second processor 130 may be an artificial intelligence model corresponding to user selection.
  • based on the user wanting to play the input content in cinema mode, the second processor 130 may identify address information of the artificial intelligence model corresponding to cinema mode.
  • the disclosure is not limited thereto, and the second processor 130 may identify surrounding environment information using a sensor and may identify address information of the artificial intelligence model corresponding to the surrounding environment information.
  • the processor chip 100 may further include a communication interface (e.g., including communication circuitry) (not shown), a plurality of frames included in the input content may be sequentially received through the communication interface and stored in the secure area of the memory 110, and the second processor 130 may transmit a start signal to cause the third processor 140 to provide address information of the frames sequentially stored in the memory 110 to the first processor 120 at predetermined time intervals.
  • the communication interface may include various communication circuitry and may be configured to perform communication with various types of external apparatuses based on various types of communication methods.
  • the communication interface may include, for example, and without limitation, a tuner that receives radio frequency (RF) broadcast signals through an antenna by tuning to a channel selected by the user or to all pre-stored channels.
  • the processor chip 100 may further include a demodulator that receives and demodulates a digital IF (DIF) signal converted by the tuner and performs channel decoding or the like.
  • the communication interface may include various configurations including various circuitry for performing wireless communication such as a WiFi chip, a Bluetooth chip, a wireless communication chip, and an NFC chip.
  • a WiFi chip and a Bluetooth chip may perform communication through a WiFi method and a Bluetooth method, respectively. Based on a WiFi chip or a Bluetooth chip being used, various connection information such as an SSID and a session key may first be transmitted and received, and various information may then be transmitted and received after a communication connection is established using the above.
  • a wireless communication chip may refer, for example, to a chip performing communication according to various communication protocols such as IEEE, ZigBee, 3rd Generation (3G), 3rd Generation Partnership Project (3GPP), and Long Term Evolution (LTE).
  • An NFC chip refers to a chip that operates by a near field communication (NFC) method using a 13.56 MHz band from various RF-ID frequency bands such as 135 kHz, 13.56 MHz, 433 MHz, 860-960 MHz, and 2.45 GHz.
  • the communication interface may also include circuitry for performing wired communication such as HDMI, MHL, USB, DP, Thunderbolt, RGB, D-SUB, and DVI.
  • the processor chip 100 may receive a content playback screen from an external apparatus through the communication interface or transmit the content playback screen to the external apparatus through the communication interface.
  • the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 to provide information on the input content to the first processor 120 .
  • the second processor 130 may transmit an end signal to the third processor 140 to cause the third processor 140 to terminate the providing operation, and the third processor 140 may perform control to cause the first processor 120 to terminate the neural network processing operation and may provide a signal indicating the neural network processing operation is terminated to the second processor 130.
  • the second processor 130 may access the unsecure area to identify address information of the data corresponding to the second application and may provide the identified address information to the first processor 120 , and the first processor 120 may perform control to perform neural network processing on data based on address information of data provided from the second processor 130 and the artificial intelligence model information stored in the unsecure area.
  • the first application may, for example, be an application related to image processing of content
  • the second application may, for example, be an application unrelated to content.
  • based on the first application related to content being executed, the second processor 130 may control the third processor 140 to obtain address information of the content stored in the secure area, and based on the second application unrelated to content being executed, may directly obtain address information of the data stored in the unsecure area.
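As a rough illustration of this dispatch, the sketch below routes content-related requests through the DSP-side lookup (which can reach the secure area) and non-content requests through the CPU's direct lookup. All names are hypothetical stand-ins, not identifiers from the disclosure.

```python
def obtain_data_address(uses_content, secure_lookup, unsecure_lookup, key):
    """Return the address of the data an application needs.

    Content data resides in the secure area, which only the third
    processor (DSP) can read, so content-related requests go through the
    DSP-side lookup; other data is read directly by the second processor
    (CPU) from the unsecure area.
    """
    if uses_content:
        return secure_lookup(key)    # path via the third processor (DSP)
    return unsecure_lookup(key)      # direct path of the second processor (CPU)
```

For example, with `secure_lookup = {"movie": 0xA0}.get` and `unsecure_lookup = {"lux": 0xB0}.get`, a content-related application resolves through the first table and any other application through the second.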
  • the processor chip 100 may further include a display (not shown), and the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 to identify address information of input content displayed through a display from a plurality of input contents, and to provide the identified address information to the first processor 120 .
  • the processor chip 100 may receive, for example, a first input content from an external apparatus connected to a high-definition multimedia interface (HDMI), a second input content from the broadcast station through a tuner, and may store the first input content and the second input content in the secure area.
  • the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 to identify address information of input content displayed through the display from the first input content and the second input content, and to provide the identified address information to the first processor 120 .
  • the display may be implemented as displays of various types such as, for example, and without limitation, a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display panel (PDP), or the like.
  • the display may also include a driving circuit, a backlight unit, or the like, which may be implemented in forms such as an a-Si TFT, a low temperature polysilicon (LTPS) TFT, and an organic TFT (OTFT).
  • the display may be implemented as a touch screen by being coupled with a touch sensor.
  • the second processor 130, after initializing the first processor 120, may transmit a start signal to the third processor 140 to cause the third processor 140 to identify address information of the input content by accessing the secure area and to provide the identified address information to the first processor 120.
  • the second processor 130 or the third processor 140 may obtain the input content or the artificial intelligence model itself and may provide it directly to the first processor 120.
  • the problem of the second processor 130 being unable to access the secure area may be addressed by a method of using the third processor 140 capable of accessing the secure area.
  • FIG. 3 is a diagram illustrating an example secure area and unsecure area of a memory 110 according to an embodiment of the present disclosure.
  • the second processor 130 operates such that it is possible to store and display input content even if access to the secure area of the memory 110 is not possible.
  • the processor chip 100 may receive input content.
  • the receiving of input content is shown, for example, as streaming, and this may refer, for example, to the input content being received in real time. Further, streaming is not limited to receiving the input content through the Internet; receiving content from an external apparatus through a wired connection may also be referred to as streaming.
  • the processor chip 100 may receive encoded input content from the outside, and may decode the input content through a decoder (e.g., including decoding circuitry and/or executable program elements). Further, the decoded content may be stored in the secure area of the memory 110 .
  • the decoded content may be stored in different areas within the secure area. For example, based on the decoded content being received through the tuner, the decoded content may be stored in a first area of the secure area, and based on the decoded content being received from an external apparatus connected through HDMI, the decoded content may be stored in a second area of the secure area.
  • the secure area may be divided into a plurality of areas divided by the provided sources of the content, and the third processor 140 may identify information on the divided plurality of areas of the secure area.
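One way to picture the source-based division is a small mapping from content source to secure sub-area, sketched below. The area names and the storage model are hypothetical; the disclosure only states that the secure area is divided by the providing source.

```python
# Hypothetical division of the secure area by content source.
SECURE_AREA_BY_SOURCE = {
    "tuner": "secure_area_1",   # content received through the tuner
    "hdmi":  "secure_area_2",   # content from an apparatus connected via HDMI
}

def store_decoded_frame(memory, source, frame):
    """Store a decoded frame in the secure sub-area assigned to its source."""
    area = SECURE_AREA_BY_SOURCE[source]
    memory.setdefault(area, []).append(frame)
    return area
```

The third processor, which knows this division, can then resolve a frame's address from its source alone.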
  • the content stored in the secure area may be displayed after image processing.
  • the image processing may be performed by a digital circuit such as, for example, and without limitation, a register-transfer level (RTL), the first processor 120 , or the third processor 140 .
  • a digital circuit such as a register-transfer level (RTL) may be used when possibility of change is low, and the third processor 140 may be used when the possibility of change is high.
  • for example, frame rate conversion (FRC) is a complex algorithm, but because only a predetermined number of modes exist (e.g., 30 Hz, 60 Hz, 120 Hz), implementing it in a digital circuit such as RTL is possible, and in this case may be advantageous from a power and production cost aspect rather than using the third processor 140.
  • in contrast, based on the possibility of change being high, using the third processor 140 may be more advantageous than using a digital circuit such as RTL.
  • the first processor 120 may perform image processing using the artificial intelligence model.
  • the above-described operations such as streaming, decoding, and storing may be implemented through a digital circuit.
  • the second processor 130 may control each digital circuit and may perform processing on input content, but direct access to the input content is not allowed. For example, even if the second processor 130 is unable to access the secure area, the operations of displaying the input content may be performed.
  • FIGS. 4 A and 4 B are diagrams illustrating example neural network processing according to an embodiment of the present disclosure.
  • the second processor (CPU) 130 may perform booting using an operating system (O/S), and may perform various operations such as control through various menus input by the user and executing applications using various programs, contents, data, or the like stored in the unsecure area (Storage 1).
  • the second processor 130 may be able to access the unsecure area, but may be unable to access the secure area (Storage 2).
  • data requiring security, such as content under copyright protection (digital rights management (DRM)) or private information such as input content received from outside the processor chip 100, may be stored in the secure area, and the second processor 130 may not read the data requiring security.
  • the third processor 140 may not use an O/S, and may use a pre-stored program to perform the predetermined operations. Accordingly, the third processor 140 may access the secure area storing the data requiring security.
  • the first processor 120, as a processor that may be used for neural network processing, may not use an O/S, and may be able to access the secure area.
  • the third processor 140 may be controlled to perform neural network processing on input content.
  • the second processor 130 may perform initializing of the first processor 120 .
  • a reason that the second processor 130 may initialize the first processor 120 may be that the second processor 130 controls the TV system through the O/S.
  • the O/S manages resources of each program through the driver, and this is an effective system for managing resources such as the memory 110.
  • the second processor 130 may effectively manage the resource by initializing the first processor 120 through the driver.
  • the second processor 130 may control the third processor 140 to identify address information of the input content by accessing the secure area and to provide address information of the input content to the first processor 120 .
  • the control operations of the third processor 140 by the second processor 130 may, for example, be a one-time control operation.
  • the third processor 140 may control the first processor 120 to perform neural network processing on input content based on the address information of the input content provided from the third processor 140 and the artificial intelligence model information stored in the unsecure area.
  • the control operations of the first processor 120 by the third processor 140 may include a control operation that is repeated at predetermined time intervals.
  • the third processor 140, based on the input content being an image of 30 fps, may provide input to the first processor 120 at 33.3 ms intervals and may control the first processor 120 to perform neural network processing.
  • the third processor 140 may provide address information of each frame for every frame of the input content to the first processor 120 , and may control (trigger) to perform neural network processing.
  • the present disclosure is not limited thereto, and the third processor 140 may obtain each frame of the input content, provide the obtained frame to the first processor 120 , and may control to perform neural network processing.
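The per-frame triggering described above can be sketched as a simple paced loop. Here `trigger` stands in for whatever mechanism actually starts the first processor on one frame, and `time.sleep` stands in for hardware timer pacing; both are illustrative assumptions.

```python
import time

def trigger_per_frame(frame_addresses, fps, trigger):
    """Provide each frame's address to the first processor at fixed intervals.

    For 30 fps content the interval works out to about 33.3 ms; in real
    hardware the pacing would come from a timer interrupt, not sleep().
    """
    interval = 1.0 / fps
    for address in frame_addresses:
        trigger(address)        # hand one frame address to the NPU
        time.sleep(interval)    # wait until the next frame is due
    return interval
```

With `fps=30`, `interval` is 1/30 s, matching the 33.3 ms figure above.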
  • the artificial intelligence model used when performing neural network processing by the first processor 120 may be provided by the second processor 130 .
  • the artificial intelligence model may be stored in the unsecure area, and the second processor 130 may access the unsecure area to provide address information of the artificial intelligence model to the first processor 120 .
  • the third processor 140 and not the second processor 130 may identify address information on the input content, and may control the neural network processing operation of the first processor 120 . Accordingly, a neural network processing may be performed in real-time using the first processor 120 while maintaining security of the input content and reducing the processing burden of the second processor 130 .
  • the memory 110 is described as including a secure area and an unsecure area, but is not limited thereto.
  • the processor chip 100 may include a first memory 111 and a second memory 112 divided, for example, in terms of hardware.
  • the first memory 111 as a memory storing non-secure data, may be a memory that is accessible by the second processor 130 .
  • the second memory 112 as a memory storing secure data, may be a memory that is not accessible by the second processor 130 .
  • the first processor 120 and the third processor 140 may access the first memory 111 and the second memory 112 .
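The access rules of the hardware-divided memories can be summarized in a small table, sketched below. The processor and memory labels are shorthand for the elements of FIG. 4B, not identifiers from the disclosure.

```python
# Which processor may access which hardware-divided memory: the second
# processor (CPU) cannot reach the secure second memory, while the first
# (NPU) and third (DSP) processors can reach both.
ACCESS = {
    "first (NPU)":  {"memory1": True, "memory2": True},
    "second (CPU)": {"memory1": True, "memory2": False},
    "third (DSP)":  {"memory1": True, "memory2": True},
}

def can_access(processor, memory):
    """Return whether the given processor may read the given memory."""
    return ACCESS[processor][memory]
```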
  • the operations of FIG. 4 B, other than the memory being divided in terms of hardware, are identical to or similar to the operations of FIG. 4 A, and thus the overlapping description thereof may not be repeated here.
  • FIGS. 5 A and 5 B are diagrams illustrating example neural network processing according to another embodiment of the present disclosure.
  • FIG. 5 A is virtually similar to the operations of FIG. 4 A , but is a diagram illustrating an example case in which the subject identifying the address information of the artificial intelligence model and providing the address information to the first processor 120 is the third processor 140 .
  • the third processor 140 may access the input content stored in the secure area (Storage 2), and thus an analysis of the input content is possible. Accordingly, the third processor 140 may identify the artificial intelligence model optimized for the input content. For example, based on noise in the input content being determined to be problematic, the third processor 140 may identify, in the unsecure area (Storage 1), address information of the artificial intelligence model for removing noise, and based on the resolution being insufficient, may identify address information of the artificial intelligence model for extending resolution. The third processor 140 may provide the address information of the artificial intelligence model to the first processor 120.
  • the memory 110 in FIG. 5 A is described as including a secure area and an unsecure area, but is not limited thereto.
  • the processor chip 100 may include a first memory 111 and a second memory 112 divided, for example, in terms of hardware, and the specific description thereof overlapping with FIG. 4 B may not be repeated here.
  • FIGS. 6 A, 6 B and 6 C are diagrams illustrating example operations based on an application according to an embodiment of the present disclosure.
  • the second processor 130 may provide information on data stored in the memory 110 and information on the artificial intelligence model to the first processor 120 .
  • the second processor 130 may control the first processor 120 to perform neural network processing on data based on the provided information.
  • the second processor 130 may identify first address information of the data stored in the unsecure area and second address information of the artificial intelligence model, and may provide the first address information and the second address information to the first processor 120.
  • the second processor 130 may control the first processor 120 to perform neural network processing on data based on the first address information and the second address information.
  • application A may, for example, be an application that receives an ambient illuminance value sensed through a sensor as input and processes the most appropriate brightness.
  • the ambient illuminance value may not be data requiring security and thus may be stored in the unsecure area.
  • the second processor 130 may identify the first address information of ambient illuminance value and the second address information of the artificial intelligence model corresponding to application A, and may provide the first address information and the second address information to the first processor 120 .
  • the second processor 130 may control the first processor 120 to process the most appropriate brightness corresponding to the ambient illuminance value based on the first address information and the second address information.
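Under the stated assumption that both the illuminance value and the model live in the unsecure area, the CPU-only path for application A might look like the following sketch. All names, the toy model, and its parameters are hypothetical illustrations, not part of the disclosure.

```python
def run_application_a(memory, lux_addr, model_addr, npu_run):
    """Second-processor path: no third processor is involved.

    The CPU identifies both addresses in the unsecure area and hands
    them to the first processor, which applies the model to the input.
    """
    lux = memory[lux_addr]        # first address information -> illuminance
    model = memory[model_addr]    # second address information -> model
    return npu_run(model, lux)    # neural network processing on the NPU

# A stand-in "model": derive a brightness from illuminance and clamp it.
def toy_brightness_model(model, lux):
    return min(model["max_brightness"], lux // model["divisor"])
```

For example, with an illuminance of 500 lux and the toy parameters `{"max_brightness": 100, "divisor": 10}`, the path yields a brightness of 50.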
  • the second processor 130 may not have to use the third processor 140 .
  • the second processor 130 may control the third processor 140 to provide information on input content stored in the memory 110 to the first processor 120 .
  • the second processor 130 may control the third processor 140 to identify address information of input content by accessing the secure area (Storage 2), and provide address information of the input content to the first processor 120 .
  • the third processor 140 may control the first processor 120 to perform neural network processing on input content based on information on input content provided from the third processor 140 and the artificial intelligence model information stored in the unsecure area (Storage 1).
  • the third processor 140 may control the first processor 120 to perform neural network processing on input content based on address information of input content provided from the third processor 140 and the artificial intelligence model information stored in the unsecure area (Storage 1).
  • the third processor 140 may repeat the operations of reading the input content, received at predetermined intervals, from the secure area, performing image processing through the first processor 120, and then storing the result in the secure area.
  • the second processor 130 after initializing the first processor 120 , may delegate operations thereafter to the third processor 140 .
  • the second processor 130 may be a processor that controls the overall operations of the processor chip 100, whereas the third processor 140 is a processor used only in necessary cases, and thus the third processor 140 may be more advantageous for image processing in real time.
  • the memory 110 is described as including a secure area and an unsecure area, but is not limited thereto.
  • the processor chip 100 may include a first memory 111 and a second memory 112 divided in terms of a hardware, and the specific description thereof overlapping with FIG. 4 B may not be repeated here.
  • FIG. 7 is a flowchart illustrating an example method of controlling a processor chip according to an embodiment of the present disclosure.
  • a method of controlling a processor chip including a memory, a first processor performing neural network processing on data stored in the memory, a second processor and a third processor includes the second processor transmitting a control signal to the third processor to cause the third processor to perform an operation (S 710 ). Further, a control signal may be transmitted to the first processor by the second processor to cause the first processor to perform an operation (S 720 ).
  • transmitting a control signal to the third processor (S 710) may include transmitting a start signal to the third processor to cause the third processor to provide information on input content stored in the memory to the first processor, and transmitting a control signal to the first processor (S 720) may include transmitting an initializing signal to the first processor to cause the first processor to perform neural network processing on the input content based on the information on the input content provided from the third processor and the artificial intelligence model information stored in the memory.
  • transmitting a control signal to the third processor (S 710) may include identifying address information of the input content by accessing the secure area of the memory and transmitting a start signal to the third processor to provide the address information of the input content to the first processor, and transmitting a control signal to the first processor (S 720) may include transmitting an initializing signal to the first processor to cause the first processor to perform neural network processing on the input content based on the address information of the input content provided from the third processor and the artificial intelligence model information stored in the unsecure area of the memory.
  • the providing address information on one from a plurality of artificial intelligence models stored in the unsecure area to the first processor by the second processor and obtaining the artificial intelligence model information based on address information of the artificial intelligence model provided from the second processor by the first processor may be further included.
  • transmitting a control signal to the third processor may include transmitting a start signal to the third processor to cause the third processor to provide address information on one of the plurality of artificial intelligence models stored in the unsecure area to the first processor based on the input content, and the control method may further include the first processor obtaining artificial intelligence model information based on the address information of the artificial intelligence model provided from the third processor.
  • Sequentially receiving a plurality of frames included in the input content and storing in the secure area of the memory may further be included, and transmitting a control signal to the third processor (S 710 ) may include transmitting a start signal to the third processor for the third processor to provide address information of frames sequentially stored in the memory at predetermined time intervals to the first processor.
  • transmitting a control signal to the third processor may include, based on a first application being executed, transmitting a start signal to the third processor to cause the third processor to provide information on the input content to the first processor.
  • transmitting an end signal to the third processor by the second processor to cause the third processor to terminate the providing operation, terminating the neural network processing operation by the first processor under the control of the third processor, and providing a signal indicating the neural network processing operation is terminated by the third processor to the second processor may be further included.
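The control flow of FIG. 7 can be sketched end to end as follows. The processor objects here are stand-ins expressed as plain Python callables, not an actual hardware interface; the signal names mirror steps S 710 and S 720 above.

```python
def control_method(cpu_send, dsp_identify_address, dsp_provide, npu_process):
    """Sketch of the control method of FIG. 7.

    S 710: the second processor signals the third processor, which
    identifies the content address in the secure area and provides it to
    the first processor.  S 720: the second processor initializes the
    first processor, which then processes the content.
    """
    cpu_send("start")                    # S 710: start signal to the DSP
    address = dsp_identify_address()     # DSP reads the secure area
    dsp_provide(address)                 # DSP hands the address to the NPU
    cpu_send("initialize")               # S 720: initializing signal to the NPU
    return npu_process(address)          # NPU applies the model to the content
```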
  • the processor chip may, for example, use a DSP rather than a CPU to perform neural network processing on original content to reduce memory use, improve output quality of content, and perform processing in real-time.
  • the memory, the first processor, the second processor and the third processor are described as being implemented in a processor chip for the sake of convenience and ease of description, but are not limited thereto.
  • an electronic apparatus may, for example, and without limitation, be implemented as including a memory, CPU, DSP and NPU as a separate configuration.
  • the various example embodiments described above may be implemented as software including instructions stored on a machine-readable storage medium readable by a machine (e.g., a computer).
  • the machine, as an apparatus capable of calling an instruction stored in a storage medium and operating according to the called instruction, may include an electronic device (e.g., an electronic apparatus) according to the disclosed embodiments.
  • the processor may directly, or using other elements under the control of the processor, perform a function corresponding to the instruction.
  • the instruction may include a code generated by a compiler or a code executable by an interpreter.
  • the machine-readable storage medium may be provided in the form of a non-transitory storage medium.
  • the ‘non-transitory’ storage medium is tangible and may not include a signal, but this term does not distinguish between data being stored semi-permanently or temporarily in the storage medium.
  • the method according to various embodiments disclosed herein may be provided in a computer program product.
  • a computer program product may be exchanged between a seller and a purchaser as a commodity.
  • a computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)) or distributed online through an application store (e.g., Play Store™).
  • at least a portion of the computer program product may be at least temporarily stored in a storage medium such as a manufacturer's server, a server of an application store, or a memory of a relay server, or temporarily generated.
  • the various embodiments described above may be implemented in a computer or similar apparatus using software, hardware, or a combination of software and hardware.
  • the embodiments described herein may be implemented as a processor itself.
  • the embodiments according to the processes and functions described in the present disclosure may be implemented as separate software modules. Each of the software modules may perform one or more functions or operations described in the present disclosure.
  • the computer instructions for performing the processing operations of another apparatus according to the various embodiments described above may be stored in a non-transitory computer-readable medium.
  • the computer instructions stored in such a non-transitory computer-readable medium, based on being executed by the processor of a specific device, may cause the specific device to perform the processing operations of another device according to the various embodiments described above.
  • the non-transitory computer readable medium may refer, for example, to a medium that stores data semi-permanently, and is readable by an apparatus.
  • Examples of a non-transitory computer-readable medium may include a compact disc (CD), a digital versatile disc (DVD), a hard disc, a Blu-ray disc, a universal serial bus (USB), a memory card, a read only memory (ROM), and the like.
  • each of the elements (e.g., modules or programs) may be composed of a single entity or a plurality of entities, some of the abovementioned sub-elements may be omitted, or another sub-element may be further included in various embodiments.
  • operations performed by a module, a program, or another element in accordance with the various embodiments may be performed sequentially, in parallel, repetitively, or heuristically, or at least some operations may be performed in a different order or omitted, or a different operation may be further included.


Abstract

Disclosed is a processor chip configured to perform neural network processing. The processor chip includes a memory, a first processor configured to perform neural network processing on data stored in the memory, a second processor and a third processor, and the second processor is configured to transmit a control signal to the first processor and the third processor to cause the first processor and the third processor to perform an operation.

Description

CROSS-REFERENCE TO RELATED APPLICATION
This application is a continuation of application Ser. No. 16/747,989, filed Jan. 21, 2020, which claims priority to Korean Patent Application No. 10-2019-0099124, filed on Aug. 13, 2019, the entire contents of each of which are all hereby incorporated herein by reference in their entireties.
BACKGROUND 1. Field
The disclosure relates to a processor chip and control methods thereof, and, for example, to a processor chip performing neural network processing and control methods thereof.
2. Description of Related Art
Recently, electronic apparatuses that provide various experiences using Neural Network technology such as Deep Learning are being developed. Specifically, the performance of Segmentation, Super-Resolution, HDR, and the like is being improved using Neural Network technology.
Technologies for improving image quality using Neural Network technology in electronic apparatuses may be implemented either in hardware through a digital circuit design such as a register-transfer level (RTL) design, or in software using a processor such as a neural processing unit (NPU).
Of the above, the method using a processor may use processors of various types such as a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an NPU, or the like. In particular, the NPU is specialized in neural network processing and may output results quickly, and is superior in performance and efficiency to processors of other types when performing neural network processing, due to a specialized accelerator.
The NPU generally requires control by the CPU, and in particular receives, from the CPU, input data and artificial intelligence model information to be applied to the input data. Specifically, as shown in FIG. 1, the CPU initializes the NPU, provides the input data and the artificial intelligence model information stored in the memory to the NPU, and operates (triggers) the NPU.
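The conventional CPU-driven control sequence described above may be sketched, for purposes of illustration only, as follows; the class and method names (`Npu`, `initialize`, `load`, `trigger`) are hypothetical and not taken from this document:

```python
# Illustrative sketch of the conventional control flow of FIG. 1:
# the CPU initializes the NPU, provides input data and model
# information from memory, then triggers the NPU.

class Npu:
    def __init__(self):
        self.initialized = False
        self.input_data = None
        self.model_info = None

    def initialize(self):
        self.initialized = True

    def load(self, input_data, model_info):
        self.input_data = input_data
        self.model_info = model_info

    def trigger(self):
        # Inference may run only after the CPU has set everything up.
        if not (self.initialized and self.input_data and self.model_info):
            raise RuntimeError("NPU not ready")
        return f"processed({self.input_data}, {self.model_info})"


def cpu_controls_npu(memory: dict) -> str:
    """The CPU initializes the NPU, provides data and model info, and triggers it."""
    npu = Npu()
    npu.initialize()
    npu.load(memory["input_data"], memory["model_info"])
    return npu.trigger()
```

For example, `cpu_controls_npu({"input_data": "frame0", "model_info": "cnn_8k"})` would perform the three steps in order.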
However, based on the electronic apparatus using content as input data, the CPU cannot directly access the original of the content, for the security of the content.
As a method to bypass the above, methods such as using a module that copies the original of the content from a secure area (trust zone) to a common area, or accessing a downscaled content, may be employed. In this case, problems such as overuse of memory, delay in operation time, deterioration of content quality, or the like may occur.
In addition, based on the CPU in the electronic apparatus performing various operations, the CPU may be overloaded if NPU control is performed at each frame. That is, based on the CPU being used, real-time execution may be difficult.
As indicated above, based on a CPU of an electronic apparatus controlling an NPU, various problems may arise.
SUMMARY
Embodiments of the present disclosure address the above-described necessity, and provide a processor chip for more efficiently controlling a neural processing unit (NPU) and control methods thereof.
According to an example embodiment of the present disclosure, a processor chip configured to perform neural network processing includes a memory, a first processor configured to perform the neural network processing on data stored in the memory, a second processor, and a third processor, wherein the second processor is configured to transmit a control signal to the first processor and the third processor to cause the first processor and the third processor to perform operations.
In addition, the second processor may be configured to transmit a start signal to the third processor to cause the third processor to provide information on an input content stored in the memory to the first processor, and to transmit an initializing signal to the first processor to cause the first processor to perform neural network processing on an input content based on information on the input content provided from the third processor and artificial intelligence model information stored in the memory.
Further, the memory may be configured to include a secure area where access by the second processor is not possible and an unsecure area where access by the second processor is possible, wherein the second processor may be configured to transmit the start signal to the third processor to cause the third processor to identify address information of the input content by accessing the secure area and to provide address information of the input content to the first processor, and to transmit the initializing signal to the first processor to cause the first processor to perform neural network processing on the input content based on address information of the input content provided from the third processor and the artificial intelligence model information stored in the unsecure area.
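The signal flow of this embodiment may be sketched, under stated assumptions, as follows: the second processor cannot access the secure area, so it only issues signals; the third processor resolves the content address from the secure area and passes it to the first processor. All names and addresses below are illustrative:

```python
# Hedged sketch of the claimed control flow: start signal to the
# third processor, initializing signal to the first processor.

SECURE = {"input_content_addr": 0x1000}    # secure area (input content)
UNSECURE = {"ai_model_addr": 0x2000}       # unsecure area (AI model)

class FirstProcessor:                      # e.g. an NPU
    def __init__(self):
        self.content_addr = None

    def receive_content_addr(self, addr):
        self.content_addr = addr

    def on_initializing_signal(self, model_addr):
        # Perform neural network processing on the content at
        # content_addr using the model at model_addr (stubbed).
        return (self.content_addr, model_addr)

class ThirdProcessor:                      # e.g. a DSP; may access SECURE
    def on_start_signal(self, first: FirstProcessor):
        first.receive_content_addr(SECURE["input_content_addr"])

def second_processor_flow():
    """Second processor: start signal -> third, initializing signal -> first."""
    first, third = FirstProcessor(), ThirdProcessor()
    third.on_start_signal(first)
    return first.on_initializing_signal(UNSECURE["ai_model_addr"])
```

The design point is that the content address never passes through the second processor, which preserves the security boundary.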
In addition, the second processor may be configured to provide address information of one of a plurality of artificial intelligence models stored in the unsecure area to the first processor, and the first processor is configured to obtain the artificial intelligence model information based on address information of the artificial intelligence model provided from the second processor.
Further, the second processor may be configured to transmit the start signal to the third processor to cause the third processor to provide address information of one of a plurality of artificial intelligence models stored in the unsecure area, selected based on the input content, to the first processor, and the first processor may be configured to obtain the artificial intelligence model information based on the address information of the artificial intelligence model provided by the third processor.
In addition, a communication interface comprising communication circuitry may be further included, a plurality of frames included in the input content may be sequentially received through the communication interface and stored in the secure area of the memory, and the second processor may be configured to transmit the start signal to the third processor to cause the third processor to provide address information of the frames sequentially stored in the memory to the first processor at predetermined time intervals.
Further, the second processor, based on a first application being executed, may be configured to transmit the start signal to the third processor to cause the third processor to provide information on the input content to the first processor.
In addition, the second processor, based on the first application being terminated, may be configured to transmit an end signal to the third processor to cause the third processor to stop the providing operation, and the third processor may be configured to control the first processor to stop the neural network processing operation, and may provide a signal indicating that the neural network processing operation is stopped to the second processor.
Further, the second processor, based on a second application being executed, may be configured to access the unsecure area to identify address information of a data corresponding to the second application, to provide the identified address information to the first processor, and to control the first processor to perform neural network processing for the data based on address information of the data provided from the second processor and the artificial intelligence model information stored in the unsecure area.
A display may be further included, and the second processor may be configured to transmit the start signal to the third processor to cause the third processor to identify address information of input content displayed through the display from a plurality of input contents and to provide the identified address information to the first processor.
Further, the first processor may include a neural processing unit (NPU), the second processor may include a processor configured to operate based on an operating system, and the third processor may include a processor configured to perform predetermined operations.
According to an example embodiment of the present disclosure, a method of controlling a processor chip including a memory, a first processor performing neural network processing on data stored in the memory, a second processor, and a third processor includes: transmitting, by the second processor, a control signal to the third processor to cause the third processor to perform an operation, and transmitting, by the second processor, a control signal to the first processor to cause the first processor to perform an operation.
In addition, the transmitting a control signal to the third processor may include transmitting a start signal to the third processor to cause the third processor to provide information on an input content stored in the memory to the first processor, and transmitting a control signal to the first processor includes transmitting an initializing signal to the first processor to cause the first processor to perform the neural network processing on the input content based on the information on the input content provided from the third processor and artificial intelligence model information stored in the memory.
Further, transmitting a control signal to the third processor may include transmitting the start signal to the third processor to cause the third processor to access a secure area of the memory to identify address information of the input content and to provide the address information of the input content to the first processor, and transmitting a control signal to the first processor may include transmitting an initializing signal to the first processor to cause the first processor to perform the neural network processing on the input content based on the address information of the input content provided from the third processor and the artificial intelligence model information stored in the unsecure area of the memory.
In addition, providing address information of one of a plurality of artificial intelligence models stored in the unsecure area to the first processor by the second processor and obtaining the artificial intelligence model information based on the address information of the artificial intelligence model provided from the second processor by the first processor may be further included.
Further, transmitting the control signal to the third processor may include transmitting the start signal to the third processor to cause the third processor to provide address information on one of the plurality of artificial intelligence models stored in the unsecure area based on the input content to the first processor, and the control method may further include obtaining the artificial intelligence model information by the first processor based on address information of the artificial intelligence model provided by the third processor.
In addition, sequentially receiving a plurality of frames included in the input content and storing the frames in a secure area of the memory may be further included, and transmitting the control signal to the third processor may include transmitting the start signal to the third processor by the second processor to cause the third processor to provide address information of the frames sequentially stored in the memory to the first processor at predetermined time intervals.
Further, transmitting the control signal to the third processor may include, based on the first application being executed, transmitting the start signal to the third processor to cause the third processor to provide information on the input content to the first processor.
In addition, based on the first application being terminated, transmitting an end signal to the third processor from the second processor to cause the third processor to terminate the providing operation, terminating the neural network processing operation by the first processor under control of the third processor, and providing a signal indicating that the neural network processing operation is terminated from the third processor to the second processor may be further included.
According to another example embodiment of the present disclosure, a method of controlling a processor chip including a memory, a first processor, a second processor and a third processor includes: transmitting information on data stored in the memory by the second processor to the first processor based on a first application being executed; transmitting a first initializing signal from the second processor to the first processor to cause the first processor to perform neural network processing on the data based on information on the data provided by the second processor and the first artificial intelligence model information stored in the memory; transmitting a start signal to the third processor from the second processor to cause the third processor to provide information on input content stored in the memory to the first processor based on the first application being terminated and a second application being executed; and transmitting a second initializing signal to the first processor from the second processor to cause the first processor to perform neural network processing on the input content based on information on the input content provided from the third processor and a second artificial intelligence model information stored in the memory.
According to the various example embodiments of the present disclosure, the processor chip may reduce memory use based on performing neural network processing on the original content, improve output quality of content, and perform processing in real-time.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following detailed description, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a diagram illustrating control methods of a neural processing unit (NPU) according to conventional technology;
FIG. 2 is a block diagram illustrating an example configuration of a processor chip according to an embodiment of the present disclosure;
FIG. 3 is a diagram illustrating an example secure area and an unsecure area of a memory according to an embodiment of the present disclosure;
FIGS. 4A and 4B are diagrams illustrating example neural network processing according to an embodiment of the present disclosure;
FIGS. 5A and 5B are diagrams illustrating example neural network processing according to another embodiment of the present disclosure;
FIGS. 6A, 6B and 6C are diagrams illustrating example operations based on an application according to an embodiment of the present disclosure; and
FIG. 7 is a flowchart illustrating an example method of controlling a processor chip according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
Various example embodiments of the present disclosure may be diversely modified. Accordingly, example embodiments are illustrated in the drawings and are described in greater detail in the detailed description. However, it is to be understood that the present disclosure is not limited to a specific example embodiment, and includes all modifications, equivalents, and substitutions without departing from the scope and spirit of the present disclosure. Also, well-known functions or constructions may not be described in detail where they may obscure the disclosure with unnecessary detail.
Hereinafter, the various example embodiments of the present disclosure will be described in greater detail with reference to the accompanying drawings.
FIG. 2 is a block diagram illustrating an example configuration of a processor chip 100 according to an embodiment of the present disclosure.
As shown in FIG. 2 , the processor chip 100 includes a memory 110, a first processor (e.g., including processing circuitry) 120, a second processor (e.g., including processing circuitry) 130, and a third processor (e.g., including processing circuitry) 140.
The processor chip 100, as an apparatus performing neural network processing, may be implemented, for example, and without limitation, in a TV, a smartphone, a tablet PC, a mobile phone, a video phone, an electronic book reader, a desktop PC, a laptop PC, a netbook computer, a work station, a server, a PDA, a portable multimedia player (PMP), an MP3 player, a wearable device, or the like.
The processor chip 100 may be an apparatus provided with a display (not shown), which performs neural network processing on content and displays the image-processed content through the display. Further, the processor chip 100 may be an apparatus that performs neural network processing on content without a separate display. In this example, the processor chip 100 may provide the content image-processed according to the neural network processing to a display apparatus.
The memory 110 may be electrically connected with the first processor 120, the second processor 130, and the third processor 140, and may store data necessary for the various embodiments of the present disclosure. For example, the memory 110 may be implemented, for example, and without limitation, as an internal memory such as a ROM (for example, electrically erasable programmable read-only memory (EEPROM)), a RAM, or the like included in each of the first processor 120, the second processor 130, and the third processor 140, or implemented as a memory separate from the first processor 120, the second processor 130, and the third processor 140. The memory 110 may be implemented in the form of a memory embedded in the processor chip 100 or in the form of a memory detachable from the processor chip 100, according to the data storage use. For example, data for driving the processor chip 100 may be stored in the memory embedded in the processor chip 100, and data for an expansion function of the processor chip 100 may be stored in the memory detachable from the processor chip 100. The memory embedded in the processor chip 100 may, for example, and without limitation, be implemented as at least one of a volatile memory (ex: dynamic RAM (DRAM), static RAM (SRAM), or synchronous dynamic RAM (SDRAM), etc.) 
or a non-volatile memory (ex: one time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, flash ROM, flash memory (ex: NAND flash or NOR flash, etc.), hard drive, solid state drive (SSD)), or the like, and the memory detachable from the processor chip 100 may be implemented, for example, and without limitation, in the form of a memory card (for example, compact flash (CF), secure digital (SD), micro secure digital (Micro-SD), mini secure digital (Mini-SD), extreme digital (xD), multi-media card (MMC), etc.), an external memory (for example, USB memory) capable of connecting to a USB port, and the like.
The memory 110 may store an artificial intelligence model used for performing neural network processing on data. For example, the artificial intelligence model, as a model for upscaling the resolution of input content to 8K, may be a model that learns the relationship between an original image (8K) and a downscaled image (ex: 4K) of the original image using, for example, and without limitation, a convolutional neural network (CNN). Herein, a CNN may refer, for example, to a multilayered neural network having a specialized connecting structure designed for voice processing, image processing, and the like.
However, the above is only an example embodiment, and the artificial intelligence model may be a model based on various neural networks such as, for example, and without limitation, a recurrent neural network (RNN), deep neural network (DNN), and the like.
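The construction of training pairs for such an upscaling model may be sketched as follows; the use of 2x2 average pooling as the downscaling operation is an assumption for illustration, as the actual training procedure is not specified in this document:

```python
# Illustrative construction of (downscaled, original) training pairs
# for an upscaling model: the model would learn to map the downscaled
# image back to its original. The 2x2 average pooling is hypothetical.

def downscale_2x(image):
    """Average-pool a 2D list-of-lists image by a factor of 2."""
    h, w = len(image), len(image[0])
    return [[(image[y][x] + image[y][x + 1]
              + image[y + 1][x] + image[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def make_training_pair(original):
    # Input to the model: downscaled image; training target: original.
    return downscale_2x(original), original
```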
Further, the memory 110 may store data, which may be applied with an artificial intelligence model. For example, the memory 110 may store content, which may be applied with an artificial intelligence model.
The memory 110 may include a secure area and an unsecure area. The secure area may be an area that stores data requiring security, and the unsecure area may be an area that stores data unrelated to security. For example, the artificial intelligence model may be stored in the unsecure area, and the content to be applied with the artificial intelligence model may be stored in the secure area.
However, the present disclosure is not limited thereto, and the artificial intelligence model may be stored in a secure area if, for example, security is required. Further, the data to be applied with an artificial intelligence model may be stored in the unsecure area if, for example, unrelated to security.
The secure area and the unsecure area of the memory 110 may be divided in terms of software. For example, the second processor 130 may be electrically connected with the memory 110, but may not recognize, in terms of software, the secure area corresponding to, for example, and without limitation, 30% of the memory.
However, the present disclosure is not limited thereto, and the secure area of the memory 110 may be implemented to a first memory, and the unsecure area of the memory 110 may be implemented to a second memory. For example, the secure area and the unsecure area of the memory 110 may be divided in terms of hardware.
The first processor 120 and the third processor 140 to be described hereafter may include various processing circuitry and access the secure area and the unsecure area of the memory 110, but the second processor 130 may include various processing circuitry and be unable to access the secure area of the memory 110, and may, for example, only be able to access the unsecure area.
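The access rule described above may be summarized, purely as an illustrative sketch, by a table of which processors may read which area; the class and key names are hypothetical:

```python
# Minimal sketch of the software-level access rule: the first and
# third processors may access both areas, while the second processor
# may only access the unsecure area.

class Memory:
    def __init__(self):
        self.areas = {"secure": {}, "unsecure": {}}
        # Which processors may read each area.
        self.access = {
            "secure": {"first", "third"},
            "unsecure": {"first", "second", "third"},
        }

    def read(self, processor: str, area: str, key: str):
        if processor not in self.access[area]:
            raise PermissionError(f"{processor} cannot access the {area} area")
        return self.areas[area][key]

mem = Memory()
mem.areas["secure"]["content"] = "original_8k_frames"
mem.areas["unsecure"]["model"] = "upscaler_weights"
```

Here an attempted read of the secure area by the second processor raises an error, mirroring the software division described above.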
The first processor 120 may be a processor performing neural network processing. For example, the first processor 120 may include a processor dedicated to neural network processing, such as a neural processing unit (NPU), and may include a plurality of processing elements. Between adjacent processing elements, a unidirectional shift or a bidirectional shift of data may be possible.
Each of the processing elements may, for example, and without limitation, generally include a multiplier and an arithmetic logic unit (ALU), and the ALU may include, for example, and without limitation, at least one adder. The processing element may use the multiplier and the ALU to perform the four fundamental arithmetic operations. However, the processing elements are not limited thereto, and may have other structures capable of performing functions such as the four fundamental arithmetic operations and shifts. Further, each of the processing elements may include, for example, a register for storing data.
The first processor 120 may include various processing circuitry including, for example, a controller controlling the plurality of processing elements, and the controller may control the plurality of processing elements to perform parallel processing as may be required in the neural network processing process.
The first processor 120, under control of the second processor 130 or the third processor 140, may perform neural network processing. For example, the first processor 120 may perform neural network processing on data based on information on data provided by the second processor 130 or the third processor 140 and the artificial intelligence model information stored in the unsecure area. Herein, the data may be data stored in the secure area or data stored in the unsecure area. For example, the first processor 120 may access the secure area.
The second processor 130 may include various processing circuitry and may generally control the operations of the processor chip 100, and may, for example, be a processor that operates based on an operating system. However, the second processor 130 may be unable to access the secure area of the memory 110 and may, for example, only access the unsecure area.
According to an embodiment, the second processor 130 may be implemented as various processing circuitry, such as, for example, a microprocessor, a time controller (TCON), or the like. However, the second processor is not limited thereto, and may include, for example, and without limitation, one or more of a central processing unit (CPU), Micro Controller Unit (MCU), Micro processing unit (MPU), controller, application processor (AP), communication processor (CP), ARM processor, or the like, or may be defined by the corresponding terms. Further, the second processor 130 may be implemented as a System on Chip (SoC) integrated with a processing algorithm, a large scale integration (LSI), or in the form of a field programmable gate array (FPGA). For example, the second processor 130 may include a general-purpose processor.
The second processor 130 may transmit a control signal to the first processor 120 and the third processor 140 to cause the first processor 120 and the third processor 140 to perform operations.
The second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 to provide information on the input content stored in the memory 110 to the first processor 120. For example, the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140, capable of accessing the secure area, to identify address information of the input content by accessing the secure area, and to provide the address information of the input content to the first processor 120. For example, based on the second processor 130 not being able to access the secure area, the third processor 140, capable of accessing the secure area, may be controlled to provide the address information of the input content stored in the secure area to the first processor 120.
The third processor 140, unlike the second processor 130, may be a processor that controls a part of the operations of the processor chip 100 according to necessity, and may be a processor performing only predetermined operations, but is not limited thereto. For example, the third processor 140 may be a processor whose operations cannot be changed arbitrarily. For example, the third processor 140 may be a processor exclusively for signal processing, such as a digital signal processor (DSP), a graphics processing unit (GPU), and the like, or may be a processor performing image processing on the input content according to necessity.
In addition, the second processor 130 may transmit an initializing signal to the first processor 120 to cause the first processor to perform neural network processing on input content based on information on input content provided from the third processor 140 and artificial intelligence model information stored in the memory. For example, the second processor 130 may transmit an initializing signal to the first processor 120 to cause the first processor 120 to perform neural network processing on input content based on address information of input content provided from the third processor 140 and artificial intelligence model information stored in the unsecure area.
The second processor 130 may provide address information of one of the plurality of artificial intelligence models stored in the unsecure area to the first processor 120, and the first processor 120 may obtain the artificial intelligence model information based on the address information of the artificial intelligence model provided from the second processor 130.
For example, the second processor 130, based on performing 8K upscaling on the input content, may control the third processor 140 to identify address information of input content by accessing the secure area and provide address information of input content to the first processor 120. Further, the second processor 130 may identify the address information of the artificial intelligence model performing 8K upscaling by accessing the unsecure area and may provide address information of the artificial intelligence model to the first processor 120. Further, the first processor 120 may perform control for performing neural network processing on input content based on address information of the input content provided from the third processor 140 and the address information of the artificial intelligence model.
In addition, the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 to provide address information of one of the plurality of artificial intelligence models stored in the unsecure area, selected based on the input content, to the first processor 120, and the first processor 120 may obtain the artificial intelligence model information based on the address information of the artificial intelligence model provided from the third processor 140.
For example, the third processor 140 may analyze the input content and identify the type of image processing necessary to the input content. The third processor 140, based on the resolution of the input content being lower than the resolution of the display, may identify address information of the artificial intelligence model performing the upscaling, and based on the quality being low despite resolution of the input content and the resolution of the display being identical, may identify the address information of the artificial intelligence model removing noise of the image.
For example, the address information of the artificial intelligence model may be provided by the second processor 130, or may be provided by the third processor 140 after analyzing the input content. As the second processor 130 may be unable to analyze the input content stored in the secure area, the artificial intelligence model provided by the second processor 130 may be an artificial intelligence model corresponding to a user selection. For example, the second processor 130, based on the user wanting to play the input content in cinema mode, may identify address information of the artificial intelligence model corresponding to the cinema mode.
However, the disclosure is not limited thereto, and the second processor 130 may identify surrounding environment information using a sensor and may identify address information of the artificial intelligence model corresponding to the surrounding environment information.
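The model-selection logic described above (upscale when content resolution is below the display resolution; remove noise when quality is low at matching resolution) may be sketched as follows; the addresses, threshold, and model names are assumptions for illustration:

```python
# Hedged sketch of content-based model selection by the third processor.

MODEL_ADDR = {"upscale_8k": 0x2000, "denoise": 0x2400}  # unsecure area (hypothetical)

def select_model_addr(content_res, display_res, quality, quality_threshold=0.5):
    """Return the address of the AI model appropriate for the input content."""
    if content_res < display_res:
        return MODEL_ADDR["upscale_8k"]     # resolution below display: upscale
    if quality < quality_threshold:
        return MODEL_ADDR["denoise"]        # same resolution but low quality
    return None                             # no neural network processing needed
```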
The processor chip 100 may further include a communication interface (e.g., including communication circuitry) (not shown), a plurality of frames included in the input content may be sequentially received through the communication interface and stored in the secure area of the memory 110, and the second processor 130 may transmit a start signal to cause the third processor 140 to provide address information of the frames sequentially stored in the memory 110 to the first processor 120 at predetermined time intervals.
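The per-frame addressing described above may be sketched as follows: frames stored sequentially from a base address yield a sequence of addresses, each delivered to the first processor at a fixed interval. The base address, frame size, and 16 ms interval (roughly 60 fps) are illustrative assumptions:

```python
# Sketch of sequential frame addresses and a fixed-interval schedule
# at which the third processor would forward each address.

def frame_addresses(base_addr, frame_size, n_frames):
    """Addresses of frames stored back-to-back starting at base_addr."""
    return [base_addr + i * frame_size for i in range(n_frames)]

def schedule(addresses, interval_ms=16):
    """Pair each frame address with its delivery time in milliseconds."""
    return [(i * interval_ms, addr) for i, addr in enumerate(addresses)]
```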
The communication interface may include various communication circuitry and may be configured to perform communication with various types of external apparatuses based on various types of communication methods. The communication interface may include, for example, and without limitation, a tuner receiving RF broadcast signals by tuning a channel selected by the user, or all pre-stored channels, from radio frequency (RF) broadcast signals received through an antenna. In this example, the processor chip 100 may further include a demodulator that receives and demodulates a digital IF (DIF) signal converted by the tuner and performs channel decoding or the like. Further, the communication interface may include various configurations including various circuitry for performing wireless communication, such as a WiFi chip, a Bluetooth chip, a wireless communication chip, and an NFC chip. The WiFi chip and the Bluetooth chip may perform communication through a WiFi method and a Bluetooth method, respectively. Based on the WiFi chip or the Bluetooth chip being used, various connection information such as an SSID and a session key may first be transmitted and received, and various information may be transmitted and received after establishing a connection using the same. The wireless communication chip may refer, for example, to a chip performing communication according to various communication protocols such as IEEE, ZigBee, 3rd Generation (3G), 3rd Generation Partnership Project (3GPP), and Long Term Evolution (LTE). The NFC chip refers to a chip that operates by a near field communication (NFC) method using a 13.56 MHz band from among various RF-ID frequency bands such as 135 kHz, 13.56 MHz, 433 MHz, 860-960 MHz, and 2.45 GHz. Further, the communication interface may also include a configuration for performing wired communication such as HDMI, MHL, USB, DP, Thunderbolt, RGB, D-SUB, and DVI.
For example, the processor chip 100 may receive a content playback screen from an external apparatus through the communication interface or transmit the content playback screen to the external apparatus through the communication interface.
The second processor 130, based on a first application being executed, may transmit a start signal to the third processor 140 to cause the third processor 140 to provide information on the input content to the first processor 120.
In addition, the second processor 130, based on the first application being terminated, may transmit an end signal to the third processor 140 to cause the third processor 140 to terminate the providing operation, and the third processor 140 may perform control to cause the first processor 120 to terminate the neural network processing operation and may provide a signal indicating that the neural network processing operation is terminated to the second processor 130.
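A minimal sketch of this start/end signaling may look as follows; the class, signal names, and return values are assumptions for illustration, not the actual hardware interface:

```python
class ThirdProcessor:
    """Toy model of the third processor's reaction to the second processor's
    start and end signals (names and return values are assumed)."""

    def __init__(self):
        self.providing = False

    def on_signal(self, signal: str) -> str:
        if signal == "start":
            self.providing = True    # begin providing content info to the NPU
            return "providing"
        if signal == "end":
            self.providing = False   # stop providing; halt NPU processing
            return "terminated"      # reported back to the second processor
        raise ValueError(f"unknown signal: {signal}")

dsp = ThirdProcessor()
print(dsp.on_signal("start"))   # prints "providing"
print(dsp.on_signal("end"))     # prints "terminated"
```

On the end signal, the third processor both stops providing content information and reports the termination back to the second processor, matching the sequence described above.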
The second processor 130, based on the second application being executed, may access the unsecure area to identify address information of the data corresponding to the second application and may provide the identified address information to the first processor 120, and the first processor 120 may perform neural network processing on the data based on the address information of the data provided from the second processor 130 and the artificial intelligence model information stored in the unsecure area.
The first application may, for example, be an application related to image processing of content, and the second application may, for example, be an application unrelated to content.
For example, the second processor 130, based on the first application related to image processing of content being executed, may control the third processor 140 to obtain address information of content stored in the secure area, and based on the second application unrelated to content being executed, may directly obtain address information of data stored in an unsecure area.
The processor chip 100 may further include a display (not shown), and the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 to identify address information of the input content displayed through the display from among a plurality of input contents, and to provide the identified address information to the first processor 120.
For example, the processor chip 100 may receive, for example, a first input content from an external apparatus connected to a high-definition multimedia interface (HDMI), a second input content from the broadcast station through a tuner, and may store the first input content and the second input content in the secure area. In this example, the second processor 130 may transmit a start signal to the third processor 140 to cause the third processor 140 to identify address information of input content displayed through the display from the first input content and the second input content, and to provide the identified address information to the first processor 120.
The display may be implemented as displays of various types such as, for example, and without limitation, a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display panel (PDP), or the like. The display may also include therein a driving circuit, a backlight unit, or the like, capable of being implemented in forms such as an a-Si TFT, a low temperature poly silicon (LTPS) TFT, and an organic TFT (OTFT). The display may be implemented as a touch screen by being coupled with a touch sensor.
The second processor 130, after initializing the first processor 120, may transmit a start signal to the third processor 140 to cause the third processor 140 to identify address information of the input content by accessing the secure area and to provide the identified address information to the first processor 120.
Although the second processor 130 or the third processor 140 has been described above as providing address information to the first processor 120, the present disclosure is not limited thereto. For example, the second processor 130 or the third processor 140 may obtain the input content or the artificial intelligence model and may provide it to the first processor 120 directly.
As described above, the problem of the second processor 130 being unable to access the secure area may be addressed by a method of using the third processor 140 capable of accessing the secure area.
Below, example operations of each processor will be described in greater detail with reference to the drawings.
FIG. 3 is a diagram illustrating an example secure area and unsecure area of a memory 110 according to an embodiment of the present disclosure. Referring to FIG. 3 , the second processor 130 operates such that it is possible to store and display input content even if access to the secure area of the memory 110 is not possible.
The processor chip 100 may receive input content. In FIG. 3, the receiving of input content is shown, for example, as streaming, and this may refer, for example, to the input content being received in real-time. Further, not only input content received through the internet, but also content received from an external apparatus through a wired connection, may be indicated as streaming.
The processor chip 100 may receive encoded input content from the outside, and may decode the input content through a decoder (e.g., including decoding circuitry and/or executable program elements). Further, the decoded content may be stored in the secure area of the memory 110.
The decoded content, based on the provided source, may be stored in different areas within the secure area. For example, based on the decoded content being received through the tuner, the decoded content may be stored in a first area of the secure area, and based on the decoded content being received from an external apparatus connected through HDMI, the decoded content may be stored in a second area of the secure area. For example, the secure area may be divided into a plurality of areas divided by the provided sources of the content, and the third processor 140 may identify information on the divided plurality of areas of the secure area.
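The per-source division might be sketched as a simple region map known to the third processor 140; the source labels and region numbers here are assumptions for illustration:

```python
# Assumed layout: tuner content in a first region, HDMI content in a second.
SECURE_REGION_BY_SOURCE = {"tuner": 1, "hdmi": 2}

def store_decoded_frame(secure_area: dict, source: str, frame: bytes) -> int:
    """Store a decoded frame in the secure-area region for its source and
    return the region number, so the third processor can later locate it."""
    region = SECURE_REGION_BY_SOURCE[source]
    secure_area.setdefault(region, []).append(frame)
    return region

area: dict = {}
print(store_decoded_frame(area, "tuner", b"frame0"))  # prints 1
print(store_decoded_frame(area, "hdmi", b"frame0"))   # prints 2
```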
The content stored in the secure area may be displayed after image processing. The image processing may be performed by a digital circuit such as, for example, and without limitation, a register-transfer level (RTL) circuit, the first processor 120, or the third processor 140. A digital circuit such as an RTL circuit may be used when the possibility of change is low, and the third processor 140 may be used when the possibility of change is high. For example, even a complex algorithm such as frame rate conversion (FRC) may be implemented as a digital circuit such as an RTL circuit if only a predetermined number of modes exist (e.g., 30 Hz, 60 Hz, 120 Hz), and in this case the digital circuit may be advantageous over the third processor 140 in terms of power and production cost. In the case of an algorithm with many elements that change according to the user or to the use type (area) (e.g., game mode, automatic screen adjustment according to surrounding brightness), an implementation as a digital circuit such as an RTL circuit has a high possibility of not being used regularly and may take up space within the chip. In this case, using the third processor 140 may be more advantageous than using a digital circuit such as an RTL circuit. Further, the first processor 120 may perform image processing using the artificial intelligence model.
The above-described operations such as streaming, decoding, and storing may be implemented through digital circuits. The second processor 130 may control each digital circuit and may perform processing on the input content, but direct access to the input content is not allowed. For example, even if the second processor 130 is unable to access the secure area, the operations of displaying the input content may be performed.
FIGS. 4A and 4B are diagrams illustrating example neural network processing according to an embodiment of the present disclosure.
As shown in FIG. 4A, the second processor (CPU) 130 may perform booting using an operating system (O/S), and may perform various operations such as control through various menu inputs by the user and execution of applications using various programs, contents, data, or the like stored in the unsecure area (Storage 1).
The second processor 130 may be able to access the unsecure area, but may be unable to access the secure area (Storage 2). Data requiring security, such as input content received from outside the processor chip 100 that is subject to content copyright (DRM) or contains private information, may be stored in the secure area, and the second processor 130 may not read such data.
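The asymmetric access rules can be sketched as a toy memory model; the processor labels and the policy table below are assumptions based on the description above, not the actual protection mechanism:

```python
class MemoryMap:
    """Toy model of the memory 110: a secure area and an unsecure area,
    with per-processor read rules (assumed policy, for illustration)."""

    SECURE, UNSECURE = "secure", "unsecure"
    # Which processors may read each area.
    ACCESS = {
        SECURE:   {"NPU", "DSP"},           # first and third processors only
        UNSECURE: {"NPU", "DSP", "CPU"},    # second processor also allowed
    }

    def __init__(self):
        self.data = {self.SECURE: {}, self.UNSECURE: {}}

    def read(self, processor: str, area: str, addr: int):
        if processor not in self.ACCESS[area]:
            raise PermissionError(f"{processor} may not access the {area} area")
        return self.data[area].get(addr)

mem = MemoryMap()
mem.data[MemoryMap.SECURE][0x1000] = "decoded frame"
print(mem.read("DSP", MemoryMap.SECURE, 0x1000))   # allowed, prints "decoded frame"
try:
    mem.read("CPU", MemoryMap.SECURE, 0x1000)       # blocked for the CPU
except PermissionError as e:
    print(e)
```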
The third processor 140, as, for example, a processor that may independently perform digital signal processing, may not use an O/S, and may use a pre-stored program to perform the predetermined operations. Accordingly, the third processor 140 may access the secure area storing the data requiring security.
The first processor 120, as a processor that may be used for neural network processing, may not use an O/S, and may be able to access the secure area.
Based on the second processor 130 being unable to identify address information of the input content, the third processor 140 may be controlled to perform neural network processing on input content.
The second processor 130 may perform initializing of the first processor 120. A reason that the second processor 130 may initialize the first processor 120 may be that the second processor 130 controls the TV system through the O/S. The O/S manages resources for each program through drivers, and this is an effective system for managing resources such as the memory 110. For example, the second processor 130 may effectively manage resources by initializing the first processor 120 through the driver.
Thereafter, the second processor 130 may control the third processor 140 to identify address information of the input content by accessing the secure area and to provide the address information of the input content to the first processor 120. The control operation of the third processor 140 by the second processor 130 may, for example, be a one-time control operation.
The third processor 140 may control the first processor 120 to perform neural network processing on the input content based on the address information of the input content provided from the third processor 140 and the artificial intelligence model information stored in the unsecure area. The control operation of the first processor 120 by the third processor 140 may be a control operation that is repeated at predetermined time intervals. For example, based on the input content being an image of 30 fps, the third processor 140 may provide input to the first processor 120 at 33.3 ms intervals and may control the first processor 120 to perform neural network processing.
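The repeated trigger interval follows directly from the frame rate; a small sketch of the 30 fps example (the contiguous address layout is an assumption for illustration):

```python
def trigger_interval_ms(fps: float) -> float:
    """Interval between NPU triggers for input content of `fps` frames/second."""
    return 1000.0 / fps

def frame_addresses(base_addr: int, frame_bytes: int, n_frames: int):
    """Addresses of frames assumed to be laid out contiguously in the secure area."""
    return [base_addr + i * frame_bytes for i in range(n_frames)]

interval = trigger_interval_ms(30)            # 30 fps content
addrs = frame_addresses(0x8000_0000, 0x1F_A400, 3)
print(round(interval, 1))                     # prints 33.3, as in the example above
```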
For example, the third processor 140 may provide address information of each frame for every frame of the input content to the first processor 120, and may control (trigger) to perform neural network processing. However, the present disclosure is not limited thereto, and the third processor 140 may obtain each frame of the input content, provide the obtained frame to the first processor 120, and may control to perform neural network processing.
The artificial intelligence model used when performing neural network processing by the first processor 120 may be provided by the second processor 130. Herein, the artificial intelligence model may be stored in the unsecure area, and the second processor 130 may access the unsecure area to provide address information of the artificial intelligence model to the first processor 120.
Based on the above-described operations, the third processor 140, and not the second processor 130, may identify the address information of the input content and may control the neural network processing operation of the first processor 120. Accordingly, neural network processing may be performed in real-time using the first processor 120 while maintaining the security of the input content and reducing the processing burden of the second processor 130.
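The division of labor summarized above might be sketched end-to-end as follows; the dictionaries stand in for the secure and unsecure areas, and all function names and addresses are assumptions:

```python
def run_pipeline(secure_area: dict, unsecure_area: dict,
                 content_addr: int, model_addr: int) -> str:
    """Assumed flow of FIG. 4A; plain dict lookups stand in for the hardware."""
    # Second processor (CPU): initialize the NPU and supply the model address,
    # which points into the unsecure area the CPU is allowed to read.
    npu_state = {"initialized": True, "model": unsecure_area[model_addr]}
    # Third processor (DSP): read the content address from the secure area
    # (which the CPU cannot touch) and trigger the NPU.
    frame = secure_area[content_addr]
    # First processor (NPU): neural network processing on the frame.
    return f"processed {frame} with {npu_state['model']}"

secure = {0x1000: "frame"}
unsecure = {0x2000: "sr-model"}
print(run_pipeline(secure, unsecure, 0x1000, 0x2000))  # prints "processed frame with sr-model"
```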
In FIG. 4A, the memory 110 is described as including a secure area and an unsecure area, but is not limited thereto. For example, as shown in FIG. 4B, the processor chip 100 may include a first memory 111 and a second memory 112 divided, for example, in terms of hardware. The first memory 111, as a memory storing non-secure data, may be a memory that is accessible by the second processor 130. The second memory 112, as a memory storing secure data, may be a memory that is not accessible by the second processor 130. The first processor 120 and the third processor 140 may access both the first memory 111 and the second memory 112. The operations of FIG. 4B, other than the memory being divided in terms of hardware, are identical or similar to the operations of FIG. 4A, and thus the overlapping description thereof may not be repeated here.
FIGS. 5A and 5B are diagrams illustrating example neural network processing according to another embodiment of the present disclosure.
The operations of FIG. 5A are largely similar to those of FIG. 4A, but FIG. 5A illustrates an example case in which the third processor 140 is the subject that identifies the address information of the artificial intelligence model and provides the address information to the first processor 120.
The third processor 140, unlike the second processor 130, may access the input content stored in the secure area (Storage 2), and thus an analysis of the input content is possible. Accordingly, the third processor 140 may identify the artificial intelligence model optimized for the input content. For example, based on determining that noise in the input content is problematic, the third processor 140 may identify address information of an artificial intelligence model for removing noise from the unsecure area (Storage 1), and based on determining that the resolution of the input content is low, may identify address information of an artificial intelligence model for extending resolution from the unsecure area. The third processor 140 may provide the address information of the artificial intelligence model to the first processor 120.
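A sketch of this content-adaptive model selection; the model table, thresholds, and addresses are hypothetical, since the document only states that the third processor analyzes the content and identifies a model address in the unsecure area:

```python
# Hypothetical unsecure-area addresses of the candidate models.
MODEL_ADDRS = {"denoise": 0xA000, "super_resolution": 0xB000, "default": 0xC000}

def select_model(noise_level: float, height: int,
                 noise_threshold: float = 0.3, min_height: int = 2160) -> int:
    """Return the unsecure-area address of the model suited to the content."""
    if noise_level > noise_threshold:
        return MODEL_ADDRS["denoise"]           # noise determined problematic
    if height < min_height:
        return MODEL_ADDRS["super_resolution"]  # extend resolution of sub-4K input
    return MODEL_ADDRS["default"]

print(hex(select_model(0.5, 1080)))   # prints 0xa000, the denoising model
```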
The memory 110 in FIG. 5A is described as including a secure area and an unsecure area, but is not limited thereto. For example, as shown in FIG. 5B, the processor chip 100 may include a first memory 111 and a second memory 112 divided, for example, in terms of hardware, and the specific description thereof overlapping with FIG. 4B may not be repeated here.
FIGS. 6A, 6B and 6C are diagrams illustrating example operations based on an application according to an embodiment of the present disclosure.
As shown in FIG. 6A, based on application A, which is unrelated to the input content, being executed, the second processor 130 may provide information on data stored in the memory 110 and information on the artificial intelligence model to the first processor 120. The second processor 130 may control the first processor 120 to perform neural network processing on the data based on the provided information. For example, based on application A being executed, the second processor 130 may identify first address information of data stored in the unsecure area and second address information of the artificial intelligence model, and may provide the first address information and the second address information to the first processor 120. The second processor 130 may control the first processor 120 to perform neural network processing on the data based on the first address information and the second address information.
For example, application A may be an application that receives an ambient illuminance value sensed through a sensor as an input and processes the most appropriate brightness. The ambient illuminance value may not be data requiring security and thus may be stored in the unsecure area. The second processor 130 may identify the first address information of the ambient illuminance value and the second address information of the artificial intelligence model corresponding to application A, and may provide the first address information and the second address information to the first processor 120. The second processor 130 may control the first processor 120 to process the most appropriate brightness corresponding to the ambient illuminance value based on the first address information and the second address information.
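As a sketch of the mapping application A computes, assuming a simple linear relation; in the document this mapping would be learned and executed by the artificial intelligence model on the first processor, not computed by a closed-form formula:

```python
def brightness_for_lux(lux: float, max_lux: float = 1000.0) -> int:
    """Map an ambient illuminance value (lux) to a 0-100 brightness level.
    The linear map and the 1000-lux ceiling are assumptions for illustration."""
    clamped = max(0.0, min(lux, max_lux))   # clamp sensor noise to a valid range
    return round(100 * clamped / max_lux)

print(brightness_for_lux(250))    # prints 25
print(brightness_for_lux(5000))   # prints 100 (clamped at the ceiling)
```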
For example, in this case, the second processor 130 may not have to use the third processor 140.
Thereafter, based on application A being terminated and application B, which performs image processing on the input content, being executed as shown in FIG. 6B, the second processor 130 may control the third processor 140 to provide information on the input content stored in the memory 110 to the first processor 120.
For example, based on application A being terminated and application B, which image processes input content, being executed, the second processor 130 may control the third processor 140 to identify address information of input content by accessing the secure area (Storage 2), and provide address information of the input content to the first processor 120.
The third processor 140 may control the first processor 120 to perform neural network processing on input content based on information on input content provided from the third processor 140 and the artificial intelligence model information stored in the unsecure area (Storage 1).
For example, the third processor 140 may control the first processor 120 to perform neural network processing on input content based on address information of input content provided from the third processor 140 and the artificial intelligence model information stored in the unsecure area (Storage 1). For example, the third processor 140 may perform image processing through the first processor 120 by reading input content received at predetermined intervals from the secure area, and then repeating the operations of storing in the secure area.
For example, in this case, the second processor 130, after initializing the first processor 120, may delegate operations thereafter to the third processor 140.
For example, the second processor 130 may be a processor that controls the overall operations of the processor chip 100, whereas the third processor 140 is a processor used only in necessary cases; thus, the third processor 140 may be more advantageous for image processing in real time.
In FIG. 6B, the memory 110 is described as including a secure area and an unsecure area, but is not limited thereto. For example, as shown in FIG. 6C, the processor chip 100 may include a first memory 111 and a second memory 112 divided in terms of hardware, and the specific description thereof overlapping with FIG. 4B may not be repeated here.
FIG. 7 is a flowchart illustrating an example method of controlling a processor chip according to an embodiment of the present disclosure.
A method of controlling a processor chip including a memory, a first processor performing neural network processing on data stored in the memory, a second processor and a third processor includes the second processor transmitting a control signal to the third processor to cause the third processor to perform an operation (S710). Further, a control signal may be transmitted to the first processor by the second processor to cause the first processor to perform an operation (S720).
Transmitting a control signal to the third processor (S710) may include transmitting a start signal to the third processor to cause the third processor to provide information on the input content stored in the memory to the first processor, and transmitting a control signal to the first processor (S720) may include transmitting an initializing signal to the first processor to cause the first processor to perform neural network processing on the input content based on the information on the input content provided from the third processor and the artificial intelligence model information stored in the memory.
Further, transmitting a control signal to the third processor (S710) may include transmitting a start signal to the third processor to cause the third processor to identify address information of the input content by accessing the secure area of the memory and to provide the address information of the input content to the first processor, and transmitting a control signal to the first processor (S720) may include transmitting an initializing signal to the first processor to cause the first processor to perform neural network processing on the input content based on the address information of the input content provided from the third processor and the artificial intelligence model information stored in the unsecure area of the memory.
The method may further include the second processor providing address information of one of a plurality of artificial intelligence models stored in the unsecure area to the first processor, and the first processor obtaining the artificial intelligence model information based on the address information of the artificial intelligence model provided from the second processor.
Further, transmitting a control signal to the third processor (S710) may include transmitting a start signal to the third processor to cause the third processor to provide address information on one of the plurality of artificial intelligence models stored in the unsecure area based on input content, and the control method may further include the first processor obtaining artificial intelligence model information based on address information of artificial intelligence model provided from the third processor.
The method may further include sequentially receiving a plurality of frames included in the input content and storing the frames in the secure area of the memory, and transmitting a control signal to the third processor (S710) may include transmitting a start signal to the third processor to cause the third processor to provide address information of the frames sequentially stored in the memory to the first processor at predetermined time intervals.
Further, transmitting a control signal to the third processor (S710) may include, based on a first application being executed, transmitting a start signal to the third processor to cause the third processor to provide information on the input content to the first processor.
The method may further include, based on the first application being terminated, the second processor transmitting an end signal to the third processor to cause the third processor to terminate the providing operation, the first processor terminating the neural network processing operation under the control of the third processor, and the third processor providing a signal indicating that the neural network processing operation is terminated to the second processor.
According to the various example embodiments of the present disclosure such as the above, the processor chip may, for example, use a DSP rather than a CPU to perform neural network processing on original content to reduce memory use, improve output quality of content, and perform processing in real-time.
In the above, the memory, the first processor, the second processor and the third processor are described as being implemented in a processor chip for the sake of convenience and ease of description, but is not limited thereto. For example, an electronic apparatus may, for example, and without limitation, be implemented as including a memory, CPU, DSP and NPU as a separate configuration.
According to an embodiment, the various example embodiments described above may be implemented as a software including instructions stored on machine-readable storage media readable by a machine (e.g.: computer). The machine, as an apparatus capable of calling an instruction stored in a storage medium and operating according to the called instruction, may include an electronic device (e.g.: electronic apparatus) according to the disclosed embodiments. Based on instructions being executed by the processor, the processor may directly, or using other elements under the control of the processor, perform a function corresponding to the instruction. The instruction may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Herein, the ‘non-transitory’ storage medium may not include a signal and is tangible, but does not distinguish data being semi-permanently or temporarily stored in a storage medium.
According to an embodiment, the method according to various embodiments disclosed herein may be provided in a computer program product. A computer program product may be exchanged between a seller and a purchaser as a commodity. A computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)) or distributed online through an application store (e.g. Play Store™). In the case of on-line distribution, at least a portion of the computer program product may be at least temporarily stored in a storage medium such as a manufacturer's server, a server of an application store, or a memory of a relay server, or temporarily generated.
In addition, according to an embodiment, the various embodiments described above may be implemented in a recording medium readable by a computer or a similar apparatus using software, hardware, or a combination of software and hardware. In some cases, the embodiments described herein may be implemented as a processor itself. Based on a software implementation, the embodiments such as the processes and functions described in the present disclosure may be implemented as separate software modules. Each of the software modules may perform one or more functions or operations described in the present disclosure.
The computer instructions for performing processing operations of an apparatus according to the various embodiments described above may be stored in a non-transitory computer-readable medium. The computer instructions stored in the non-transitory computer-readable medium, based on being executed by the processor of a specific device, may cause the specific device to perform the processing operations of an apparatus according to the various embodiments described above. The non-transitory computer-readable medium may refer, for example, to a medium that stores data semi-permanently and is readable by an apparatus. Examples of the non-transitory computer-readable medium may include a compact disc (CD), a digital versatile disc (DVD), a hard disc, a Blu-ray disc, a universal serial bus (USB) memory, a memory card, a read only memory (ROM), and the like.
In addition, each of the elements (e.g., a module or a program) according to the various embodiments described above may be composed of a single entity or a plurality of entities, and some of the abovementioned sub-elements may be omitted, or other sub-elements may be further included in the various embodiments. Alternatively or additionally, some elements (e.g., modules or programs) may be integrated into one entity to perform the same or similar functions performed by each respective element prior to integration. The operations performed by a module, a program, or another element, in accordance with the various embodiments, may be performed sequentially, in parallel, repetitively, or heuristically, or at least some operations may be performed in a different order or omitted, or other operations may be further added.
While various example embodiments have been illustrated and described with reference to various figures, the disclosure is not limited to specific embodiments or the drawings, and it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope as defined.

Claims (15)

What is claimed is:
1. A processor chip configured to perform neural network processing, comprising:
a neural processing unit (NPU), comprising a processor, configured to perform the neural network processing on data;
a central processing unit (CPU) comprising a processor;
a third processor; and
a memory configured to include a secure area that is accessible by the neural processing unit and is not accessible by the central processing unit, and an unsecure area accessible by the neural processing unit and the central processing unit,
wherein the central processing unit is configured to control to transmit a start signal to the third processor to cause the third processor to provide address information of one of a plurality of artificial intelligence models stored in the unsecure area included in the memory,
wherein the third processor is configured to:
identify the one of the plurality of artificial intelligence models based on a resolution of an input content,
provide the address information of the one of the plurality of artificial intelligence models to the neural processing unit so that the neural processing unit can perform the neural network processing by accessing the secure area based on the address information.
2. The processor chip according to claim 1, wherein, the central processing unit is configured to:
transmit an initializing signal to the neural processing unit to cause the neural processing unit to perform the neural network processing on the input content based on address information of the input content provided by the third processor and artificial intelligence model information.
3. The processor chip according to claim 2, wherein the neural processing unit is configured to obtain the artificial intelligence model information based on the address information of the artificial intelligence model provided by the third processor.
4. The processor chip according to claim 2, further comprising:
a communication interface comprising communication circuitry;
wherein the processor chip is configured to store a plurality of frames included in the input content and sequentially received through the communication interface in the secure area of the memory, and
wherein the central processing unit is configured to transmit the start signal to the third processor to cause the third processor to provide address information of the frames sequentially stored in the memory to the neural processing unit at predetermined time intervals.
5. The processor chip according to claim 2, wherein the central processing unit is configured to:
access the unsecure area included in the memory to identify address information of data corresponding to a second application based on the second application being executed,
provide the identified address information to the neural processing unit, and
control the neural processing unit to perform the neural network processing for the data based on address information of the data provided by the central processing unit and the artificial intelligence model information stored in the unsecure area.
6. The processor chip according to claim 1, wherein the central processing unit is configured to transmit the start signal to the third processor to cause the third processor to provide information on the input content to the neural processing unit, based on a first application being executed.
7. The processor chip according to claim 6, wherein the central processing unit is configured to transmit an end signal to the third processor to cause the third processor to stop providing information to the neural processing unit based on the first application being terminated, and
wherein the third processor is configured to control the neural processing unit to stop the neural network processing, and to provide a signal indicating that the neural network processing is stopped to the central processing unit.
8. The processor chip according to claim 1,
wherein the third processor includes a processor performing a predetermined operation.
9. The processor chip according to claim 1, wherein the third processor is configured to identify address information of the input content by accessing the secure area and provide address information of the input content in the secure area to the neural processing unit at predetermined time intervals determined based on the input content.
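The apparatus claims above describe a control flow in which the CPU never dereferences secure-area addresses: it only signals the third processor, which selects an artificial intelligence model based on the resolution of the input content and hands address information to the neural processing unit. The sketch below models that flow under assumed names and addresses; it is an illustration of the claimed architecture, not an implementation from the patent.

```python
# Illustrative memory map: the secure area is readable by the NPU only,
# while the model table lives in the unsecure area.
SECURE_AREA = {0x1000: "frame-data"}
UNSECURE_AREA = {"sd_model": 0x2000, "hd_model": 0x3000}

class NPU:
    def __init__(self):
        self.last_job = None

    def run(self, content_addr, model_addr):
        # Unlike the CPU, the NPU may dereference secure-area addresses.
        data = SECURE_AREA[content_addr]
        self.last_job = (data, model_addr)

class ThirdProcessor:
    def __init__(self, npu):
        self.npu = npu

    def on_start_signal(self, resolution):
        # Select one of the stored AI models based on input resolution
        # (threshold chosen arbitrarily for illustration).
        key = "hd_model" if resolution >= (1280, 720) else "sd_model"
        # Provide input-content and model address information to the NPU.
        self.npu.run(content_addr=0x1000, model_addr=UNSECURE_AREA[key])
        return key

npu = NPU()
chosen = ThirdProcessor(npu).on_start_signal(resolution=(1920, 1080))
```

The start signal from the CPU would simply invoke `on_start_signal`; the CPU never touches `SECURE_AREA` itself.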
10. A method of controlling a processor chip comprising a neural processing unit (NPU), a central processing unit (CPU), a third processor, and a memory including a secure area that is accessible by the neural processing unit and is not accessible by the central processing unit and an unsecure area accessible by the neural processing unit and the central processing unit, the method comprising:
transmitting, by the central processing unit, a start signal to the third processor to cause the third processor to provide address information of one of a plurality of artificial intelligence models stored in the unsecure area included in the memory;
based on receiving the start signal, the third processor identifying the one of the plurality of artificial intelligence models based on a resolution of an input content; and
providing, by the third processor, the address information of the one of a plurality of artificial intelligence models to the neural processing unit so that the neural processing unit performs neural network processing by accessing the secure area based on the address information.
11. The method according to claim 10, wherein the method further comprises:
transmitting, by the central processing unit, an initializing signal to the neural processing unit to perform the neural network processing on the input content based on address information of the input content provided from the third processor and artificial intelligence model information.
12. The method according to claim 11, wherein the method further comprises:
obtaining, by the neural processing unit, the artificial intelligence model information based on the address information of the artificial intelligence model provided by the third processor.
13. The method according to claim 11, the method further comprising:
sequentially receiving a plurality of frames included in the input content and stored in the secure area of the memory; and
wherein the transmitting the start signal to the third processor comprises transmitting the start signal to the third processor to cause the third processor to provide the address information of frames sequentially stored in the memory to the neural processing unit at a predetermined time interval.
14. The method according to claim 11, wherein the transmitting the start signal to the third processor comprises transmitting the start signal to the third processor to cause the third processor to provide information on the input content to the neural processing unit based on a first application being executed.
15. The method according to claim 14, the method further comprising:
transmitting, by the central processing unit, an end signal to the third processor to cause the third processor to stop providing operations based on the first application being terminated;
terminating the neural network processing by the neural processing unit under control of the third processor; and
providing, by the third processor, a signal indicating that the neural network processing is terminated to the central processing unit.
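Claims 13 through 15 describe a start/end signalling lifecycle: frame addresses are delivered to the neural processing unit in sequence after a start signal, and an end signal stops delivery and is acknowledged back to the central processing unit. A minimal sketch of that lifecycle, with hypothetical names and a simple in-process loop standing in for the hardware signalling:

```python
class Pipeline:
    """Toy model of the claimed start/end signal handling."""

    def __init__(self):
        self.running = False
        self.delivered = []   # frame addresses handed to the NPU, in order
        self.ack = None       # signal returned to the CPU on termination

    def start(self, frame_addrs):
        # Start signal: deliver the address of each sequentially stored
        # frame until an end signal arrives.
        self.running = True
        for addr in frame_addrs:
            if not self.running:
                break
            self.delivered.append(addr)

    def end(self):
        # End signal: stop delivery and acknowledge that processing stopped.
        self.running = False
        self.ack = "processing-stopped"

p = Pipeline()
p.start([0x100, 0x104, 0x108])
p.end()
```

In the claims, the acknowledgement corresponds to the signal indicating that the neural network processing is terminated, provided by the third processor to the central processing unit.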
US16/906,130 2019-08-13 2020-06-19 Processor chip and control methods thereof Active US11842265B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/906,130 US11842265B2 (en) 2019-08-13 2020-06-19 Processor chip and control methods thereof

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR1020190099124A KR102147912B1 (en) 2019-08-13 2019-08-13 Processor chip and control methods thereof
KR10-2019-0099124 2019-08-13
US16/747,989 US11681904B2 (en) 2019-08-13 2020-01-21 Processor chip and control methods thereof
US16/906,130 US11842265B2 (en) 2019-08-13 2020-06-19 Processor chip and control methods thereof

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/747,989 Continuation US11681904B2 (en) 2019-08-13 2020-01-21 Processor chip and control methods thereof

Publications (2)

Publication Number Publication Date
US20210049450A1 US20210049450A1 (en) 2021-02-18
US11842265B2 true US11842265B2 (en) 2023-12-12

Family

ID=69411327

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/747,989 Active 2041-02-07 US11681904B2 (en) 2019-08-13 2020-01-21 Processor chip and control methods thereof
US16/906,130 Active US11842265B2 (en) 2019-08-13 2020-06-19 Processor chip and control methods thereof

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/747,989 Active 2041-02-07 US11681904B2 (en) 2019-08-13 2020-01-21 Processor chip and control methods thereof

Country Status (6)

Country Link
US (2) US11681904B2 (en)
EP (1) EP3779761A1 (en)
JP (1) JP7164561B2 (en)
KR (1) KR102147912B1 (en)
CN (1) CN112396168A (en)
WO (1) WO2021029504A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021079257A (en) * 2021-03-04 2021-05-27 株式会社三洋物産 Game machine
KR20230069729A (en) * 2021-11-12 2023-05-19 삼성전자주식회사 Display apparatus and operating method thereof
CN116091293B (en) * 2022-09-13 2023-10-31 北京理工大学 Microminiature intelligent missile-borne computer architecture


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180075913A (en) * 2016-12-27 2018-07-05 삼성전자주식회사 A method for input processing using neural network calculator and an apparatus thereof
EP3563304B1 (en) * 2016-12-30 2022-03-09 Intel Corporation Deep learning hardware
US20180253636A1 (en) * 2017-03-06 2018-09-06 Samsung Electronics Co., Ltd. Neural network apparatus, neural network processor, and method of operating neural network processor
CN107885509A (en) * 2017-10-26 2018-04-06 Hangzhou Nationalchip Science & Technology Co., Ltd. A security-based neural network accelerator chip architecture
KR20190051697A (en) * 2017-11-07 2019-05-15 삼성전자주식회사 Method and apparatus for performing devonvolution operation in neural network
EP3707572B1 (en) * 2017-11-10 2023-08-23 Nvidia Corporation Systems and methods for safe and reliable autonomous vehicles

Patent Citations (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4697212A (en) 1983-06-18 1987-09-29 Sony Corporation Method and apparatus for recording a digital information signal
JP2001188767A (en) 1999-12-28 2001-07-10 Fuji Xerox Co Ltd Neural network arithmetic unit and method
US6654730B1 (en) 1999-12-28 2003-11-25 Fuji Xerox Co., Ltd. Neural network arithmetic apparatus and neural network operation method
US7895587B2 (en) 2000-02-17 2011-02-22 Elbrus International Single-chip multiprocessor with clock cycle-precise program scheduling of parallel execution
US20070006193A1 (en) * 2000-02-17 2007-01-04 Elbrus International Single-chip multiprocessor with clock cycle-precise program scheduling of parallel execution
US20050125369A1 (en) 2003-12-09 2005-06-09 Microsoft Corporation System and method for accelerating and optimizing the processing of machine learning techniques using a graphics processing unit
KR20050056124A (en) 2003-12-09 2005-06-14 마이크로소프트 코포레이션 System and method for accelerating and optimizing the processing of machine learning techniques using a graphics processing unit
US7219085B2 (en) 2003-12-09 2007-05-15 Microsoft Corporation System and method for accelerating and optimizing the processing of machine learning techniques using a graphics processing unit
US20090125717A1 (en) 2004-11-12 2009-05-14 Sony Computer Entertainment Inc. Methods and Apparatus for Secure Data Processing and Transmission
US20060112213A1 (en) 2004-11-12 2006-05-25 Masakazu Suzuoki Methods and apparatus for secure data processing and transmission
KR100924043B1 (en) 2004-11-12 2009-10-27 소니 컴퓨터 엔터테인먼트 인코포레이티드 Methods and apparatus for secure data processing and transmission
US8001377B2 (en) 2004-11-12 2011-08-16 Sony Computer Entertainment Inc. Methods and apparatus for secure data processing and transmission
US7502928B2 (en) 2004-11-12 2009-03-10 Sony Computer Entertainment Inc. Methods and apparatus for secure data processing and transmission
KR20080060649A (en) 2006-12-27 2008-07-02 엘지전자 주식회사 Apparatus and method for data processing
US9076001B1 (en) * 2012-02-06 2015-07-07 Marvell International Ltd. Method and apparatus for implementing a secure content pipeline
US20180174268A1 (en) 2012-08-23 2018-06-21 Microsoft Technology Licensing, Llc Direct communication between gpu and fpga components
US20140101405A1 (en) 2012-10-05 2014-04-10 Advanced Micro Devices, Inc. Reducing cold tlb misses in a heterogeneous computing system
JP2015530683A (en) 2012-10-05 2015-10-15 Advanced Micro Devices, Inc. Reducing cold translation index buffer misses in heterogeneous computing systems
US20140282508A1 (en) * 2013-03-14 2014-09-18 Qualcomm Incorporated Systems and methods of executing multiple hypervisors
US9606818B2 (en) 2013-03-14 2017-03-28 Qualcomm Incorporated Systems and methods of executing multiple hypervisors using multiple sets of processors
US10223635B2 (en) 2015-01-22 2019-03-05 Qualcomm Incorporated Model compression and fine-tuning
KR20170106338A (en) 2015-01-22 2017-09-20 퀄컴 인코포레이티드 Model compression and fine-tuning
US20160321784A1 (en) 2015-04-28 2016-11-03 Qualcomm Incorporated Reducing image resolution in deep convolutional networks
JP2018523182A (en) 2015-04-28 2018-08-16 クゥアルコム・インコーポレイテッドQualcomm Incorporated Reducing image resolution in deep convolutional networks
US20160379004A1 (en) * 2015-06-29 2016-12-29 Samsung Electronics Co., Ltd. Semiconductor device
US10657274B2 (en) 2015-06-29 2020-05-19 Samsung Electronics Co., Ltd. Semiconductor device including memory protector
JP6629678B2 (en) * 2016-06-16 2020-01-15 株式会社日立製作所 Machine learning device
US20190251471A1 (en) * 2016-06-16 2019-08-15 Hitachi, Ltd. Machine learning device
US10755126B2 (en) 2016-11-07 2020-08-25 Samsung Electronics Co., Ltd. Convolutional neural network processing method and apparatus
US11508146B2 (en) 2016-11-07 2022-11-22 Samsung Electronics Co., Ltd. Convolutional neural network processing method and apparatus
US20200372276A1 (en) 2016-11-07 2020-11-26 Samsung Electronics Co., Ltd. Convolutional neural network processing method and apparatus
US20180129893A1 (en) 2016-11-07 2018-05-10 Samsung Electronics Co., Ltd. Convolutional neural network processing method and apparatus
JP2018152639A (en) 2017-03-10 2018-09-27 株式会社半導体エネルギー研究所 Semiconductor device and display system
US20180285715A1 (en) 2017-03-28 2018-10-04 Samsung Electronics Co., Ltd. Convolutional neural network (cnn) processing method and apparatus
US11514290B2 (en) 2017-03-28 2022-11-29 Samsung Electronics Co., Ltd. Convolutional neural network (CNN) processing method and apparatus
US20200242734A1 (en) * 2017-04-07 2020-07-30 Intel Corporation Methods and systems using improved convolutional neural networks for images processing
EP3389005A1 (en) 2017-04-10 2018-10-17 INTEL Corporation Abstraction library to enable scalable distributed machine learning
US10489887B2 (en) 2017-04-10 2019-11-26 Samsung Electronics Co., Ltd. System and method for deep learning image super resolution
US20180293707A1 (en) 2017-04-10 2018-10-11 Samsung Electronics Co., Ltd. System and method for deep learning image super resolution
KR20180114488A (en) 2017-04-10 2018-10-18 삼성전자주식회사 System and method for deep learning image super resolution
US20200090305A1 (en) 2017-04-10 2020-03-19 Samsung Electronics Co., Ltd. System and method for deep learning image super resolution
US11055816B2 (en) * 2017-06-05 2021-07-06 Rakuten, Inc. Image processing device, image processing method, and image processing program
US20190080239A1 (en) 2017-09-13 2019-03-14 Samsung Electronics Co., Ltd. Neural network system for reshaping a neural network model, application processor including the same, and method of operating the same
KR20190030034A (en) 2017-09-13 2019-03-21 삼성전자주식회사 Neural network system reshaping neural network model, Application processor having the same and Operating method of neural network system
JP2019053734A (en) 2017-09-14 2019-04-04 Samsung Electronics Co., Ltd. Heterogeneous accelerator for highly efficient learning system
US10474600B2 (en) 2017-09-14 2019-11-12 Samsung Electronics Co., Ltd. Heterogeneous accelerator for highly efficient learning systems
KR20190043419A (en) 2017-10-18 2019-04-26 삼성전자주식회사 Method of controlling computing operations based on early-stop in deep neural network
US20190114541A1 (en) 2017-10-18 2019-04-18 Samsung Electronics Co., Ltd. Method and system of controlling computing operations based on early-stop in deep neural network
US20200104722A1 (en) 2017-11-20 2020-04-02 Shanghai Cambricon Information Technology Co., Ltd Task parallel processing method, apparatus and system, storage medium and computer device
US11113104B2 (en) 2017-11-20 2021-09-07 Shanghai Cambricon Information Technology Co., Ltd Task parallel processing method, apparatus and system, storage medium and computer device
US10936942B2 (en) 2017-11-21 2021-03-02 Google Llc Apparatus and mechanism for processing neural network tasks using a single chip package with multiple identical dies
US20190156187A1 (en) 2017-11-21 2019-05-23 Google Llc Apparatus and mechanism for processing neural network tasks using a single chip package with multiple identical dies
CN108255773A (en) 2017-12-07 2018-07-06 AVIC Xi'an Aeronautics Computing Technique Research Institute An intelligent-computing heterogeneous multi-core processing method and platform
US20190180177A1 (en) 2017-12-08 2019-06-13 Samsung Electronics Co., Ltd. Method and apparatus for generating fixed point neural network
US20190188917A1 (en) * 2017-12-20 2019-06-20 Eaton Intelligent Power Limited Lighting And Internet Of Things Design Using Augmented Reality
US20190206026A1 (en) 2018-01-02 2019-07-04 Google Llc Frame-Recurrent Video Super-Resolution
US20210165883A1 (en) 2018-08-14 2021-06-03 Huawei Technologies Co., Ltd. Artificial intelligence ai processing method and ai processing apparatus
US20200082279A1 (en) * 2018-09-11 2020-03-12 Synaptics Incorporated Neural network inferencing on protected data
CN109657788A (en) 2018-12-18 2019-04-19 Beijing Zhongke Cambricon Technology Co., Ltd. Data processing method, device and related products
US20210380127A1 (en) 2018-12-27 2021-12-09 Samsung Electronics Co., Ltd. Electronic device and control method therefor
KR20200084949A (en) 2018-12-27 2020-07-14 삼성전자주식회사 Electronic device and control method thereof
US20200379923A1 (en) * 2019-05-30 2020-12-03 Synaptics Incorporated Granular access control for secure memory
US10713143B1 (en) * 2019-06-24 2020-07-14 Accenture Global Solutions Limited Calibratable log projection and error remediation system

Non-Patent Citations (19)

* Cited by examiner, † Cited by third party
Title
European Search Report dated Nov. 9, 2021 for EP Application No. 20154613.2.
Extended European Search Report dated Aug. 7, 2020 for EP Application No. 20154613.2.
Ignatov et al., "AI Benchmark: Running Deep Neural Networks on Android Smartphones", Jan. 23, 2019, ROBOCUP 2008, pp. 288-314.
Japanese Office Action dated Jul. 6, 2021 for JP Application No. 2020-066302.
Japanese Office Action dated Mar. 8, 2022 for JP Application No. 2020-066302.
JP Notice of Allowance dated Oct. 11, 2022 for JP 2020-066302.
Kanonov ("Secure Containers in Android: the Samsung Knox Case Study") arXiv:1605.08567v1 [cs.CR] May 27, 2016 (Year: 2016). *
Lapid ("Navigating the Samsung TrustZone and Cache-Attacks on the Keymaster Trustlet") Computer Security 23rd European Symposium on Research in Computer Security, ESORICS 2018, Barcelona, Spain, Sep. 3-7, 2018 (Year: 2018). *
Non-Final Office Action dated May 11, 2022 for U.S. Appl. No. 16/747,989; Tai et al.
Notice of Allowance dated Nov. 1, 2022 for U.S. Appl. No. 16/747,989; Tai et al.
Notice of Allowance for U.S. Appl. No. 16/747,989 dated Feb. 15, 2023; Tai et al.
Office Action with English translation for KR10-2019-0099124, dated Jan. 2, 2020, 8 pages.
PCT Search Report dated May 13, 2020 for International Application No. PCT/KR2020/001010 filed Jan. 21, 2020.
PCT Written Opinion dated May 13, 2020 for International Application No. PCT/KR2020/001010 filed Jan. 21, 2020.
U.S. Appl. No. 16/747,989, filed Jan. 21, 2020; Tai et al.
Wan ("Satin: A Secure and Trustworthy Asynchronous Introspection on Multi-Core ARM Processors") 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) (Year: 2019). *
Wan2 ("Remotely Controlling TrustZone Applications? A Study on Securely and Resiliently Receiving Remote Commands") WiSec '21, Jun. 28-Jul. 2, 2021, Abu Dhabi, United Arab Emirates (Year: 2021). *
Wang, Software Support for Heterogeneous Computing, Aug. 9, 2018, 2018 IEEE Computer Society Annual Symposium on VLSI (Year: 2018). *
Yoo, 1.2 Intelligence on Silicon: From Deep-Neural-Network Accelerators to Brain Mimicking Al-SoCs, Mar. 7, 2019, 2019 IEEE International Solid-State Circuits Conference—(ISSCC) (Year: 2019). *

Also Published As

Publication number Publication date
EP3779761A1 (en) 2021-02-17
JP7164561B2 (en) 2022-11-01
WO2021029504A1 (en) 2021-02-18
US20210049449A1 (en) 2021-02-18
US11681904B2 (en) 2023-06-20
JP2021034008A (en) 2021-03-01
CN112396168A (en) 2021-02-23
US20210049450A1 (en) 2021-02-18
KR102147912B1 (en) 2020-08-25

Similar Documents

Publication Publication Date Title
US11842265B2 (en) Processor chip and control methods thereof
KR102194795B1 (en) Electronic device and method for contolling power
US11763440B2 (en) Electronic apparatus and control method thereof
CN109151966B (en) Terminal control method, terminal control device, terminal equipment and storage medium
EP3907695A1 (en) Electronic apparatus and control method thereof
TWI673677B (en) Semiconductor device
US20220147298A1 (en) Home appliance and control method thereof
US20150294646A1 (en) Display apparatus and method for displaying screen images from multiple electronic devices
US20120011468A1 (en) Information processing apparatus and method of controlling a display position of a user interface element
JP2016532208A (en) Improved power control technology for integrated PCIe controllers
US20180192107A1 (en) Display apparatus and control method thereof
TW202040411A (en) Methods and apparatus for standardized apis for split rendering
US20150317185A1 (en) Method for switching operating system and electronic device using the method
US11159838B2 (en) Electronic apparatus, control method thereof and electronic system
US20220414828A1 (en) Electronic apparatus, control method thereof and electronic system
WO2021102772A1 (en) Methods and apparatus to smooth edge portions of an irregularly-shaped display
US11315223B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
US20210227288A1 (en) Source apparatus and control method therefor
US20190362466A1 (en) Content adaptive rendering
US11423600B2 (en) Methods and apparatus for configuring a texture filter pipeline for deep learning operation
US10755666B2 (en) Content refresh on a display with hybrid refresh mode
WO2021087826A1 (en) Methods and apparatus to improve image data transfer efficiency for portable devices
US20200178067A1 (en) Electronic device and control method thereof
CN117496866A (en) Thin film transistor TFT screen driving system, method and display device
CN111930219A (en) Scalable display method for mobile device, and storage medium

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE