WO2021016931A1 - An integrated chip and a method for processing sensor data - Google Patents
An integrated chip and a method for processing sensor data
- Publication number
- WO2021016931A1 (PCT/CN2019/098653)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- processor
- sensor
- recognition result
- data
- target data
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/78—Architectures of general purpose stored program computers comprising a single central processing unit
- G06F15/7807—System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Definitions
- This application relates to data processing in the field of artificial intelligence, and more specifically, to an integrated chip and a method for processing sensor data.
- An electronic device usually includes a low-power real-time response processor, which can periodically drive a sensor on the electronic device to perceive external information, and can respond in real time to the external information acquired by the sensor.
- For example, the electronic device is equipped with a microphone for acquiring external sounds, and the real-time response processor can periodically drive the microphone and perform target data extraction on the acquired audio information to determine whether a wake-up word is present.
- When a wake-up word is detected, a module in the dormant state can be awakened.
- For example, the real-time response processor can wake up the voice command module in the electronic device, so that the electronic device can respond to the user's voice command and perform the corresponding operation.
- For another example, the real-time response processor can wake up the image display module in the electronic device and light up the screen.
- However, the tasks that the real-time response processor can respond to are usually relatively simple; it cannot handle complex tasks and therefore cannot meet users' needs.
- In view of this, the present application provides an integrated chip and a method for processing sensor data, with the purpose of recognizing complex scenes and processing complex tasks while still responding to external sensors in real time.
- According to a first aspect, an integrated chip is provided, comprising: a first processor, configured to obtain first sensor data from a first external sensor and extract first target data from the first sensor data, where the first processor is a real-time response processor; and an accelerator, configured to recognize the first target data according to a first neural network model to obtain a first recognition result, where the first recognition result is used to determine the target operation corresponding to the first recognition result.
- The integrated chip can also be called a system on a chip (SOC), or a part of a system on a chip.
- The term "target data extraction" in this application can be interpreted as extracting part of the information from the information obtained by an external sensor.
- The extracted information can be considered equivalent to "features".
- The extracted information can be used for further processing. In other words, part of the data can be excluded from the information obtained by the external sensor, and the retained data can be further identified.
- One way is to extract target data according to preset rules, keeping the data that meets the preset rules. When there is no preset rule, all data obtained by the external sensor can be sent to the accelerator, as sketched below.
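- As an editorial illustration only (not the patent's implementation), the following minimal Python sketch models a preset rule as a predicate over units of sensor data; the `Rule` type, the data shapes, and the `is_in_voice_band` helper named in the usage comment are all assumptions.

```python
# Sketch of rule-based target data extraction (illustration only).
# A preset rule is modeled as a predicate over one unit of sensor data;
# when no rule is configured, all sensor data is forwarded to the accelerator.
from typing import Callable, Iterable, List, Optional

Rule = Callable[[bytes], bool]  # assumed: True if a data unit meets the preset rule

def extract_target_data(sensor_data: Iterable[bytes],
                        preset_rule: Optional[Rule] = None) -> List[bytes]:
    if preset_rule is None:
        # No preset rule: send all data obtained by the external sensor.
        return list(sensor_data)
    # Keep only the data units that meet the preset rule; exclude the rest.
    return [unit for unit in sensor_data if preset_rule(unit)]

# Hypothetical usage: keep only audio frames inside a specific frequency band.
# target = extract_target_data(audio_frames, preset_rule=is_in_voice_band)
```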
- a neural network model is a mathematical model that includes a neural network.
- the neural network model can be stored on the storage medium in the form of algorithms, codes, etc.
- The neural network model executed by the accelerator may be a simplified model. For example, the model can be simplified by removing a part of the layers, such as convolutional layers, pooling layers, or other neural network layers, from the trained neural network model; for another example, the number of neurons in each layer of the neural network model can be reduced.
- The simplified neural network model needs to be retrained, as in the sketch below.
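- A hedged sketch of the simplification described above follows, assuming a PyTorch model and illustrative layer sizes; the patent does not specify a framework or an architecture.

```python
# Sketch: simplify a trained model by removing layers and reducing the number
# of neurons per layer, then retrain it. Architecture and sizes are assumed,
# for illustration only; inputs are taken to be 3x32x32 images.
import torch
import torch.nn as nn

trained = nn.Sequential(                         # stands in for the trained model
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),  # a layer that will be removed
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

simplified = nn.Sequential(                      # fewer layers, fewer neurons
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# The simplified model must be retrained before deployment on the accelerator.
optimizer = torch.optim.SGD(simplified.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
# for inputs, labels in train_loader:            # retraining loop, sketched only
#     optimizer.zero_grad()
#     loss = loss_fn(simplified(inputs), labels)
#     loss.backward()
#     optimizer.step()
```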
- In one case, the target operation may be to maintain the current state of the electronic device.
- In another case, the target operation may be to wake up another processor inside the electronic device, switching that processor from the sleep state to the working state; in yet another case, the target operation may be to send the first target data, the first recognition result, an instruction, or other information to another processor.
- The real-time response processor can respond to data from external sensors in real time, and the neural network model can respond at high frequency, or even in real time, to the data sent by the real-time response processor without significantly increasing power consumption. The integrated chip can therefore identify complex scenes and handle complex tasks at high frequency.
- the first processor is further configured to determine the target operation according to the first recognition result.
- The first processor, which is capable of responding to the external sensor in real time, is also used to respond to the accelerator, which avoids using an excessive number of processors and simplifies the chip hardware structure.
- the integrated chip further includes: a second processor, configured to determine the target operation according to the first recognition result.
- The second processor is used to process the calculation results output by the accelerator, which makes it easier to assign specific functions to the hardware architecture and reduces the complexity of the data flow between processors and between the accelerator and the processor.
- The first processor is further configured to notify the second processor to switch from the sleep state to the working state after extracting the first target data.
- The second processor is specifically configured to determine the target operation according to the first recognition result when in the working state.
- the second processor may not be a real-time response processor.
- When needed, the second processor can be awakened, so that the second processor is in the working state and responds to the recognition result sent by the accelerator. Since the second processor does not need to be online in real time, this helps to save energy.
- The first processor is further configured to determine a third recognition result according to the first sensor data, and the first recognition result and the third recognition result are used to jointly determine the target operation.
- In this way, simple tasks can be handed over to the first processor, and complex tasks can be handed over to the accelerator. Because simple tasks involve a small amount of calculation, they need not occupy the computing resources of the accelerator, which improves the accelerator's utilization. Moreover, the tasks originally processed by the first processor are still handed to the first processor rather than to other processors or the accelerator, which facilitates compatibility with traditional real-time response processors.
- The first processor is further configured to determine a third recognition result based on the first sensor data; the second processor is specifically configured to determine the target operation based on the first recognition result and the third recognition result.
- The second processor is used to process the calculation results output by the first processor and the accelerator, which avoids data backflow, makes it easier to assign specific functions to the hardware architecture, and reduces the complexity of the data flow between processors and between the accelerator and the processors.
- The first processor is further configured to obtain second sensor data from a second external sensor and extract second target data from the second sensor data; the accelerator is further configured to recognize the second target data according to a second neural network model to obtain a second recognition result, and the second recognition result and the first recognition result are used to jointly determine the target operation.
- The accelerator can process data from two different external sensors, so it can recognize a variety of perception data, which improves the electronic device's ability to recognize complex scenes.
- the accelerator is also used to identify the first target data and the second target data in a time-sharing manner.
- Because the accelerator takes a very short time to compute on data, it can be preset that the accelerator processes only one task per unit time. In other words, there is no need to add processing units or storage units to the accelerator, and the recognition results corresponding to different sensor data can still be obtained in time.
- The integrated chip further includes: a controller, configured to determine a first priority corresponding to the first target data and a second priority corresponding to the second target data; the accelerator is specifically configured to identify the first target data and the second target data in a time-sharing manner according to the first priority and the second priority.
- In other words, the integrated chip also includes a controller for scheduling data streams.
- The controller can send the processing sequence of the data streams to the accelerator.
- The accelerator can then process the data in sequence according to the priorities determined by the controller, which helps ensure that important data is processed early and avoids congestion in the data stream; a scheduling sketch follows below.
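- The following Python sketch illustrates this priority-based, time-shared scheduling; the `Controller` class, the priority values, and the string payloads are assumptions for illustration, not the patent's design.

```python
# Sketch: the controller queues target data with a priority, and the
# accelerator is handed one task per unit time in priority order.
import heapq
import itertools

class Controller:
    def __init__(self):
        self._queue = []                    # min-heap: lower value = more urgent
        self._order = itertools.count()     # tie-breaker preserves FIFO order

    def submit(self, priority: int, target_data):
        heapq.heappush(self._queue, (priority, next(self._order), target_data))

    def next_task(self):
        """Hand the accelerator the most urgent pending target data."""
        if not self._queue:
            return None
        _, _, data = heapq.heappop(self._queue)
        return data

ctrl = Controller()
ctrl.submit(priority=2, target_data="audio target data")
ctrl.submit(priority=1, target_data="image target data")  # more urgent
assert ctrl.next_task() == "image target data"            # processed first
assert ctrl.next_task() == "audio target data"
```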
- The integrated chip further includes: a controller, configured to determine a first priority corresponding to the first target data and a second priority corresponding to the second target data, and to control, according to the first priority and the second priority, the first processor to send the first target data and the second target data to the accelerator in a time-sharing manner, so that the accelerator recognizes the first target data and the second target data in a time-sharing manner.
- the integrated chip further includes a controller for scheduling data flow.
- The controller may control the first processor to send the accelerator only the data that the accelerator currently needs to process.
- The accelerator therefore does not need much memory to store the data to be processed.
- Since the first processor's ability to process data is weaker than the accelerator's, the first processor requires relatively little memory for its own processing. The spare memory can therefore be used to store data that the accelerator has not yet processed, which makes full use of the processing units and storage units.
- the integrated chip further includes: a third processor, configured to switch from a sleep state to a working state in response to the target operation.
- the third processor may not be a real-time response processor, and the third processor may not need to be in a real-time online state, which helps to save energy.
- the parameters in the first neural network model are updated through a network.
- updating the parameters of the first neural network model may be a method of upgrading the accelerator.
- This method of upgrading the accelerator occupies fewer data resources, so it is simpler, easier to execute, and more flexible; a sketch of such an update follows below.
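- A hedged sketch of such a network update follows; the URL, the checksum handling, and the `accelerator.load_weights` driver call are hypothetical, since the patent does not specify the transport or the format.

```python
# Sketch: upgrade the accelerator by downloading only new model parameters;
# the network topology on the accelerator is left unchanged.
import hashlib
import urllib.request

WEIGHTS_URL = "https://example.com/model/v2/weights.bin"  # hypothetical endpoint
EXPECTED_SHA256 = "..."                                   # published with the file

def fetch_updated_parameters(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        blob = resp.read()
    if hashlib.sha256(blob).hexdigest() != EXPECTED_SHA256:
        raise ValueError("parameter download failed integrity check")
    return blob

# blob = fetch_updated_parameters(WEIGHTS_URL)
# accelerator.load_weights(blob)   # hypothetical driver call that replaces the
#                                  # parameters without changing the topology
```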
- the first external sensor includes one of a camera, a microphone, a motion sensor, a distance sensor, an ambient light sensor, a magnetic field sensor, a fingerprint sensor, or a temperature sensor.
- an accelerator can be used to process any sensor data, which is conducive to identifying complex scenes and processing complex tasks.
- According to a second aspect, an electronic device is provided, including an integrated chip as in the first aspect or any possible implementation of the first aspect.
- According to a third aspect, a method for processing sensor data is provided, including: acquiring first sensor data from a first external sensor in real time, and extracting first target data from the first sensor data; and recognizing the first target data according to a first neural network model to obtain a first recognition result, where the first recognition result is used to determine a target operation corresponding to the first recognition result.
- the method further includes: determining the target operation according to the first recognition result.
- the method further includes: acquiring second sensor data from a second external sensor in real time, and extracting second target data from the second sensor data;
- the second target data is recognized according to a second neural network model to obtain a second recognition result, and the second recognition result and the first recognition result are used to jointly determine the target operation.
- the method further includes: executing the target operation.
- the parameters in the first neural network model are updated through a network.
- the first external sensor includes one of a camera, a microphone, a motion sensor, a distance sensor, an ambient light sensor, a magnetic field sensor, a fingerprint sensor, or a temperature sensor.
- According to a fourth aspect, an apparatus for processing sensor data is provided, including modules for executing the method in the third aspect or any possible implementation of the third aspect.
- FIG. 1 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of interaction between an integrated chip and an external sensor provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of the hardware structure of an accelerator provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of interaction between an integrated chip and an external sensor according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of interaction between an integrated chip and an external sensor provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of interaction between an integrated chip and an external sensor provided by an embodiment of the present application.
- FIG. 7 is a schematic diagram of interaction between an integrated chip and an external sensor according to an embodiment of the present application.
- FIG. 8 is a schematic flowchart of a method for processing sensor data provided by an embodiment of the present application.
- References in this specification to "one embodiment" or "some embodiments" and the like mean that one or more embodiments of the present application include a specific feature, structure, or characteristic described in connection with the embodiment. Therefore, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", etc. appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless it is specifically emphasized otherwise.
- The terms "comprising", "including", "having" and their variations all mean "including but not limited to" unless otherwise specifically emphasized.
- The electronic device may be a portable electronic device that also contains other functions, such as a personal digital assistant function and/or a music player function, for example a mobile phone, a tablet computer, or a wearable electronic device with a wireless communication function (such as a smart watch).
- Exemplary portable electronic devices include, but are not limited to, portable electronic devices carrying … or other operating systems.
- The aforementioned portable electronic device may also be another portable electronic device, such as a laptop computer. It should also be understood that, in some other embodiments, the electronic device may not be a portable electronic device but a desktop computer.
- FIG. 1 shows a schematic structural diagram of an electronic device 100.
- The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
- the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
- the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
- the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
- the processor 110 may include one or more processing units.
- The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, etc.
- different processing units may be independent components or integrated in one or more processors.
- The electronic device 100 may also include one or more processors 110.
- the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
- a memory may be provided in the processor 110 to store instructions and data.
- the memory in the processor 110 may be a cache memory.
- The memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves the efficiency of the electronic device 100 in processing data or executing instructions.
- the processor 110 may include one or more interfaces.
- The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, etc.
- the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
- The USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices.
- the USB interface 130 can also be used to connect headphones and play audio through the headphones.
- the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
- the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
- the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
- the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
- the internal memory 121 may be used to store one or more computer programs, and the one or more computer programs include instructions.
- The processor 110 can execute the above-mentioned instructions stored in the internal memory 121 to enable the electronic device 100 to execute the off-screen display method provided in some embodiments of the present application, as well as various applications and data processing.
- the internal memory 121 may include a storage program area and a storage data area. Among them, the storage program area can store the operating system; the storage program area can also store one or more applications (such as photo galleries, contacts, etc.).
- The data storage area can store data (such as photos, contacts, etc.) created during the use of the electronic device 100.
- the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more disk storage components, flash memory components, universal flash storage (UFS), etc.
- The processor 110 may execute the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor 110 to cause the electronic device 100 to execute the off-screen display method provided in the embodiments of the present application, as well as other applications and data processing.
- the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
- the charging management module 140 is used to receive charging input from the charger.
- the charger can be a wireless charger or a wired charger.
- the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
- the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
- the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
- the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
- the power management module 141 may also be provided in the processor 110.
- the power management module 141 and the charging management module 140 may also be provided in the same device.
- the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
- the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
- Antenna 1 can be multiplexed as a diversity antenna for wireless LAN.
- the antenna can be used in combination with a tuning switch.
- the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
- the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
- the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
- at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
- at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
- The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
- the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
- the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
- the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
- the display screen 194 is used to display images, videos, etc.
- the display screen 194 includes a display panel.
- The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
- the electronic device 100 may include one or more display screens 194.
- the display screen 194 of the electronic device 100 may be a flexible screen.
- the flexible screen has attracted much attention for its unique characteristics and great potential.
- flexible screens have the characteristics of strong flexibility and bendability, and can provide users with new interactive methods based on bendable characteristics, which can meet more users' needs for electronic devices.
- the foldable display screen on the electronic device can be switched between a small screen in a folded configuration and a large screen in an unfolded configuration at any time. Therefore, users use the split screen function on electronic devices equipped with foldable display screens more and more frequently.
- The internal sensors of the electronic device can include the camera 193, the microphone 170C, the pressure sensor 180A, the gyroscope sensor 180B, the air pressure sensor 180C, the magnetic sensor 180D, the acceleration sensor 180E, the distance sensor 180F, the proximity light sensor 180G, the fingerprint sensor 180H, the temperature sensor 180J, the touch sensor 180K, the ambient light sensor 180L, the bone conduction sensor 180M, etc.
- the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
- the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
- ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
- the ISP may be provided in the camera 193.
- the camera 193 is used to capture still images or videos.
- the object generates an optical image through the lens and projects it to the photosensitive element.
- The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
- ISP outputs digital image signals to DSP for processing.
- The DSP converts the digital image signal into an image signal in a standard format such as RGB (where R is red, G is green, and B is blue) or YUV (where Y is the luminance component and U and V are the chrominance components, U representing blue and V representing red).
- the electronic device 100 may include one or more cameras 193.
- Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
- Video codecs are used to compress or decompress digital video.
- the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
- the microphone 170C is used to capture audio.
- the microphone 170C can convert the captured sound signal into an electric signal.
- the microphone 170C can be used for functions such as calling and recording.
- the microphone 170C can also be used to capture the user's voice instructions.
- the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
- the pressure sensor 180A may be provided on the display screen 194.
- the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
- the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
- the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
- the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
- Touch operations that act on the same touch position but have different touch intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed. A sketch of this dispatch follows below.
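- A minimal sketch of this intensity-dependent dispatch follows; the threshold value and the action strings are assumptions for illustration only.

```python
# Sketch: the same touch location triggers different instructions depending
# on the detected touch intensity.
FIRST_PRESSURE_THRESHOLD = 0.5   # hypothetical normalized intensity threshold

def dispatch_touch_on_sms_icon(intensity: float) -> str:
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view short message"          # light touch
    return "create new short message"        # firm touch

assert dispatch_touch_on_sms_icon(0.2) == "view short message"
assert dispatch_touch_on_sms_icon(0.8) == "create new short message"
```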
- the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
- For example, the angular velocity of the electronic device 100 around three axes (i.e., the X, Y, and Z axes) can be determined through the gyro sensor 180B.
- the gyro sensor 180B can be used for image stabilization.
- the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
- the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
- the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and used in applications such as horizontal and vertical screen switching, pedometers, etc.
- the ambient light sensor 180L is used to sense the brightness of the ambient light.
- the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
- the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
- the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
- the fingerprint sensor 180H is used to collect fingerprints.
- the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, etc.
- the temperature sensor 180J is used to detect temperature.
- the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 executes to reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
- the electronic device 100 when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 due to low temperature.
- the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
- The touch sensor 180K is also called a "touch panel".
- the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
- the touch sensor 180K is used to detect touch operations acting on or near it.
- the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
- the visual output related to the touch operation can be provided through the display screen 194.
- the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
- the button 190 includes a power button, a volume button, and so on.
- the button 190 may be a mechanical button. It can also be a touch button.
- the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
- the processor 110 may include a real-time response processor for periodically driving each sensor in the sensor module and processing the acquired information.
- the real-time response processor can drive the camera to take photos at a fixed time.
- the camera sends the captured photos to the real-time response processor through the interface.
- the real-time response processor can judge the changes in the photos from the previous moment to the next moment (such as light and dark changes), and wake up the next-level processor according to the judgment result.
- the real-time response processor can send the acquired photos to a next-level processor (such as a central processing unit (CPU) or GPU) to perform corresponding operations (such as adjusting screen brightness).
- a next-level processor such as a central processing unit (CPU) or GPU
- the real-time response processor can periodically drive the microphone to capture audio.
- the microphone sends the captured audio to the real-time response processor through the interface.
- The real-time response processor can determine whether a sound of a specific frequency (such as a human voice) appears in the audio, and wake up the next-level processor according to the judgment result.
- the real-time response processor can send the acquired audio to the next-level processor (such as the voice recognition module in the CPU) to perform corresponding operations (such as executing the voice instructions contained in the audio).
- the real-time response processor can drive the acceleration sensor regularly.
- the acceleration sensor can determine the position of the electronic device and send the position information to the real-time response processor.
- Based on the position information, the real-time response processor can determine whether the electronic device is in motion, and wake up the next-level processor according to the judgment result.
- the real-time response processor can send the acquired position information to the next-level processor (such as the step-counting module in the CPU) to perform corresponding operations (such as recording the number of steps the user walks).
- The real-time response processor can drive the fingerprint sensor periodically, and the fingerprint sensor can send the detected signal to the real-time response processor, so that the real-time response processor can determine whether a finger is touching the fingerprint sensor and wake up the next-level processor according to the judgment result.
- the real-time response processor can send the acquired fingerprint information to the next-level processor (such as the fingerprint recognition module in the CPU) to perform corresponding operations (such as lighting up the screen).
- The real-time response processor can drive the buttons periodically, and the buttons can send the detected signal to the real-time response processor, so that the real-time response processor can determine whether a button has been pressed by the user and wake up the next-level processor according to the judgment result.
- the real-time response processor can send the acquired key operation information to the next-level processor (such as CPU) to perform corresponding operations (such as lighting up the screen).
- In other words, the tasks that the real-time response processor can handle are relatively simple, and data is transferred to the next-level processor only when a change in information is recognized.
- On the one hand, the real-time response processor's recognition capability and the amount of information it can identify are very limited, so it cannot recognize a complex environment in order to adjust the working state of the electronic device; on the other hand, the real-time response processor may judge part of the acquired information inaccurately, thereby falsely triggering other processors and causing the electronic device to consume power meaninglessly.
- The electronic device also contains processors that can recognize complex environments, but such processors involve a large amount of calculation, and starting them often requires the cooperation of multiple pieces of hardware and multiple software programs, which is not conducive to responding to changes in the external environment in a timely manner.
- Since the real-time response processor has low recognition ability and low power consumption, it usually cannot directly start such processors. In order to further improve the user experience of electronic devices and realize artificial intelligence, this application provides an integrated chip with the purpose of improving the real-time response capability of electronic devices.
- FIG. 2 shows a schematic diagram of the interaction between the integrated chip of the present application and a sensor.
- the integrated chip 200 can be applied in the electronic device 100 shown in FIG. 1.
- the integrated chip 200 can replace the processor 110 in the electronic device 100 shown in FIG. 1.
- The integrated chip 200 includes a first processor 221 for acquiring first sensor data from a first external sensor 211 and extracting first target data from the first sensor data, where the first processor 221 is a real-time response processor.
- The integrated chip 200 further includes: an accelerator 230, configured to recognize the first target data according to the first neural network model to obtain a first recognition result, where the first recognition result is used to determine the target operation corresponding to the first recognition result.
- The integrated chip 200 may also be called a system on a chip (SOC), or a part of a system on a chip.
- FIG. 3 is a schematic structural diagram of an accelerator 230.
- the core part of the accelerator 230 is the arithmetic circuit 303.
- the controller 304 controls the arithmetic circuit 303 to extract data from the memory (weight memory or input memory) and perform calculations.
- The arithmetic circuit 303 includes multiple processing engines (PE).
- the arithmetic circuit 303 is a two-dimensional systolic array.
- the arithmetic circuit 303 may also be a one-dimensional systolic array or other electronic circuits capable of performing mathematical operations such as multiplication and addition.
- the arithmetic circuit 303 is a general-purpose matrix processor.
- the arithmetic circuit fetches the corresponding data of matrix B from the weight memory 302 and caches it on each PE in the arithmetic circuit.
- The arithmetic circuit fetches the data of matrix A from the input memory 301, performs a matrix operation on it with matrix B, and stores the partial or final result of the matrix in the accumulator 308.
- the vector calculation unit 307 can perform further processing on the output of the operation circuit 303, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison, and so on.
- For example, the vector calculation unit 307 can be used for network calculations in the non-convolutional/non-FC layers of the neural network, such as pooling, batch normalization, and local response normalization.
- the vector calculation unit 307 can store the processed output vector to the unified buffer 306.
- the vector calculation unit 307 may apply a nonlinear function to the output of the arithmetic circuit 303, such as a vector of accumulated values, to generate the activation value.
- the vector calculation unit 307 generates a normalized value, a combined value, or both.
- The processed output vector can be used as an activation input to the arithmetic circuit 303, for example for use in a subsequent layer of the neural network. Part or all of the steps of the method provided in this application may be executed by the arithmetic circuit 303 or the vector calculation unit 307; a sketch of this dataflow follows below.
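- The following NumPy sketch models the dataflow just described, not the systolic hardware itself; the shapes and the ReLU post-processing step are illustrative assumptions.

```python
# Sketch: the arithmetic circuit accumulates the matrix product A @ B
# (weights B cached per processing engine), with partial sums living in
# the accumulator; the vector calculation unit then applies a nonlinear
# function to the accumulated values to generate activation values.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))   # input data fetched from the input memory 301
B = rng.standard_normal((8, 3))   # weights fetched from the weight memory 302

acc = np.zeros((4, 3))            # partial/final results kept in accumulator 308
for k in range(A.shape[1]):
    acc += np.outer(A[:, k], B[k, :])   # one rank-1 partial-sum update per step

assert np.allclose(acc, A @ B)    # the accumulation equals the matrix product

activated = np.maximum(acc, 0.0)  # vector unit post-processing, e.g. ReLU
```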
- the unified memory 306 is used to store input data and output data.
- Through the bus interface unit 312, the input data in the cache memory 311 is transferred directly to the input memory 301 and/or the unified memory 306, the weight data in the cache memory 311 is stored into the weight memory 302, and the data in the unified memory 306 is stored into the cache memory 311.
- the bus interface unit (BIU) 312 is configured to implement interaction between the first processor 310 and the instruction fetch memory 309 through a bus.
- An instruction fetch buffer 309 connected to the controller 304 is used to store instructions used by the controller 304.
- The controller 304 is configured to call the instructions cached in the instruction fetch memory 309 to control the working process of the accelerator 230.
- the unified memory 306, the input memory 301, the weight memory 302, the cache memory 311, and the fetch memory 309 are all on-chip memories.
- the first processor 221 may include a plurality of processing units.
- the processing unit may be a logical processing unit or a physical processing unit.
- the first processor 221 may be divided into multiple processing units through an algorithm, or the first processor 221 includes multiple physically detachable processing units. Multiple processing units work together to process data.
- The first external sensor 211 may be, for example, one of the camera 193, the microphone 170C, the pressure sensor 180A, the gyroscope sensor 180B, the air pressure sensor 180C, the magnetic sensor 180D, the acceleration sensor 180E, the distance sensor 180F, the proximity light sensor 180G, the fingerprint sensor 180H, the temperature sensor 180J, the touch sensor 180K, the ambient light sensor 180L, and the bone conduction sensor 180M shown in FIG. 1.
- the first processor 221 may perform target data extraction on sensor data obtained by multiple external sensors.
- the first processor 221 may perform target data extraction on photos captured by a camera and audio captured by a microphone.
- the first processing unit in the first processor 221 may perform target data extraction on the photos captured by the camera, and the second processing unit in the first processor 221 may perform target data extraction on the audio captured by the microphone.
- the first processing unit in the first processor 221 may perform target data extraction on the photos captured by the camera, and may perform target data extraction on the audio captured by the microphone.
- the first processor 221 may only perform target data extraction on sensor data obtained by one external sensor.
- the first processor 221 only performs target data extraction on photos captured by the camera.
- For example, the processing unit A and the processing unit B can simultaneously extract target data from different photos; or, the processing unit A and the processing unit B can simultaneously perform target data extraction on different parts of the same photo.
- the "target data extraction” in this application can be interpreted as extracting part of information from the information obtained by external sensors.
- the information extracted by the first processor 221 may be equivalent to "features".
- the extracted information can be used for further processing.
- the first processor 221 removes a part of the data from the information obtained by the external sensor, and the retained data can be further identified.
- the first processor 221 may perform target data extraction according to preset rules, and extract data that meets the preset rules. When there is no preset rule, the first processor 221 may send all the data obtained by the external sensor to the accelerator 230.
- the extracted information may be pixels that constitute a person.
- the pixels in the image that do not constitute a person can be eliminated.
- the preset rule can be "the pixels that make up the character”.
- For example, the extracted information can be a video containing several frames of images, in which an object moves within those frames and did not move before them. In other words, the images in the video in which no object has moved can be eliminated.
- the preset rule can be "there is a picture of an object in motion".
- the extracted information may be data of a specific frequency band.
- audio data that is not a specific frequency band in the audio can be eliminated.
- the preset rule may be "audio data in a specific frequency band”.
- For example, the extracted information may be the process in which the orientation of the electronic device changes.
- When the orientation of the electronic device does not change, the data captured by the acceleration sensor can be eliminated.
- Correspondingly, the preset rule may be "an orientation signal whose rate of change is greater than a preset value".
- the extracted information may be a captured fingerprint signal.
- When the electronic device does not detect a fingerprint, the data captured by the fingerprint sensor can be eliminated.
- the preset rule may be "the signal value captured when a finger is detected”.
- the extracted information may be the button that was pressed and the order in which the button was pressed.
- When the electronic device does not detect that the user has pressed a button, the data captured by the buttons can be eliminated.
- the preset rule may be "the signal value captured when the button is pressed.”
- The accelerator 230, which may also be referred to as a neural network accelerator 230, is a device or module that has computing capability and can execute a neural network model. In the field of neural networks or data processing, the accelerator 230 is sometimes referred to as a classifier. For example, the accelerator 230 may be a neural-network (NN) computing processor.
- a neural network model is a mathematical model that includes a neural network.
- the neural network model can be stored on the storage medium in the form of algorithms, codes, etc.
- A neural-network processing unit (NPU) is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it rapidly processes input information and can also continuously learn by itself. The NPU enables applications such as intelligent cognition of the electronic device, for example image recognition, face recognition, voice recognition, and text understanding.
- the accelerator 230 may execute one or more neural network models.
- a neural network model corresponds to an external sensor.
- a neural network model can only process data from a certain type of external sensor.
- the accelerator 230 may execute a neural network model for image processing, and the obtained recognition result may be the label of the image and the probability of each label.
- For example, the label of the image can be "man", "woman", "beard", "long hair", "day", "night", "indoor", "outdoor", "meeting room", "in car", etc.
- the label of the image may be a feature of the first target data.
- the accelerator 230 may execute a neural network model for audio processing, and the obtained recognition result may be the content of the voice command and the probability of each voice command content.
- the content of the voice command can be "make a call”, “meeting”, “check weather” and so on.
- the content of the voice instruction may be a feature of the first target data.
- the accelerator 230 may execute a neural network model for orientation information processing, and the obtained recognition result may be the angular range of the electronic device flipped and the probability of the angular range.
- For example, the angular range of the flip of the electronic device may be a clockwise rotation of 360° to 720°.
- the angular range of the flip of the electronic device may be a characteristic of the first target data.
- the accelerator 230 may execute a neural network model for fingerprint matching, and the obtained recognition result may be the fingerprint type and the probability of each fingerprint type.
- the fingerprint type can be thumb fingerprint, ring finger fingerprint, etc.
- the fingerprint type may be a characteristic of the first target data.
- the accelerator 230 may execute a neural network model for touch processing, and the obtained recognition result may be the content of the touch command and the probability of each touch command content.
- the content of the touch command can be "accidental touch", "screen capture", etc.
- the content of the touch command may be a feature of the first target data.
- a neural network model corresponds to multiple external sensors.
- a neural network model can process data from a variety of external sensors.
- a neural network model that can recognize both video information and audio information can be used.
- the accelerator 230 obtains the first recognition result and sends the first recognition result to other processors, so that the other processors can determine the target operation corresponding to the first recognition result.
- the target operation may be to maintain the current state of the electronic device.
- the target operation may be to wake up a third processor inside the electronic device, switching the third processor from the sleep state to the working state; the target operation may further include sending information such as the first target data, the first recognition result, and instructions to the third processor.
- if the third processor being invoked is a real-time response processor, the third processor need not be awakened; the instruction or data can be sent to it directly, and the third processor can respond to the instruction or data in real time.
- the third processor may also be a processor on the integrated chip 200.
- the other processor may determine a final recognition result according to the first recognition result. Assuming that the first recognition result includes feature A, feature B, and feature C, and that the probability of feature A is 50%, the probability of feature B is 30%, and the probability of feature C is 20%, this means that the probability that the first target data belongs to feature A is 50%, the probability that it belongs to feature B is 30%, and the probability that it belongs to feature C is 20%.
- One way to determine the final recognition result is to use the recognition result with the largest probability value as the final recognition result, that is, the first target data belongs to feature A.
- Another way to determine the final recognition result, when feature A, feature B, and feature C are all numerical values, is to compute their weighted average, where the weight is the probability value corresponding to each feature. That is, the first target data belongs to (feature A × 0.5 + feature B × 0.3 + feature C × 0.2)/3.
- Another way to determine the final recognition result is to use multiple features with larger probability values as the final recognition result, for example, using feature A and feature B as the final recognition result.
- the final recognition result can be associated with the target operation through algorithms or code. For example, when the final recognition result is recognition result A, the target operation corresponding to the first recognition result can be determined to be operation B.
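- to make the above strategies concrete, the following is a minimal Python sketch (illustrative only; the function names and the operation table are assumptions, not part of this application) of deriving a final recognition result and looking up the corresponding target operation:

```python
def final_result_argmax(result):
    """Strategy 1: take the feature with the largest probability."""
    return max(result, key=result.get)

def final_result_weighted(result):
    """Strategy 2 (numeric features only): probability-weighted combination,
    divided by the number of features, following the formula given above."""
    return sum(feature * p for feature, p in result.items()) / len(result)

def final_result_top_k(result, k=2):
    """Strategy 3: keep the k features with the largest probabilities."""
    return sorted(result, key=result.get, reverse=True)[:k]

# Hypothetical association table from final recognition result to operation.
OPERATION_TABLE = {
    "indoor": "enable_wifi_bluetooth",
    "meeting": "start_recording_app",
}

first_result = {"indoor": 0.5, "outdoor": 0.3, "in car": 0.2}
final = final_result_argmax(first_result)        # -> "indoor"
target_operation = OPERATION_TABLE.get(final)    # -> "enable_wifi_bluetooth"
```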
- the wireless communication module may be activated to connect to WIFI or Bluetooth.
- the recording application can be started to record the content of the meeting.
- the fall prevention application program may be started.
- the application program corresponding to the ring finger fingerprint may be started.
- when the processor determines that the first target data contains an "accidental touch" command, the key operation can be ignored, and some modules in the electronic device can be switched to the sleep state.
- although the traditional NPU processor can run heavy and complex programs, it requires multi-level programs or multiple hardware wake-ups (for example, it is loaded onto a CPU as a co-processor, with the CPU responsible for scheduling tasks), and it also requires external storage devices such as Double Data Rate (DDR) memory, so it cannot respond in real time, at high frequency, to the data sent by the real-time response processor; that is, the real-time response processor cannot invoke the traditional NPU processor in real time.
- the traditional NPU processor also consumes a lot of power when running heavy and complex programs.
- the computing power of the accelerator 230 may be weaker than that of a traditional NPU processor.
- the neural network model executed by the accelerator 230 may be simplified.
- Reducing the computing power of the processor can be, for example, reducing the computation precision of the processor from 16-bit to 8-bit.
- to simplify the neural network model, for example, some layers, such as convolutional layers, pooling layers, or neural network layers, can be removed from a trained neural network model. As another example, the number of neurons in each layer of the neural network model can be reduced.
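- as an illustration of the two simplification approaches just described, the following sketch assumes PyTorch and a hypothetical trained model; it is not this application's implementation:

```python
import torch.nn as nn

trained = nn.Sequential(                      # stand-in for a trained model
    nn.Conv2d(1, 16, 3), nn.ReLU(),           # indices 0, 1
    nn.MaxPool2d(2),                          # index 2
    nn.Conv2d(16, 32, 3), nn.ReLU(),          # indices 3, 4
    nn.Flatten(),                             # index 5
    nn.Linear(32 * 11 * 11, 128), nn.ReLU(),  # indices 6, 7 (28x28 input)
    nn.Linear(128, 10),                       # index 8
)

# Approach 1: remove layers (here the second conv block, indices 3-4).
# The in-features of the following Linear layer must then be adjusted.
pruned = nn.Sequential(*[m for i, m in enumerate(trained) if i not in (3, 4)])

# Approach 2: reduce the number of neurons per layer (128 -> 32 hidden
# units), with adjacent dimensions adjusted to the shallower feature map.
simplified = nn.Sequential(
    nn.Conv2d(1, 16, 3), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 13 * 13, 32), nn.ReLU(),
    nn.Linear(32, 10),
)
# The simplified model must be retrained before the accelerator executes it.
```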
- the simplified neural network model needs to be retrained.
- the traditional NPU processor needs to be equipped with external memory (such as DDR memory) to provide storage space for data recognition.
- the accelerator 230 may further include a memory inside.
- the memory stores instructions for executing the neural network model; the accelerator 230 may save the first target data sent by the real-time response processor and provide the storage space needed for the accelerator 230 to recognize the first target data.
- the accelerator 230 may include a processor and a memory, or include a processor having a function of storing data.
- the integrated chip 200 also integrates a device or module with a storage function.
- the accelerator 230 includes a cache memory for storing intermediate data generated during the execution of the first neural network model.
- the larger the neural network model that the accelerator 230 can execute, the larger the storage capacity of the memory in the accelerator 230, and frequent use of the accelerator 230 may cause power consumption problems. However, the simpler the neural network model, the less the accuracy of the recognition results generated by the accelerator 230 can be guaranteed.
- the accelerator 230 can be upgraded by updating the neural network model, so that the accelerator 230 can handle more tasks.
- the difference from ordinary processors is that an ordinary processor, which does not use a neural network model, completes the data processing process by executing algorithms or code.
- the room for improving algorithms and code is limited, and updating them is relatively difficult, which is conducive neither to making electronic devices artificially intelligent nor to improving their user experience. Updating the accelerator 230 of the present application can mean adding or deleting neural network models, or updating the parameters in a neural network model. The update method of the accelerator 230 of the present application is therefore relatively simple and convenient to implement.
- the parameters in the first neural network model are updated through the network.
- the parameters in the first neural network model may be, for example, weight parameters, neuron activation/inactivation parameters, and so on.
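- as a concrete illustration (the endpoint URL and payload format below are assumptions, not specified by this application), parameters might be fetched over the network and applied as follows:

```python
import json
import urllib.request
from typing import Optional

UPDATE_URL = "https://example.com/models/first_nn/params.json"  # hypothetical

def fetch_updated_parameters(current_version: int) -> Optional[dict]:
    """Download a parameter payload; return None if already up to date."""
    with urllib.request.urlopen(UPDATE_URL) as resp:
        payload = json.load(resp)
    if payload["version"] <= current_version:
        return None
    return payload["parameters"]   # e.g. weights, activation/deactivation flags

def apply_parameters(model_params: dict, update: dict) -> None:
    """Overwrite the model's parameters in place; the structure is unchanged."""
    model_params.update(update)
```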
- the first processor 221 is further configured to determine a target operation corresponding to the first recognition result according to the first recognition result.
- the real-time-responding first processor 221 processes the first recognition result and determines the target operation in response to the first recognition result.
- the first processor 221 includes a processing unit that can drive an external sensor in real time, a processing unit that can respond to sensor data in real time, and a processing unit that can respond to the accelerator 230 and obtain a recognition result.
- FIG. 2 shows the data flow transmitted by the first external sensor 211, the first processor 221, and the accelerator 230.
- the integrated chip 200 further includes: a second processor 222, configured to determine a target operation corresponding to the first recognition result according to the first recognition result.
- the second processor 222 which is different from the first processor 221, processes the first recognition result, and determines a target operation in response to the first recognition result.
- the second processor 222 includes a processing unit that can respond to the accelerator 230. FIGS. 4-7 show schematic diagrams of the interaction between the integrated chip 200 of the present application and external sensors.
- FIG. 4 shows the data flow transmitted by the first external sensor 211, the first processor 221, the accelerator 230, and the second processor 222.
- the first processor 221 is further configured to notify the second processor 222 to switch from the sleep state to the working state after extracting the first characteristic data; the second processor 222 is specifically configured to determine the target operation according to the first recognition result when in the working state.
- the second processor 222 may not be a real-time response processor. Because the accelerator 230 has high computing power and a fast computing speed, the second processor 222 can be awakened after the first processor 221 completes the extraction of characteristic data, so that the second processor 222 is in the working state and can respond to the recognition results sent by the accelerator 230. Since the second processor 222 need not be online in real time, this helps to save energy.
- the first processor 221 is further configured to obtain second sensor data from the second external sensor 212, and to extract second target data from the second sensor data; the accelerator 230 is also configured to The second target data is recognized according to a second neural network model to obtain a second recognition result, and the second recognition result and the first recognition result are used to jointly determine the target operation.
- the second processor 222 is specifically configured to determine the target operation according to the first recognition result and the second recognition result.
- FIG. 5 shows the data flow transmitted by the first external sensor 211, the second external sensor 212, the first processor 221, the accelerator 230, and the second processor 222. That is, two different external sensors capture the first sensor data and the second sensor data respectively; the first processor 221 extracts the first target data from the first sensor data and extracts the second target data from the second sensor data.
- the accelerator 230 uses the first neural network model to recognize the first target data, and the second neural network model to recognize the second target data.
- there is an association or correspondence among the first neural network model, the type of the first target data, and the first external sensor 211, and likewise among the second neural network model, the type of the second target data, and the second external sensor 212.
- the second processor 222 may determine the target operation according to the first recognition result and the second recognition result.
- the second processor 222 may be a real-time response processor.
- the second processor 222 can respond to the recognition result sent by the accelerator 230 in real time.
- the second processor 222 may be a non-real-time response processor, that is, the second processor 222 includes a working state and a sleep state.
- the first processor 221 may wake up the second processor 222 while sending the first target data to the accelerator 230.
- the accelerator 230 wakes up the second processor 222.
- the integrated chip 200 further includes a controller 240, and the controller 240 wakes up the second processor 222.
- if the second processor 222 does not receive a new recognition result within a period of time, the second processor 222 may switch to the sleep state.
- the second processor 222 may respond to an instruction from the first processor 221 or the controller 240 to switch from the working state to the sleep state.
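- the wake-then-idle-timeout behavior described above can be sketched as follows (a threading-based stand-in for the second processor; the timeout value is an assumption):

```python
import queue
import threading

IDLE_TIMEOUT = 5.0      # assumed idle period before returning to sleep
results = queue.Queue() # recognition results from the accelerator

def second_processor():
    while True:
        try:
            recognition_result = results.get(timeout=IDLE_TIMEOUT)
        except queue.Empty:
            print("no new result: switching to sleep state")
            return  # sleeps until woken again
        print("working state: determining target operation for",
              recognition_result)

def wake_second_processor():
    """Called by the first processor, the accelerator, or the controller."""
    threading.Thread(target=second_processor, daemon=True).start()
```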
- the first external sensor 211 is a camera
- the second external sensor 212 is a microphone.
- the first processor 221 periodically drives the camera to capture image information.
- the first processor 221 periodically drives the microphone to capture audio information.
- the first processor 221 continuously monitors the image information sent by the camera and the audio information sent by the microphone.
- when the first processor 221 detects that a human voice appears in the audio, the first processor 221 intercepts the image information obtained during the period in which the human voice occurs as the first target data, and intercepts the audio information obtained during that period as the second target data.
- the first processor 221 sends the first target data and the second target data to the accelerator 230.
- the accelerator 230 may use the first neural network model to recognize the first target data and obtain the first recognition result.
- the first neural network model may be an image processing neural network model, and the first recognition result may be the image tags "night" and "outdoor".
- the accelerator 230 may use the second neural network model to recognize the second target data and obtain the second recognition result.
- the second neural network model may be a speech recognition neural network model, and the second recognition result may be a "scream" audio tag.
- the accelerator 230 sends the first recognition result including "night" and "outdoor" and the second recognition result including "scream" to the second processor 222.
- the second processor 222 may determine the target operation according to the recognition results "night", "outdoor", and "scream", such as driving the electronic device into an emergency safety mode and reminding the user who is using the electronic device to call the police or ask for help.
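- the joint determination of a target operation from two recognition results can be sketched as a rule lookup (the rule table below is an assumption for illustration, not the application's implementation):

```python
FUSION_RULES = [
    # (labels required across both recognition results, target operation)
    ({"night", "outdoor", "scream"}, "enter_emergency_safety_mode"),
    ({"meeting minutes", "touch area A"}, "enter_meeting_mode"),
]

def determine_target_operation(first_result, second_result):
    labels = set(first_result) | set(second_result)
    for required, operation in FUSION_RULES:
        if required <= labels:          # all required labels present
            return operation
    return "maintain_current_state"

op = determine_target_operation({"night", "outdoor"}, {"scream"})
# op == "enter_emergency_safety_mode"
```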
- the electronic device can save the images captured by the camera, and the electronic device can start an alarm reminder interface, making it convenient for the user to get help quickly in an emergency.
- the first external sensor 211 is a microphone
- the second external sensor 212 is a touch sensor.
- the first processor 221 periodically drives the microphone to capture audio information.
- the first processor 221 periodically drives the touch sensor to capture gesture operation information.
- the first processor 221 continuously monitors the audio information sent by the microphone and the gesture operation information sent by the touch sensor.
- when the first processor 221 detects that a human voice appears in the audio and the gesture operation signal fluctuates, the first processor 221 intercepts the audio information as the first target data and intercepts the gesture operation information as the second target data.
- the first processor 221 sends the first target data and the second target data to the accelerator 230.
- the accelerator 230 may use the first neural network model to recognize the first target data and obtain the first recognition result.
- the first neural network model may be a speech recognition neural network model, and the first recognition result may be an audio tag of "meeting minutes".
- the accelerator 230 may use the second neural network model to recognize the second target data and obtain the second recognition result.
- the second neural network model may be a gesture operation neural network model, and the second recognition result may be a label of "touch area A".
- the accelerator 230 sends the first recognition result including "meeting minutes" and the second recognition result including "touch area A" to the second processor 222.
- the second processor 222 can determine the target operation according to the recognition results "meeting minutes" and "touch area A", for example driving the electronic device into meeting mode, starting the recording program, and opening the note-taking program.
- the electronic device can determine that the user is issuing an instruction, can quickly recognize the instruction contained in the voice, and quickly respond to the user's operation.
- the first external sensor 211 is an ambient light sensor
- the second external sensor 212 is an acceleration sensor.
- the first processor 221 periodically drives the ambient light sensor to capture light intensity information.
- the first processor 221 periodically drives the acceleration sensor to capture the orientation information of the electronic device.
- the first processor 221 continuously monitors the light intensity information sent by the ambient light sensor and the orientation information sent by the acceleration sensor.
- when the first processor 221 detects a change in the orientation information,
- the first processor 221 intercepts the light intensity information as the first target data, and intercepts the orientation information as the second target data.
- the first processor 221 sends the first target data and the second target data to the accelerator 230.
- the accelerator 230 may use the first neural network model to recognize the first target data and obtain the first recognition result.
- the first neural network model may be an optical signal processing neural network model, and the first recognition result may be a label of "from bright to dark".
- the accelerator 230 may use the second neural network model to recognize the second target data and obtain the second recognition result.
- the second neural network model may be a rotation angle recognition neural network model, and the second recognition result may be a label of "running".
- the accelerator 230 sends the first recognition result including "from bright to dark" and the second recognition result including "running" to the second processor 222.
- the second processor 222 may determine the target operation according to the recognition results "from bright to dark" and "running", such as driving the electronic device into the exercise recording mode, turning off the touch sensor, and turning off the display screen.
- in other words, when the user puts the phone into a pocket or backpack, the electronic device determines, according to the information captured by the acceleration sensor, whether the user is in motion. If so, the electronic device can turn off the screen to prevent accidental touches from keeping the screen lit for a long time while the user is exercising. Moreover, even when the user has not started any exercise recording program, the electronic device can automatically trigger the recording of the user's exercise data.
- the accelerator 230 can process data captured from more external sensors.
- the accelerator 230 may recognize the first target data and the second target data in a time-sharing manner.
- for example, the first target data is recognized at a first moment, and after the first target data has been recognized, the second target data is recognized.
- the accelerator 230 may determine the recognition sequence of the first target data and the second target data according to the sequence of receiving the first target data and the second target data. The accelerator 230 may also determine the recognition order of the first target data and the second target data according to the priorities of the first external sensor 211 and the second external sensor 212.
- the first processor 221 is further configured to determine a first priority corresponding to the first target data and a second priority corresponding to the second target data.
- the first priority and the second priority may be the order in which the first processor 221 sends the first target data and the second target data.
- the first priority and the second priority may be the order in which the first processor 221 extracts data from the first sensor data and the second sensor data.
- the first processor 221 may determine the sending sequence of the first target data and the second target data according to the sequence of receiving the first sensor data and the second sensor data.
- the first processor 221 may also determine the first priority and the second priority according to the priorities of the first external sensor 211 and the second external sensor 212; for example, the priority of information captured by the camera may be higher than the priority of information captured by the microphone.
- the first processor 221 is further configured to send the first target data and the second target data to the accelerator 230 in a time sharing manner.
- the integrated chip 200 further includes: a controller 240, configured to determine a first priority corresponding to the first target data and a second priority corresponding to the second target data; the accelerator 230 is specifically configured to recognize the first target data and the second target data in a time-sharing manner according to the first priority and the second priority.
- the integrated chip 200 may further include a controller 240.
- the integrated chip 200 also includes a controller 240 for scheduling data, and the controller 240 determines the order in which the accelerator 230 recognizes the data.
- before the accelerator 230 recognizes the first target data and the second target data, the controller 240 determines that the priority of the first target data is the first priority and that of the second target data is the second priority, so that the accelerator 230 recognizes the first target data and the second target data in order of priority.
- the controller 240 sends the first priority corresponding to the first target data and the second priority corresponding to the second target data to the accelerator 230, and the accelerator 230 determines, according to the first priority and the second priority, to recognize the first target data and the second target data in a time-sharing manner.
- the controller 240 controls the first processor 221 to send the first target data and the second target data to the accelerator 230 in a time-sharing manner, so that the accelerator 230 can recognize the first target data and the second target data in a time-sharing manner according to the order in which they are received.
- at a first moment, the accelerator 230 receives the first target data and starts to recognize it.
- at a second moment, before the accelerator 230 has finished recognizing the first target data, the accelerator 230 receives the second target data, whose priority is higher than that of the first target data; the accelerator 230 can then interrupt the recognition of the first target data and recognize the second target data first.
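- the priority-driven, time-shared recognition order can be sketched with a priority queue (an assumed scheduler for illustration; a newly arrived higher-priority item is recognized before a lower-priority item still pending, approximating the interruption described above):

```python
import heapq
import itertools

pending = []              # entries: (priority, arrival_order, target_data)
order = itertools.count() # arrival counter, breaks priority ties

def submit(target_data, priority):
    """Lower number = higher priority."""
    heapq.heappush(pending, (priority, next(order), target_data))

def run_accelerator(recognize):
    while pending:
        _, _, data = heapq.heappop(pending)
        recognize(data)   # one recognition task per time slot

submit("first_target_data", priority=2)
submit("second_target_data", priority=1)  # higher priority, recognized first
run_accelerator(lambda d: print("recognizing", d))
```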
- the first processor 221 is further configured to determine a third recognition result according to the first sensor data, and the third recognition result and the first recognition result are used to jointly determine the target operation.
- the first processor is further configured to determine the target operation according to the first recognition result and the third recognition result.
- the real-time-responding first processor 221 may recognize the first sensor data to obtain a third recognition result, and respond to the first recognition result sent by the accelerator to determine a target operation in response to the first recognition result and the third recognition result.
- the first processor 221 is further configured to determine a third recognition result according to the first sensor data; the second processor 222 is specifically configured to determine the target operation according to the first recognition result and the third recognition result.
- the real-time-responding first processor 221 can recognize the first sensor data to obtain the third recognition result.
- the second processor 222 processes the first recognition result and the third recognition result, and determines a target operation in response to the first recognition result and the third recognition result.
- simple tasks can be handed over to the first processor 221 for processing, and complex tasks can be handed over to the accelerator 230 for processing.
- the second processor 222 includes a processing unit that can respond to the accelerator 230 and the first processor 221.
- FIG. 7 shows the data flow transmitted by the first external sensor 211, the first processor 221, the accelerator 230, and the second processor 222.
- the real-time response processor can drive the camera to take photos at a fixed time.
- the camera sends the captured photos to the real-time response processor through the interface.
- the real-time response processor can judge the changes occurring in the photos from one moment to the next, obtain a judgment result as to whether a brightness change has occurred, and send the image information of the same period to the accelerator 230.
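- the brightness-change judgment can be sketched as follows (a minimal NumPy sketch, assuming grayscale frames as arrays; the threshold value is an assumption):

```python
import numpy as np

BRIGHTNESS_DELTA = 30.0  # assumed threshold on mean pixel intensity (0-255)

def brightness_changed(prev_frame: np.ndarray, next_frame: np.ndarray) -> bool:
    """Compare mean brightness of consecutive frames."""
    return abs(next_frame.mean() - prev_frame.mean()) > BRIGHTNESS_DELTA

# If a change is detected, the image information of the same period is
# forwarded to the accelerator 230 for labeling.
```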
- the accelerator 230 processes the image information to obtain the label of the image and the probability of each label.
- the second processor 222 can perform corresponding operations according to the recognition results sent by the first processor 221 and the accelerator 230.
- for example, when the first recognition result contains the label "indoor" and the third recognition result is that the image has undergone a brightness change, the second processor 222 may start a program for reminding the user to turn on the lighting device.
- the real-time response processor can periodically drive the microphone to capture audio, and the microphone will send the captured audio to the real-time response processor through the interface.
- the real-time response processor can determine whether a dominant voice appears in the audio, and send the audio information of the same period to the accelerator 230.
- the accelerator 230 processes the audio information to obtain the content of the voice command and the probability of each voice command content.
- the second processor 222 can perform corresponding operations according to the recognition results sent by the first processor 221 and the accelerator 230. For example, if the first recognition result includes the voice command "make a call" and the third recognition result indicates that the owner's voice is not present, the second processor 222 can start a program for locking the electronic device and prompt the user who is using the electronic device to unlock it.
- FIG. 8 shows a schematic flowchart of a method for processing sensor data provided by this application.
- the method 800 shown in FIG. 8 may be executed by the processor 110 in FIG. 1 or the integrated chip 200 shown in FIG. 2.
- the first external sensor includes one of a camera, a microphone, a motion sensor, a distance sensor, an ambient light sensor, a magnetic field sensor, a fingerprint sensor, or a temperature sensor.
- the first external sensor can be, for example, one of the camera 193, microphone 170C, pressure sensor 180A, gyroscope sensor 180B, air pressure sensor 180C, magnetic sensor 180D, acceleration sensor 180E, distance sensor 180F, proximity light sensor 180G, fingerprint sensor 180H, temperature sensor 180J, touch sensor 180K, ambient light sensor 180L, or bone conduction sensor 180M shown in FIG. 1.
- target data extraction in this application can be interpreted as extracting part of information from the information obtained by external sensors.
- the extracted information can be equivalent to "features".
- the extracted information can be used for further processing. In other words, some data can be excluded from the information obtained from external sensors, and the retained data can be further identified.
- One way is to extract target data according to preset rules, and extract data that meets the preset rules. When there is no preset rule, all data obtained by external sensors can be sent to the accelerator.
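- the preset-rule extraction can be sketched as a simple filter (illustrative only; the sample format and threshold are assumptions): keep only the samples satisfying the rule, and with no rule configured, forward everything.

```python
def extract_target_data(sensor_data, rule=None):
    if rule is None:
        return sensor_data  # no preset rule: send all data to the accelerator
    return [sample for sample in sensor_data if rule(sample)]

# Example rule for a fingerprint sensor: "the signal value captured when a
# finger is detected" (the 0.8 threshold is a hypothetical value).
fingerprint_samples = [{"signal": 0.1}, {"signal": 0.92}]
finger_detected = lambda sample: sample["signal"] > 0.8
target = extract_target_data(fingerprint_samples, finger_detected)
# target == [{"signal": 0.92}]
```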
- Step 801 may be performed by the first processor in the integrated chip as shown in FIG. 2.
- the accelerator in the integrated chip as shown in FIG. 2 can be used to recognize the first target data according to the first neural network model to obtain the first recognition result.
- a neural network model is a mathematical model that includes a neural network.
- the neural network model can be stored on the storage medium in the form of algorithms, codes, etc.
- the concept of a neural network has been introduced above and is not repeated here.
- the neural network model executed by the accelerator may be simplified. To simplify the neural network model, for example, some layers, such as convolutional layers, pooling layers, or neural network layers, can be removed from a trained neural network model; as another example, the number of neurons in each layer of the neural network model can be reduced.
- the simplified neural network model needs to be retrained.
- the first recognition result can be obtained; according to the first recognition result, the target operation corresponding to the first recognition result can be determined.
- the other processor may determine a final recognition result according to the first recognition result. Assuming that the first recognition result includes feature A, feature B, and feature C, and that the probability of feature A is 50%, the probability of feature B is 30%, and the probability of feature C is 20%, this means that the probability that the first target data belongs to feature A is 50%, the probability that it belongs to feature B is 30%, and the probability that it belongs to feature C is 20%.
- One way to determine the final recognition result is to use the recognition result with the largest probability value as the final recognition result, that is, the first target data belongs to feature A.
- Another way to determine the final recognition result, when feature A, feature B, and feature C are all numerical values, is to compute their weighted average, where the weight is the probability value corresponding to each feature. That is, the first target data belongs to (feature A × 0.5 + feature B × 0.3 + feature C × 0.2)/3.
- Another way to determine the final recognition result is to use multiple features with larger probability values as the final recognition result, for example, using feature A and feature B as the final recognition result.
- the final recognition result can be associated with the target operation through algorithms or codes.
- the method further includes: executing the target operation.
- the target operation may be to maintain the current state of the electronic device.
- the target operation can be to wake up other processors in the electronic device, switching them from the sleep state to the working state; the target operation further includes sending information such as the first target data, the first recognition result, and instructions to the other processors.
- the other processors that are awakened may be real-time response processors.
- the method further includes: determining the target operation according to the first recognition result.
- the processor responding to the first external sensor in real time processes the first recognition result, and determines the target operation in response to the first recognition result.
- the device for executing method 800 includes a processing unit that can drive external sensors in real time, a processing unit that can respond to sensor data in real time, and a processing unit that can respond to the recognition result obtained by the accelerator.
- the method further includes: acquiring second sensor data from a second external sensor in real time, and extracting second target data from the second sensor data; and recognizing the second target data according to the second neural network model to obtain a second recognition result, where the second recognition result and the first recognition result are used to jointly determine the target operation.
- the second external sensor may be, for example, one of the camera 193, microphone 170C, pressure sensor 180A, gyroscope sensor 180B, air pressure sensor 180C, magnetic sensor 180D, acceleration sensor 180E, distance sensor 180F, proximity light sensor 180G, fingerprint sensor 180H, temperature sensor 180J, touch sensor 180K, ambient light sensor 180L, or bone conduction sensor 180M shown in FIG. 1.
- two different external sensors capture the first sensor data and the second sensor data respectively.
- the first target data can be extracted from the first sensor data
- the second target data can be extracted from the second sensor data.
- the first neural network model can recognize the first target data
- the second neural network model can identify the second target data.
- the processor can determine the target operation according to the first recognition result and the second recognition result.
- the parameters in the first neural network model are updated through the network.
- the parameters in the first neural network model may be, for example, weight parameters, neuron activation/inactivation parameters, and so on.
- the disclosed system, device, and method may be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
Abstract
An integrated chip (200), belonging to the field of artificial intelligence, characterized by comprising: a first processor (221), configured to obtain first sensor data from a first external sensor (211) and extract first target data from the first sensor data, the first processor (221) being a real-time response processor; and an accelerator (230), configured to recognize the first target data according to a first neural network model to obtain a first recognition result, the first recognition result being used to determine a target operation corresponding to the first recognition result. Also provided is a method for processing sensor data. The aim is to be able to recognize complex scenarios and handle complex tasks while responding to external sensors in real time.
Description
This application relates to the field of data processing within the field of artificial intelligence, and more specifically, to an integrated chip and a method for processing sensor data.
An electronic device usually contains a low-power real-time response processor, which can periodically drive the sensors on the electronic device to perceive external information and respond in real time to the external information obtained by those sensors. For example, an electronic device is equipped with a microphone for capturing external sound; the real-time response processor can periodically drive the microphone, perform target data extraction on the captured audio information, and determine whether a wake-up word is present. After the real-time response processor captures the wake-up word, it can wake up modules that are in the sleep state. In one possible case, after capturing a specific wake-up word, the real-time response processor can wake up the voice instruction module in the electronic device, so that the electronic device can respond to the user's voice instructions and perform corresponding operations. In another possible case, after capturing a specific wake-up word, the real-time response processor can wake up the image display module in the electronic device and light up the screen.
Since the real-time response processor is permanently online, to avoid excessive power consumption, the tasks it can respond to are usually relatively simple; it cannot handle complex tasks and therefore cannot meet users' needs.
Summary
This application provides an integrated chip and a method for processing sensor data, with the aim of being able to recognize complex scenarios and handle complex tasks while responding to external sensors in real time.
In a first aspect, an integrated chip is provided, characterized by comprising: a first processor, configured to obtain first sensor data from a first external sensor and extract first target data from the first sensor data, the first processor being a real-time response processor; and an accelerator, configured to recognize the first target data according to a first neural network model to obtain a first recognition result, the first recognition result being used to determine a target operation corresponding to the first recognition result.
The integrated chip may also be referred to as a system on a chip (SOC), or as part of a system on a chip.
"Target data extraction" in this application can be interpreted as extracting part of the information from the information obtained by an external sensor. The extracted information can be equivalent to "features". The extracted information can be used for further processing. In other words, part of the data can be excluded from the information obtained by the external sensor, and the retained data can be further recognized. One approach is to perform target data extraction according to a preset rule, extracting the data that satisfies the preset rule. When no preset rule exists, all data obtained by the external sensor can be sent to the accelerator.
A neural network model is a mathematical model that contains a neural network. A neural network model can be stored on a storage medium in the form of algorithms, code, and the like. In this application, the neural network model executed by the accelerator may be simplified. To simplify a neural network model, for example, some layers, such as convolutional layers, pooling layers, or neural network layers, can be removed from a trained neural network model; as another example, the number of neurons in each layer of the neural network model can be reduced. A simplified neural network model needs to be retrained.
In one case, the target operation may be to maintain the current state of the electronic device. In another case, the target operation may be to wake up another processor inside the electronic device, switching that processor from the sleep state to the working state; in yet another case, information such as the first target data, the first recognition result, and instructions is sent to the other processor.
In the embodiments of this application, the real-time response processor can respond to the data of the external sensor in real time, while the neural network model can respond at high frequency, or even in real time, to the data sent by the real-time response processor, making it possible to recognize complex scenarios and handle complex tasks at high frequency without significantly increasing power consumption.
With reference to the first aspect, in some implementations of the first aspect, the first processor is further configured to determine the target operation according to the first recognition result.
In the embodiments of this application, the first processor, which can respond to the external sensor in real time, is used to respond to the accelerator, avoiding the use of an excessive number of processors and simplifying the hardware structure of the chip.
With reference to the first aspect, in some implementations of the first aspect, the integrated chip further includes: a second processor, configured to determine the target operation according to the first recognition result.
In the embodiments of this application, using the second processor to process the computation results output by the accelerator makes it easier to assign specific functions to the hardware architecture, and reduces the complexity of the data flows between processors and between the accelerator and the processors.
With reference to the first aspect, in some implementations of the first aspect, the first processor is further configured to notify the second processor to switch from the sleep state to the working state after extracting the first characteristic data; the second processor is specifically configured to determine the target operation according to the first recognition result when in the working state.
In other words, the second processor may not be a real-time response processor.
In the embodiments of this application, since the accelerator has high computing power and a fast computing speed, the second processor can be awakened after the first processor completes the extraction of characteristic data, so that the second processor is in the working state and responds to the recognition results sent by the accelerator. Since the second processor need not be online in real time, this helps to save energy.
With reference to the first aspect, in some implementations of the first aspect, the first processor is further configured to determine a third recognition result according to the first sensor data, and the first recognition result and the third recognition result are used to jointly determine the target operation.
In the embodiments of this application, simple tasks can be handed to the first processor and complex tasks to the accelerator. Since simple tasks involve little computation, they do not need to occupy the accelerator's computing resources, which helps improve the accelerator's utilization efficiency. Moreover, keeping the tasks originally handled by the first processor on the first processor, rather than transferring them to other processors or the accelerator, helps maintain compatibility with traditional real-time response processors.
With reference to the first aspect, in some implementations of the first aspect, the first processor is further configured to determine a third recognition result according to the first sensor data; the second processor is specifically configured to determine the target operation according to the first recognition result and the third recognition result.
In the embodiments of this application, using the second processor to process the computation results output by the first processor and the accelerator avoids data backflow, makes it easier to assign specific functions to the hardware architecture, and reduces the complexity of the data flows between processors and between the accelerator and the processors.
With reference to the first aspect, in some implementations of the first aspect, the first processor is further configured to obtain second sensor data from a second external sensor and extract second target data from the second sensor data; the accelerator is further configured to recognize the second target data according to a second neural network model to obtain a second recognition result, and the second recognition result and the first recognition result are used to jointly determine the target operation.
In the embodiments of this application, the accelerator can process data from two different external sensors; the accelerator can therefore recognize various kinds of perception data, which helps improve the electronic device's ability to recognize complex scenarios.
With reference to the first aspect, in some implementations of the first aspect, the accelerator is further configured to recognize the first target data and the second target data in a time-sharing manner.
In the embodiments of this application, since the accelerator takes very little time to compute over the data, the accelerator can be preset to process only one task per unit of time. That is, the recognition results corresponding to different sensor data can be obtained in a timely manner without adding processing units or storage units to the accelerator.
With reference to the first aspect, in some implementations of the first aspect, the integrated chip further includes: a controller, configured to determine a first priority corresponding to the first target data and a second priority corresponding to the second target data; the accelerator is specifically configured to recognize the first target data and the second target data in a time-sharing manner according to the first priority and the second priority.
In the embodiments of this application, the integrated chip further includes a controller for scheduling data flows. The integrated chip can send the processing order of the data flows to the accelerator, and the accelerator can process the data in the order of the priorities determined by the controller, which helps ensure that important data is processed early and avoids congestion of the data flows.
With reference to the first aspect, in some implementations of the first aspect, the integrated chip further includes: a controller, configured to determine a first priority corresponding to the first target data and a second priority corresponding to the second target data, and to control, according to the first priority and the second priority, the first processor to send the first target data and the second target data to the accelerator in a time-sharing manner, so that the accelerator recognizes the first target data and the second target data in a time-sharing manner.
In the embodiments of this application, the integrated chip further includes a controller for scheduling data flows. After the accelerator finishes processing the previous data, the controller can control the first processor to send the data the accelerator currently needs to process to the accelerator. The accelerator does not need excessive memory to store data awaiting processing. Moreover, since the first processor's data processing capability is weaker than the accelerator's, the memory the first processor needs to process data is relatively small, so the surplus memory can be used to store data not yet processed by the accelerator, which helps make full use of the processing units and storage units.
With reference to the first aspect, in some implementations of the first aspect, the integrated chip further includes: a third processor, configured to switch from the sleep state to the working state in response to the target operation.
In the embodiments of this application, the third processor may not be a real-time response processor and need not be online in real time, which helps to save energy.
With reference to the first aspect, in some implementations of the first aspect, the parameters in the first neural network model are updated over the network.
In the embodiments of this application, updating the parameters of the first neural network model can be a way of upgrading the accelerator. This way of upgrading the accelerator occupies few data resources, and is therefore relatively simple, easy to execute, and highly flexible.
With reference to the first aspect, in some implementations of the first aspect, the first external sensor includes one of a camera, a microphone, a motion sensor, a distance sensor, an ambient light sensor, a magnetic field sensor, a fingerprint sensor, or a temperature sensor.
In the embodiments of this application, any sensor data can be processed by the accelerator, which helps recognize complex scenarios and handle complex tasks.
In a second aspect, an electronic device is provided, including the integrated chip of the first aspect or any possible implementation of the first aspect.
In a third aspect, a method for processing sensor data is provided, including: obtaining first sensor data from a first external sensor in real time, and extracting first target data from the first sensor data; and recognizing the first target data according to a first neural network model to obtain a first recognition result, the first recognition result being used to determine a target operation corresponding to the first recognition result.
With reference to the third aspect, in some implementations of the third aspect, the method further includes: determining the target operation according to the first recognition result.
With reference to the third aspect, in some implementations of the third aspect, the method further includes: obtaining second sensor data from a second external sensor in real time, and extracting second target data from the second sensor data; and recognizing the second target data according to a second neural network model to obtain a second recognition result, the second recognition result and the first recognition result being used to jointly determine the target operation.
With reference to the third aspect, in some implementations of the third aspect, the method further includes: performing the target operation.
With reference to the third aspect, in some implementations of the third aspect, the parameters in the first neural network model are updated over the network.
With reference to the third aspect, in some implementations of the third aspect, the first external sensor includes one of a camera, a microphone, a motion sensor, a distance sensor, an ambient light sensor, a magnetic field sensor, a fingerprint sensor, or a temperature sensor.
In a fourth aspect, an apparatus for processing sensor data is provided, the apparatus including modules for performing the method of the third aspect or any possible implementation of the third aspect.
FIG. 1 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application.
FIG. 2 is a schematic diagram of the interaction between an integrated chip and external sensors provided by an embodiment of this application.
FIG. 3 is a schematic diagram of the hardware structure of an accelerator provided by an embodiment of this application.
FIG. 4 is a schematic diagram of the interaction between an integrated chip and external sensors provided by an embodiment of this application.
FIG. 5 is a schematic diagram of the interaction between an integrated chip and external sensors provided by an embodiment of this application.
FIG. 6 is a schematic diagram of the interaction between an integrated chip and external sensors provided by an embodiment of this application.
FIG. 7 is a schematic diagram of the interaction between an integrated chip and external sensors provided by an embodiment of this application.
FIG. 8 is a schematic flowchart of a method for processing sensor data provided by an embodiment of this application.
The technical solutions in this application are described below with reference to the accompanying drawings.
The terms used in the following embodiments are intended only to describe particular embodiments and are not intended to limit this application. As used in the specification and the appended claims of this application, the singular forms "a", "an", "the", "the above", "said", and "this" are intended to also include expressions such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of this application, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
Reference in this specification to "one embodiment" or "some embodiments" and the like means that a particular feature, structure, or characteristic described in connection with that embodiment is included in one or more embodiments of this application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
The following describes the electronic device provided by the embodiments of this application and embodiments for using such an electronic device. In some embodiments, the electronic device may be a portable electronic device that also contains other functions such as personal digital assistant and/or music player functions, such as a mobile phone, a tablet computer, or a wearable electronic device with wireless communication capability (such as a smart watch). Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices running … or other operating systems. The above portable electronic device may also be another portable electronic device, such as a laptop computer. It should also be understood that, in some other embodiments, the above electronic device may not be a portable electronic device but a desktop computer.
By way of example, FIG. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
It can be understood that the structure illustrated in the embodiments of this application does not constitute a specific limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and so on. The different processing units may be independent components or may be integrated into one or more processors. In some embodiments, the electronic device 101 may also include one or more processors 110. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and execution. In some other embodiments, a memory may also be provided in the processor 110 for storing instructions and data. For example, the memory in the processor 110 may be a cache. This memory can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use those instructions or data again, it can call them directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency with which the electronic device 101 processes data or executes instructions.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, among others. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 can be used to connect a charger to charge the electronic device 101, and can also be used to transfer data between the electronic device 101 and peripheral devices. The USB interface 130 can also be used to connect headphones and play audio through them.
It can be understood that the interface connection relationships between the modules illustrated in the embodiments of this application are only schematic illustrations and do not constitute a structural limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may also adopt interface connection manners different from those in the above embodiments, or a combination of multiple interface connection manners.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example saving files such as music and videos on the external memory card.
The internal memory 121 can be used to store one or more computer programs, which include instructions. By running the above instructions stored in the internal memory 121, the processor 110 can cause the electronic device 101 to perform the off-screen display method provided in some embodiments of this application, as well as various applications, data processing, and so on. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system, and can also store one or more applications (such as Gallery or Contacts). The data storage area can store data created during use of the electronic device 101 (such as photos and contacts). In addition, the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage components, flash memory components, universal flash storage (UFS), and the like. In some embodiments, the processor 110 can cause the electronic device 101 to perform the off-screen display method provided in the embodiments of this application and other applications and data processing by running instructions stored in the internal memory 121 and/or instructions stored in the memory provided in the processor 110. The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same component.
The wireless communication function of the electronic device 100 can be implemented through antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.
Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 can be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example, antenna 1 can be reused as a diversity antenna of the wireless local area network. In some other embodiments, the antennas can be used in combination with tuning switches.
The mobile communication module 150 can provide wireless communication solutions applied on the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on. The mobile communication module 150 can receive electromagnetic waves through antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves radiated out through antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be provided in the same component.
The wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module 160 may be one or more components integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated out through antenna 2.
The electronic device 100 implements display functions through the GPU, the display screen 194, the application processor, and so on. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), and so on. In some embodiments, the electronic device 100 may include one or more display screens 194.
The display screen 194 of the electronic device 100 may be a flexible screen, which is currently attracting much attention for its unique characteristics and great potential. Compared with traditional screens, flexible screens are highly flexible and bendable, and can provide users with new interaction methods based on the bendable characteristic, meeting more of users' needs for electronic devices. For an electronic device equipped with a foldable display screen, the foldable display screen on the electronic device can switch at any time between a small screen in the folded state and a large screen in the unfolded state. Therefore, users use the split-screen function on electronic devices equipped with foldable display screens more and more frequently.
The sensors inside the electronic device may include the camera 193, the microphone 170C, a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and so on.
The electronic device 100 can implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and so on.
The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the camera's photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the noise, brightness, and skin tone of the image. The ISP can also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then passes the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB (R for red, G for green, B for blue) or YUV (Y for the luminance component, U/V for the chrominance components, U representing blue and V representing red). In some embodiments, the electronic device 100 may include one or more cameras 193.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The microphone 170C is used to capture audio. The microphone 170C can convert the captured sound signal into an electrical signal. In some examples, the microphone 170C can be used for functions such as calls and recording. In some examples, the microphone 170C can also be used to capture the user's voice instructions.
The pressure sensor 180A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 180A may be provided on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 can also calculate the touch position according to the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
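As an illustration of the threshold behavior just described, the following Python sketch maps touch intensity to the two instructions; the threshold value and instruction names are assumptions for illustration, not part of this application.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized touch intensity threshold

def message_icon_instruction(touch_intensity: float) -> str:
    """Same touch position, different intensity -> different instruction."""
    if touch_intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    return "create_new_short_message"
```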
The gyroscope sensor 180B can be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 around three axes (i.e., the X, Y, and Z axes) can be determined through the gyroscope sensor 180B. The gyroscope sensor 180B can be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shaking angle of the electronic device 100, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shaking of the electronic device 100 through reverse motion, achieving image stabilization. The gyroscope sensor 180B can also be used for navigation and motion-sensing game scenarios.
The acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the attitude of the electronic device and is applied to applications such as landscape/portrait switching and pedometers.
The ambient light sensor 180L is used to perceive the ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking photos. The ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H is used to capture fingerprints. The electronic device 100 can use the captured fingerprint characteristics to implement fingerprint unlocking, accessing application locks, fingerprint photography, fingerprint-based call answering, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 caused by low temperature. In some other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be provided on the display screen 194; the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch-controlled screen". The touch sensor 180K is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 194. In some other embodiments, the touch sensor 180K may also be provided on the surface of the electronic device 100 at a position different from that of the display screen 194.
The keys 190 include a power key, volume keys, and so on. The keys 190 may be mechanical keys or touch keys. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
The processor 110 may include a real-time response processor, configured to periodically drive the sensors in the sensor module and process the obtained information.
Taking the camera as an example, the real-time response processor can periodically drive the camera to take photos; the camera sends the captured photos to the real-time response processor through an interface; the real-time response processor can judge the changes occurring in the photos from one moment to the next (such as brightness changes) and wake up the next-level processor according to the judgment result. The real-time response processor can send the obtained photos to the next-level processor (such as a central processing unit (CPU) or GPU) to perform the corresponding operation (such as adjusting the screen brightness).
Taking the microphone as an example, the real-time response processor can periodically drive the microphone to capture audio; the microphone sends the captured audio to the real-time response processor through an interface; the real-time response processor can judge whether a sound of a specific frequency (such as a human voice) appears in the audio and wake up the next-level processor according to the judgment result. The real-time response processor can send the obtained audio to the next-level processor (such as a voice recognition module in the CPU) to perform the corresponding operation (such as executing the voice instruction contained in the audio).
Taking the acceleration sensor as an example, the real-time response processor can periodically drive the acceleration sensor; the acceleration sensor can determine the orientation of the electronic device and send the orientation information to the real-time response processor; the real-time response processor can judge whether the electronic device is in motion and wake up the next-level processor according to the judgment result. The real-time response processor can send the obtained orientation information to the next-level processor (such as a step-counting module in the CPU) to perform the corresponding operation (such as recording the number of steps the user walks).
Taking the fingerprint sensor as an example, the real-time response processor can periodically drive the fingerprint sensor; the fingerprint sensor can send the detected signal to the real-time response processor, so that the real-time response processor can judge whether a finger is touching the fingerprint sensor and wake up the next-level processor according to the judgment result. The real-time response processor can send the obtained fingerprint information to the next-level processor (such as a fingerprint recognition module in the CPU) to perform the corresponding operation (such as lighting up the screen).
Taking the keys as an example, the real-time response processor can periodically drive the keys; the keys can send the detected signals to the real-time response processor, so that the real-time response processor can judge whether a key has been pressed by the user and wake up the next-level processor according to the judgment result. The real-time response processor can send the obtained key operation information to the next-level processor (such as the CPU) to perform the corresponding operation (such as lighting up the screen).
As described above, the tasks that the real-time response processor can handle are relatively simple; it can pass data to the next-level processor only when it recognizes that information has changed. On the one hand, the recognition capability of the real-time response processor and the amount of information it can recognize are very limited, so it cannot recognize complex environments in order to adjust the working state of the electronic device; on the other hand, the real-time response processor may judge some of the obtained information inaccurately and thus falsely trigger other processors, causing the electronic device to consume power pointlessly. Furthermore, even supposing the electronic device contains processors capable of recognizing complex environments, such processors involve a large amount of computation; starting such a processor often requires the cooperation of multiple hardware components and the launching of multiple layers of software programs, which is not conducive to responding promptly to changes in the external environment. Because the real-time response processor has low recognition capability and low power consumption, it usually cannot directly start this kind of processor. To further pursue the user experience of electronic devices and realize artificial intelligence, this application provides an integrated chip aimed at improving the real-time response capability of electronic devices.
FIG. 2 shows a schematic diagram of the interaction between the integrated chip of this application and sensors. The integrated chip 200 can be applied in the electronic device 100 shown in FIG. 1. The integrated chip 200 can replace the processor 110 in the electronic device 100 shown in FIG. 1.
The integrated chip 200 includes a first processor 221, configured to obtain first sensor data from a first external sensor 211 and extract first target data from the first sensor data, the first processor 221 being a real-time response processor. The integrated chip 200 further includes: an accelerator 230, configured to recognize the first target data according to a first neural network model to obtain a first recognition result, the first recognition result being used to determine a target operation corresponding to the first recognition result.
The integrated chip 200 may also be referred to as a system on a chip (SOC), or as part of a system on a chip.
Multiple possible implementations in which the real-time response processor responds to external sensors have been introduced above and are not repeated here.
FIG. 3 shows a schematic structural diagram of an accelerator 230. The core part of the accelerator 230 is the operation circuit 303; the controller 304 controls the operation circuit 303 to extract data from memory (the weight memory or the input memory) and perform operations. In some implementations, the operation circuit 303 internally includes multiple processing engines (PEs). In some implementations, the operation circuit 303 is a two-dimensional systolic array. The operation circuit 303 may also be a one-dimensional systolic array or other electronic circuitry capable of performing mathematical operations such as multiplication and addition. In some implementations, the operation circuit 303 is a general-purpose matrix processor. For example, suppose there are an input matrix A, a weight matrix B, and an output matrix C. The operation circuit fetches the data corresponding to matrix B from the weight memory 302 and buffers it on each PE in the operation circuit. The operation circuit fetches the data of matrix A from the input memory 301 and performs matrix operations with matrix B, and the partial or final results of the resulting matrix are stored in the accumulator 308.
The vector computation unit 307 can further process the output of the operation circuit 303, such as vector multiplication, vector addition, exponential operations, logarithmic operations, magnitude comparison, and so on. For example, the vector computation unit 307 can be used for network computation of non-convolutional/non-fully-connected layers in a neural network, such as pooling, batch normalization, and local response normalization. In some implementations, the vector computation unit 307 can store the processed output vectors in the unified buffer 306. For example, the vector computation unit 307 can apply a non-linear function to the output of the operation circuit 303, for example a vector of accumulated values, to generate activation values. In some implementations, the vector computation unit 307 generates normalized values, merged values, or both. In some implementations, the processed output vectors can be used as activation inputs to the operation circuit 303, for example for use in subsequent layers of the neural network. Some or all of the steps of the methods provided in this application may be performed by the operation circuit 303 or the vector computation unit 307.
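The dataflow just described can be illustrated with a NumPy sketch (a software approximation for explanation only, not the hardware implementation): the operation circuit multiplies input matrix A by weight matrix B, partial results accumulate, and the vector computation unit applies a non-convolutional step such as an activation function.

```python
import numpy as np

def operation_circuit(A, B, tile=2):
    """Tiled matrix multiply: partial sums collect in the accumulator 308."""
    acc = np.zeros((A.shape[0], B.shape[1]))
    for k in range(0, A.shape[1], tile):
        acc += A[:, k:k + tile] @ B[k:k + tile, :]   # one tile per pass
    return acc

def vector_unit(x):
    """Vector computation unit applying a non-linear function (here ReLU)."""
    return np.maximum(x, 0.0)

A = np.random.rand(4, 8)   # input data, as if from input memory 301
B = np.random.rand(8, 3)   # weights, as if from weight memory 302
out = vector_unit(operation_circuit(A, B))  # result, as if to unified memory
```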
The unified memory 306 is used to store input data and output data. Through the bus interface unit 312, input data in the cache memory 311 is transferred to the input memory 301 and/or the unified memory 306, weight data in the cache memory 311 is stored into the weight memory 302, and data in the unified memory 306 is stored into the cache memory 311.
The bus interface unit (BIU) 312 is used to implement interaction between the first processor 310 and the instruction fetch buffer 309 through the bus. The instruction fetch buffer 309, connected to the controller 304, is used to store instructions used by the controller 304. The controller 304 is used to call the instructions buffered in the instruction fetch buffer 309 to control the working process of the operation accelerator 230. Generally, the unified memory 306, the input memory 301, the weight memory 302, the cache memory 311, and the instruction fetch buffer 309 are all on-chip memories.
It should be understood that, based on the structure of the accelerator 230 shown in FIG. 3, those skilled in the art can conceive of other possible configurations of the accelerator 230. It can be understood that the embodiment shown in FIG. 3 is only intended to help those skilled in the art better understand the technical solution of this application, and is not a limitation on the technical solution of this application. Having the benefit of the teachings presented in the foregoing description and the associated drawings, those skilled in the art will conceive of many modifications and other embodiments of this application. Therefore, it should be understood that this application is not limited to the specific embodiments disclosed.
The first processor 221 may include multiple processing units. A processing unit may be a logical processing unit or a physical processing unit. In other words, the first processor 221 can be divided into multiple processing units by means of algorithms, or the first processor 221 includes multiple physically separable processing units. The multiple processing units process data cooperatively.
The first external sensor 211 may be, for example, one of the camera 193, microphone 170C, pressure sensor 180A, gyroscope sensor 180B, air pressure sensor 180C, magnetic sensor 180D, acceleration sensor 180E, distance sensor 180F, proximity light sensor 180G, fingerprint sensor 180H, temperature sensor 180J, touch sensor 180K, ambient light sensor 180L, or bone conduction sensor 180M shown in FIG. 1.
In one example, the first processor 221 can perform target data extraction on the sensor data obtained by multiple external sensors.
For example, the first processor 221 can perform target data extraction on the photos captured by the camera and the audio captured by the microphone.
In one case, a first processing unit in the first processor 221 can perform target data extraction on the photos captured by the camera, and a second processing unit in the first processor 221 can perform target data extraction on the audio captured by the microphone.
In another case, a first processing unit in the first processor 221 can perform target data extraction on the photos captured by the camera and can also perform target data extraction on the audio captured by the microphone.
In one example, the first processor 221 may perform target data extraction on the sensor data obtained by only one external sensor. For example, the first processor 221 performs target data extraction only on the photos captured by the camera. In the case where the first processor 221 includes multiple processing units, processing unit A and processing unit B can simultaneously perform target data extraction on different photos; or processing unit A and processing unit B can simultaneously perform target data extraction on different parts of the same photo.
"Target data extraction" in this application can be interpreted as extracting part of the information from the information obtained by an external sensor. The information extracted by the first processor 221 can be equivalent to "features". The extracted information can be used for further processing. In other words, the first processor 221 removes part of the data from the information obtained by the external sensor, and the retained data can be further recognized. The first processor 221 can perform target data extraction according to a preset rule, extracting the data that satisfies the preset rule. When no preset rule exists, the first processor 221 can send all the data obtained by the external sensor to the accelerator 230.
For example, when the external sensor is a camera, the extracted information may be the pixels that constitute a person. In other words, the pixels in the image that do not constitute a person can be removed; that is, the preset rule may be "the pixels constituting a person". As another example, when the external sensor is a camera, the extracted information may be a video containing several frames in which an object moves within those frames, the object not having moved before those frames. In other words, the frames of the video in which no object movement occurs can be removed; that is, the preset rule may be "frames in which a moving object is present".
For example, when the external sensor captures audio data, the extracted information may be data in a specific frequency band. In other words, the audio data outside the specific frequency band can be removed; that is, the preset rule may be "audio data within a specific frequency band".
For example, when the external sensor is an acceleration sensor, the extracted information may be the course of a change in the orientation of the electronic device. In other words, when the orientation of the electronic device does not change, the data captured by the acceleration sensor can be removed; that is, the preset rule may be "orientation signals whose rate of change exceeds a preset value".
For example, when the external sensor is a fingerprint sensor, the extracted information may be the captured fingerprint signal. In other words, when the electronic device does not detect a fingerprint, the data captured by the fingerprint sensor can be removed; that is, the preset rule may be "the signal value captured when a finger is detected".
For example, when the external sensor is a key, the extracted information may be which keys were pressed and the order in which they were pressed. In other words, when the electronic device does not detect that the user has pressed a key, the data captured by the keys can be removed; that is, the preset rule may be "the signal value captured when a key is pressed".
The accelerator 230, which may also be referred to as a neural network accelerator 230, is a device or module that has computing capability and can execute a neural network model. In the field of neural networks or data processing, the accelerator 230 is sometimes referred to as a classifier. For example, the accelerator 230 may be a neural-network (NN) computing processor.
A neural network model is a mathematical model that contains a neural network. A neural network model can be stored on a storage medium in the form of algorithms, code, and the like.
A neural-network processing unit (NPU) is a neural-network (NN) computing processor. It draws on the structure of biological neural networks, for example the signal-transfer patterns between neurons in the human brain, to process input information rapidly, and it can also learn continuously on its own. An NPU enables applications such as intelligent cognition on electronic devices, for example image recognition, face recognition, speech recognition, and text understanding.
The accelerator 230 can execute one or more neural network models.
In one example, one neural network model corresponds to one external sensor. That is, a neural network model can only process data from one particular type of external sensor.
For example, when the first target data contains image information, the accelerator 230 can execute a neural network model for image processing, and the obtained recognition result may be the labels of the image and the probability of each label. The labels of the image may be "man", "woman", "beard", "long hair", "day", "night", "indoor", "outdoor", "meeting room", "in car", and so on. A label of the image may be a feature of the first target data.
For example, when the first target data contains audio information, the accelerator 230 can execute a neural network model for audio processing, and the obtained recognition result may be the content of a voice instruction and the probability of each voice instruction content. The content of the voice instruction may be "make a call", "meeting", "check the weather", and so on. The voice instruction content may be a feature of the first target data.
For example, when the first target data contains orientation information, the accelerator 230 can execute a neural network model for orientation information processing, and the obtained recognition result may be the angular range through which the electronic device has been flipped and the probability of that angular range. The angular range may be, for example, a clockwise rotation of 360° to 720°. The angular range of the flip of the electronic device may be a feature of the first target data.
For example, when the first target data contains fingerprint information, the accelerator 230 can execute a neural network model for fingerprint matching, and the obtained recognition result may be the fingerprint type and the probability of each fingerprint type. The fingerprint type may be a thumb fingerprint, a ring finger fingerprint, and so on. The fingerprint type may be a feature of the first target data.
For example, when the first target data contains key information, the accelerator 230 can execute a neural network model for touch processing, and the obtained recognition result may be the content of a touch instruction and the probability of each touch instruction content. The content of the touch instruction may be "accidental touch", "screen capture", and so on. The content of the touch instruction may be a feature of the first target data.
In one example, one neural network model corresponds to multiple external sensors. That is, a neural network model can process data from multiple kinds of external sensors.
For example, when some association exists between the video information captured by the camera and the audio information captured by the microphone, a neural network model that can recognize both video information and audio information can be used.
The accelerator 230 obtains the first recognition result and sends the first recognition result to another processor, so that the other processor can determine the target operation corresponding to the first recognition result.
In one case, the target operation may be to maintain the current state of the electronic device. In another case, the target operation may be to wake up a third processor inside the electronic device, switching the third processor from the sleep state to the working state; the target operation further includes sending information such as the first target data, the first recognition result, and instructions to the third processor. In yet another case, the third processor being invoked is a real-time response processor; the third processor then need not be awakened, the instruction or data can be sent to the third processor directly, and the third processor can respond to the instruction or data in real time. The third processor may also be a processor on the integrated chip 200.
The other processor can determine a final recognition result according to the first recognition result. Assuming that the first recognition result includes feature A, feature B, and feature C, and that the probability of feature A is 50%, the probability of feature B is 30%, and the probability of feature C is 20%, this means that the probability that the first target data belongs to feature A is 50%, to feature B 30%, and to feature C 20%. One way to determine the final recognition result is to take the recognition result with the largest probability value as the final recognition result, that is, the first target data belongs to feature A. Another way, when feature A, feature B, and feature C are all numerical values, is to compute the weighted average of feature A, feature B, and feature C, where the weight is the probability value corresponding to each feature; that is, the first target data belongs to (feature A × 0.5 + feature B × 0.3 + feature C × 0.2)/3. Yet another way is to take multiple features with larger probability values as the final recognition result, for example taking feature A and feature B as the final recognition result.
The final recognition result can be associated with the target operation through algorithms or code. For example, when the final recognition result is recognition result A, the target operation corresponding to the first recognition result can be determined to be operation B.
For example, when the first target data is a photo and the processor determines that the first target data contains the label "indoor", the wireless communication module can be started to connect to WiFi or Bluetooth.
For example, when the first target data is audio and the processor determines that the first target data contains the content "meeting", a recording application can be started to record the content of the meeting.
For example, when the first target data is an orientation and the processor determines that the first target data corresponds to the recognition result "clockwise rotation of 360° to 720°", a fall prevention application can be started.
For example, when the first target data is a fingerprint and the processor determines that the first target data corresponds to the recognition result "ring finger fingerprint", the application corresponding to the ring finger fingerprint can be started.
For example, when the first target data is key information and the processor determines that the first target data contains an "accidental touch" instruction, the key operation can be ignored and some modules in the electronic device can be switched to the sleep state.
It is worth mentioning that although the traditional NPU processor can run heavy and complex programs, it requires multi-level programs or multiple hardware wake-ups (for example, it is loaded onto a CPU as a co-processor, with the CPU responsible for scheduling tasks), and it also requires external storage devices (such as Double Data Rate (DDR) memory); it cannot respond in real time, at high frequency, to the data sent by the real-time response processor, i.e., the real-time response processor cannot invoke the traditional NPU processor in real time. Moreover, the traditional NPU processor consumes considerable power when running heavy and complex programs. In this application, the computing power of the accelerator 230 may be weaker than that of a traditional NPU processor. In this application, the neural network model executed by the accelerator 230 may be simplified.
Reducing the computing power of the processor can be, for example, reducing the computation precision of the processor from 16-bit to 8-bit.
To simplify the neural network model, for example, some layers, such as convolutional layers, pooling layers, or neural network layers, can be removed from a trained neural network model; as another example, the number of neurons in each layer of the neural network model can be reduced. A simplified neural network model needs to be retrained.
In addition, the traditional NPU processor needs to be equipped with external memory (such as DDR memory) to provide storage space for data recognition. In this application, so that the real-time response processor can invoke the accelerator 230 quickly, the accelerator 230 may further contain an internal memory; the memory stores instructions for executing the neural network model, and the accelerator 230 can save the first target data sent by the real-time response processor and provide the data processing storage space needed for the accelerator 230 to recognize the first target data. That is, the accelerator 230 may include a processor and a memory, or include a processor with a data storage function. In other words, in addition to integrating processors, the integrated chip 200 also integrates a device or module with a storage function.
Optionally, the accelerator 230 includes a cache memory for storing intermediate data generated during execution of the first neural network model.
It should be understood that the larger the neural network model the accelerator 230 can execute, the larger the storage capacity of the memory in the accelerator 230, and frequent use of the accelerator 230 may cause power consumption problems. However, the simpler the neural network model, the less the accuracy of the recognition results generated by the accelerator 230 can be guaranteed.
In addition, the accelerator 230 can be upgraded by updating the neural network models, enabling the accelerator 230 to handle more tasks. The difference from ordinary processors is that, when an ordinary processor does not use a neural network model, it completes the data processing process by executing algorithms or code; the room for improving algorithms and code is limited and updating them is relatively difficult, which is conducive neither to making electronic devices artificially intelligent nor to improving their user experience. Updating the accelerator 230 of this application, by contrast, can mean adding or deleting neural network models, or updating the parameters in a neural network model; the update method of the accelerator 230 of this application is therefore relatively simple and convenient to implement.
Optionally, the parameters in the first neural network model are updated over the network.
The parameters in the first neural network model may be, for example, weight parameters, neuron activation/deactivation parameters, and so on.
Optionally, the first processor 221 is further configured to determine, according to the first recognition result, the target operation corresponding to the first recognition result.
That is, the real-time-responding first processor 221 processes the first recognition result and determines the target operation in response to the first recognition result. In other words, the first processor 221 includes a processing unit that can drive external sensors in real time, a processing unit that can respond to sensor data in real time, and a processing unit that can respond to the accelerator 230 and obtain recognition results. FIG. 2 shows the data flow transmitted by the first external sensor 211, the first processor 221, and the accelerator 230.
Optionally, the integrated chip 200 further includes: a second processor 222, configured to determine, according to the first recognition result, the target operation corresponding to the first recognition result.
That is, the second processor 222, which is different from the first processor 221, processes the first recognition result and determines the target operation in response to the first recognition result. In other words, the second processor 222 includes a processing unit that can respond to the accelerator 230. FIGS. 4-7 show schematic diagrams of the interaction between the integrated chip 200 of this application and external sensors.
FIG. 4 shows the data flow transmitted by the first external sensor 211, the first processor 221, the accelerator 230, and the second processor 222.
For ease of description, the case where the accelerator 230 sends the recognition results to the second processor 222 is used as an example below. It should be understood that, having the benefit of the teachings presented in the description of the embodiments of this application and the associated drawings, those skilled in the art will conceive of many modifications and other embodiments of this application, such as a solution in which the accelerator 230 sends the recognition results to the first processor 221. Therefore, it should be understood that this application is not limited to the specific embodiments disclosed.
Optionally, the first processor 221 is further configured to notify the second processor 222 to switch from the sleep state to the working state after extracting the first characteristic data; the second processor 222 is specifically configured to determine the target operation according to the first recognition result when in the working state.
That is, the second processor 222 may not be a real-time response processor. Because the accelerator 230 has high computing power and a fast computing speed, the second processor 222 can be awakened after the first processor 221 completes the extraction of characteristic data, so that the second processor 222 is in the working state and responds to the recognition results sent by the accelerator 230. Since the second processor 222 need not be online in real time, this helps to save energy.
Optionally, the first processor 221 is further configured to obtain second sensor data from a second external sensor 212 and extract second target data from the second sensor data; the accelerator 230 is further configured to recognize the second target data according to a second neural network model to obtain a second recognition result, and the second recognition result and the first recognition result are used to jointly determine the target operation.
Optionally, the second processor 222 is specifically configured to determine the target operation according to the first recognition result and the second recognition result.
FIG. 5 shows the data flow transmitted by the first external sensor 211, the second external sensor 212, the first processor 221, the accelerator 230, and the second processor 222. That is, two different external sensors capture the first sensor data and the second sensor data respectively; the first processor 221 extracts the first target data from the first sensor data and the second target data from the second sensor data; the accelerator 230 uses the first neural network model to recognize the first target data and the second neural network model to recognize the second target data. There is an association or correspondence among the first neural network model, the type of the first target data, and the first external sensor 211, and among the second neural network model, the type of the second target data, and the second external sensor 212. The second processor 222 can determine the target operation according to the first recognition result and the second recognition result.
In one example, the second processor 222 may be a real-time response processor. The second processor 222 can respond in real time to the recognition results sent by the accelerator 230.
In one example, the second processor 222 may be a non-real-time response processor, that is, the second processor 222 has a working state and a sleep state. The first processor 221 can wake up the second processor 222 at the same time as it sends the first target data to the accelerator 230. Alternatively, the accelerator 230 wakes up the second processor 222. Alternatively, the integrated chip 200 further includes a controller 240, and the controller 240 wakes up the second processor 222. If the second processor 222 does not receive a new recognition result within a period of time, the second processor 222 can switch to the sleep state. Alternatively, the second processor 222 can switch from the working state to the sleep state in response to an instruction from the first processor 221 or the controller 240.
Three scenarios are described below as examples. It should be understood that, having the benefit of the teachings presented in the description of the embodiments of this application and the associated drawings, those skilled in the art will conceive of many modifications and other embodiments of this application; therefore, besides the scenarios provided in this application, the integrated chip 200 can also be applied in other scenarios. It should be understood that this application is not limited to the specific embodiments disclosed.
Scenario 1
The first external sensor 211 is a camera and the second external sensor 212 is a microphone. The first processor 221 periodically drives the camera to capture image information and periodically drives the microphone to capture audio information, and it continuously listens to the image information sent by the camera and the audio information sent by the microphone. When the first processor 221 detects a human voice in the audio, it clips the image information obtained during the period in which the voice occurs as the first target data and clips the audio information obtained during that period as the second target data, then sends the first target data and the second target data to the accelerator 230.
The accelerator 230 can recognize the first target data using the first neural network model to obtain the first recognition result. The first neural network model may be an image-processing neural network model, and the first recognition result may be the image labels "night" and "outdoor". The accelerator 230 can recognize the second target data using the second neural network model to obtain the second recognition result. The second neural network model may be a speech-recognition neural network model, and the second recognition result may be the audio label "scream".
The accelerator 230 sends the first recognition result containing "night" and "outdoor" and the second recognition result containing "scream" to the second processor 222. Based on the recognition results "night", "outdoor", and "scream", the second processor 222 can determine a target operation, for example driving the electronic device into an emergency safety mode that prompts the user currently operating the device to call the police or call for help.
In other words, when the user screams, the electronic device can save the images captured by the camera and can bring up an alarm-reminder interface, making it easy for the user to get help quickly in an emergency.
Scenario 2
The first external sensor 211 is a microphone and the second external sensor 212 is a touch sensor. The first processor 221 periodically drives the microphone to capture audio information and periodically drives the touch sensor to capture gesture-operation information, and it continuously listens to both. When the first processor 221 detects a human voice in the audio and a fluctuation in the gesture-operation signal, it clips the audio information as the first target data and the gesture-operation information as the second target data, then sends the first target data and the second target data to the accelerator 230.
The accelerator 230 can recognize the first target data using the first neural network model to obtain the first recognition result. The first neural network model may be a speech-recognition neural network model, and the first recognition result may be the audio label "meeting minutes". The accelerator 230 can recognize the second target data using the second neural network model to obtain the second recognition result. The second neural network model may be a gesture-operation neural network model, and the second recognition result may be the label "touch on region A".
The accelerator 230 sends the first recognition result containing "meeting minutes" and the second recognition result containing "touch on region A" to the second processor 222. Based on the recognition results "meeting minutes" and "touch on region A", the second processor 222 can determine a target operation, for example driving the electronic device into a meeting mode that starts a recording program and opens a note-taking program.
In other words, when the user touches region A of the screen for a long time while speaking meeting-related speech to the electronic device, the device can judge that the user is issuing an instruction, quickly recognize the instruction contained in the speech, and thus respond quickly to the user's operation.
Scenario 3
The first external sensor 211 is an ambient light sensor and the second external sensor 212 is an acceleration sensor. The first processor 221 periodically drives the ambient light sensor to capture light-intensity information and periodically drives the acceleration sensor to capture orientation information of the electronic device, and it continuously listens to both. When the first processor 221 detects a change in the orientation information, it clips the light-intensity information as the first target data and the orientation information as the second target data, then sends the first target data and the second target data to the accelerator 230.
The accelerator 230 can recognize the first target data using the first neural network model to obtain the first recognition result. The first neural network model may be a light-signal-processing neural network model, and the first recognition result may be the label "bright to dark". The accelerator 230 can recognize the second target data using the second neural network model to obtain the second recognition result. The second neural network model may be a rotation-angle-recognition neural network model, and the second recognition result may be the label "running".
The accelerator 230 sends the first recognition result containing "bright to dark" and the second recognition result containing "running" to the second processor 222. Based on the recognition results "bright to dark" and "running", the second processor 222 can determine a target operation, for example driving the electronic device into an exercise-recording mode, disabling the touch sensor, and turning off the display.
In other words, when the user puts the phone into a pocket or backpack, the electronic device judges from the information captured by the acceleration sensor whether the user is in motion. If so, the device can turn off the screen, preventing the screen from being lit for a long time by accidental touches while the user is exercising. Moreover, even if the user has not started any exercise-recording program, the electronic device can automatically begin recording the user's exercise data.
It should be understood that the accelerator 230 can process data captured by additional external sensors.
Optionally, the accelerator 230 can recognize the first target data and the second target data in a time-shared manner.
For example, the first target data is recognized at a first moment, and the second target data is recognized after recognition of the first target data is complete.
The accelerator 230 may determine the order in which the first target data and the second target data are recognized according to the order in which they are received. The accelerator 230 may also judge the recognition order of the first target data and the second target data according to the priorities of the first external sensor 211 and the second external sensor 212.
Optionally, the first processor 221 is further configured to determine a first priority corresponding to the first target data and a second priority corresponding to the second target data.
The first priority and the second priority may be embodied in the order in which the first processor 221 sends the first target data and the second target data, or in the order in which the first processor 221 extracts data from the first sensor data and the second sensor data.
The first processor 221 may determine the sending order of the first target data and the second target data according to the order in which the first sensor data and the second sensor data are received. The first processor 221 may also judge the first priority and the second priority according to the priorities of the first external sensor 211 and the second external sensor 212; for example, information captured by the camera has a higher priority than information captured by the microphone.
Optionally, the first processor 221 is further configured to send the first target data and the second target data to the accelerator 230 in a time-shared manner.
Optionally, the integrated chip 200 further includes a controller 240 configured to determine the first priority corresponding to the first target data and the second priority corresponding to the second target data; and the accelerator 230 is specifically configured to recognize the first target data and the second target data in a time-shared manner according to the first priority and the second priority.
As shown in FIG. 6, the integrated chip 200 may further include a controller 240. That is, the integrated chip 200 also contains a controller 240 for scheduling data, and the controller 240 decides the order in which the accelerator 230 recognizes the data. Before the accelerator 230 recognizes the first target data and the second target data, the controller 240 assigns the first priority to the first target data and the second priority to the second target data, so that the accelerator 230 recognizes the first target data and the second target data in order of priority.
In one example, the controller 240 sends the accelerator 230 the first priority corresponding to the first target data and the second priority corresponding to the second target data, and the accelerator 230 decides, based on the first priority and the second priority, to recognize the first target data and the second target data in a time-shared manner.
In another example, the controller 240 controls the first processor 221 to send the first target data and the second target data to the accelerator 230 in a time-shared manner, so that the accelerator 230 can recognize the first target data and the second target data, one at a time, in the order received.
In another example, the accelerator 230 receives the first target data at a first moment and begins recognizing it. At a second moment, before recognition of the first target data is complete, the accelerator 230 receives the second target data, whose priority is higher than that of the first target data; the accelerator 230 may then interrupt the recognition of the first target data and recognize the second target data first.
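A minimal sketch of this priority-driven, time-shared scheduling with preemption, using a priority queue; the priority encoding (a lower number means a higher priority) and the slice-based recognition are assumptions for illustration:

```python
import heapq

def recognize_slice(data) -> bool:
    """Placeholder: run one time slice of the matching neural network model.
    Returns True when recognition of this item is complete."""
    return True

class Scheduler:
    """Toy time-shared recognizer; lower number = higher priority (assumed)."""

    def __init__(self):
        self.queue = []    # heap of (priority, seq, target_data)
        self.seq = 0       # ties broken by arrival order
        self.current = None

    def submit(self, priority: int, target_data) -> None:
        heapq.heappush(self.queue, (priority, self.seq, target_data))
        self.seq += 1
        # Preemption: a newly arrived higher-priority item interrupts the
        # recognition in progress, which is re-queued and resumed later.
        if self.current is not None and priority < self.current[0]:
            heapq.heappush(self.queue, self.current)
            self.current = None

    def step(self) -> None:
        """Advance recognition by one time slice (time-sharing)."""
        if self.current is None and self.queue:
            self.current = heapq.heappop(self.queue)
        if self.current is not None and recognize_slice(self.current[2]):
            self.current = None  # this item is fully recognized
```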
Optionally, the first processor 221 is further configured to determine a third recognition result based on the first sensor data, where the third recognition result and the first recognition result are jointly used to determine the target operation.
Optionally, the first processor is further configured to determine the target operation based on the first recognition result and the third recognition result.
In other words, simple tasks can be handed to the first processor 221 and complex tasks to the accelerator 230. The real-time-response first processor 221 can recognize the first sensor data to obtain the third recognition result, respond to the first recognition result sent by the accelerator, and determine the target operation that responds to both the first recognition result and the third recognition result.
Optionally, the first processor 221 is further configured to determine a third recognition result based on the first sensor data, and the second processor 222 is specifically configured to determine the target operation based on the first recognition result and the third recognition result.
In other words, the real-time-response first processor 221 can recognize the first sensor data to obtain the third recognition result, and the second processor 222 handles the first recognition result and the third recognition result and determines the target operation that responds to both. Put differently, simple tasks can be handed to the first processor 221 and complex tasks to the accelerator 230; the second processor 222 includes a processing unit that can respond to both the accelerator 230 and the first processor 221. FIG. 7 shows the data flow passed among the first external sensor 211, the first processor 221, the accelerator 230, and the second processor 222.
Taking the camera as an example, the real-time response processor can drive the camera at fixed intervals to take photos; the camera sends the photos to the real-time response processor through an interface, and the real-time response processor can judge what has changed in the photos from one moment to the next, reaching a judgment on whether a change in brightness occurred, while sending the image information for the same period to the accelerator 230. The accelerator 230 processes the image information to obtain labels for the image and the probability of each label. The second processor 222 can perform a corresponding operation based on the recognition results sent by the first processor 221 and the accelerator 230; for example, if the first recognition result contains the label "indoor" and the third recognition result is that the image changed in brightness, the second processor 222 can start a program that reminds the user to turn on the lighting.
Taking the microphone as an example, the real-time response processor can drive the microphone at fixed intervals to capture audio; the microphone sends the captured audio to the real-time response processor through an interface, and the real-time response processor can judge whether the owner's voice appears in the audio while sending the audio information for the same period to the accelerator 230. The accelerator 230 processes the audio information to obtain the content of the voice instruction and the probability of each candidate content. The second processor 222 can perform a corresponding operation based on the recognition results sent by the first processor 221 and the accelerator 230; for example, if the first recognition result contains the voice instruction "make a call" and the third recognition result is that the owner's voice is absent, the second processor 222 can start a program that locks the electronic device and prompts the person currently using the device to unlock it.
FIG. 8 is a schematic flowchart of a method for processing sensor data provided in this application. The method 800 shown in FIG. 8 may be performed by the processor 110 in FIG. 1 or by the integrated chip 200 shown in FIG. 2.
801. Obtain first sensor data from a first external sensor in real time, and extract first target data from the first sensor data.
Optionally, the first external sensor includes one of a camera, a microphone, a motion sensor, a distance sensor, an ambient light sensor, a magnetic field sensor, a fingerprint sensor, or a temperature sensor.
The first external sensor may be, for example, one of the camera 193, microphone 170C, pressure sensor 180A, gyroscope sensor 180B, barometric pressure sensor 180C, magnetic sensor 180D, acceleration sensor 180E, distance sensor 180F, proximity light sensor 180G, fingerprint sensor 180H, temperature sensor 180J, touch sensor 180K, ambient light sensor 180L, or bone conduction sensor 180M shown in FIG. 1.
"Target data extraction" in this application can be interpreted as extracting part of the information obtained from an external sensor. The extracted information may be regarded as equivalent to "features" and can be used for further processing. That is, part of the data obtained from the external sensor can be discarded, and the retained data can be further recognized. One approach is to extract target data according to preset rules, extracting the data that satisfies the preset rules. When no preset rule exists, all of the data obtained from the external sensor can be sent to the accelerator.
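A minimal sketch of rule-based target data extraction, assuming audio-like frames and an energy threshold as the preset rule; the threshold and frame format are illustrative assumptions:

```python
import numpy as np

ENERGY_THRESHOLD = 0.01  # assumed preset rule: keep only non-silent frames

def extract_target_data(frames: list[np.ndarray], has_rule: bool = True):
    """Return the subset of sensor frames that is worth recognizing further."""
    if not has_rule:
        return frames  # no preset rule: forward everything to the accelerator
    # Keep the frames that satisfy the preset rule (here, an energy threshold).
    return [f for f in frames if float(np.mean(f ** 2)) > ENERGY_THRESHOLD]
```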
Step 801 may be performed by the first processor in the integrated chip shown in FIG. 2.
802. Recognize the first target data according to a first neural network model to obtain a first recognition result, where the first recognition result is used to determine a target operation corresponding to the first recognition result.
The accelerator in the integrated chip shown in FIG. 2 may be used to recognize the first target data according to the first neural network model to obtain the first recognition result.
A neural network model is a mathematical model containing a neural network; it can be stored on a storage medium in the form of algorithms, code, and the like. The concept of a neural network has been introduced above and is not repeated here. In this application, the neural network model executed by the accelerator may be a simplified one. Simplifying the model may mean, for example, removing some layers, such as convolutional layers, pooling layers, or other network layers, from an already trained neural network model, or reducing the number of neurons in each layer of the model. A simplified neural network model needs to be retrained.
According to the first neural network model, the first recognition result can be obtained; according to the first recognition result, the target operation corresponding to the first recognition result can be determined.
The other processor may determine a final recognition result based on the first recognition result. Suppose the first recognition result includes feature A, feature B, and feature C, with a probability of 50% for feature A, 30% for feature B, and 20% for feature C; this means the first target data belongs to feature A with a probability of 50%, to feature B with a probability of 30%, and to feature C with a probability of 20%. One way to determine the final recognition result is to take the result with the largest probability value as the final recognition result, that is, the first target data belongs to feature A. Another way, applicable when feature A, feature B, and feature C are numerical values, is to take the weighted average of feature A, feature B, and feature C, where the weights are the probability values corresponding to the respective features; that is, the first target data corresponds to feature A×0.5 + feature B×0.3 + feature C×0.2 (the weights already sum to 1, so no further division is needed). Yet another way is to take the several features with the larger probability values as the final recognition result, for example, taking feature A and feature B together as the final recognition result.
The final recognition result can be associated with the target operation by means of an algorithm or code.
Optionally, the method further includes: performing the target operation.
In one case, the target operation may be to maintain the current state of the electronic device. In another case, the target operation may be to wake another processor inside the electronic device, switching that processor from the sleep state to the working state; the target operation may further include sending the first target data, the first recognition result, instructions, and other information to the other processor. The other processor that is woken may be a real-time response processor.
Optionally, the method further includes: determining the target operation based on the first recognition result.
The first recognition result is handled by the processor that responds to the first external sensor in real time, which determines the target operation that responds to the first recognition result. In other words, the apparatus performing the method 800 includes a processing unit that can drive the external sensor in real time, a processing unit that can respond to sensor data in real time, and a processing unit that can respond to the recognition result obtained by the accelerator.
Optionally, the method further includes: obtaining second sensor data from a second external sensor in real time and extracting second target data from the second sensor data; and recognizing the second target data according to a second neural network model to obtain a second recognition result, where the second recognition result and the first recognition result are jointly used to determine the target operation.
The second external sensor may be, for example, one of the camera 193, microphone 170C, pressure sensor 180A, gyroscope sensor 180B, barometric pressure sensor 180C, magnetic sensor 180D, acceleration sensor 180E, distance sensor 180F, proximity light sensor 180G, fingerprint sensor 180H, temperature sensor 180J, touch sensor 180K, ambient light sensor 180L, or bone conduction sensor 180M shown in FIG. 1.
That is, two different external sensors capture the first sensor data and the second sensor data respectively; the first target data can be extracted from the first sensor data and the second target data from the second sensor data; the first neural network model can recognize the first target data, and the second neural network model can recognize the second target data. There is an association or correspondence among the first neural network model, the type of the first target data, and the first external sensor, and likewise among the second neural network model, the type of the second target data, and the second external sensor. The processor can determine the target operation based on the first recognition result and the second recognition result.
Optionally, the parameters in the first neural network model are updated over a network.
The parameters in the first neural network model may be, for example, weight parameters, neuron activation/deactivation parameters, and the like.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to go beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division by logical function; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
The foregoing is merely the specific implementation of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
Claims (18)
- An integrated chip, comprising: a first processor configured to obtain first sensor data from a first external sensor and to extract first target data from the first sensor data, the first processor being a real-time response processor; and an accelerator configured to recognize the first target data according to a first neural network model to obtain a first recognition result, wherein the first recognition result is used to determine a target operation corresponding to the first recognition result.
- The integrated chip according to claim 1, further comprising: a second processor configured to determine the target operation according to the first recognition result.
- The integrated chip according to claim 2, wherein the first processor is further configured to notify the second processor to switch from a sleep state to a working state after extracting the first target data; and the second processor is specifically configured to determine the target operation according to the first recognition result while in the working state.
- The integrated chip according to claim 2 or 3, wherein the first processor is further configured to determine a third recognition result according to the first sensor data; and the second processor is specifically configured to determine the target operation according to the first recognition result and the third recognition result.
- The integrated chip according to any one of claims 1 to 3, wherein the first processor is further configured to obtain second sensor data from a second external sensor and to extract second target data from the second sensor data; and the accelerator is further configured to recognize the second target data according to a second neural network model to obtain a second recognition result, wherein the second recognition result and the first recognition result are jointly used to determine the target operation.
- The integrated chip according to claim 5, wherein the accelerator is further configured to recognize the first target data and the second target data in a time-shared manner.
- The integrated chip according to claim 6, further comprising: a controller configured to determine a first priority corresponding to the first target data and a second priority corresponding to the second target data; wherein the accelerator is specifically configured to recognize the first target data and the second target data in a time-shared manner according to the first priority and the second priority.
- The integrated chip according to claim 6, further comprising: a controller configured to determine a first priority corresponding to the first target data and a second priority corresponding to the second target data, and to control, according to the first priority and the second priority, the first processor to send the first target data and the second target data to the accelerator in a time-shared manner, so that the accelerator recognizes the first target data and the second target data in a time-shared manner.
- The integrated chip according to any one of claims 1 to 8, further comprising: a third processor configured to switch from a sleep state to a working state in response to the target operation.
- The integrated chip according to any one of claims 1 to 9, wherein parameters in the first neural network model are updated over a network.
- The integrated chip according to any one of claims 1 to 10, wherein the first external sensor comprises one of a camera, a microphone, a motion sensor, a distance sensor, an ambient light sensor, a magnetic field sensor, a fingerprint sensor, or a temperature sensor.
- An electronic device, comprising the integrated chip according to any one of claims 1 to 11.
- A method for processing sensor data, comprising: obtaining first sensor data from a first external sensor in real time and extracting first target data from the first sensor data; and recognizing the first target data according to a first neural network model to obtain a first recognition result, wherein the first recognition result is used to determine a target operation corresponding to the first recognition result.
- The method according to claim 13, further comprising: determining the target operation according to the first recognition result.
- The method according to claim 13 or 14, further comprising: obtaining second sensor data from a second external sensor in real time and extracting second target data from the second sensor data; and recognizing the second target data according to a second neural network model to obtain a second recognition result, wherein the second recognition result and the first recognition result are jointly used to determine the target operation.
- The method according to any one of claims 13 to 15, further comprising: performing the target operation.
- The method according to any one of claims 13 to 16, wherein parameters in the first neural network model are updated over a network.
- The method according to any one of claims 13 to 17, wherein the first external sensor comprises one of a camera, a microphone, a motion sensor, a distance sensor, an ambient light sensor, a magnetic field sensor, a fingerprint sensor, or a temperature sensor.