CN117975242A - First processing module, processing device, chip and terminal equipment - Google Patents


Info

Publication number: CN117975242A
Application number: CN202211321297.2A
Authority: CN (China)
Prior art keywords: unit, processing, processing module, target, image
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 张晨 (Zhang Chen)
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd; priority to CN202211321297.2A

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a first processing module, a processing device, a chip and a terminal device. The first processing module comprises a control unit, and the control unit is used for: determining characteristic information of a target object in a target image, and sending the characteristic information of the target object to a processing unit of a second processing module when the characteristic information of the target object matches preset characteristic information. The characteristic information of the target object is used by the processing unit to execute a corresponding operation.

Description

First processing module, processing device, chip and terminal equipment
Technical Field
The embodiment of the application relates to the technical field of electronics, in particular to a first processing module, a processing device, a chip and terminal equipment.
Background
In the related art, after an application processor (Application Processor, AP) acquires a target image captured by an imaging unit, the target image is input to a neural network (Neural Network, NN) processing unit. The neural network processing unit performs inference on the target image to obtain an inference result and sends the inference result to the application processor, and the application processor then analyzes the inference result to determine whether to execute a corresponding operation. However, the application processor in the related art needs to perform a large number of operations, which results in high power consumption of the application processor and high occupation of its computing resources.
Disclosure of Invention
The embodiment of the application provides a first processing module, a processing device, a chip and terminal equipment.
In a first aspect, an embodiment of the present application provides a first processing module, where the first processing module includes a control unit;
The control unit is used for: determining characteristic information of a target object in a target image, and sending the characteristic information of the target object to a processing unit of a second processing module under the condition that the characteristic information of the target object is matched with preset characteristic information;
the characteristic information of the target object is used for the processing unit to execute corresponding operation.
In a second aspect, an embodiment of the present application provides a processing apparatus, including: the device comprises a first processing module and a second processing module, wherein the first processing module comprises a control unit, and the second processing module comprises a processing unit;
The control unit is used for: determining characteristic information of a target object in a target image, and sending the characteristic information of the target object to the processing unit under the condition that the characteristic information of the target object is matched with preset characteristic information;
The processing unit is used for: and executing an operation corresponding to the characteristic information of the target object.
In a third aspect, an embodiment of the present application provides a chip, where the chip includes the first processing module according to the first aspect, or the chip includes the processing device according to the second aspect.
In a fourth aspect, an embodiment of the present application provides a terminal device, where the terminal device includes the first processing module in the first aspect, or the terminal device includes the processing apparatus in the second aspect, or the terminal device includes the chip in the third aspect.
In an embodiment of the present application, the first processing module includes a control unit; the control unit is used for: determining characteristic information of a target object in the target image, and sending the characteristic information of the target object to a processing unit of the second processing module under the condition that the characteristic information of the target object is matched with preset characteristic information; the characteristic information of the target object is used for the processing unit to execute corresponding operation. In this way, the processing unit of the second processing module receives the feature information of the target object, and executes the operation corresponding to the feature information of the target object, without determining whether the feature information of the target object matches the preset feature information, so that the power consumption and the operation resource of the second processing module can be reduced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application.
FIG. 1 is a schematic diagram of an image processing flow provided in the related art;
FIG. 2 is a schematic diagram of a first processing module according to an embodiment of the present application;
FIG. 3 is a schematic diagram of another first processing module according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a first processing module according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a first processing module according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a first processing module according to another embodiment of the present application;
FIG. 7 is a schematic diagram of a first processing module according to another embodiment of the present application;
Fig. 8 is a schematic structural diagram of a processing apparatus according to an embodiment of the present application;
FIG. 9 is a schematic diagram of another processing apparatus according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a software framework of an RTOS and an AP according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a chip according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical scheme of the present application will be specifically described below by examples and with reference to the accompanying drawings. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
It should be noted that: in the examples of the present application, "first," "second," etc. are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
In addition, the embodiments of the present application may be arbitrarily combined without any collision. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The Always On (AO/AON) scheme refers to a solution for intelligent perception realized based on an always-on camera (AO/AON Camera), typically including functions such as human-eye gaze recognition, owner recognition and gesture recognition, and is characterized by long-duration, low-power-consumption operation. An application scenario of the AO/AON Camera may be: image data is converted into scene information through a camera image sensor (which may be an AO/AON Camera), a neural network processing unit or a general Visual Digital Signal Processor (VDSP), and a specific image recognition algorithm, so that user needs are recognized and services are provided proactively.
Fig. 1 is a schematic diagram of an image processing flow provided in the related art, and the image processing flow may be referred to as a general scene image recognition software processing flow, as shown in fig. 1:
First, the AON Application (APP) starts a camera through the Camera framework to acquire a target image. In the implementation, the AON APP in the application layer sends a start message to a Sensor in the hardware layer sequentially through the AON Service (which may include an AP central processing unit (Central Processing Unit, CPU)) and the Camera application programming interface (Application Programming Interface, API) in the Framework layer, the Camera HAL Core in the hardware abstraction layer (Hardware Abstraction Layer, HAL), and the Camera Driver in the kernel driver layer, so that the Sensor starts working to capture surrounding target images. The Sensor may also be referred to as a camera, a webcam or a camera unit.
And secondly, preprocessing the target image. Wherein, by preprocessing the target image, the target image can be processed into an input image required by the inference model. In some embodiments, preprocessing the target image may include at least one of: scaling processing, clipping processing, size conversion processing, image format processing, denoising processing, noise addition processing, gradation processing, rotation processing, normalization processing, and the like. In the implementation process, the Sensor sends the acquired target image to the AON service sequentially through the camera driver, the camera HAL Core and the camera API, so that the AON service preprocesses the target image.
The preprocessed image and inference control information are then sent to a machine learning (Machine Learning, ML) Framework/Library in the Framework layer, so that the machine learning framework/library obtains an inference result of the target image. In some embodiments, to accelerate inference, the machine learning framework/library may perform hardware-accelerated inference by calling a specialized processor through the neural network APIs (NN APIs), the Neural Network Runtime (NN Runtime), Hardware Acceleration, and the NN Driver in the kernel driver layer. The specialized processor may include a DSP or a neural network processor (Neural Network Processing Unit, NPU), among others.
Finally, after the machine learning framework/library obtains the inference result of the target image, the inference result is sent to the AON Service, so that the AON Service performs state-machine processing, analyzes the user behavior, and then converts the scene into a corresponding operation or directly prompts the user to interact.
In the related art of fig. 1, when the AO Camera function is implemented through the framework, the AP CPU needs to be kept in an awake state. Because the scheme relies on a general camera path, it is usually necessary to turn on the entire camera subsystem (Camera Sub-System, CSS) and perform the corresponding image signal processor (Image Signal Processor, ISP) operations, which results in high power consumption; such high power consumption cannot be applied to scenes that require long-duration monitoring. In addition, since the general framework scheme in fig. 1 requires the upper-layer application to manage the entire data path, the image data needs to flow from the bottom-layer hardware to the upper-layer application, and the application then sends it down to the bottom layer for inference. The entire data path is therefore long and requires multiple data copies and inter-process communication, which leads to a long software path and poor performance, so the general framework scheme cannot adapt to application scenes that need a quick response, such as anti-peeping scenes (when a notification arrives in an anti-peeping scene, the owner needs to be identified in time to determine whether the notification content should be expanded).
In some embodiments, an independent low-power processing unit (or Low Power Unit) is provided, and the captured target image is obtained through the low-power unit so that the inference process on the target image is realized; hardware behaviors such as those of the NPU can be implemented at the bottom layer, while control and data post-processing are still completed by the AP CPU. In this way, the control of the parameter configuration of the image capturing unit and of the inference process, as well as the preprocessing of the target image, still reside in the AP CPU; as a result, the power consumption of the AP CPU and the occupation of its computing resources remain high.
In the embodiment of the application, a control unit is added and arranged outside the AP CPU, so that the control of the parameter configuration of the image capturing unit and of the inference process, as well as the preprocessing of the target image, are realized through the control unit; the power consumption of the AP CPU and the occupation of its computing resources can thus be reduced.
Fig. 2 is a schematic structural diagram of a first processing module according to an embodiment of the present application, as shown in fig. 2, the first processing module 21 includes a control unit 211; the control unit 211 is configured to: determining characteristic information of a target object in a target image, and sending the characteristic information of the target object to a processing unit 221 of the second processing module 22 when the characteristic information of the target object matches preset characteristic information; wherein the feature information of the target object is used for the processing unit 221 to perform a corresponding operation.
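For illustration only, the following C sketch outlines the decision flow just described, under the assumption of a hypothetical inter-core interface (icc_send) and a simple feature_info_t layout; neither is defined by this application.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout for the characteristic information of the target object. */
typedef struct {
    uint32_t object_type;   /* e.g. human eye, face, two-dimensional code */
    uint32_t action_code;   /* e.g. gaze, smile/gesture, code present */
    uint32_t duration_ms;   /* how long the action has persisted */
} feature_info_t;

/* Hypothetical inter-core channel from the control unit 211 to the processing unit 221. */
extern int icc_send(uint32_t msg_id, const void *payload, size_t len);

#define ICC_MSG_WAKEUP       0x01u
#define ICC_MSG_FEATURE_INFO 0x02u

static bool feature_matches(const feature_info_t *got, const feature_info_t *preset)
{
    /* "Match" here means identical type and action with at least the preset duration;
     * a similarity score compared against a threshold would fit the same slot. */
    return got->object_type == preset->object_type &&
           got->action_code == preset->action_code &&
           got->duration_ms >= preset->duration_ms;
}

void control_unit_step(const feature_info_t *detected, const feature_info_t *preset)
{
    if (feature_matches(detected, preset)) {
        /* Send the feature information so the processing unit can execute the
         * corresponding operation without re-checking the preset condition. */
        icc_send(ICC_MSG_WAKEUP, NULL, 0);
        icc_send(ICC_MSG_FEATURE_INFO, detected, sizeof(*detected));
    }
    /* On no match, nothing is sent and the second processing module stays idle. */
}
```

The point of the sketch is that the match decision happens entirely in the control unit, so the processing unit only ever receives feature information that already satisfies the preset condition.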
In some embodiments, the first processing module 21 may be a processing module other than the application processor AP. In some embodiments, the first processing module 21 may be referred to as a low power module. In some embodiments, the first processing module 21 may be included in one or more power domains. For example, the first processing module 21 may be included in a separate small system of a power domain.
In some embodiments, the control unit 211 may be a unit having a data processing function. For example, the control unit 211, the first processing module 21 or the second processing module 22 may comprise one or a combination of at least two of the following: a Microcontroller Unit (MCU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, a programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The control unit 211 may be an MCU, for example. Illustratively, the first processing module 21 may be a low power consumption unit.
In some embodiments, the control unit 211 may have a separate operating system. For example, the control unit 211 may run a more efficient Real-Time Operating System (RTOS). The real-time operating system may include one of: a real-time thread operating system (RT-Thread), FreeRTOS, a lightweight operating system (Lite Operating System, LiteOS), and the like.
Real-time performance is the defining characteristic of a real-time operating system, which may include a real-time task scheduler. The biggest difference between this task scheduler and those of other operating systems is that CPU time is allocated strictly by priority; time-slice rotation is not an essential option for a real-time scheduler. Unlike common time-sharing operating systems (Linux, Windows, Unix, etc.), a real-time operating system has more advantages in fields such as industrial automation, because its response time has a determinism that time-sharing operating systems cannot match. The time complexity of all its core algorithms, from the scheduler algorithm to the interrupt response system to the message passing mechanism, is O(1), which means that the response speed of the system is independent of the number of system tasks and the weight of the load and depends only on the priority design; even when the system runs at full load, it can respond to a high-priority event within a specified time after the event occurs. Because of this design concept and algorithmic advantage, and based on the relevant mathematical theory, a time-sharing system cannot achieve a deterministic response time under severe load conditions merely by improving processor performance.
In some embodiments, the second processing module 22 may include an application processor AP. In some embodiments, the processing unit 221 of the second processing module 22 may include some or all of the units in the AP CPU. The processing unit 221 of the second processing module 22 may be an AP CPU or a unit for controlling an image capturing unit in the AP CPU, for example.
In some embodiments, the control unit 211 may obtain the inference result of the target image captured by the image capturing unit, and determine the feature information of the target object in the target image according to the inference result of the target image. In other embodiments, the control unit 211 may obtain a preprocessed image obtained by preprocessing the target image, and determine the feature information of the target object in the target image according to the preprocessed image. In still other embodiments, the control unit 211 may obtain a target image, and determine characteristic information of a target object in the target image.
In some embodiments, the target object may include at least one of: any one or more objects in the real world such as a person, a part of the human body, an animal, a plant, an identification code, etc. The target object may be a human eye, the preset feature information may include motion information of the human eye facing the display screen of the terminal device (may be determined by one frame of target image), or the preset feature information may include motion information of the human eye facing the display screen of the terminal device for a period of time exceeding a first period of time (may be determined by multiple frames of target images). Also by way of example, the target object may be a face, the preset feature information may include action information of the face smiling/having a certain gesture to the display screen of the terminal device (which may be determined by one frame of the target image), or the preset feature information may include action information of the face smiling/having a certain gesture for a period exceeding the second period (which may be determined by a plurality of frames of the target image). As another example, the target object may be a two-dimensional code, the preset feature information may include feature information (may be determined by a frame of the target image) in which the two-dimensional code exists, or the preset feature information may include information in which a duration of the feature information in which the two-dimensional code exists exceeds a third duration.
In some embodiments, the feature information of the target object matching the preset feature information may include: the characteristic information of the target object being the same as the preset characteristic information, or the similarity between the characteristic information of the target object and the preset characteristic information being higher than a threshold value.
In some embodiments, the processing unit 221 may perform an operation corresponding to the feature information of the target object once it obtains the feature information of the target object. For example, when the feature information of the target object includes action information of human eyes facing the display screen of the terminal device, or action information of human eyes facing the display screen for a duration exceeding the first duration, the processing unit 221 may determine whether the human face matches a preset human face and/or whether the iris of the human eye matches a preset iris, control the terminal device to display the main interface in case of a match, and control the terminal device to display an interface prompting the user to input a password in case of no match. As another example, when the feature information of the target object includes action information of a face smiling at / making a certain gesture toward the display screen of the terminal device, or action information of such smiling / gesture lasting longer than the second duration, the processing unit 221 may open the camera application and control the display screen of the terminal device to switch from the off-screen state to displaying the shooting interface, without the user having to click the camera application on the main interface to enter the shooting interface. As a further example, when the feature information of the target object includes feature information indicating that a two-dimensional code is present, or information that the duration of the feature information indicating the two-dimensional code exceeds the third duration, the processing unit 221 may switch the display screen of the terminal device from the off-screen state to the display interface corresponding to the two-dimensional code, without the user having to click an application for scanning the two-dimensional code on the main interface and then select scanning.
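As a non-authoritative illustration of the dispatch on the processing-unit side, the sketch below maps the three example feature types above to the corresponding operations; every function name is an invented placeholder, not an interface defined by this application.

```c
typedef enum { OBJ_EYE_GAZE, OBJ_FACE_GESTURE, OBJ_QR_CODE } feature_kind_t;

/* Placeholder operations standing in for the behaviours described above. */
extern int  verify_face_or_iris(void);
extern void show_main_interface(void);
extern void show_password_prompt(void);
extern void open_camera_shooting_interface(void);
extern void show_qr_code_interface(void);

void processing_unit_dispatch(feature_kind_t kind)
{
    switch (kind) {
    case OBJ_EYE_GAZE:      /* human eyes facing the display screen */
        if (verify_face_or_iris())
            show_main_interface();
        else
            show_password_prompt();
        break;
    case OBJ_FACE_GESTURE:  /* smile / gesture toward the display screen */
        open_camera_shooting_interface();
        break;
    case OBJ_QR_CODE:       /* two-dimensional code detected */
        show_qr_code_interface();
        break;
    }
}
```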
In an embodiment of the present application, the first processing module includes a control unit; the control unit is used for: determining characteristic information of a target object in a target image, and sending the characteristic information of the target object to a processing unit of a second processing module under the condition that the characteristic information of the target object is matched with preset characteristic information; the characteristic information of the target object is used for the processing unit to execute corresponding operation. In this way, the processing unit of the second processing module executes the operation corresponding to the characteristic information of the target object after receiving the characteristic information of the target object, without determining whether the characteristic information of the target object matches the preset characteristic information, so that the power consumption and the operation resource of the second processing module can be reduced.
In some embodiments, the control unit 211 is further configured to: in case that the feature information of the target object matches the preset feature information, transmitting a first wake-up signal to the processing unit 221; wherein the first wake-up signal is used for the processing unit 221 to transition from the sleep state to the working state.
In such an embodiment, the processing unit 221 of the second processing module 22 transitions from the sleep state to the active state upon receiving the first wake-up signal.
Illustratively, the control unit 211 may transmit the first wake-up signal to the processing unit 221 when the characteristic information of the target object matches the preset characteristic information, and the control unit 211 may further transmit the characteristic information of the target object to the processing unit 221 of the second processing module 22. The first wake-up signal and the characteristic information of the target object may be transmitted simultaneously or sequentially; for example, the characteristic information of the target object may be transmitted after a target duration has elapsed since transmission of the first wake-up signal was completed. In some embodiments, the control unit 211 may transmit the first wake-up signal to the processing unit 221, receive an acknowledgement signal transmitted by the processing unit 221, and then transmit the characteristic information of the target object to the processing unit 221 of the second processing module 22.
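A minimal sketch of the acknowledged ordering mentioned above (wake-up signal, acknowledgement, then feature information), reusing the hypothetical feature_info_t type and ICC message IDs from the earlier sketch; icc_wait_ack and the 10 ms timeout are assumptions.

```c
#include <stddef.h>
#include <stdint.h>

extern int icc_send(uint32_t msg_id, const void *payload, size_t len);
extern int icc_wait_ack(uint32_t msg_id, uint32_t timeout_ms);

int notify_processing_unit(const feature_info_t *info)
{
    if (icc_send(ICC_MSG_WAKEUP, NULL, 0) != 0)
        return -1;
    /* Alternatives described above: send both messages at once, or wait a fixed
     * target duration instead of an explicit acknowledgement. */
    if (icc_wait_ack(ICC_MSG_WAKEUP, 10 /* ms, illustrative */) != 0)
        return -1;
    return icc_send(ICC_MSG_FEATURE_INFO, info, sizeof(*info));
}
```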
In other embodiments, the first wake-up signal may be replaced with an interrupt signal or a notification signal. The interrupt signal or the notification signal is used to instruct the processing unit 221 to receive the characteristic information of the target object, or to instruct the processing unit 221 to receive the information.
In this way, the processing unit 221 may be in a sleep state, and in case that the characteristic information of the target object matches the preset characteristic information, the control unit 211 wakes up the processing unit 221, so that the processing unit 221 is waken up when the processing unit 221 is required to perform a corresponding operation, thereby enabling a reduction in power consumption of the processing unit 221.
In some embodiments, the control unit 211 may send a shutdown signal to the processing unit 221 to cause the processing unit 221 to enter a sleep state. In other embodiments, the processing unit 221 may enter the sleep state by itself according to the application usage of the terminal device. For example, the processing unit 221 enters the sleep state after the terminal device enters the off-screen state, or the processing unit 221 enters the sleep state after a preset time period has elapsed since the terminal device entered the off-screen state. In some embodiments, when the processing unit 221 autonomously enters the sleep state, it may transmit to the control unit 211 a notification signal for indicating that the processing unit 221 enters the sleep state.
Fig. 3 is a schematic structural diagram of another first processing module according to an embodiment of the present application, and the embodiment of fig. 3 is different from the embodiment of fig. 2 in that: in the embodiment corresponding to fig. 3, the control unit 211 is further configured to: at least one of the following is transmitted to the image capturing unit 31: the second wake-up signal, the first shooting parameter information and the first closing signal;
wherein, the second wake-up signal is used for the camera unit 31 to enter a working state;
the first shooting parameter information is used for the image capturing unit 31 to capture the target image;
the first closing signal is used for the image capturing unit 31 to enter a closed state.
In this way, the control unit 211 can control the imaging unit 31 to enter the operating state and the off state, and instruct the imaging unit 31 of the first shooting parameter information, thereby enabling the imaging unit 31 to perform shooting according to the first shooting parameter information, and obtain the target image.
In some embodiments, the first photographing parameter information may include at least one of: shooting size, shooting period, shooting frame rate, exposure amount, sharpness, focal length, and the like. In this way, the image capturing unit 31 can capture images of surrounding objects based on the first capturing parameter information sent from the control unit 211. In other embodiments, the first photographing parameter information may be stored in the photographing unit 31 in advance, and in the case where the photographing unit 31 enters the operation state, the photographing unit 31 may perform photographing according to the first photographing parameter information stored in advance. In still other embodiments, the control unit 211 may transmit the updated first photographing parameter information to the photographing unit 31 in case of the update of the first photographing parameter information, and the photographing unit 31 stores the updated first photographing parameter information, thereby enabling the photographing unit 31 to photograph using the updated first photographing parameter information.
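The following sketch shows one possible encoding of the first shooting parameter information and its delivery to the image capturing unit; the field names, the register layout and apb_write_block are assumptions, not part of this application.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint16_t width;           /* shooting size */
    uint16_t height;
    uint16_t frame_rate_fps;  /* shooting frame rate */
    uint32_t period_ms;       /* shooting period for periodic capture */
    uint32_t exposure_us;     /* exposure amount */
    uint16_t sharpness;
    uint16_t focal_length;    /* arbitrary device units */
} shoot_params_t;

/* Hypothetical block write to the camera unit over the APB bus. */
extern int apb_write_block(uint32_t base_addr, const void *data, size_t len);

int configure_camera(uint32_t camera_base, const shoot_params_t *p)
{
    return apb_write_block(camera_base, p, sizeof(*p));
}
```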
In some embodiments, the control unit 211 is further configured to: receiving the instruction information sent by the processing unit 221, and sending feedback information instructing the processing unit 221 to wake up the image capturing unit 31 to the processing unit 221;
Wherein the indication information includes at least one of: the time when the processing unit 221 wakes up or shuts down the image capturing unit 31, the reason information that the processing unit 221 wakes up or shuts down the image capturing unit 31, the priority that the processing unit 221 wakes up or shuts down the image capturing unit 31;
The feedback information is used for the processing unit 221 to transmit at least one of the following to the image capturing unit 31: the third wake-up signal, the second shooting parameter information and the second closing signal; the third wake-up signal is used for the camera unit 31 to enter a working state, the second shooting parameter information is used for the camera unit 31 to acquire the target image, and the second close signal is used for the camera unit 31 to enter a closing state.
In this case, the control unit 211 can control the behavior of the processing unit 221 to control the image capturing unit 31 to enter the on state or the off state, thereby avoiding a collision between the control unit 211 and the processing unit 221 when controlling the image capturing unit 31.
In some embodiments, the time at which the processing unit 221 wakes up or shuts down the camera unit 31 may be at least one of: the start time at which the processing unit 221 wakes up or shuts down the image capturing unit 31, the end time at which the processing unit 221 wakes up or shuts down the image capturing unit 31, and the duration for which the processing unit 221 wakes up or shuts down the image capturing unit 31. The cause information for waking up or shutting down the camera unit 31 may be indicated with specific information comprising one or more bits. Optionally, the control unit 211 and the processing unit 221 may store in advance a correspondence between at least one piece of specific information and at least one piece of cause information, where one piece of specific information may correspond to one piece of cause information for waking up or shutting down the image capturing unit 31.
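Purely as an illustration, the indication information and a trivial arbitration rule could look like the sketch below; the field encoding and the priority comparison are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t start_time_ms;  /* when the processing unit intends to wake/shut down the camera unit */
    uint32_t end_time_ms;
    uint32_t duration_ms;
    uint8_t  reason_code;    /* index into a pre-agreed table of cause information */
    uint8_t  priority;       /* priority of this wake/shutdown request */
    uint8_t  is_shutdown;    /* 0 = wake request, 1 = shutdown request */
} camera_indication_t;

/* The control unit grants the request (and returns feedback) only when the
 * processing unit's priority is not lower than its own, avoiding conflicting
 * control of the image capturing unit. */
bool control_unit_grants(const camera_indication_t *req, uint8_t own_priority)
{
    return req->priority >= own_priority;
}
```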
In some embodiments, the second photographing parameter information may be the same as or different from or partially the same as the first photographing parameter information. For example, the second photographing parameter information may include at least one of: shooting size, shooting period, shooting frame rate, exposure amount, sharpness, focal length, and the like.
Fig. 4 is a schematic structural diagram of a first processing module according to an embodiment of the present application, and the embodiment corresponding to fig. 4 differs from the embodiment corresponding to fig. 3 in that: in the embodiment corresponding to fig. 4, the control unit 211 is further configured to: receive the motion information transmitted by the motion measuring unit 41, and control at least one of the following to operate when the motion information matches a preset motion: the image capturing unit 31, an image signal processing ISP unit (not shown in fig. 4) in the first processing module 21, and a neural network processing unit (not shown in fig. 4) in the first processing module 21.
In other embodiments, the control unit 211 is further configured to: receive an interrupt signal or an inter-core message sent by the processing unit 221 or another processing module, and control at least one of the following to operate according to the interrupt signal or the inter-core message: the image capturing unit 31, the image signal processing ISP unit in the first processing module 21, and the neural network processing unit in the first processing module 21.
In some embodiments, the control unit 211 may communicate with the camera unit 31 via an Advanced Peripheral Bus (APB), and/or the control unit 211 may communicate with the ISP unit via an Advanced High-performance Bus (AHB), and/or the control unit 211 may communicate with the neural network processing unit via an AHB bus.
In some embodiments, the motion measurement unit 41 may include at least one of: an inertial measurement unit (Inertial Measurement Unit, IMU), a speed sensor, an acceleration sensor, a displacement sensor, an angle sensor, and the like.
In some embodiments, an interrupt signal or an inter-core message may be used to instruct control of the operation of the camera unit 31.
In some embodiments, the control unit 211 controlling the operation of the image capturing unit 31 may include at least one of: the control unit 211 transmits a second wake-up signal to the image capturing unit 31 to operate the image capturing unit 31; the control unit 211 transmits the first photographing parameter information to the image pickup unit 31 to cause the image pickup unit 31 to photograph the target image based on the first photographing parameter information.
In some embodiments, the control unit 211 controlling the operation of the ISP unit may include at least one of the following: the control unit 211 sends a fourth wake-up signal to the ISP unit to cause the ISP unit to operate; the control unit 211 transmits the processing order information of the at least one subunit and/or the image processing parameter information of the at least one subunit to the ISP unit so that the ISP unit processes the target image according to the processing order information of the at least one subunit and/or the image processing parameter information of the at least one subunit.
In some embodiments, the control unit 211 controlling the operation of the neural network processing unit may include at least one of: the control unit 211 transmits a fifth wakeup signal to the neural network processing unit to cause the neural network processing unit to operate; the control unit 211 transmits control inference information to the neural network processing unit, so that the neural network processing unit infers the preprocessed image based on the control inference information, and a target inference result is obtained.
In some embodiments, the control unit 211 may have preset motions stored therein, which may include a plurality of motions; illustratively, the preset motions may include at least one of: a motion of lifting the terminal device, a motion of lifting the terminal device and keeping the display screen of the terminal device facing the face, a motion of turning over the terminal device, a motion of keeping the terminal device in a certain posture, a motion of keeping the camera unit 31 of the terminal device facing an identification code, and the like.
In some embodiments, the matching of the motion information to the preset motion may include: the motion information is the same as the preset motion or the similarity between the motion information and the preset motion is higher than a threshold.
For example, when the control unit 211 determines that the motion information is a motion of lifting the terminal device and keeping the display screen of the terminal device facing the face, it may control at least one of the following to operate: the image capturing unit 31, the image signal processing ISP unit in the first processing module 21, and the neural network processing unit in the first processing module 21. After that, when the control unit 211 determines that the feature information of the target object includes action information of the face smiling at / making a certain gesture toward the display screen for a duration exceeding the second duration, the control unit 211 may send the feature information of the target object to the processing unit 221, so that the processing unit 221 controls the display screen to switch from the off-screen state to displaying the shooting interface.
As another example, when the control unit 211 determines that the motion information is a motion of raising the terminal device and keeping the image capturing unit 31 of the terminal device facing a certain object, it may control at least one of the following to operate: the image capturing unit 31, the image signal processing ISP unit in the first processing module 21, and the neural network processing unit in the first processing module 21. After that, when the control unit 211 determines that the feature information of the target object includes information that the duration of the feature information indicating the two-dimensional code exceeds the third duration, the control unit 211 may send the feature information of the target object to the processing unit 221, so that the processing unit 221 controls the display screen to switch from the off-screen state to the display interface corresponding to the two-dimensional code.
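A sketch of this motion-triggered path, under the assumption of a simple motion classifier and per-block wake calls; none of these function names come from this application.

```c
#include <stdint.h>

typedef enum {
    MOTION_NONE,
    MOTION_RAISE_TO_FACE,    /* lift the device with the screen toward the face */
    MOTION_RAISE_TO_OBJECT,  /* lift the device with the camera toward an object/code */
} motion_t;

extern motion_t classify_motion(const int16_t accel[3], const int16_t gyro[3]);
extern void camera_unit_wake(void);  /* second wake-up signal, e.g. over APB */
extern void isp_unit_wake(void);     /* fourth wake-up signal, e.g. over AHB */
extern void npu_unit_wake(void);     /* fifth wake-up signal, e.g. over AHB */

void on_imu_sample(const int16_t accel[3], const int16_t gyro[3])
{
    motion_t m = classify_motion(accel, gyro);
    if (m == MOTION_RAISE_TO_FACE || m == MOTION_RAISE_TO_OBJECT) {
        camera_unit_wake();
        isp_unit_wake();
        npu_unit_wake();
    }
}
```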
Fig. 5 is a schematic structural diagram of still another first processing module according to an embodiment of the present application, and the embodiment corresponding to fig. 5 is different from the embodiment corresponding to fig. 3 in that: in the embodiment corresponding to fig. 5, the first processing module 21 further includes an image signal processing ISP unit 212 and a first storage unit 213;
The ISP unit 212 is configured to: receiving the target image sent by the image capturing unit 31, preprocessing the target image to obtain a preprocessed image, and storing the preprocessed image in the first storage unit 213; wherein the preprocessed image is used to determine characteristic information of the target object.
In some embodiments, the control unit 211 is further configured to: transmitting to the ISP unit 212 at least one of: a fourth wake-up signal, a third off signal, processing order information of at least one subunit of the ISP units 212, image processing parameter information of the at least one subunit;
the ISP unit 212 is also configured to at least one of:
entering a working state according to the fourth wake-up signal;
entering a closing state according to the third closing signal;
According to the processing sequence information of the at least one subunit, the target image is sequentially processed through the at least one subunit, and the preprocessed image is obtained;
and processing the target image in sequence according to the image processing parameter information of the at least one subunit to obtain the preprocessed image.
The at least one subunit may comprise at least one of: a subunit for scaling the target image, a subunit for cropping the target image, a subunit for performing size conversion on the target image, a subunit for performing image format processing on the target image, a subunit for denoising the target image, a subunit for adding noise to the target image, a subunit for performing grayscale processing on the target image, a subunit for rotating the target image, and a subunit for normalizing the target image.
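As one hypothetical way to express the processing-order information for these subunits, a fixed order can be handed to the ISP unit as an array of subunit identifiers; isp_configure and the chosen order are assumptions.

```c
#include <stddef.h>

typedef enum {
    ISP_SUB_SCALE, ISP_SUB_CROP, ISP_SUB_RESIZE, ISP_SUB_FORMAT, ISP_SUB_DENOISE,
    ISP_SUB_ADD_NOISE, ISP_SUB_GRAY, ISP_SUB_ROTATE, ISP_SUB_NORMALIZE,
} isp_subunit_t;

/* Hypothetical ISP-unit configuration call: processing order plus optional parameters. */
extern int isp_configure(const isp_subunit_t *order, size_t count,
                         const void *params, size_t params_len);

int setup_nn_input_preprocess(void)
{
    /* Example order that yields a small grayscale, normalized image for the inference model. */
    static const isp_subunit_t order[] = {
        ISP_SUB_CROP, ISP_SUB_SCALE, ISP_SUB_GRAY, ISP_SUB_NORMALIZE,
    };
    return isp_configure(order, sizeof(order) / sizeof(order[0]), NULL, 0);
}
```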
Fig. 6 is a schematic structural diagram of a first processing module according to another embodiment of the present application, and the embodiment of fig. 6 is different from the embodiment of fig. 5 in that: in the corresponding embodiment of fig. 6, the first processing module 21 further includes a neural network processing unit 214; the neural network processing unit 214 is configured to: reading the preprocessed image from the first storage unit 213, reasoning the preprocessed image according to a target reasoning model to obtain a target reasoning result, and storing the target reasoning result into the first storage unit 213; the target reasoning result is used for determining characteristic information of the target object.
In some embodiments, the control unit 211 is further configured to: the target inference result is read from the first storage unit 213, and feature information of the target object is determined according to the target inference result.
Illustratively, one frame of target image may correspond to one target inference result; for example, the presence of a face in a target image may be determined from one frame of the image. As another example, one frame of target image may correspond to a plurality of target inference results; for example, a frame of target image may determine that both a face and a preset gesture are present in the image. As a further example, a plurality of frames of target images may correspond to one target inference result; for example, the action information of a human hand may be determined from multiple frames of target images, or it may be determined from multiple frames of target images that the duration of a face smiling at / making a certain gesture toward the display screen of the terminal device exceeds the second duration.
In some embodiments, the target inference results may be determined directly as characteristic information of the target object. In other embodiments, the characteristic information of the target object may be determined based on a plurality of target inference results. For example, if the plurality of target inference results each have a human hand, the characteristic information of the target object may include motion information of the human hand.
In some embodiments, the control unit 211 is further configured to: transmitting to the neural network processing unit 214 at least one of: a fifth wake-up signal, a fourth off signal, and inference control information;
the neural network processing unit 214 is further configured to at least one of:
Entering a working state according to the fifth wake-up signal;
entering a closing state according to the fourth closing signal;
And reading the target inference model from the first storage unit 213 according to the inference control information, and inferring the preprocessed image according to the target inference model to obtain a target inference result.
In some embodiments, the inference control information may include at least one of: identification information of the target inference model, parameter information of the target inference model and control information of an inference process. Wherein, the parameter information of the target inference model may include structural information and/or model weight information of the target inference model.
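The inference control information could, for example, be packed as in the sketch below before being sent to the neural network processing unit 214; the layout and npu_run are assumptions, not interfaces defined by this application.

```c
#include <stdint.h>

typedef struct {
    uint32_t model_id;        /* identification information of the target inference model */
    uint32_t weights_offset;  /* where the model weights sit in the first storage unit 213 */
    uint32_t weights_len;
    uint32_t input_offset;    /* preprocessed image location in the first storage unit 213 */
    uint32_t output_offset;   /* where the target inference result is written back */
    uint8_t  run_serial;      /* 1: run the first and second inference models back to back */
} inference_ctrl_t;

/* Hypothetical driver call that starts one inference run on the NPU. */
extern int npu_run(const inference_ctrl_t *ctrl);
```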
In some embodiments, the target inference model comprises a first inference model and a second inference model, and the target inference result comprises a first inference result and a second inference result;
The neural network processing unit 214 is further configured to: reasoning the preprocessed image according to the first reasoning model to obtain the first reasoning result; and reasoning the preprocessed image according to the second reasoning model to obtain the second reasoning result.
In this case, one target image may correspond to the first inference result and the second inference result. The first inference model may be used to identify gesture information in the target image and the second inference model may be used to identify a face in the target image. In some scenarios, the inference process corresponding to the first inference model may be performed in series or in parallel with the inference process corresponding to the second inference model.
Fig. 7 is a schematic structural diagram of a first processing module according to another embodiment of the present application, and the embodiment of fig. 7 is different from the embodiment of fig. 6 in that: in the embodiment corresponding to fig. 7, the control unit 211 is further configured to: the target inference model is read from the second storage unit 222 of the second processing module 22, and the target inference model is stored in the first storage unit 213.
In some implementations, the control unit 211 may determine the target inference model according to the interrupt signal, the inter-core message or the motion information. If the control unit 211 determines that the target inference model is already stored in the first storage unit 213, it does not read the target inference model from the second storage unit 222. If the control unit 211 determines that the target inference model is not stored in the first storage unit 213, it reads the target inference model from the second storage unit 222 and stores it in the first storage unit 213, so that the neural network processing unit 214 can read the target inference model from the first storage unit 213.
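A minimal sketch of this caching behaviour, assuming placeholder helpers for checking, allocating and copying: the target inference model is copied from the second storage unit into the first storage unit only when it is missing.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

extern bool  first_storage_has_model(uint32_t model_id);
extern void *first_storage_alloc(size_t len);
extern int   second_storage_read_model(uint32_t model_id, void *dst, size_t max_len);

int ensure_model_loaded(uint32_t model_id, size_t model_len)
{
    if (first_storage_has_model(model_id))
        return 0;                          /* already cached, skip the read from DDR */
    void *dst = first_storage_alloc(model_len);
    if (dst == NULL)
        return -1;
    return second_storage_read_model(model_id, dst, model_len);
}
```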
In some embodiments, the first storage unit 213 or the second storage unit 222 may include one, or an integration of more than one, of the following computer storage media/memories: Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Ferromagnetic Random Access Memory (FRAM), Flash Memory, magnetic surface memory, optical disc, Compact Disc Read-Only Memory (CD-ROM), and the like.
In some implementations, the control unit 211 communicates with the ISP unit 212 via an Advanced High-performance Bus (AHB), and the communication distance between the control unit 211 and the ISP unit 212 is less than a first distance. In some embodiments, the communication distance between the control unit 211 and the ISP unit 212 may be the shortest distance over which the two can be connected by an AHB bus.
In some implementations, the control unit 211 communicates with the neural network processing unit 214 via an AHB bus, and the communication distance between the control unit 211 and the neural network processing unit 214 is less than the second distance. In some embodiments, the communication distance between the control unit 211 and the neural network processing unit 214 may be the shortest distance between the two that can be connected through an AHB bus.
In this way, the control unit 211 communicates with the ISP unit 212 through a shorter AHB bus and/or the control unit 211 communicates with the neural network processing unit 214 through a shorter AHB bus, so that Input-Output (IO) access efficiency can be improved.
In some embodiments, ISP unit 212 may store the preprocessed image to first memory unit 213 via an advanced extensible interface (Advanced eXtensible Interface, AXI) bus, and/or neural network processing unit 214 may read the preprocessed image from first memory unit 213 via the AXI bus, and/or neural network processing unit 214 may store the target inference result to first memory unit 213 via the AXI bus, and/or control unit 211 may read the target inference result from first memory unit 213 via the AXI bus.
The embodiment of the application also provides a processing device, which comprises: the device comprises a first processing module and a second processing module, wherein the first processing module comprises a control unit, and the second processing module comprises a processing unit;
The control unit is used for: determining characteristic information of a target object in a target image, and sending the characteristic information of the target object to the processing unit under the condition that the characteristic information of the target object is matched with preset characteristic information;
The processing unit is used for: and executing an operation corresponding to the characteristic information of the target object.
Fig. 8 is a schematic structural diagram of a processing apparatus according to an embodiment of the present application, and the difference between the embodiment of fig. 8 and the embodiment of fig. 3 is that: in the embodiment corresponding to fig. 8, the processing unit 221 is further configured to: transmit at least one of the following to the image capturing unit 31: the third wake-up signal, the second shooting parameter information and the second closing signal;
the image capturing unit 31 is further configured to at least one of:
Collecting the target image according to the third wake-up signal;
Acquiring the target image according to the second shooting parameter information;
And entering a closing state according to the second closing signal.
Fig. 9 is a schematic structural diagram of another processing apparatus according to an embodiment of the present application. As shown in fig. 9, the low-power-consumption unit (corresponding to the first processing module in the above embodiments) may include a low-power sensor (Lowpower Sensor, corresponding to the image capturing unit in the above embodiments), a Mobile Industry Processor Interface (MIPI) Camera Serial Interface (CSI), a lightweight ISP (ISP Lite, corresponding to the ISP unit 212 above), an intelligent engine (Smart Engine, SME) Bus Matrix, a general NPU (General-NPU, GNPU, corresponding to the neural network processing unit 214 above), an On-chip Random Access Memory (OCRAM, corresponding to the first storage unit 213 above), and an MCU (corresponding to the control unit above).
The low-power sensor transmits the captured target image to ISP Lite through the MIPI CSI. ISP Lite preprocesses the target image to obtain a preprocessed image and stores the preprocessed image to the OCRAM through the SME Bus Matrix. The GNPU reads the preprocessed image and a target inference model from the OCRAM through the SME Bus Matrix, obtains a target inference result from the preprocessed image and the target inference model, and stores the target inference result to the OCRAM through the SME Bus Matrix. The MCU reads the target inference result through the SME Bus Matrix, determines the feature information of the target object in the target image based on the target inference result, wakes up the AP CPU (corresponding to the processing unit in the above embodiments) in the AP (corresponding to the second processing module in the above embodiments) when the feature information of the target object matches the preset feature information, and sends the feature information of the target object to the AP CPU. The MCU may communicate with the AP CPU via an Inter-Core Connectivity (ICC) interface. The MCU may also read the target inference model from the DDR (corresponding to the second storage unit in the above embodiments) through the SME Bus Matrix and write the target inference model to the OCRAM through the SME Bus Matrix.
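Condensing this data flow into one cycle, a sketch of the MCU-side loop might look as follows; every call is a placeholder for the hardware/driver behaviour in the text, and feature_info_t is the hypothetical type from the earlier sketch.

```c
#include <stdbool.h>

extern int  sensor_capture_frame(void);     /* low-power sensor -> MIPI CSI -> ISP Lite */
extern int  isp_lite_preprocess(void);      /* writes the preprocessed image to OCRAM */
extern int  gnpu_infer(void);               /* reads image + model from OCRAM, writes result */
extern int  read_inference_result(feature_info_t *out);
extern bool matches_preset(const feature_info_t *f);
extern void icc_wake_ap_and_send(const feature_info_t *f);

void ao_camera_cycle(void)
{
    if (sensor_capture_frame() != 0) return;
    if (isp_lite_preprocess() != 0)  return;
    if (gnpu_infer() != 0)           return;

    feature_info_t f;
    if (read_inference_result(&f) == 0 && matches_preset(&f))
        icc_wake_ap_and_send(&f);  /* the AP stays asleep for every non-matching frame */
}
```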
The MCU can control the low-power-consumption sensor to operate or shut down through the SME Bus Matrix and an Input-Output Multiplexer (IOMUX), and the AP CPU can also control the low-power-consumption sensor to operate or shut down through the IOMUX.
In addition, the MCU may also transmit the first photographing parameter information to the photographing unit through the SME BUS Matrix and the IOMUX, may also transmit the processing order information of at least one sub-unit and/or the image processing parameter information of at least one sub-unit to the ISP unit 212 through the SME BUS Matrix, and may also transmit the inference control information to the neural network processing unit 214 through the SME BUS Matrix.
In the implementation process, the processes of software control, reasoning process, data post-processing (namely, determining the characteristic information of the target object according to the target reasoning result), hardware control and the like are all concentrated in the MCU of the low-power consumption processing unit, and only a control interface (namely, an interface of the control unit/MCU for receiving interrupt signals or inter-core messages or motion information) and a result feedback interface (namely, an ICC interface of the control unit/MCU) are exposed to the upper layer application.
The MCU realizes all logic control functions of the AO Camera, including at least one of the following:
1. Sensor (corresponding to the above image capturing unit) function control and resource management; the resource management may include the MCU managing the behavior of the AP CPU in controlling the Sensor;
2. Image preprocessing software flow control and hardware management (power-on/power-off, effect parameters, pipeline design, etc.); the pipeline design includes the MCU sending the processing order information of the at least one subunit to ISP Lite;
3. Neural network processing unit control (wake-up, inference control, etc.);
4. Multi-level recognition flow control (multi-scenario fusion, multi-model serial inference, etc.); for example, the target inference model includes a first inference model and a second inference model, the first and second inference models apply to different scenarios, and/or the inference process of the first inference model and the inference process of the second inference model proceed in series;
5. invalid data filtering (reducing AP CPU post-processing and AP wake-up frequency);
6. The AP wakes up.
In the hardware design process, the necessary IP cores (Intellectual Property cores) required to integrate the AO Camera in the power-domain-independent small system (i.e., the above low-power-consumption unit) may include: the MCU, the GNPU, the ISP unit, the MIPI CSI and the OCRAM.
MCU: pipeline, peripheral control, NPU post-processing and the like which are responsible for connecting the AO Camera functions in series.
GNPU: responsible for model reasoning, has higher IO performance through the AXI bus access OCRAM of the advanced extensible interface.
ISP unit: is responsible for image preprocessing to improve the recognition success rate, and has higher IO performance through AXI bus access OCRAM.
OCRAM: the stored image and model data and some operation intermediate values can be directly accessed by MCU through high-performance bus AHB, and the bandwidth is high.
FIG. 10 is a schematic diagram of a software framework of an RTOS and an AP according to an embodiment of the present application. As shown in FIG. 10, the RTOS may be the software framework of the MCU and may also be referred to as RTOS-SME in some embodiments; the RTOS may include System Services & Components, a Real-Time (RT) Kernel, an Interface, Services, and a HAL/Driver.
The system services and components may include Low Power Control (LPC), Shell, and a Device Manager Framework. The LPC controls the gating of the bus: if there is no bus access event on the bus, the bus gating is switched to a power-saving mode. Shell is used to send commands to the system.
RT Kernel may include: semaphores, event sets, mutexes, mailboxes (MailBox), message queues, the CPU architecture (ARM), thread management, clock management, interrupt management, and memory management.
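Purely as a sketch of how these kernel primitives might be used, the following shows a frame-ready hand-off between the ISP LITE HANDLER and the LP CAMERA SERVICE through a message queue. The rtos_mq_* wrappers and the queue name are hypothetical stand-ins for the actual kernel API.

```c
/* Sketch: frame hand-off via an RT Kernel message queue (hypothetical rtos_mq_* API). */
#include <stdint.h>

typedef struct { uint32_t frame_addr; uint32_t frame_len; } frame_msg_t;

extern int rtos_mq_send(const char *queue, const void *msg, uint32_t size);
extern int rtos_mq_recv(const char *queue, void *msg, uint32_t size, uint32_t timeout_ms);

/* Called from the ISP LITE HANDLER when preprocessing of one frame has completed. */
void isp_lite_frame_done(uint32_t addr, uint32_t len)
{
    frame_msg_t m = { .frame_addr = addr, .frame_len = len };
    rtos_mq_send("lp_cam_frames", &m, sizeof(m));
}

/* LP CAMERA SERVICE thread body: waits for preprocessed frames and drives inference. */
void lp_camera_service_thread(void *arg)
{
    (void)arg;
    frame_msg_t m;
    for (;;) {
        if (rtos_mq_recv("lp_cam_frames", &m, sizeof(m), 1000) == 0) {
            /* hand the preprocessed frame at m.frame_addr to the NPU stage here */
        }
    }
}
```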
The interface comprises a Snapshot (snap shot) and a two-dimensional code (Quick Response Code, QR code).
The service comprises: low power camera service (LP CAMERA SERVICE), ISP Lite processing (ISP LITE HANDLER).
The HAL/Driver comprises: ISP DRIVER, ICC, MIPI DRIVER, camera serial (CAMERA SERIAL, CS) Driver, mailbox (MailBox).
The RTOS can communicate with the LP CAMERA DRIVER in the Driver layer of the AP through the ICC. In the AP, the LP CAMERA DRIVER can communicate with the LP CAMERA HAL in the HAL layer, and the LP CAMERA HAL can communicate with the LP Camera Provider Service. The AP also includes an Application.
LP CAMERA SERVICE: the LP CAMERA SERVICE in the SME subsystem is responsible for communication with the AP subsystem and the GNPU subsystem, chains the whole image recognition flow together, and notifies the user of the recognition result.
ISP LITE DRIVER (or ISP LITE HANDLER): the ISP Lite hardware driver and control module, which provides the relevant control and notification interfaces;
MIPI DRIVER (or MIPI HANDLER): the MIPI CSI hardware driver and control module, which provides the MIPI CSI channel control function;
ICC: the inter-core communication module based on the MailBox, which provides the foundation for each Service to connect the subsystems;
LP CAMERA HAL: the encapsulated LP Camera HAL on the AP side; it communicates with the RTOS through the LP CAMERA DRIVER, controls the Camera Sensor through the Camera Sensor driver, receives control commands from the upper-layer LP Camera Provider Service, and notifies the upper layer of the LP Camera recognition result.
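As a non-limiting sketch of the MailBox-based ICC path described above, the recognition result could be reported from the RTOS side to the LP CAMERA DRIVER on the AP as a small fixed-layout message. The channel number, message layout, and icc_send() wrapper are hypothetical.

```c
/* Sketch: reporting an LP Camera recognition result to the AP over the ICC (hypothetical API). */
#include <stdint.h>

#define ICC_CH_LP_CAMERA 3u              /* assumed ICC channel toward the LP CAMERA DRIVER */

typedef struct {
    uint16_t msg_type;                   /* e.g. 0x0001 = recognition result                */
    uint16_t object_label;               /* category of the matched target object           */
    uint32_t timestamp_ms;               /* capture time of the target image                */
} icc_lp_cam_msg_t;

extern int icc_send(uint32_t channel, const void *payload, uint32_t size);

int lp_camera_report_result(uint16_t label, uint32_t timestamp_ms)
{
    icc_lp_cam_msg_t msg = {
        .msg_type     = 0x0001u,
        .object_label = label,
        .timestamp_ms = timestamp_ms,
    };
    return icc_send(ICC_CH_LP_CAMERA, &msg, sizeof(msg));
}
```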
In some embodiments, the ISP, NPU, and MCU processing units hang on nearby AHB buses; register control is performed over the shortest AHB path, peripheral control over the shortest APB path, and a high-bit-width Advanced eXtensible Interface (AXI) bus is used in combination, so that the IO access bandwidth and efficiency are improved and the performance of the whole flow is improved. The hardware can complete its tasks in the shortest time, so that the MCU can control the hardware to enter an off or low-power-consumption mode, further reducing the power consumption of the whole AO Camera function.
In the embodiment of the application, the function realization process is carried out in the low-power processing unit, so that an extremely low-power AO Camera function can be realized. In addition, portability is enhanced: the RTOS and its independent AO Camera control logic can be conveniently ported to an Internet of Things (IoT) device that meets the hardware requirements.
In the embodiment of the application, the MCU cooperates with a real-time operating system to operate hardware such as the Sensor, ISP, and NPU (DSP) independently, which brings lower power consumption without depending on the AP CPU; with the independent AO Camera architecture, the nearest bus route can be used when accessing peripherals or internal hardware IP, so the control efficiency is higher; triggering by information sources such as interrupts, IMU perception, and ICC messages is supported, and the added trigger sources bring better extensibility to upper-layer applications.
In the embodiment of the application, the Sensor, ISP, and NPU are located in the same Low Power Island, forming a minimum system for image recognition, and independent Power rails are used to subdivide the Power domain, so that the first processing module can keep operating independently when the AP is powered off. The MCU acts as the master controller and drives the Sensor, ISP, NPU, and related peripherals to work in a time-shared manner, minimizing the power consumption. The related devices belong to the same subsystem and can be accessed through the shortest bus path, which brings the best performance improvement. The subsystem also integrates an intelligent sensing hub (SensorHub, which may include a motion measurement unit); for example, the MCU can directly access the data of the related IMU and use the IMU data to judge behaviors, scenes, and body postures, which, combined with the image recognition of the AO Camera, brings more function extension. The AO Camera function supports being triggered by information sources such as interrupts, IMU perception, and ICC messages; the RTOS can directly receive the information of these trigger sources to start the AO Camera and its detection work, and the AP can be in a sleep state during this period. The multiple trigger sources give the upper layer more application extensibility.
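By way of illustration, the three trigger paths mentioned above (interrupt, IMU perception, ICC message) could converge on a single entry point in the RTOS while the AP stays asleep. The function names and the preset-motion check below are hypothetical.

```c
/* Sketch: unified handling of AO Camera trigger sources while the AP sleeps (hypothetical names). */
#include <stdbool.h>
#include <stdint.h>

typedef enum { TRIG_IRQ, TRIG_IMU, TRIG_ICC } trig_src_t;

extern bool imu_motion_matches_preset(const int16_t accel[3], const int16_t gyro[3]);
extern void ao_camera_start_detection(void);   /* kicks off the Sensor -> ISP -> NPU flow */

void ao_camera_on_trigger(trig_src_t src,
                          const int16_t accel[3], const int16_t gyro[3])
{
    switch (src) {
    case TRIG_IMU:
        /* only start when the motion information matches the preset motion */
        if (!imu_motion_matches_preset(accel, gyro)) {
            return;
        }
        break;
    case TRIG_IRQ:   /* interrupt signal from the processing unit or another processing module */
    case TRIG_ICC:   /* inter-core message */
    default:
        break;
    }
    ao_camera_start_detection();               /* the AP can remain in a sleep state meanwhile */
}
```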
Fig. 11 is a schematic structural diagram of a chip according to an embodiment of the present application, and as shown in fig. 11, the chip 110 includes the first processing module 21 in any of the above embodiments.
In other embodiments, a chip may also be provided, which includes the first processing module and the second processing module of any of the embodiments described above.
In some embodiments, the chip may also include an IO interface.
Fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application, and as shown in fig. 12, the terminal device 120 may include the first processing module 21 in any of the foregoing embodiments.
In other embodiments, a terminal device may also be provided, where the terminal device includes the first processing module and the second processing module in any of the foregoing embodiments.
In still other embodiments, a terminal device may also be provided, where the terminal device includes a chip in any of the above embodiments.
In some embodiments, the terminal device may further include a communication module, which may be in communication with the second processing module.
The terminal device in the present application may be referred to as a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), a subscriber unit, a subscriber station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or user equipment. The terminal device may comprise one or a combination of at least two of the following: Internet of Things (IoT) devices, satellite terminals, Wireless Local Loop (WLL) stations, Personal Digital Assistants (PDA), handheld devices with wireless communication capabilities, computing devices or other processing devices connected to wireless modems, servers, mobile phones, tablet computers (Pad), computers with wireless transceiver capabilities, palm computers, desktop computers, portable media players, smart speakers, navigation devices, wearable devices such as smart watches, smart glasses, and smart necklaces, pedometers, digital TVs, Virtual Reality (VR) terminal devices, Augmented Reality (AR) terminal devices, wireless terminals in industrial control, wireless terminals in self driving, wireless terminals in remote medical surgery, wireless terminals in smart grid, wireless terminals in transportation safety, wireless terminals in smart city, wireless terminals in smart home, vehicle-mounted devices, customer premises equipment (CPE), and other devices.
The descriptions of the processing device, chip, and terminal device embodiments above are similar to the description of the first processing module embodiment above and have similar advantageous effects. For technical details not disclosed in the embodiments of the processing device, the chip, and the terminal device of the present application, please refer to the description of the first processing module embodiment of the present application.
It should be noted that, in the embodiment of the present application, if the above functions are implemented in the form of a software function module and sold or used as a separate product, they may also be stored in a computer storage medium. Based on such understanding, the technical solution of the embodiments of the present application, or the part thereof that contributes to the related art, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a terminal device to execute all or part of the methods described in the embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment of the present application" or "the foregoing embodiment" or "some implementations" or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "an embodiment of the application" or "the foregoing embodiments" or "some implementations" or "some embodiments" in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically connected, electrically connected or can be communicated with each other; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
In several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communicative connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communicative connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
The features disclosed in the several product embodiments provided by the application can be combined arbitrarily under the condition of no conflict to obtain new product embodiments. The features disclosed in the several device embodiments provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new device embodiment.
It should be noted that the drawings in the embodiments of the present application are only for illustrating schematic positions of respective devices on the terminal device, and do not represent actual positions in the terminal device, the actual positions of respective devices or respective areas may be changed or shifted according to actual situations (for example, structures of the terminal device), and proportions of different parts in the terminal device in the drawings do not represent actual proportions.
As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein is merely one relationship describing the association of the associated objects, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
The foregoing is merely an embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A first processing module, characterized in that the first processing module comprises a control unit;
The control unit is used for: determining characteristic information of a target object in a target image, and sending the characteristic information of the target object to a processing unit of a second processing module under the condition that the characteristic information of the target object is matched with preset characteristic information;
the characteristic information of the target object is used for the processing unit to execute corresponding operation.
2. The first processing module of claim 1, wherein:
The control unit is further configured to: transmitting a first wake-up signal to the processing unit under the condition that the characteristic information of the target object is matched with the preset characteristic information;
The first wake-up signal is used for converting the processing unit from a sleep state to a working state.
3. The first processing module of claim 1, wherein:
The control unit is further configured to: transmitting at least one of the following to the image capturing unit: the second wake-up signal, the first shooting parameter information and the first closing signal;
the second wake-up signal is used for enabling the camera unit to enter a working state;
the first shooting parameter information is used for the camera unit to acquire the target image;
the first closing signal is used for enabling the camera unit to enter a closing state.
4. The first processing module of claim 1, wherein:
The control unit is further configured to: receiving the indication information sent by the processing unit, and sending feedback information for indicating the processing unit to wake up the image pickup unit to the processing unit;
wherein the indication information includes at least one of: the time when the processing unit wakes up or closes the camera unit, the reason information of the processing unit wakes up or closes the camera unit, and the priority of the processing unit wakes up or closes the camera unit;
The feedback information is used for the processing unit to send at least one of the following to the camera unit: the third wake-up signal, the second shooting parameter information and the second closing signal; the third wake-up signal is used for enabling the camera unit to enter a working state, the second shooting parameter information is used for enabling the camera unit to acquire the target image, and the second closing signal is used for enabling the camera unit to enter a closing state.
5. The first processing module of claim 1, wherein the control unit is further configured to:
Receiving an interrupt signal or an inter-core message sent by the processing unit or other processing modules, and controlling, according to the interrupt signal or the inter-core message, at least one of the following to operate: an image pickup unit, an image signal processing ISP unit in the first processing module, and a neural network processing unit in the first processing module; or
Receiving motion information sent by a motion measurement unit, and controlling, under the condition that the motion information is matched with a preset motion, at least one of the following to operate: the image capturing unit, the image signal processing ISP unit in the first processing module, and the neural network processing unit in the first processing module.
6. The first processing module according to any one of claims 1 to 5, further comprising an image signal processing ISP unit and a first storage unit;
The ISP unit is used for: receiving the target image sent by the image pickup unit, preprocessing the target image to obtain a preprocessed image, and storing the preprocessed image into the first storage unit; wherein the preprocessed image is used to determine characteristic information of the target object.
7. The first processing module of claim 6, wherein:
The control unit is further configured to: transmitting to the ISP unit at least one of: a fourth wake-up signal, a third off signal, processing sequence information of at least one subunit in the ISP unit, and image processing parameter information of the at least one subunit;
the ISP unit is further configured to at least one of:
entering a working state according to the fourth wake-up signal;
entering a closing state according to the third closing signal;
According to the processing sequence information of the at least one subunit, the target image is sequentially processed through the at least one subunit, and the preprocessed image is obtained;
and processing the target image in sequence according to the image processing parameter information of the at least one subunit to obtain the preprocessed image.
8. The first processing module of claim 6, further comprising a neural network processing unit;
The neural network processing unit is used for: reading the preprocessed image from the first storage unit, reasoning the preprocessed image according to a target reasoning model to obtain a target reasoning result, and storing the target reasoning result into the first storage unit; the target reasoning result is used for determining characteristic information of the target object.
9. The first processing module of claim 8, wherein:
The control unit is further configured to: and reading the target reasoning result from the first storage unit, and determining the characteristic information of the target object according to the target reasoning result.
10. The first processing module of claim 8, wherein:
The control unit is further configured to: transmitting to the neural network processing unit at least one of: a fifth wake-up signal, a fourth off signal, and inference control information;
The neural network processing unit is further configured to at least one of:
Entering a working state according to the fifth wake-up signal;
entering a closing state according to the fourth closing signal;
And reading the target inference model from the first storage unit according to the inference control information, and inferring the preprocessed image according to the target inference model to obtain a target inference result.
11. The first processing module of any of claims 8 to 10, wherein the target inference model comprises a first inference model and a second inference model, and the target inference result comprises a first inference result and a second inference result;
The neural network processing unit is further configured to: reasoning the preprocessed image according to the first reasoning model to obtain the first reasoning result; and reasoning the preprocessed image according to the second reasoning model to obtain the second reasoning result.
12. The first processing module according to any one of claims 8 to 10, wherein the control unit is further configured to: and reading the target inference model from a second storage unit of the second processing module, and storing the target inference model in the first storage unit.
13. The first processing module according to any of claims 8 to 10, wherein,
The control unit communicates with the ISP unit through an Advanced High-performance Bus (AHB), and the communication distance between the control unit and the ISP unit is smaller than a first distance; and/or,
The control unit communicates with the neural network processing unit through the AHB bus, and the communication distance between the control unit and the neural network processing unit is smaller than a second distance.
14. A processing apparatus, comprising: the device comprises a first processing module and a second processing module, wherein the first processing module comprises a control unit, and the second processing module comprises a processing unit;
The control unit is used for: determining characteristic information of a target object in a target image, and sending the characteristic information of the target object to the processing unit under the condition that the characteristic information of the target object is matched with preset characteristic information;
The processing unit is used for: and executing an operation corresponding to the characteristic information of the target object.
15. The processing apparatus according to claim 14, wherein:
The processing unit is further configured to: transmitting at least one of the following to the image capturing unit: the third wake-up signal, the second shooting parameter information and the second closing signal;
The camera shooting unit is also used for at least one of the following:
Collecting the target image according to the third wake-up signal;
Acquiring the target image according to the second shooting parameter information;
And entering a closing state according to the second closing signal.
16. A chip comprising the first processing module of any one of claims 1 to 13, or the processing device of claim 14 or 15.
17. A terminal device comprising the first processing module of any of claims 1 to 13, or the terminal device comprising the processing apparatus of claim 14 or 15, or the terminal device comprising the chip of claim 16.
CN202211321297.2A 2022-10-26 2022-10-26 First processing module, processing device, chip and terminal equipment Pending CN117975242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211321297.2A CN117975242A (en) 2022-10-26 2022-10-26 First processing module, processing device, chip and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211321297.2A CN117975242A (en) 2022-10-26 2022-10-26 First processing module, processing device, chip and terminal equipment

Publications (1)

Publication Number Publication Date
CN117975242A true CN117975242A (en) 2024-05-03

Family

ID=90858631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211321297.2A Pending CN117975242A (en) 2022-10-26 2022-10-26 First processing module, processing device, chip and terminal equipment

Country Status (1)

Country Link
CN (1) CN117975242A (en)


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination