CN111902791A - Method, chip and terminal for identifying user behavior - Google Patents

Method, chip and terminal for identifying user behavior

Info

Publication number
CN111902791A
CN111902791A (application CN201880091728.6A)
Authority
CN
China
Prior art keywords
image data
terminal
user
face
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880091728.6A
Other languages
Chinese (zh)
Inventor
韦益德
孙忠
李大伟
叶波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN111902791A publication Critical patent/CN111902791A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/725 Cordless telephones
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of this application provide a method, a chip, and a terminal for identifying user behavior. The method includes: the terminal acquires image data in real time through an always-on low-power camera; the terminal analyzes, according to the image data, whether a specific event occurs; the terminal determines that the specific event occurs; and the terminal starts an application function corresponding to artificial intelligence (AI). The technical solution provided in the embodiments of this application can keep the AI capability continuously on, sense the user's actions, behavioral intentions, environmental changes, and the like in real time, and proactively provide more natural human-computer interaction and a better user experience.

Description

Method, chip and terminal for identifying user behavior
Technical Field
The embodiment of the application relates to the field of communication, in particular to a method, a chip and a terminal for identifying user behaviors.
Background
With the development of artificial intelligence (AI) technology, AI is applied more and more widely in terminal devices, making the functions of terminal devices increasingly intelligent. For example, with the popularization of AI technology in terminal devices, terminal devices have become increasingly capable in fields such as perception, image processing, audio processing, and language processing.
In the prior art, AI is integrated in the software system, and an AI function basically requires a specific user action or the triggering of another application module. In a prior-art terminal device, when there is a service need, the corresponding application module calls the corresponding AI function. The AI functions in such a terminal device cannot be kept continuously on and cannot continuously sense changes such as user behaviors and behavioral intentions through AI technology, so the user experience is poor.
Therefore, how to enable a terminal device to actively sense changes such as user behaviors and behavioral intentions in real time and improve user experience has become an urgent problem to be solved.
Disclosure of Invention
The embodiments of this application provide a method, a chip, and a terminal for identifying user behavior, so that a terminal device can keep the artificial intelligence (AI) capability continuously on in a low power consumption mode, sense user actions, behavioral intentions, and the like in real time, and proactively provide more natural human-computer interaction and a better user experience.
In a first aspect, a method for identifying user behavior is provided, the method including: the terminal acquires image data in real time through an always-on low-power camera; the terminal analyzes, according to the image data, whether a specific event occurs; the terminal determines that the specific event occurs; and the terminal starts an application function corresponding to artificial intelligence (AI).
In the embodiment of the application, the coprocessor is connected with the main processor, and the low-power-consumption camera can be connected with the coprocessor.
With reference to the first aspect, in certain implementations of the first aspect, the terminal invokes an AI algorithm to analyze whether the specific event occurs according to the image data.
With reference to the first aspect, in certain implementations of the first aspect, the specific event is a change in user face data in the image data.
With reference to the first aspect, in certain implementations of the first aspect, the determining, by the terminal, that the specific event occurs includes: the terminal determines that the user face data in the image data changes from present to absent, or from absent to present.
With reference to the first aspect, in some implementations of the first aspect, the terminal starts a face recognition function according to the user face data in the image data, and determines that the user face data in the image data matches preset first face data; and the terminal unlocks the screen.
With reference to the first aspect, in certain implementations of the first aspect, the terminal determines that the user face data in the image data changes from absent to present; and the terminal lights up the screen.
With reference to the first aspect, in certain implementations of the first aspect, the terminal determines that the user face data in the image data changes from present to absent; and the terminal locks the screen.
With reference to the first aspect, in some implementations of the first aspect, the terminal determines that no user face data is present in the image data within a preset time; and the terminal turns off the screen.
With reference to the first aspect, in certain implementations of the first aspect, the terminal inputs the image data into an AI algorithm model, and the AI algorithm model invokes a corresponding algorithm in an AI operator library to analyze whether the image data contains user face data.
With reference to the first aspect, in certain implementations of the first aspect, the AI algorithm library is solidified in hardware of the terminal.
With reference to the first aspect, in certain implementations of the first aspect, a hardware accelerator is used to invoke a corresponding operator in the AI operator library, and whether the specific event occurs is analyzed according to the image data.
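For illustration only, the overall flow of the first aspect could be sketched as the following C loop; the function names (acquire_frame, detect_specific_event, start_ai_application) and the frame size are hypothetical placeholders rather than part of the claimed implementation, and the always-on loop is bounded here so the sketch terminates.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical frame type delivered by the always-on low-power camera. */
typedef struct { unsigned char pixels[160 * 120]; } frame_t;

/* Placeholder stubs: a real terminal would bind these to camera and AI drivers. */
static bool acquire_frame(frame_t *f)               { (void)f; return true;  }
static bool detect_specific_event(const frame_t *f) { (void)f; return false; }
static void start_ai_application(void)              { puts("AI application started"); }

int main(void)
{
    frame_t frame;
    for (int i = 0; i < 10; i++) {            /* bounded here; always-on in practice */
        if (!acquire_frame(&frame))            /* step: acquire image data in real time */
            continue;
        if (detect_specific_event(&frame)) {   /* steps: analyze and determine the specific event */
            start_ai_application();            /* step: start the AI application function */
        }
    }
    return 0;
}
```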
In a second aspect, there is provided a chip for identifying user behavior, the chip comprising: a coprocessor and a main processor, wherein the coprocessor is connected with the main processor,
the coprocessor is used for executing the following operations: acquiring image data in real time through a low-power-consumption camera, wherein the low-power-consumption camera is connected with the coprocessor and is always turned on; analyzing whether a specific event occurs according to the image data; and determining that the specific event occurs, and sending an Artificial Intelligence (AI) message to the main processor.
The main processor is configured to: and opening an application function corresponding to the AI according to the received AI message.
With reference to the second aspect, in some implementations of the second aspect, the coprocessor is specifically configured to: and calling an AI algorithm to analyze whether the specific event occurs or not according to the image data.
With reference to the second aspect, in some implementations of the second aspect, the specific event is a change in user face data in the image data.
With reference to the second aspect, in certain implementations of the second aspect, the coprocessor comprises: an AI engine module, an AI algorithm library module, an AI application layer module,
the AI engine module is configured to: invoke a corresponding AI algorithm to perform AI computation according to the image data;
the AI algorithm library module is configured to: invoke a corresponding AI operator in an AI operator library to analyze, according to the input image data, whether the user face data in the image data changes from present to absent or from absent to present, and report a recognition result to the AI application layer module;
the AI application layer module is configured to: report the AI message to the main processor according to the recognition result.
With reference to the second aspect, in some implementations of the second aspect, the main processor is specifically configured to: start a face recognition function according to the user face data in the image data, determine that the user face data in the image data matches preset face data, and unlock the screen.
With reference to the second aspect, in some implementations of the second aspect, the main processor is further specifically configured to: determine that the user face data in the image data changes from absent to present, and light up the screen.
With reference to the second aspect, in some implementations of the second aspect, the main processor is specifically configured to: determine that the user face data in the image data changes from present to absent, and lock the screen.
With reference to the second aspect, in some implementations of the second aspect, the main processor is further specifically configured to: determine that no user face data is present in the image data within a preset time, and turn off the screen.
With reference to the second aspect, in some implementations of the second aspect, the coprocessor further includes: a hardware accelerator module, configured to accelerate the analysis, by the AI algorithm library module, of whether the user face data in the image data changes from present to absent or from absent to present.
With reference to the second aspect, in certain implementations of the second aspect, the AI operator library is solidified in hardware of the coprocessor.
The chip for identifying user behavior provided in this application enables a terminal device to keep the artificial intelligence (AI) capability continuously on in a low power consumption mode, sense user actions, behavioral intentions, and the like in real time, and proactively provide more natural human-computer interaction and a better user experience.
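As a rough illustration of the division of work in the second aspect, the sketch below models the coprocessor reporting an AI event message that wakes the main processor; the event identifiers and the direct function call standing in for the wake-up mechanism are assumptions, since the application does not define a message format.

```c
#include <stdio.h>

/* Hypothetical AI event identifiers; the application does not define a wire format. */
typedef enum {
    AI_EVENT_FACE_APPEARED,
    AI_EVENT_FACE_DISAPPEARED,
    AI_EVENT_SCENE_RECOGNIZED
} ai_event_t;

/* Main-processor side: woken by the message, then starts the matching AI function. */
static void main_processor_on_ai_message(ai_event_t ev)
{
    switch (ev) {
    case AI_EVENT_FACE_APPEARED:    puts("wake: start face recognition / light screen"); break;
    case AI_EVENT_FACE_DISAPPEARED: puts("wake: lock or turn off screen");               break;
    case AI_EVENT_SCENE_RECOGNIZED: puts("wake: start scene-related service");           break;
    }
}

/* Coprocessor side: called once a specific event is determined from the image data. */
static void coprocessor_report(ai_event_t ev)
{
    /* A real chip would use an interrupt or mailbox; a direct call stands in here. */
    main_processor_on_ai_message(ev);
}

int main(void)
{
    coprocessor_report(AI_EVENT_FACE_APPEARED);
    return 0;
}
```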
In a third aspect, a terminal is provided, including: the system comprises a coprocessor, a main processor and a low-power-consumption camera, wherein the coprocessor is connected with the main processor, and the low-power-consumption camera is connected with the coprocessor.
The coprocessor is used for executing the following operations: acquiring image data in real time through a low-power-consumption camera which is always turned on; analyzing whether a specific event occurs according to the image data; and determining that the specific event occurs, and sending an Artificial Intelligence (AI) message to the main processor.
The main processor is configured to: and opening an application function corresponding to the AI according to the received AI message.
With reference to the third aspect, in some implementations of the third aspect, the coprocessor is specifically configured to: and calling an AI algorithm to analyze whether the specific event occurs or not according to the image data.
With reference to the third aspect, in certain implementations of the third aspect, the specific event is a change in user face data in the image data.
With reference to the third aspect, in certain implementations of the third aspect, the coprocessor includes: an AI engine module, an AI algorithm library module, an AI application layer module,
the AI engine module is configured to: invoke a corresponding AI algorithm to perform AI computation according to the image data;
the AI algorithm library module is configured to: invoke a corresponding AI operator in an AI operator library to analyze, according to the input image data, whether the user face data in the image data changes from present to absent or from absent to present, and report a recognition result to the AI application layer module;
the AI application layer module is configured to: report the AI message to the main processor according to the recognition result.
With reference to the third aspect, in some implementations of the third aspect, the main processor is specifically configured to: start a face recognition function according to the user face data in the image data, determine that the user face data in the image data matches preset face data, and unlock the screen.
With reference to the third aspect, in some implementations of the third aspect, the main processor is further specifically configured to: determine that the user face data in the image data changes from absent to present, and light up the screen.
With reference to the third aspect, in some implementations of the third aspect, the main processor is specifically configured to: determine that the user face data in the image data changes from present to absent, and lock the screen.
With reference to the third aspect, in some implementations of the third aspect, the main processor is further specifically configured to: determine that no user face data is present in the image data within a preset time, and turn off the screen.
With reference to the third aspect, in some implementations of the third aspect, the coprocessor further includes: a hardware accelerator module, configured to accelerate the analysis, by the AI algorithm library module, of whether the user face data in the image data changes from present to absent or from absent to present.
With reference to the third aspect, in certain implementations of the third aspect, the AI operator library is solidified in hardware of the coprocessor.
The embodiments of this application provide a terminal for identifying user behavior, so that the terminal device can keep the artificial intelligence (AI) capability continuously on in a low power consumption mode, sense user actions, behavioral intentions, and the like in real time, and proactively provide more natural human-computer interaction and a better user experience.
In a fourth aspect, a terminal is provided, including:
the acquisition module is used for acquiring image data in real time through the low-power-consumption camera which is always turned on.
And the analysis module is used for analyzing whether a specific event occurs or not according to the image data.
A determination module for determining that the specific event occurs.
And the processing module is used for starting the application function corresponding to the artificial intelligence AI.
In the embodiment of the application, the coprocessor is connected with the main processor, and the low-power-consumption camera can be connected with the coprocessor.
With reference to the fourth aspect, in some implementations of the fourth aspect, the analysis module is specifically configured to: invoke an AI algorithm to analyze, according to the image data, whether the specific event occurs.
With reference to the fourth aspect, in some implementations of the fourth aspect, the specific event is a change in user face data in the image data.
With reference to the fourth aspect, in some implementations of the fourth aspect, the determining module is specifically configured to: determine whether the user face data in the image data changes from present to absent or from absent to present.
With reference to the fourth aspect, in some implementations of the fourth aspect, the determining module is specifically configured to: start a face recognition function according to the user face data in the image data, and determine that the user face data in the image data matches preset first face data; and unlock the screen.
With reference to the fourth aspect, in some implementations of the fourth aspect, the determining module is specifically configured to: determine that the user face data in the image data changes from absent to present; and light up the screen.
With reference to the fourth aspect, in some implementations of the fourth aspect, the determining module is specifically configured to: determine that no user face data is present in the image data within a preset time; and turn off the screen.
With reference to the fourth aspect, in some implementations of the fourth aspect, the analysis module is specifically configured to: input the image data into an AI algorithm model, where the AI algorithm model invokes a corresponding algorithm in an AI operator library to analyze whether the image data contains user face data.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the AI algorithm library is solidified in hardware of the terminal.
With reference to the fourth aspect, in some implementations of the fourth aspect, the analysis module is further specifically configured to: and calling a corresponding operator in the AI operator library through a hardware accelerator, and analyzing whether the specific event occurs or not according to the image data.
In a fifth aspect, there is provided a computer readable storage medium comprising a computer program which, when run on a computer, causes the computer to perform the method as described in the first aspect or any one of the implementations of the first aspect.
In a sixth aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the method according to the first aspect or any one of the implementations of the first aspect.
Drawings
Fig. 1 is a schematic flow chart of a method for identifying user behavior provided by an embodiment of the present application.
Fig. 2 is a schematic block diagram of a hardware architecture of a terminal 200 according to an embodiment of the present disclosure.
Fig. 3 is a schematic flowchart of a terminal device face recognition unlocking scene provided in an embodiment of the present application.
Fig. 4 is a schematic flowchart of a scene of intelligent bright-screen unlocking of a terminal device according to an embodiment of the present application.
Fig. 5 is a schematic flowchart of a scene of intelligent screen blanking of a terminal device according to an embodiment of the present application.
Fig. 6 is a schematic flowchart of a terminal device environment scene identification scenario provided in an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a terminal 700 according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a chip 800 for identifying user behavior according to an embodiment of the present application.
Fig. 9 is a schematic view of an interface change of a terminal after the method for identifying user behavior provided by the embodiment of the application is used.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
It should be understood that, in the embodiments of this application, the type of the terminal device is not specifically limited, and the terminal device may include, but is not limited to, a mobile station (MS), a mobile phone, a user equipment (UE), a handset, portable equipment, a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a radio frequency identification (RFID) terminal device for logistics, a handheld device having a wireless communication function, a computing device or another device connected to a wireless modem, a vehicle-mounted device, a wearable device, or a terminal device in a vehicle network.
By way of example and not limitation, in the embodiments of this application, the terminal device may also be a wearable device. A wearable device may also be referred to as a wearable smart device, a general term for devices designed and developed by applying wearable technology to items worn daily, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a hardware device, but also implements powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-sized devices that can implement all or some functions without relying on a smartphone, such as smart watches or smart glasses, and devices that focus on only one type of application function and need to be used together with other devices such as smartphones, for example, various smart bands for vital-sign monitoring and smart jewelry.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making. Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision and reasoning, human-computer interaction, recommendation and search, AI basic theory, and the like.
With the development of artificial intelligence (AI) technology, AI is applied more and more widely in terminal devices, making the functions of terminal devices increasingly intelligent. For example, with the popularization of AI technology in terminal devices, terminal devices have become increasingly capable in fields such as perception, image processing, audio processing, and language processing.
In the prior art, AI is integrated in the software system, and an AI function basically requires a specific user action or the triggering of another application module. In a prior-art terminal device, when there is a service need, the corresponding application module calls the corresponding AI function. The AI functions in such a terminal device cannot be kept continuously on, and changes such as user behaviors, behavioral intentions, and the environment cannot be sensed by keeping AI technology continuously on.
The following uses the face unlocking function as an example to analyze in detail how, in the prior art, the corresponding AI function is triggered only by a specific user action or by the triggering of another application module.
In the prior art, when a user needs to start the face unlocking function of a terminal device, the user picks up or rotates the mobile phone. While the system is dormant or in normal standby, the terminal device recognizes this specific user action through a sensor or the like. For example, the terminal device may recognize a specific action of the user, such as picking up or rotating the mobile phone, through a sensor such as an accelerometer or a gyroscope. After the system is woken up, a face unlocking service is triggered, and the face unlocking service starts the camera to acquire a face image. The face unlocking service calls the corresponding AI algorithm to detect the face in the acquired image, and recognizes and compares the detection result. If the recognized face does not match the preset face, the system is not unlocked, and the terminal device returns to the dormant state.
As can be seen from the face unlocking function of existing terminal devices, the prior-art face unlocking function can be started only by an additional triggering action. If the user cannot operate the terminal device, for example cannot pick up or rotate the mobile phone, the AI algorithm corresponding to the face unlocking service cannot be started, and the user experience is poor.
The following uses an image processing function as a further example to analyze in detail how, in the prior art, the corresponding AI function is triggered only by a specific user action or by another application module.
In the prior art, when a user needs to use the camera's photographing and image processing function in a terminal device, an original image is generated in the camera module after the user takes a photo. The camera module calls the corresponding AI algorithm to analyze the image information and depth-of-field information, and after recognizing the face and the depth-of-field image, it can perform operations such as backlight compensation and defogging on the corresponding background area, improving face saturation and reducing color distortion.
As can be seen from the camera photographing and image processing function of existing terminal devices, the AI algorithm is integrated in the software system and, as a capability of the software system, is triggered only when called by another application module. If no other application module triggers it, the AI algorithm cannot be started, and the terminal device cannot actively sense the state of the surrounding environment in real time.
In summary, in the prior art, the terminal device cannot autonomously run its AI sensing capability and must rely on specific actions or on calls from certain application modules, so the user experience is poor.
The embodiments of this application provide a method for identifying user actions, behavioral intentions, and the like, so that the terminal device does not depend on specific user operations, can sense changes in the user's intention in real time, and can provide the user with seamlessly perceived application services, making the device more intelligent and the human-computer experience more comfortable.
Fig. 1 is a schematic flow chart of a method for identifying user behavior provided by an embodiment of the present application. The method shown in fig. 1 may include steps 110-140, and the steps 110-140 are described in detail below.
Step 110: the terminal acquires image data in real time through a low-power camera.
The low-power camera in this embodiment can remain on at a specific frame rate, so that image data around the terminal device is collected in real time and the collected image data is reported to the coprocessor.
It should be understood that the low-power camera serves as an infrastructure that continuously collects data around the terminal, providing a hardware basis for the terminal device to autonomously run AI technology.
Step 120: the terminal analyzes, according to the image data, whether a specific event occurs.
The scenario of the specific event is not specifically limited in this embodiment. As an example, in a face recognition scenario, the occurrence of the specific event may indicate that the user face data in the image data collected by the low-power camera changes from present to absent, or from absent to present. As another example, in a smart face unlocking scenario, the occurrence of the specific event may indicate that the user face data in the image data collected by the low-power camera changes from absent to present and matches the pre-stored face data. As another example, in a smart screen-off scenario, the occurrence of the specific event may indicate that the image data collected by the low-power camera contains no user face data within a specific time. Specific implementations of these scenarios are described in detail below with reference to fig. 2 to 5 and are not repeated here.
Step 130: the terminal determines that the specific event occurs.
The terminal can call the corresponding AI algorithm to analyze the data in the received image and determine that a specific event has occurred. As one example, the specific event is that the user face data changes from present to absent, or from absent to present. The terminal can call a face detection algorithm to analyze the data in the collected image and determine whether it conforms to the basic contour features of a face. If it does, the occurrence of the specific event can be determined. As another example, the specific event is recognition of the user's face. The terminal can call a face recognition algorithm to compare the user face data in the collected image with pre-stored user face data (the owner's pre-stored face data) and determine whether the face in the image matches the pre-stored face. If it matches, the occurrence of the specific event can be determined. As another example, the specific event is that the scene around the user is a target scene. The terminal can call an environment recognition algorithm on the environmental scene in the collected image and determine whether the scene around the user is the target scene. If it is, the occurrence of the specific event can be determined.
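A minimal sketch of the three determinations described above is given below; the boolean flags stand in for the outputs of the face detection, face recognition, and environment recognition algorithms, which the application leaves to the AI algorithm models.

```c
#include <stdbool.h>
#include <stdio.h>

/* Placeholder results of the AI algorithms described above (face detection,
 * face recognition, environment recognition); real models run on the coprocessor. */
static bool face_detected      = true;   /* contour matches basic face features   */
static bool face_matches_owner = false;  /* compared against pre-stored face data */
static bool scene_is_target    = false;  /* environment recognition result        */

static bool prev_face_detected = false;  /* state from the previously analyzed frame */

int main(void)
{
    /* Example 1: face data changes from absent to present (or present to absent). */
    if (face_detected != prev_face_detected)
        puts("specific event: user face state changed");

    /* Example 2: detected face matches the pre-stored (owner) face. */
    if (face_detected && face_matches_owner)
        puts("specific event: owner face recognized");

    /* Example 3: surrounding environment is the target scene. */
    if (scene_is_target)
        puts("specific event: target environment scene");

    return 0;
}
```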
Step 140: the terminal starts the corresponding application function.
The AI-related application function started by the terminal in this embodiment can be understood as a function corresponding to a particular APP. As an example, for a face unlocking application, the face recognition unlocking function may be started according to the face recognition result. As another example, for a smart bright-screen application, the bright-screen function may be turned on according to the face detection result.
In the embodiment of the application, the terminal can acquire user image data in real time through the low-power-consumption camera and can autonomously operate AI perception capability. The terminal equipment can sense the action, the behavior intention and the like of the user in real time without depending on the specific operation of the user, and the capability of seamlessly sensing the application service can be provided for the user, so that the terminal equipment is more intelligent and the human-computer experience is more comfortable.
The terminal in the embodiments of this application may include: a main processor, a coprocessor, and an always-on low-power camera.
When there is no service, the main controller system in the terminal stays in normal sleep standby and enters a low power consumption mode. After the coprocessor in the terminal reports an AI event message, the main controller system in the terminal is woken up. The main controller in the terminal can implement various highlight service functions according to the product's service requirements, or pass the event message to other related service modules, which complete the final processing.
As an example, the terminal may report the AI identification result to the main controller when it is determined that a specific event occurs in step 120. As one example, the coprocessor may generate an AI message at the AI application layer and may report the AI message to the main controller. This will be described in detail below with reference to fig. 2, and will not be described in detail here.
As another example, the coprocessor in the terminal may analyze, according to the image data and the corresponding AI algorithm, whether the user's face can be detected in the image, and may determine from the previously acquired state that the image acquired by the low-power camera has changed from containing no user face data to containing user face data, or from containing user face data to containing no user face data (which may also be understood as a change in the user's behavioral intention). As another example, in a scenario of recognizing the user's surroundings, the occurrence of the specific event may indicate that the environmental scene around the user has changed in the images captured by the low-power camera. For example, the coprocessor may analyze the image according to the image data and the corresponding AI algorithm and, based on the previously acquired scene around the user, determine that the previous environmental scene has changed into the target environmental scene. Specific implementations of these two application scenarios are described below with reference to fig. 3 to 6 and are not repeated here.
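For the environment-recognition example, the state comparison could look roughly like the following; the scene labels and the recognize_scene stub are hypothetical stand-ins for the AI inference run on the coprocessor.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical scene labels produced by the environment-recognition model. */
static const char *recognize_scene(int frame_no)
{
    return (frame_no < 3) ? "indoor" : "outdoor";   /* stand-in for AI inference */
}

int main(void)
{
    const char *target = "outdoor";
    char prev[16] = "";

    for (int i = 0; i < 5; i++) {
        const char *cur = recognize_scene(i);
        /* Report only when the recognized scene changes into the target scene. */
        if (strcmp(cur, prev) != 0 && strcmp(cur, target) == 0)
            printf("frame %d: environment changed to target scene '%s'\n", i, cur);
        strncpy(prev, cur, sizeof(prev) - 1);
    }
    return 0;
}
```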
In the embodiment of the present application, the main controller may be responsible for running various applications of the terminal device, including but not limited to: user Interface (UI), human interface, face recognition, environment recognition, automatic on/off screen, etc.
The following describes in detail, with reference to fig. 2, a specific implementation manner in which the main processor and the coprocessor perform cooperative processing in the embodiment of the present application, so that the terminal device can autonomously run an AI sensing capability and sense changes of a user intention, an expression, and an environment in real time.
Fig. 2 is a schematic block diagram of a hardware architecture of a terminal 200 according to an embodiment of the present disclosure. The hardware architecture shown in fig. 2 may include a main processor 210, a co-processor 220, and a low power camera 230.
Coprocessor 220: integrates AI capabilities and runs continuously in a low power consumption mode to detect the user's behavioral intentions and environmental changes. The coprocessor 220 is connected to the main processor 210, and when a corresponding event is detected, it wakes up the main processor 210 by reporting an AI event message to the main processor 210.
Main processor 210: when there is no service, the main processor 210 stays in a normal sleep standby state and enters a low power consumption mode. After receiving the AI event message sent by the coprocessor 220, the main processor 210 is woken up, receives the event reported by the coprocessor 220, and triggers the corresponding service scenario function.
Always-on low-power camera 230: connected to the coprocessor 220 through the peripheral chip software interface (driver) provided by the coprocessor 220, and provides a data source for the coprocessor 220 to process AI services.
The system architecture of coprocessor 220 is described in detail below.
The coprocessor 220 may run a real-time operating system (RTOS). When an external event or data arrives, it can be accepted and processed quickly enough that the result controls the controlled process or responds to the processing system within the specified time, and all available resources are scheduled to complete the real-time tasks. All real-time tasks are coordinated to run consistently, which gives fast response and high reliability.
The RTOS system of coprocessor 220 may include: a kernel (kernel)221, a framework layer (framework layer) 222, and an APP application layer 223.
The kernel (kernel)221 includes: peripheral driver module 2211, hardware acceleration module 2212 and AI operator library module 2213.
The framework layer 222 includes: an AI application management module 2221, an AI algorithm management module 2222, and an AI algorithm model 2223.
The APP application layer 223 includes: an AI application layer module 2231, an AI engine module 2232, and an AI model management module 2233.
The above modules are described in detail below.
Peripheral driver module 2211: provides a software interface for connecting various peripheral chips. For example, the low-power camera 230 can be connected, and the low-power camera 230 provides a hardware basis for the coprocessor 220 to perceive the user's behavioral intentions or environmental changes. The coprocessor 220 can analyze characteristics of the user's actions and the surrounding environment according to the image data acquired by the low-power camera 230, which provides a data source for the coprocessor 220 to process AI services.
Specifically, the terminal may obtain image data in real time through the always-on low-power camera 230 connected to the peripheral driver module 2211.
Optionally, in some embodiments, the peripheral devices that may be connected to the peripheral driver module 2211 may further include, but are not limited to: sensors (which may be used to identify user actions), low power microphones (which may be used to analyze characteristics of a user's voice, etc.), location sensors (e.g., Global Positioning System (GPS), wireless local area network (WIFI), modem, which may be used to provide location information of a user).
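The driver interface of the peripheral driver module 2211 is not disclosed; the sketch below assumes a simple callback registration through which the always-on camera pushes each frame to the coprocessor.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical driver interface of the peripheral driver module: the low-power
 * camera pushes each captured frame to a registered callback on the coprocessor. */
typedef void (*frame_cb_t)(const unsigned char *data, size_t len);

static frame_cb_t g_frame_cb = NULL;

static void camera_register_callback(frame_cb_t cb) { g_frame_cb = cb; }

/* Simulated "frame ready" notification from the always-on camera. */
static void camera_frame_ready(const unsigned char *data, size_t len)
{
    if (g_frame_cb)
        g_frame_cb(data, len);
}

/* Coprocessor-side consumer: hands the frame to the AI application management module. */
static void on_frame(const unsigned char *data, size_t len)
{
    (void)data;
    printf("frame received (%zu bytes), forwarding to AI application management\n", len);
}

int main(void)
{
    unsigned char fake_frame[160 * 120] = {0};
    camera_register_callback(on_frame);
    camera_frame_ready(fake_frame, sizeof(fake_frame));
    return 0;
}
```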
AI application management module 2221: classifies the data reported by the peripheral driver module 2211. For example, the received data may be classified into image, video, audio, and other classes, so that AI algorithm models 2223 of different classes can be invoked for analysis and processing.
AI engine module 2232: responsible for scheduling and coordinating the operation of the AI algorithm models 2223. Since multiple AI algorithm models 2223 may run at the same time, the scheduling management of the AI engine module 2232 ensures, to the greatest extent, that the software runs in an orderly manner.
AI algorithm management module 2222: responsible for algorithm management; it can select a corresponding AI algorithm model from the multiple running AI algorithm models 2223 for analysis according to the different types of data reported by the AI application management module 2221.
AI algorithm model 2223: may be a collection of algorithmic features of images or sounds that correspond to a certain service. For example, for a face recognition service, the AI algorithm model 2223 may be a set of features that conform to the contours of a face. As another example, for an environment-aware service, the AI algorithm model 2223 may be a collection of features that conform to a certain scene. The AI algorithm model 2223 may be trained with large-scale image data; after training is completed, an algorithm model is generated, and the corresponding AI operator can run the algorithm model to perform operations such as environment recognition or face recognition.
Specifically, after the coprocessor 220 in the terminal receives the image data reported by the always-on low-power camera 230, the AI application management module 2221 may classify the data to be processed and, through the AI engine module 2232, call the corresponding AI algorithm to analyze whether the collected image contains user face data.
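A possible shape of the classification-and-dispatch step performed by the AI application management module 2221 and the AI algorithm management module 2222 is sketched below; the data classes and model stubs are illustrative assumptions, not the actual module interfaces.

```c
#include <stdio.h>

/* Hypothetical data classes used by the AI application management module. */
typedef enum { DATA_IMAGE, DATA_VIDEO, DATA_AUDIO } data_class_t;

/* Stand-ins for AI algorithm models selected by the AI algorithm management
 * module; real models would be produced by offline training. */
static void run_face_contour_model(void) { puts("running face-contour feature model"); }
static void run_scene_model(void)        { puts("running environment-scene model");    }
static void run_audio_model(void)        { puts("running audio feature model");        }

static void dispatch(data_class_t cls)
{
    switch (cls) {
    case DATA_IMAGE: run_face_contour_model(); break;
    case DATA_VIDEO: run_scene_model();        break;
    case DATA_AUDIO: run_audio_model();        break;
    }
}

int main(void)
{
    dispatch(DATA_IMAGE);   /* image frames from the low-power camera */
    return 0;
}
```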
It should be noted that the AI algorithm model 2223 may be integrated in a software system by default, or may be updated to the coprocessor 220 through the main controller 210, which is not specifically limited in this embodiment of the present application.
AI model management module 2233: in some embodiments, the main processor 210 may also optimize the AI algorithm model 2223. For example, positioning information such as GPS/WIFI/modem information may be used to comprehensively judge the result of the AI algorithm model 2223, so as to improve its accuracy. The AI model management module 2233 can modify certain features in the AI algorithm model 2223.
AI operator library module 2213: the AI engine module 2232 can run the models managed by the AI model management module 2233 by calling operators in the AI operator library module 2213 for operations such as environment recognition or face recognition. Because the resources of the coprocessor 220 are limited, the AI operator library module 2213, which involves a large number of mathematical calculations, can be solidified in hardware; most of the AI operators are then implemented in hardware, which avoids the high processor load of implementing the operators in software. The interface to the hardware-solidified operators may be provided by the kernel 221 to the AI model management module 2233 for use.
It should be understood that solidifying the AI operator library module 2213 in hardware (software solidification) may mean writing the software onto the coprocessor chip so that it runs on the coprocessor chip. Software solidification means implementing software functions as firmware on the silicon chip, so that the complexity of the operating system and language processing is shared between software and hardware.
In this embodiment of the application, the AI operator library module 2213 is solidified in the hardware of the coprocessor. Such software solidification can improve the running speed and reliability of the whole system, reduce cost, and facilitate large-scale production and standardization.
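Purely as an illustration of how hardware-solidified operators might be exposed to the algorithm models, the sketch below uses a function-pointer table provided by the kernel; the operator names and the table layout are assumptions, not the actual chip interface.

```c
#include <stdio.h>

/* Illustrative operator table: the kernel exposes hardware-backed AI operators
 * (e.g. convolution, pooling) through function pointers, so the algorithm models
 * never implement the heavy math in software. */
typedef struct {
    void (*conv2d)(const float *in, const float *kernel, float *out);
    void (*max_pool)(const float *in, float *out);
} ai_operator_table_t;

/* Software fallbacks stand in here for the solidified hardware implementations. */
static void hw_conv2d(const float *in, const float *k, float *out) { (void)in; (void)k; out[0] = 0.0f; }
static void hw_max_pool(const float *in, float *out)               { out[0] = in[0]; }

static const ai_operator_table_t g_ops = { hw_conv2d, hw_max_pool };

int main(void)
{
    float in[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9}, k[9] = {0}, out[1];
    g_ops.conv2d(in, k, out);     /* model code only sees the operator interface */
    g_ops.max_pool(in, out);
    printf("pooled: %.1f\n", out[0]);
    return 0;
}
```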
Hardware acceleration module 2212: accelerates the processing of the AI model management module 2233 when the AI engine module 2232 invokes operators in the AI operator library module 2213 in an accelerated mode. This ensures that the AI engine module 2232 can call the operators in the AI operator library module 2213 quickly and in real time, and provides capability interfaces for the various AI algorithms in the AI model management module 2233 of the framework layer 222.
AI application layer module 2231: located in the APP application layer 223; various always-on AI applications can be implemented in the APP application layer 223 according to the scenario requirements of the terminal device's service design. The AI application layer module 2231 may call various algorithms to obtain the AI recognition results of the devices connected as peripherals, and may report a corresponding AI event message to the main processor 210. If the main processor 210 is in the sleep state, it is woken up first and then performs further processing on the AI event message.
Specifically, when the user face data in the image data reported by the low-power camera 230 of the terminal changes from present to absent, or from absent to present, the AI application management module 2221 reports the face detection result to the AI application layer module 2231. After obtaining the recognition result, the AI application layer module 2231 forms a recognition event message and reports it to the AI event message manager 212 in the main processor 210.
The system architecture of the main processor 210 is described in detail below.
Main processor 210: responsible for running the various applications of the terminal device, including the UI human-computer interaction interface, cloud interaction, and the like. When there is no service, the main controller system stays in normal sleep standby and enters a low power consumption mode.
The main processor 210 may include: AI native 211, AI event message manager (AI service)212, Application (APP) 213, APP 214, APP 215.
AI local (AI native) 211: receives the AI event message reported by the coprocessor 220 and wakes up the main processor 210. It may also send the AI algorithm model 2223 optimized by the main processor 210 to the AI engine module 2232 of the coprocessor 220, and the AI engine module 2232 may update the AI algorithm model 2223 through the AI model management module 2233.
AI event message manager (AI service) 212: receives the AI event message reported by the AI native 211, manages the AI capability interfaces of the terminal device in a unified manner, and provides an AI application programming interface (API) for each service module. Various highlight service functions are implemented according to the product's service requirements. For example, different highlight service functions may be implemented for different applications (APP 213, APP 214, or APP 215).
Specifically, after receiving the recognition event message sent by the AI application layer module 2231, the AI event message manager 212 in the main processor 210 wakes up the main processor 210. The main processor 210 may determine whether the user face data detected in the image reported by the low-power camera 230 matches the pre-stored face data (the owner's face data), that is, whether the user face detected in the image reported by the low-power camera 230 matches the owner's face. When it is determined that the user face detected in the image reported by the low-power camera 230 matches the pre-stored face, the corresponding application function (e.g., face recognition unlocking, screen unlocking) may be started.
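The main-processor-side handling described above could be sketched as follows; matches_owner_face and the screen-control stubs are hypothetical placeholders for the face recognition and screen management of the real system.

```c
#include <stdbool.h>
#include <stdio.h>

/* Placeholder for comparing the detected face against the pre-stored owner face;
 * the real comparison is a face-recognition algorithm on the main processor. */
static bool matches_owner_face(const unsigned char *face_data) { (void)face_data; return true; }

static void unlock_screen(void)      { puts("screen unlocked, encrypted notifications shown"); }
static void keep_screen_locked(void) { puts("screen stays locked"); }

/* Called by the AI event message manager (AI service) after the main processor
 * has been woken by a "face appeared" event from the coprocessor. */
static void on_face_detected_event(const unsigned char *face_data)
{
    if (matches_owner_face(face_data))
        unlock_screen();
    else
        keep_screen_locked();
}

int main(void)
{
    unsigned char face_data[128] = {0};
    on_face_detected_event(face_data);
    return 0;
}
```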
Optionally, in some embodiments, if large data processing is required, the AI service 212 may also transmit data to the cloud to complete a low power consumption service processing mode combining the terminal device and the cloud.
In this embodiment of the application, the coprocessor runs at a low clock frequency, a large number of mathematically intensive AI operators are integrated through hardware solidification, and the peripherals are low-power devices, so the terminal device can keep its AI sensing capability always on in a low power consumption mode and sense changes in the user's actions or in the environment without depending on specific actions.
The specific implementation of the hardware architecture shown in fig. 2 is described in detail below with reference to specific scenarios in fig. 3-6.
Fig. 3 is a schematic flowchart of a terminal device face recognition unlocking scene provided in an embodiment of the present application. The method shown in FIG. 3 may include steps 310-360, which are described in detail below with respect to steps 310-360.
It should be noted that the example of fig. 3 is only for assisting the skilled person in understanding the embodiments of the present application, and is not intended to limit the embodiments of the application to the specific values or specific scenarios illustrated. It will be apparent to those skilled in the art from the example of fig. 3 given herein that various equivalent modifications or variations may be made, and such modifications or variations also fall within the scope of the embodiments of the present application.
Step 310: Start.
Step 315: the low-power consumption camera collects images.
The low power consumption camera 230 connected below the coprocessor 220 continuously collects images around the terminal device, and may report the collected image data to the AI application management module 2221.
Step 320: the coprocessor calls a face detection algorithm model to detect whether face data exists or not.
After the coprocessor 220 receives the image data reported by the low-power camera 230, the AI application management module 2221 may, through the AI engine module 2232, call the corresponding AI face detection algorithm model 2223 to analyze whether there is face data in the acquired image.
Specifically, the AI face detection algorithm model 2223 may call a corresponding AI operator in the AI operator library module 2213, and operate the AI face detection algorithm model 2223 to perform face detection. If the AI face detection algorithm model 2223 can detect that the facial contour features of the user in the image are complete and conform to the facial features, it can be determined that a face appears in the image reported by the low-power-consumption camera 230.
If the face detection result is that the contour of the user's face in the reported image is complete and matches the face feature (i.e., the user's face appears in the image), the AI application management module 2221 may execute step 325.
If the face detection result is that the contour of the user's face in the reported image is incomplete and does not conform to the face characteristics (i.e., the user's face does not appear in the image), the AI application management module 2221 may re-execute step 315.
Step 325: compare with the previous state and determine whether the face detection result has changed.
After the AI face detection algorithm model 2223 detects that the user's face appears in the image, the comparison with the previous state can be made to see whether the face detection result changes.
If, compared with the previous state, it is found that the image reported by the low-power camera 230 has changed from a state in which no user face appeared in the previous image to a state in which a user face appears in the currently acquired image, the AI application management module 2221 may execute step 330.
If, compared with the previous state, it is found that the face detection state of the image reported by the low-power camera 230 has not changed, that is, a user face was already detected in the image reported by the low-power camera 230 and the current state is unchanged, the AI application management module 2221 may execute step 315 again.
Step 330: the coprocessor reports the face detection message to the main controller.
When the state of the image reported by the low-power-consumption camera 230 is changed from the state in which no user face appears in the previous image to the state in which a user face appears in the currently acquired image, the AI application management module 2221 reports the face detection result to the main controller 210.
Specifically, the AI application management module 2221 in the coprocessor 220 may report the face detection result to the AI application layer module 2231. The AI application layer module 2231, after obtaining the face detection result, forms a face detection event message and reports the face detection event message to the AI event message manager 212 in the main controller 210.
Step 335: the master controller is awakened.
The AI event message manager 212 in the main controller 210 wakes up the main controller 210 after receiving the face detection event message transmitted by the AI application layer module 2231.
Step 340: the main controller starts a face recognition process.
The main controller 210 may start a corresponding face recognition procedure after receiving the face detection event message.
Step 345: the main controller determines whether the face features match the owner.
After the corresponding face recognition process is started, the main controller 210 may determine whether the face features detected in the image reported by the low-power camera 230 match the stored face image.
If the main controller 210 determines that the face features detected in the image reported by the low-power camera 230 match the stored face image, step 350 may be executed.
If the main controller 210 determines that the face features detected in the image reported by the low-power camera 230 do not match the stored face image, step 355 may be executed.
Step 350: the main controller unlocks the screen.
When it is determined that the face features detected in the image reported by the low-power camera 230 match the stored face image, the main controller 210 unlocks the screen of the terminal device, displays the encrypted information prompts, and enters normal use mode.
Step 355: the main controller maintains a screen lock state.
The main controller 210 keeps the screen locked when it determines that the face features detected in the image reported by the low-power camera 230 do not match the stored face image.
Step 360: End.
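Putting the steps of fig. 3 together, the face recognition unlocking flow can be sketched as a single loop; the stubs below are illustrative only, and in the real terminal the detection part runs on the coprocessor while recognition and unlocking run on the main controller.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the steps of Fig. 3; in the terminal these are split between the
 * coprocessor (capture, face detection, state comparison) and the main controller
 * (face recognition, screen unlock). */
static bool capture_and_detect_face(int i) { return i >= 2; }   /* face appears at frame 2 */
static bool recognize_owner(void)          { return true; }

int main(void)
{
    bool prev_face = false;
    for (int i = 0; i < 5; i++) {
        bool face = capture_and_detect_face(i);          /* steps 315-320 */
        if (face && !prev_face) {                        /* step 325: state changed       */
            puts("report face-detected event, wake main controller"); /* steps 330-335 */
            if (recognize_owner())                       /* steps 340-345 */
                puts("step 350: unlock screen");
            else
                puts("step 355: keep screen locked");
        }
        prev_face = face;
    }
    return 0;
}
```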
In this embodiment of the application, the terminal device can acquire user image data in real time through the low-power camera and autonomously run its AI perception capability. When the user needs to use the terminal device, the user's head only needs to appear in front of the terminal screen, and the terminal device performs the face recognition unlocking function on its own, making the device more intelligent and the human-computer experience more comfortable.
Referring to fig. 9, when a user (whose face data must match the owner's face features, or match pre-stored face features) needs to use the terminal device, the user's head only needs to appear in front of the terminal screen, and the interface of the terminal changes from 910 to 920 automatically. As can be seen from fig. 9, when the user's head does not appear in front of the terminal screen, the terminal screen stays locked (see 930 in fig. 9, where the terminal home page is in the locked state); when the user needs to use the terminal device and the head appears in front of the terminal screen, the terminal automatically unlocks the screen (see 940 in fig. 9, where the terminal home interface is successfully unlocked).
The following describes in more detail a scenario of intelligent bright-screen unlocking of the terminal device in the embodiment of the present application with reference to a specific example in fig. 4. It should be noted that the example of fig. 4 is only for assisting the skilled person in understanding the embodiments of the present application, and is not intended to limit the embodiments of the present application to the specific values or specific scenarios illustrated. It will be apparent to those skilled in the art from the example of fig. 4 given herein that various equivalent modifications or variations may be made, and such modifications or variations also fall within the scope of the embodiments of the present application.
Fig. 4 is a schematic flowchart of a scene of intelligent bright-screen unlocking of a terminal device according to an embodiment of the present application. The method shown in FIG. 4 may include steps 410-460, which are described in detail below with respect to steps 410-460.
In the embodiment of the application, the intelligent bright-screen unlocking of the terminal device can be understood as follows: when the terminal device is in a dormant state and the user looks at the screen of the terminal device, the screen lights up automatically and the subsequent face recognition unlocking process can be entered.
It should be understood that, in the intelligent bright-screen unlocking scenario of fig. 4, after the coprocessor completes the face detection shown in fig. 3, the screen-lighting function of the terminal device is added, and the main controller then performs the subsequent face recognition process.
For the description of steps 410-435, please refer to steps 310-335 in fig. 3; details are not repeated here.
Step 440: the main controller lights up the screen and starts the face recognition process.
The main controller 210 may light up the screen after receiving the face detection event message, and start a face recognition process.
Step 445: the main controller judges whether the face features match the owner.
After the corresponding face recognition process is started, the main controller 210 may determine whether the face features detected in the image reported by the low-power-consumption camera 230 match the stored face image.
If the main controller 210 determines that the face features detected in the image reported by the low-power-consumption camera 230 match the stored face image, step 450 may be executed.
If the main controller 210 determines that the face features detected in the image reported by the low-power-consumption camera 230 do not match the stored face image, step 455 may be executed.
Step 450: the main controller unlocks the screen.
When it is determined that the face features detected in the image reported by the low-power-consumption camera 230 match the stored face image, the main controller 210 unlocks the screen of the terminal device, displays the encrypted information prompt, and enters the normal use mode.
Step 455: the main controller maintains a screen lock state.
The main controller 210 keeps the screen in the locked state when it determines that the face features detected in the image reported by the low-power-consumption camera 230 do not match the stored face image. If no face can be detected within a certain time, the screen-off dormant state can be entered. For the specific process of the terminal device entering the screen-off dormant state, please refer to the method shown in fig. 5, which is not described again here.
Step 460: end.
In the embodiment of the application, the terminal device can acquire user image data in real time through the low-power-consumption camera and autonomously run the AI perception capability. When the user needs to use the terminal device, the user only needs to bring the head in front of the terminal screen, and the terminal device can automatically light up the screen. The screen can thus be lit without manual operation when it is inconvenient for the user to operate the terminal device, so that the terminal device is more intelligent and the human-computer experience is more comfortable.
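Compared with the flow of fig. 3, the intelligent bright-screen flow of fig. 4 only inserts the screen-lighting action before face recognition. This ordering may be sketched as follows; the function name and the string comparison standing in for face recognition are illustrative assumptions only.

    # Hypothetical sketch of step 440: the screen is lit as soon as the face
    # detection event arrives, and face recognition runs afterwards.
    def handle_face_detection_event(face_features, stored_owner_face):
        print("main controller woken")                  # step 435
        print("screen lit while still locked")          # step 440
        if face_features == stored_owner_face:          # step 445 (placeholder match)
            print("screen unlocked, normal use mode")   # step 450
        else:
            print("screen stays locked")                # step 455

    # Usage example
    handle_face_detection_event("owner_template", "owner_template")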
The following describes in more detail a scenario of intelligent screen-off of the terminal device in the embodiment of the present application with reference to a specific example in fig. 5. It should be noted that the example of fig. 5 is only for assisting the skilled person in understanding the embodiments of the present application, and is not intended to limit the embodiments of the present application to the specific values or specific scenarios illustrated. It will be apparent to those skilled in the art from the example of fig. 5 given herein that various equivalent modifications or variations may be made, and such modifications or variations also fall within the scope of the embodiments of the present application.
Fig. 5 is a schematic flowchart of a scene of intelligent screen blanking of a terminal device according to an embodiment of the present application. The method shown in FIG. 5 may include steps 510-570, which are described in detail below with respect to steps 510-570.
In the embodiment of the application, the intelligent bright screen of the terminal device can be understood as follows: the screen lights up when the user looks at the terminal screen.
In the embodiment of the application, the intelligent screen-off of the terminal device can be understood as follows: if the user does not look at the screen within a certain time, the screen is automatically turned off.
Step 510: start.
Step 515: the low-power consumption camera collects images.
The low-power-consumption camera 230 connected to the coprocessor 220 continuously collects images around the terminal device, and may report the collected image data to the AI application management module 2221.
Step 520: the coprocessor calls a face detection algorithm model to detect whether a face exists.
After the coprocessor 220 receives the image data reported by the low-power-consumption camera 230, the AI application management module 2221 may determine that the data to be processed is the facial features of the user in the image, and call, through the AI engine module 2232, the corresponding AI face detection algorithm model 2223 to analyze whether a face appears in the acquired image.
Specifically, the AI face detection algorithm model 2223 may call a corresponding AI operator in the AI operator library module 2213 and run face detection. If the AI face detection algorithm model 2223 detects that the facial contour features of the user in the image are complete and conform to facial features, it can be determined that a face appears in the image reported by the low-power-consumption camera 230.
If the face detection result is that the contour of the user's face in the reported image is complete and matches the face feature (i.e., the user's face appears in the image), the AI application management module 2221 may re-execute step 515.
It should be understood that if, after the screen of the terminal device is lit, the AI face detection algorithm model 2223 still detects a user face in front of the screen, the screen remains on and the detection is repeated.
If the face detection result is that the contour of the user's face in the reported image is incomplete and does not conform to the face characteristics (i.e., the user's face does not appear in the image), the AI application management module 2221 may execute step 525.
Step 525: compare with the previous state and judge whether the face detection result has changed.
When the AI face detection algorithm model 2223 finds that no user face is detected in the image reported by the low-power-consumption camera 230, it may compare with the previous state to see whether the face detection result has changed.
If, after comparison with the previous state, it is found that the image reported by the low-power-consumption camera 230 has changed from a user face appearing in the previous image to no user face appearing in the currently acquired image, the AI application management module 2221 may execute step 530.
Step 530: the coprocessor reports the face detection message to the main controller.
When the state of the image reported by the low-power-consumption camera 230 changes from the state in which the user's face appears in the previous image to the state in which no user's face appears in the currently acquired image, the AI application management module 2221 reports the face detection result to the main controller 210.
It should be appreciated that when the user looks at the screen, the coprocessor 220 always detects a human face in the images captured by the low-power-consumption camera 230, and the screen is not extinguished. When the user does not look at the screen and the coprocessor 220 cannot detect a face in the image acquired by the low-power-consumption camera 230, the coprocessor 220 generates an event message and notifies the main controller 210, and the main controller 210 enters the screen-off dormant process.
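The state comparison of steps 520 to 530 may be sketched as follows: the coprocessor only generates an event message on the transition from a face being present to no face being present. The FacePresenceMonitor class is a hypothetical illustration and not the actual algorithm model of the embodiment.

    # Hypothetical sketch of the edge detection in steps 520-530: an event is
    # reported only when the face detection result changes from present to absent.
    class FacePresenceMonitor:
        def __init__(self, report_callback):
            self.previous_face_present = True
            self.report_callback = report_callback

        def on_frame(self, face_present):
            # step 525: compare the current result with the previous state
            if self.previous_face_present and not face_present:
                # step 530: the face disappeared, notify the main controller
                self.report_callback("face_absent")
            self.previous_face_present = face_present

    # Usage example
    monitor = FacePresenceMonitor(report_callback=print)
    monitor.on_frame(True)    # user still looking at the screen, nothing reported
    monitor.on_frame(False)   # prints "face_absent" once
    monitor.on_frame(False)   # no repeated report while the state is unchanged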
Step 535: the main controller is awakened.
The AI event message manager 212 in the main controller 210 wakes up the main controller 210 after receiving the face detection event message transmitted by the AI application layer module 2231.
Step 540: the main controller judges whether a human face exists or not.
The main controller 210 may determine whether a human face is detected in the images reported by the low power consumption camera 230 after receiving the human face detection event message.
If no human face is detected in the image reported by the low-power-consumption camera 230, the main controller 210 may execute steps 545 to 555.
If a human face is detected in the image reported by the low-power-consumption camera 230, the main controller 210 may execute steps 560 and 565.
Step 545: the main controller starts a sleep timer.
Because the coprocessor 220 cannot detect a face in the image collected by the low-power-consumption camera 230, the main controller 210 may start the sleep timer after receiving the face detection event message, in preparation for entering the sleep state.
Step 550: judge whether the timer is triggered.
The main controller 210 may perform step 555 if the timer is triggered.
The main controller 210 may perform step 565 if the timer is not triggered.
Step 555: the screen is turned off and the sleep mode is entered.
The main controller 210 may extinguish the screen and enter the sleep mode if the timer is triggered.
Step 560: the timer is turned off.
Because the coprocessor 220 detects a face in the image collected by the low-power-consumption camera 230, the main controller 210 may turn off the timer after receiving the face detection event message.
Step 565: the bright screen state is maintained.
Step 570: end.
The case in which the timer is turned off indicates that the coprocessor 220 continuously detects a human face in the images acquired by the low-power-consumption camera 230, that is, the user keeps looking at the screen, so the main controller 210 keeps the screen in the bright state.
In the embodiment of the application, the terminal device can acquire user image data in real time through the low-power-consumption camera and autonomously run the AI perception capability. When the user does not look at the terminal screen for a period of time, the terminal device can automatically turn off the screen. The screen can thus be controlled without manual operation when it is inconvenient for the user to operate the terminal device, so that the terminal device is more intelligent and the human-computer experience is more comfortable.
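The timer logic of steps 545 to 565 may be summarized by the following minimal sketch, in which Python's threading.Timer stands in for the terminal's own timer service; the class name, delay value and printed messages are illustrative assumptions rather than the actual implementation.

    # Hypothetical sketch of steps 545-565: start a sleep timer when no face is
    # detected, cancel it if a face reappears, otherwise turn off the screen.
    import threading

    class ScreenManager:
        def __init__(self, sleep_delay_s=5.0):
            self.sleep_delay_s = sleep_delay_s
            self.sleep_timer = None

        def on_face_detection_event(self, face_present):
            if not face_present:
                # step 545: start the sleep timer
                self.sleep_timer = threading.Timer(self.sleep_delay_s, self.enter_sleep)
                self.sleep_timer.start()
            else:
                # step 560: a face is detected again, cancel the timer
                if self.sleep_timer is not None:
                    self.sleep_timer.cancel()
                    self.sleep_timer = None
                print("face detected again, bright screen state maintained")  # step 565

        def enter_sleep(self):
            # step 555: the timer fired, turn off the screen and enter sleep mode
            print("no face within the preset time, screen off, sleep mode entered")

    # Usage example: no face reported, so the timer fires shortly afterwards
    manager = ScreenManager(sleep_delay_s=0.1)
    manager.on_face_detection_event(face_present=False)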
The environment scene identification scene of the terminal device in the embodiment of the present application is described in more detail below with reference to a specific example in fig. 6. It should be noted that the example of fig. 6 is only for assisting the skilled person in understanding the embodiments of the present application, and is not intended to limit the embodiments of the present application to the specific values or specific scenarios illustrated. It will be apparent to those skilled in the art from the example of fig. 6 given herein that various equivalent modifications or variations may be made, and such modifications or variations are intended to fall within the scope of the embodiments of the present application.
Fig. 6 is a schematic flowchart of a terminal device environment scene identification scenario provided in an embodiment of the present application. The method shown in fig. 6 may include steps 610-655, which are described in detail below with respect to steps 610-655.
Step 610: start.
Step 615: the low-power consumption camera collects images.
The low-power-consumption camera 230 connected to the coprocessor 220 continuously collects images around the terminal device, and may report the collected image data to the AI application management module 2221.
Step 620: the coprocessor calls an environment recognition algorithm model to detect whether the environment scene around the user is a target environment scene.
After the coprocessor 220 receives the image data reported by the low-power-consumption camera 230, the AI application management module 2221 may determine that the data to be processed is the environment scene around the user in the image, and invoke, through the AI engine module 2232, the corresponding environment recognition algorithm model to analyze whether the environment scene around the user in the collected image is a target environment scene (e.g., a conference room).
The AI application management module 2221 may perform step 625 if it is recognized that the user's surrounding environment scene in the image is the target environment scene.
If it is recognized that the user's surrounding environment scene in the image is not the target environment scene, the AI application management module 2221 may re-perform step 615.
Step 625: compare with the previous state and judge whether the environment scene detection result has changed.
After the environment recognition algorithm model detects that the environment scene around the user in the image is the target environment scene, it may compare with the previous state to see whether the environment scene detection result has changed.
If, after comparison with the previous state, it is found that the image reported by the low-power-consumption camera 230 has changed from another environment scene (e.g., in a car) in the previous image to the target environment scene (e.g., a meeting room) in the currently acquired image, the AI application management module 2221 may execute step 630.
If, after comparison with the previous state, it is found that the image reported by the low-power-consumption camera 230 has not changed from another environment scene (e.g., in a car) to the target environment scene in the currently acquired image, the AI application management module 2221 may execute step 615 again.
Step 630: the coprocessor reports the context identification message to the main controller.
When the image reported by the low-power-consumption camera 230 changes from another environment scene (e.g., in a vehicle) to a target environment scene of the currently acquired image, the AI application management module 2221 reports the environment recognition result to the main controller 210.
Specifically, the AI application management module 2221 in the coprocessor 220 may report the environment recognition result to the AI application layer module 2231. The AI application layer module 2231, after obtaining the environment recognition result, forms an environment recognition event message and reports the environment recognition event message to the AI event message manager 212 in the main controller 210.
Step 635: the main controller is awakened.
The AI event message manager 212 in the main controller 210 wakes up the main controller 210 after receiving the environment recognition event message transmitted by the AI application layer module 2231.
Step 640: the main controller judges whether to switch to the target environment scene.
After being awakened, the main controller 210 may determine whether the environment scene around the user detected in the image reported by the low-power-consumption camera 230 has changed to the target environment scene.
If the main controller 210 determines that the environment scene around the user detected in the image reported by the low-power-consumption camera 230 has changed to the target environment scene, step 645 may be executed.
If the main controller 210 determines that the environment scene around the user detected in the image reported by the low-power-consumption camera 230 has not changed to the target environment scene, step 650 may be executed.
Step 645: the main controller changes the terminal scene mode.
The main controller 210 may automatically change the mode of the terminal device when it determines that the environment scene around the user detected in the image reported by the low-power-consumption camera 230 has changed to the target environment scene.
Taking the target environment scene as a conference room as an example, if the environment recognition result received by the main controller 210 is that the user scene has switched to the conference room, the terminal device automatically adjusts to a conference mode (e.g., a vibration or mute mode).
Taking the target environment scene as an in-vehicle or an outdoor scene as an example, if the environment recognition result received by the main controller 210 is that the user scene is switched to the in-vehicle or the outdoor scene, the terminal device automatically adjusts to a driving mode (e.g., a ring mode).
Step 650: the main controller does not change the terminal scene mode.
The main controller 210 may not change the mode of the terminal device when it determines that the environment scene around the user detected in the image reported by the low-power-consumption camera 230 has not changed to the target environment scene.
Step 655: end.
In the embodiment of the application, the terminal device can acquire user image data in real time through the low-power-consumption camera and autonomously run the AI perception capability. The terminal device can automatically recognize the environment and switch to a corresponding scene mode (for example, a conference mode), so that the terminal device is more intelligent and the human-computer experience is more comfortable.
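The scene-to-mode decision of steps 640 to 650 may be sketched as follows. The mapping follows the examples given above (conference room to vibration or mute, in-car or outdoor to ring); the function and dictionary names are hypothetical and for illustration only.

    # Hypothetical sketch of steps 640-650: change the terminal profile mode only
    # when the recognized environment scene has changed to a target scene.
    from typing import Optional

    SCENE_TO_PROFILE = {
        "meeting_room": "vibrate",   # conference mode
        "in_car": "ring",            # driving mode
        "outdoor": "ring",
    }

    def on_scene_recognition_event(previous_scene: str, current_scene: str) -> Optional[str]:
        """Return the new profile mode if the scene changed to a target scene,
        otherwise None (step 650: keep the current mode)."""
        if current_scene != previous_scene and current_scene in SCENE_TO_PROFILE:
            return SCENE_TO_PROFILE[current_scene]   # step 645: change the mode
        return None

    # Usage example
    print(on_scene_recognition_event("in_car", "meeting_room"))        # "vibrate"
    print(on_scene_recognition_event("meeting_room", "meeting_room"))  # None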
The method for identifying user behavior according to the embodiments of the present application is described in detail above with reference to fig. 1 to 6, and the apparatus according to the embodiments of the present application is described in detail below. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.
Fig. 7 is a schematic structural diagram of a terminal 700 according to an embodiment of the present application. The terminal 700 may include: an acquisition module 710, an analysis module 720, a determination module 730, and a processing module 740. The above modules are described in detail below.
The acquiring module 710 is configured to acquire image data in real time through a low-power-consumption camera, where the low-power-consumption camera is always turned on.
The analysis module 720 is configured to analyze whether a specific event occurs according to the image data.
The determining module 730 is configured to determine that the specific event occurs.
The processing module 740 is configured to open an application function corresponding to the artificial intelligence AI.
Optionally, in some embodiments, the analysis module 720 is specifically configured to call an AI algorithm to analyze, according to the image data, whether the specific event occurs.
Optionally, in some embodiments, the specific event is a change in user face data in the image data.
Optionally, in some embodiments, the determining module 730 is specifically configured to: determine that the user face data in the image data changes from present to absent or from absent to present.
Optionally, in some embodiments, the determining module 730 is specifically configured to: start a face recognition function according to the user face data in the image data, determine that the user face data in the image data is the preset first face data, and unlock the screen.
Optionally, in some embodiments, the determining module 730 is specifically configured to: determine that the user face data in the image data changes from absent to present, and light up the screen.
Optionally, in some embodiments, the determining module 730 is specifically configured to: determine that no user face data exists in the image data within a preset time, and extinguish the screen.
Optionally, in some embodiments, the analysis module 720 is specifically configured to: input the image data into an AI algorithm model, where the AI algorithm model calls a corresponding operator in an AI operator library to analyze whether the image data contains the user face data.
Optionally, in some embodiments, the AI algorithm library is solidified in hardware of the terminal.
Optionally, in some embodiments, the analysis module 720 is further specifically configured to: call a corresponding operator in the AI operator library through a hardware accelerator, and analyze whether the specific event occurs according to the image data.
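The division of responsibilities among the modules of terminal 700 may be sketched as follows. The classes and stub objects are hypothetical illustrations of how the acquiring module 710, the analysis module 720, the determining module 730 and the processing module 740 could cooperate; they are not the actual module interfaces.

    # Hypothetical sketch of the cooperation of modules 710-740 in terminal 700.
    class StubCamera:                 # stands in for the low-power-consumption camera
        def capture(self):
            return "frame"

    class StubDetector:               # stands in for the AI algorithm model
        def analyze(self, image):
            return "face_appeared"    # a specific event, or None if nothing happened

    class Terminal700:
        def __init__(self, camera, detector):
            self.camera = camera          # acquiring module 710
            self.detector = detector      # analysis module 720

        def run_once(self):
            image = self.camera.capture()            # module 710: acquire image data
            event = self.detector.analyze(image)     # module 720: analyze the image data
            if event is not None:                    # module 730: determine the event occurred
                self.open_ai_application(event)      # module 740: open the AI function

        def open_ai_application(self, event):
            print("AI application function opened for event:", event)

    # Usage example
    Terminal700(StubCamera(), StubDetector()).run_once()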
Fig. 8 is a schematic structural diagram of a chip 800 for identifying user behavior according to an embodiment of the present application. The chip may include a coprocessor 820 and a main processor 810.
The coprocessor 820 may correspond to the coprocessor 220 in fig. 2, and the main processor 810 may correspond to the main controller 210 in fig. 2.
The coprocessor 820 is configured to perform the following operations: acquiring image data in real time through a low-power-consumption camera, wherein the low-power-consumption camera is connected with the coprocessor and is always turned on; analyzing whether a specific event occurs according to the image data; and determining that the specific event occurs, and sending an Artificial Intelligence (AI) message to the main processor.
The main processor 810 is configured to: and opening an application function corresponding to the AI according to the received AI message.
Optionally, in some embodiments, the coprocessor 820 is specifically configured to: and calling an AI algorithm to analyze whether the specific event occurs or not according to the image data.
Optionally, in some embodiments, the specific event is a change in user face data in the image data.
Optionally, in some embodiments, the coprocessor 820 comprises an AI engine module, an AI algorithm library module and an AI application layer module.
The AI engine module is configured to: call a corresponding AI algorithm to perform AI calculation according to the image data.
The AI algorithm library module is configured to: call a corresponding AI operator in the AI algorithm library to analyze, according to the input image data, whether the user face data in the image data changes from present to absent or from absent to present, and report the recognition result to the AI application layer module.
The AI application layer module is configured to: report the AI message to the main processor according to the recognition result.
Optionally, in some embodiments, the main processor 810 is specifically configured to: start a face recognition function according to the user face data in the image data, determine that the user face data in the image data is the preset face data, and unlock the screen.
Optionally, in some embodiments, the main processor 810 is further specifically configured to: determine that the user face data in the image data changes from absent to present, and light up the screen.
Optionally, in some embodiments, the main processor 810 is specifically configured to: determine that the user face data in the image data changes from present to absent, and keep the screen locked.
Optionally, in some embodiments, the main processor 810 is further specifically configured to: determine that no user face data exists in the image data within a preset time, and turn off the screen.
Optionally, in some embodiments, the coprocessor 820 further comprises: a hardware accelerator module, configured to accelerate the process in which the AI algorithm library module analyzes whether the user face data in the image data changes from present to absent or from absent to present.
Optionally, in some embodiments, the AI operator library is solidified in hardware of the coprocessor.
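The division of labour inside chip 800 may be sketched as follows: the coprocessor runs the always-on detection and only posts an AI event message to the main processor when the specific event is determined to have occurred. A Python queue stands in for the inter-processor message channel; all names are illustrative assumptions.

    # Hypothetical sketch of the coprocessor/main-processor split in chip 800.
    import queue

    ai_event_queue = queue.Queue()   # stands in for the inter-processor channel

    def coprocessor_loop(face_present_per_frame):
        """Coprocessor side: detect the 'face appeared' event and post a message."""
        previous_face = False
        for face_present in face_present_per_frame:
            if face_present and not previous_face:
                ai_event_queue.put({"event": "face_appeared"})
            previous_face = face_present

    def main_processor_step():
        """Main processor side: woken by the AI message, opens the AI application function."""
        message = ai_event_queue.get()
        print("main processor opens the AI application function for", message["event"])

    # Usage example: one rising edge in the frame sequence produces one message
    coprocessor_loop([False, False, True, True])
    main_processor_step()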
An embodiment of the present application further provides a computer-readable storage medium, which includes a computer program. When the computer program runs on a terminal, the computer program causes the terminal to execute the method described in steps 110-140 and the like.
An embodiment of the present application further provides a computer program product, which, when running on a terminal, causes the terminal to execute the method described in steps 110-140 and the like.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Additionally, various aspects or features of embodiments of the application may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used in the embodiments of this application is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., Compact Disk (CD), Digital Versatile Disk (DVD), etc.), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), card, stick, or key drive, etc.). In addition, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media capable of storing, containing, and/or carrying instruction(s) and/or data.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (24)

  1. A method for identifying user behavior, comprising:
    the terminal acquires image data in real time through a low-power-consumption camera which is always turned on;
    the terminal analyzes whether a specific event occurs or not according to the image data;
    the terminal determines that the specific event occurs;
    and the terminal starts an application function corresponding to the artificial intelligence AI.
  2. The method of claim 1, wherein the terminal analyzes whether a specific event occurs according to the image data, comprising:
    and the terminal calls an AI algorithm to analyze whether the specific event occurs or not according to the image data.
  3. The method according to claim 1 or 2, wherein the specific event is a change in user face data in the image data.
  4. The method of claim 3, wherein the terminal determining that the specific event occurs comprises:
    the terminal determines that the face data of the user in the image data changes from present to absent or from absent to present.
  5. The method according to claim 4, wherein the terminal starts an application function corresponding to artificial intelligence AI, and the method comprises:
    the terminal determines that the face data of the user in the image data changes from absent to present;
    and the terminal lights up the screen.
  6. The method according to claim 4 or 5, wherein the terminal starts an application function corresponding to the artificial intelligence AI, and the method comprises the following steps:
    the terminal starts a face recognition function according to the user face data in the image data, and recognizes the user face data in the image data as preset face data;
    and unlocking the screen by the terminal.
  7. The method according to claim 4, wherein the terminal starts an application function corresponding to artificial intelligence AI, and the method comprises:
    the terminal determines that the face data of the user in the image data changes from present to absent;
    the terminal maintains a lock screen interface.
  8. The method according to claim 7, wherein the terminal starts an application function corresponding to artificial intelligence AI, and the method comprises:
    the terminal determines that no face data of the user exists in the image data within a preset time;
    and the terminal turns off the screen.
  9. The method according to any one of claims 2 to 8, wherein the terminal calls an AI algorithm to analyze whether a specific event occurs according to the image data, and the method comprises the following steps:
    and the terminal inputs the image data into an AI algorithm model, and the AI algorithm model calls a corresponding operator in an AI operator library to analyze whether the image data contains the user face data.
  10. The method according to claim 9, characterized in that the AI algorithm library is solidified in the hardware of the terminal.
  11. The method according to claim 9 or 10, wherein the terminal inputs the image data into an AI algorithm model, and the AI algorithm model calls a corresponding algorithm in an AI operator library to analyze whether the image data is user face data, including:
    and the AI algorithm model calls a corresponding operator in the AI operator library through a hardware accelerator and analyzes whether the specific event occurs or not according to the image data.
  12. A chip for identifying user behavior, comprising: a coprocessor and a main processor, wherein the coprocessor is connected to the main processor,
    the coprocessor is used for executing the following operations:
    acquiring image data in real time through a low-power-consumption camera, wherein the low-power-consumption camera is connected with the coprocessor and is always turned on;
    analyzing whether a specific event occurs according to the image data;
    determining the occurrence of the specific event, and sending an Artificial Intelligence (AI) message to a main processor;
    the main processor is configured to: and opening an application function corresponding to the AI according to the received AI message.
  13. The chip of claim 12, wherein the coprocessor is specifically configured to:
    and calling an AI algorithm to analyze whether the specific event occurs or not according to the image data.
  14. The chip according to claim 12 or 13, wherein the specific event is a change in user face data in the image data.
  15. The chip of claim 14, wherein the coprocessor comprises: an AI engine module, an AI algorithm library module, an AI application layer module,
    the AI engine module is configured to: call a corresponding AI algorithm to perform AI calculation according to the image data;
    the AI algorithm library module is configured to: call a corresponding AI operator in the AI algorithm library to analyze, according to the input image data, whether the user face data in the image data changes from present to absent or from absent to present, and report the recognition result to the AI application layer module;
    the AI application layer module is configured to: report the AI message to the main processor according to the recognition result.
  16. The chip of claim 15, wherein the main processor is specifically configured to:
    according to the user face data in the image data, starting a face recognition function, and recognizing the user face data in the image data as preset face data;
    and unlocking the screen.
  17. The chip according to claim 15 or 16, wherein the main processor is further configured to:
    determining that the face data of the user in the image data changes from absent to present;
    the screen is lit.
  18. The chip of claim 15, wherein the main processor is specifically configured to:
    determining that the face data of the user in the image data changes from present to absent;
    and maintaining a screen locking interface.
  19. The chip of claim 18, wherein the main processor is further specifically configured to:
    determining that no face data of the user exists in the image data within a preset time;
    and turning off the screen and entering a dormant state.
  20. The chip of any of claims 15 to 19, wherein the co-processor further comprises:
    and a hardware accelerator module, configured to accelerate the process in which the AI algorithm library module analyzes whether the user face data in the image data changes from present to absent or from absent to present.
  21. The chip of claim 20, in which the AI operator library is solidified in hardware of the coprocessor.
  22. A terminal, characterized in that it comprises a chip according to any one of claims 12 to 21 and a low-power-consumption camera, said low-power-consumption camera being connected to said coprocessor.
  23. A computer storage medium, characterized in that it comprises a computer program which, when run on the terminal, causes the terminal to perform the method according to any one of claims 1 to 11.
  24. A computer program product, characterized in that it comprises a computer program which, when run on the terminal, causes the terminal to carry out the method according to any one of claims 1 to 11.
CN201880091728.6A 2018-10-16 2018-10-16 Method, chip and terminal for identifying user behavior Pending CN111902791A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/110391 WO2020077523A1 (en) 2018-10-16 2018-10-16 Method used for recognizing user behavior, chip and terminal

Publications (1)

Publication Number Publication Date
CN111902791A true CN111902791A (en) 2020-11-06

Family

ID=70283703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880091728.6A Pending CN111902791A (en) 2018-10-16 2018-10-16 Method, chip and terminal for identifying user behavior

Country Status (2)

Country Link
CN (1) CN111902791A (en)
WO (1) WO2020077523A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4175338A4 (en) * 2020-06-24 2023-07-19 Beijing Xiaomi Mobile Software Co., Ltd. Communication processing method, communication processing apparatus and storage medium
CN111988511B (en) * 2020-08-31 2021-08-27 展讯通信(上海)有限公司 Wearable equipment and image signal processing device thereof
CN112541450A (en) * 2020-12-18 2021-03-23 Oppo广东移动通信有限公司 Context awareness function control method and related device
CN113657263A (en) * 2021-08-16 2021-11-16 深圳多模智能科技有限公司 Method, device, terminal and medium for awakening terminal to identify biological characteristic information

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130104682A (en) * 2012-03-15 2013-09-25 최상길 Apparatus and method for automatically locking display and touch in mobile phone
US9854159B2 (en) * 2012-07-20 2017-12-26 Pixart Imaging Inc. Image system with eye protection
CN102970411B (en) * 2012-10-24 2018-06-26 康佳集团股份有限公司 Smart mobile phone screen locking solution lock control method and smart mobile phone based on Face datection
CN107657167A (en) * 2017-11-02 2018-02-02 广东欧珀移动通信有限公司 Method, apparatus, computer-readable recording medium and the electronic equipment of face unblock

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140173719A1 (en) * 2012-12-18 2014-06-19 Hon Hai Precision Industry Co., Ltd. Industrial manipulating system with multiple computers and industrial manipulating method
CN105549739A (en) * 2015-12-10 2016-05-04 魅族科技(中国)有限公司 Screen lighting method and terminal
CN105759935A (en) * 2016-01-29 2016-07-13 华为技术有限公司 Terminal control method and terminal
CN106604369A (en) * 2016-10-26 2017-04-26 惠州Tcl移动通信有限公司 Terminal device with dual-mode switching function
CN107396170A (en) * 2017-07-17 2017-11-24 上海斐讯数据通信技术有限公司 A kind of method and system based on iris control video playback
CN107659719A (en) * 2017-09-19 2018-02-02 上海爱优威软件开发有限公司 A kind of Scene Simulation method, Scene Simulation system and terminal
CN107944325A (en) * 2017-11-23 2018-04-20 维沃移动通信有限公司 A kind of barcode scanning method, barcode scanning device and mobile terminal

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112578893A (en) * 2020-12-11 2021-03-30 Oppo广东移动通信有限公司 Data processing system, chip, method and storage medium
CN116301362A (en) * 2023-02-27 2023-06-23 荣耀终端有限公司 Image processing method, electronic device and storage medium
CN116301362B (en) * 2023-02-27 2024-04-05 荣耀终端有限公司 Image processing method, electronic device and storage medium
CN118116034A (en) * 2024-04-26 2024-05-31 厦门四信通信科技有限公司 Pedestrian retrograde detection method, device, equipment and medium based on AI visual analysis

Also Published As

Publication number Publication date
WO2020077523A1 (en) 2020-04-23

Similar Documents

Publication Publication Date Title
CN111902791A (en) Method, chip and terminal for identifying user behavior
US11042728B2 (en) Electronic apparatus for recognition of a user and operation method thereof
US20210176595A1 (en) Contextual information usage in systems that include accessory devices
CN111903113A (en) Method, chip and terminal for identifying environmental scene
EP3264290A1 (en) Method and apparatus for recommendation of an interface theme
CN111316199B (en) Information processing method and electronic equipment
CN105975828B (en) Unlocking method and device
EP2990949B1 (en) Methods and devices for backing up file
US20170032638A1 (en) Method, apparatus, and storage medium for providing alert of abnormal video information
US10764425B2 (en) Method and apparatus for detecting state
CN107705245A (en) Image processing method and device
CN113971271A (en) Fingerprint unlocking method and device, terminal and storage medium
EP3477545A1 (en) Video identification method and device
CN105956513A (en) Method and device for executing reaction action
CN115083401A (en) Voice control method and device
KR20190048630A (en) Electric terminal and method for controlling the same
CN106371941A (en) Running state adjustment method and apparatus
CN113454647A (en) Electronic device for recognizing object in image and operation method thereof
CN117009005A (en) Display method, automobile and electronic equipment
CN114066458A (en) Biometric identification method, biometric identification device, and storage medium
CN108158063A (en) Raincoat control method and device
CN111507202B (en) Image processing method, device and storage medium
CN114764300B (en) Window page interaction method and device, electronic equipment and readable storage medium
CN112102848B (en) Method, chip and terminal for identifying music
CN116028966A (en) Application display method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201106)