CN114127686A - Method, device and terminal for starting application program

Info

Publication number
CN114127686A
Authority
CN
China
Prior art keywords
image
camera
processor
terminal
application program
Prior art date
Legal status
Pending
Application number
CN202080006930.1A
Other languages
Chinese (zh)
Inventor
李鑫强
刘翠君
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN114127686A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method, a device, and a terminal for starting an application program, relating to the field of terminal technologies. The method uses a two-stage image recognition mechanism to improve the accuracy of starting an application while saving the terminal's overall power consumption and simplifying user operations. The method includes: acquiring a low-definition image captured by a first camera and recognizing it; when the recognition result meets a first preset condition, acquiring a high-definition image captured by the first camera or a second camera and recognizing it further; and when that recognition result meets a second preset condition, starting the application function corresponding to the high-definition image.

Description

Method, device and terminal for starting application program
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method and an apparatus for opening an application, and a terminal.
Background
With the development of electronic technology, more and more applications are installed in terminals, application functions grow richer, and function menus grow deeper. How to help users find the application they need among many applications, and then quickly locate the desired function within it, has become a research focus for terminal manufacturers.
Currently, user-unaware solutions have emerged that use image recognition technology to start applications. Specifically, the terminal captures images with its camera in real time, performs image recognition on the captured images, and then starts the application corresponding to the recognition result. Because this requires the camera to stay on, a low-power camera is used for real-time capture, which helps reduce the terminal's overall power consumption. However, images captured by a low-power camera generally have low resolution, and recognition on such low-resolution images is less accurate: the wrong application may be started, or the intended application may not be started in time, which severely degrades the user experience. How to start applications with higher accuracy while saving the terminal's overall power consumption is therefore an urgent problem.
Disclosure of Invention
The method, device, and terminal for starting an application provided in this application can improve the accuracy of starting an application while saving the terminal's overall power consumption and simplifying user operations. To achieve this, the embodiments of this application provide the following technical solutions.
In a first aspect, a method for starting an application program is provided. The method includes: acquiring a first image captured by a first camera, where the first image has a first resolution; recognizing the first image to obtain a first recognition result; when the first recognition result meets a first preset condition, acquiring a second image captured by the first camera or a second camera, where the second image has a second resolution greater than the first resolution; recognizing the second image to obtain a second recognition result; and when the second recognition result meets a second preset condition, starting at least some functions of a first application corresponding to the second image, where the first preset condition and the second preset condition are the same or different.
In other words, the first camera first captures a low-definition image for a preliminary judgment of the surrounding environment. When the judgment result meets the first preset condition, the first camera or the second camera captures a high-definition image for further confirmation or processing, and all or some functions of the corresponding application are then started. Using the low-definition image for the preliminary judgment helps reduce the terminal's overall power consumption, while the second judgment on the high-definition image improves recognition accuracy and ensures that the automatically started application or function matches the user's expectation. In addition, because the terminal starts the application or function automatically based on captured images, the user is spared the tedious operations of manually finding the application and then searching for the desired function within it, which improves the efficiency of interaction between the user and the terminal.
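To make this two-stage flow concrete, the following Kotlin sketch outlines it under stated assumptions: the Camera interface, the Recognition type, the preset-type strings, and all other names are hypothetical stand-ins, not the claimed implementation.

```kotlin
// Minimal sketch of the two-stage flow; every name here is a hypothetical stand-in.
interface Camera { fun capture(highRes: Boolean): ByteArray }
data class Recognition(val type: String?, val confidence: Double)

class TwoStageLauncher(
    private val firstCamera: Camera,                   // low power, low resolution
    private val secondCamera: Camera,                  // high definition (may be the same device)
    private val recognize: (ByteArray) -> Recognition, // e.g. a neural network model
    private val startAppFunction: (Recognition) -> Unit,
    private val presetTypes: Set<String> =
        setOf("qr_code", "barcode", "food", "text", "payment_device"),
) {
    fun onFrame() {
        // Stage 1: preliminary judgment on a low-definition image.
        val first = recognize(firstCamera.capture(highRes = false))
        val firstType = first.type ?: return
        if (firstType !in presetTypes) return                    // first preset condition not met

        // Stage 2: confirmation on a high-definition image before anything is started.
        val second = recognize(secondCamera.capture(highRes = true))
        val secondType = second.type ?: return
        if (secondType in presetTypes) startAppFunction(second)  // second preset condition met
    }
}
```

The structure reflects the power argument above: the expensive high-definition capture and recognition run only after the cheap low-definition stage has fired.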
In a possible implementation manner, before acquiring the second image captured by the second camera, the method further includes: starting the second camera.
In other words, the second camera is started to capture the high-definition second image only after the first image captured by the first camera meets the first preset condition, which helps save the terminal's overall power consumption.
In one possible implementation, acquiring the first image captured by the first camera includes: acquiring, in real time or periodically, the first image captured by the first camera.
In some examples, the first camera may be in a normally open state, and is configured to collect the first image in real time or periodically, so as to detect a current environment of the user, and facilitate the terminal to find a need for the user to open the application program in time.
In a possible implementation manner, the first preset condition and the second preset condition include a preset identification type, and the preset identification type includes any one or more of a two-dimensional code, a barcode, food, text, and a payment device.
For example, when the second image contains a two-dimensional code or a barcode, a first application function corresponding to that code is started. When the second image contains food, a food-related first application function is started, for example one that calculates the calories of the food in the second image. When the second image contains text, a text-processing first application function is started, for example one that translates the text contained in the second image. When the second image contains a payment device, a mobile-payment first application function, that is, a payment function, is started.
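The correspondence between the recognized content type and the application function that is started can be pictured as a small lookup; the sketch below is one possible shape, and the function names are invented for illustration, not taken from this application.

```kotlin
// Hypothetical mapping from recognized content type to the function to start.
enum class RecognizedType { QR_CODE, BARCODE, FOOD, TEXT, PAYMENT_DEVICE }

fun functionFor(type: RecognizedType): String = when (type) {
    RecognizedType.QR_CODE,
    RecognizedType.BARCODE        -> "scanner.decodeAndOpen"    // open what the code points to
    RecognizedType.FOOD           -> "health.estimateCalories"  // calories of the food in view
    RecognizedType.TEXT           -> "translator.translateText" // translate the detected text
    RecognizedType.PAYMENT_DEVICE -> "wallet.pay"               // bring up the payment function
}
```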
In a possible implementation manner, before acquiring the first image captured by the first camera, the method further includes: determining whether a preset action occurs on the terminal that includes the first camera; and starting the first camera when it is determined that the preset action occurs.
For example, the terminal may start the first camera only in specific scenarios and then capture the first image to monitor the user's current environment in real time. For instance, the terminal may provide a switch for turning on or off the function, provided in the embodiments of this application, of quickly starting an application based on images captured by a camera; this may be referred to as the "direct service" function for short. That is, after the "direct service" function is enabled, the terminal starts the first camera to capture the first image in real time and analyzes it to monitor the user's current environment; after the "direct service" function is disabled, the first camera is turned off. As another example, the terminal may monitor its own state with other sensors and turn on the first camera to capture the first image only when a trigger condition is met. For example, the terminal may detect its own posture using a gravity sensor or the like; if the sensor hub determines, from the data collected by the gravity sensor, that a preset action has occurred, it starts the first camera to capture the first image.
Take a mobile phone as an example. If the sensor hub determines, from data monitored by the phone's built-in gravity sensor, that the phone has been flipped over, it starts the first camera to capture the first image. Alternatively, if the first camera is the phone's rear camera and the sensor hub determines from the gravity sensor data that the phone has been shaken, it starts the first camera to capture the first image. Of course, the condition that triggers the terminal to start the first camera may also be set based on the specific application scenario, terminal capability, and so on, which is not specifically limited in the embodiments of this application.
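As an illustration of such trigger logic, the sketch below derives a flip or shake from gravity-sensor samples; the sample format and the thresholds are assumptions made for this example only.

```kotlin
import kotlin.math.abs
import kotlin.math.sqrt

// Illustrative motion triggers; thresholds are assumptions, not values from the patent.
data class GravitySample(val x: Float, val y: Float, val z: Float)

// Flip: the z axis changes sign with near-1 g magnitude (phone turned over).
fun isFlip(prev: GravitySample, cur: GravitySample, g: Float = 9.81f): Boolean =
    prev.z * cur.z < 0 && abs(cur.z) > 0.8f * g

// Shake: the overall acceleration magnitude deviates strongly from 1 g.
fun isShake(cur: GravitySample, g: Float = 9.81f): Boolean {
    val magnitude = sqrt(cur.x * cur.x + cur.y * cur.y + cur.z * cur.z)
    return abs(magnitude - g) > 0.5f * g
}

fun onGravityData(prev: GravitySample, cur: GravitySample, startFirstCamera: () -> Unit) {
    if (isFlip(prev, cur) || isShake(cur)) startFirstCamera()
}
```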
In one possible implementation, the first camera and the second camera are cameras on different sides of the terminal, and the preset action is a turning action.
In one possible implementation, recognizing the first image or the second image includes: the first processor recognizes the first image or the second image by running a neural network model; and starting at least some functions of the first application corresponding to the second image includes: the second processor starts at least some functions of the first application corresponding to the second image.
In one example, the processor performs image analysis on the first image and the second image using the same neural network algorithm model. Because the resolution of the second image is higher than that of the first image, the second analysis result obtained from the second image is more accurate than the first analysis result obtained from the first image, which improves the accuracy of starting the application based on the second analysis result.
In another example, the neural network model the processor uses to process the first image and the one it uses to process the second image may be the same type of algorithm model with different parameters, or with recognition results of different accuracy. Because the resolution of the second image is higher than that of the first image, the second analysis result obtained from the second image is more accurate than the first analysis result obtained from the first image, which improves the accuracy of starting the application based on the second analysis result.
In yet another example, the neural network model the processor uses to process the first image and the one it uses to process the second image may be different types of algorithm models. Because the resolution of the second image is higher, the second analysis result may contain more detail. For example, the model used to process the first image is an image classification model, while the model used to process the second image is a text detection model or a translation model. The processor first uses the image classification model to determine whether the first image contains text; after determining that it does, the processor recognizes the text in the second image using the text detection model, or translates the detected text using the translation model.
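A minimal sketch of this cascade, assuming hypothetical model interfaces: a light classifier gates the heavier text detector and translator, which only ever see the high-definition frame.

```kotlin
// Cascade sketch; the three model interfaces are hypothetical stand-ins.
interface TextClassifier { fun containsText(lowRes: ByteArray): Boolean } // image classification model
interface TextDetector { fun detect(highRes: ByteArray): List<String> }   // text detection model
interface Translator { fun translate(line: String): String }              // translation model

fun translateIfTextPresent(
    lowRes: ByteArray,
    fetchHighRes: () -> ByteArray,
    classifier: TextClassifier,
    detector: TextDetector,
    translator: Translator,
): List<String> {
    if (!classifier.containsText(lowRes)) return emptyList() // cheap gate on the low-res frame
    val lines = detector.detect(fetchHighRes())              // heavier model, high-res frame only
    return lines.map(translator::translate)
}
```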
In a second aspect, an apparatus for starting an application is provided, including: the image interface is used for acquiring a first image acquired by the first camera and acquiring a second image acquired by the first camera or the second camera, wherein the first image has a first resolution, the second image has a second resolution, and the second resolution is greater than the first resolution; the processor is used for identifying the first image to obtain a first identification result; when the first identification result meets a first preset condition, further controlling the image interface to acquire a second image, and identifying the second image to obtain a second identification result; and when the second identification result meets a second preset condition, starting at least part of functions of the first application program corresponding to the second image, wherein the first preset condition and the second preset condition are the same or different.
In a possible implementation manner, before the processor controls the image interface to acquire the second image, the processor is further configured to start the second camera; the image interface is used for acquiring a second image acquired by the second camera after the second camera is started.
In a possible implementation, the image interface is specifically configured to acquire, in real time or periodically, the first image acquired by the first camera.
In one possible implementation, the processor includes a first processor and a second processor; the first processor is specifically used for identifying the first image to obtain a first identification result and identifying the second image to obtain a second identification result; the second processor is specifically used for further controlling the image interface to acquire a second image when the first identification result meets a first preset condition; and when the second recognition result meets a second preset condition, starting at least part of functions of the first application program corresponding to the second image.
In a possible implementation manner, before the processor controls the image interface to acquire the second image, the second processor is further specifically configured to start the second camera.
In one example, the first processor is a low-power NPU, and the second processor includes an application processor (that is, a CPU), or a CPU and a sensor hub. Specifically, the low-power NPU recognizes the first image to obtain the first recognition result and recognizes the second image to obtain the second recognition result. The sensor hub controls the starting of the first camera; when the first recognition result meets the first preset condition, it further starts the second camera and sends the second recognition result or the second image to the CPU. The CPU starts at least some functions of the first application corresponding to the second image.
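The division of labor in this example can be sketched as three narrow roles. Everything below, names and signatures alike, is a hypothetical illustration of that split, not an interface defined by this application.

```kotlin
// Hypothetical split: the low-power NPU only recognizes, the sensor hub only
// orchestrates, and the CPU (application processor) only starts app functions.
interface LowPowerNpu { fun recognize(frame: ByteArray): String? } // preset type or null
interface AppCpu { fun startAppFunction(type: String, frame: ByteArray) }

class HubOrchestrator(private val npu: LowPowerNpu, private val cpu: AppCpu) {
    fun onLowResFrame(frame: ByteArray, captureHighRes: () -> ByteArray) {
        val first = npu.recognize(frame) ?: return     // first preset condition not met
        val highRes = captureHighRes()                 // start the second camera
        val second = npu.recognize(highRes) ?: return  // second preset condition not met
        cpu.startAppFunction(second, highRes)          // hand off to the CPU
    }
}
```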
In a third aspect, an apparatus for starting an application is provided. The apparatus includes: an acquisition unit, configured to acquire a first image captured by a first camera, where the first image has a first resolution; an identification unit, configured to recognize the first image to obtain a first recognition result, where the acquisition unit is further configured to acquire a second image captured by the first camera or a second camera when the first recognition result meets a first preset condition, the second image having a second resolution greater than the first resolution, and the identification unit is further configured to recognize the second image to obtain a second recognition result; and a starting unit, configured to start at least some functions of a first application corresponding to the second image when the second recognition result meets a second preset condition, where the first preset condition and the second preset condition are the same or different.
In a possible implementation manner, the starting unit is further configured to start the second camera before acquiring the second image acquired by the second camera.
In one possible implementation, acquiring the first image captured by the first camera includes: acquiring, in real time or periodically, the first image captured by the first camera.
In a possible implementation manner, the first preset condition and the second preset condition include a preset identification type, and the preset identification type includes any one or more of a two-dimensional code, a barcode, food, text, and a payment device.
In a possible implementation manner, the identification unit is further configured to determine, before the first image captured by the first camera is acquired, whether a preset action occurs on the terminal that includes the first camera; and the starting unit is further configured to start the first camera when the identification unit determines that the preset action occurs.
In one possible implementation, the first camera and the second camera are cameras on different sides of the terminal, and the preset action is a turning action.
In one possible implementation, recognizing the first image or the second image includes: the identification unit recognizes the first image or the second image by running a neural network model.
In a possible implementation manner, the identification unit is an NPU, and the starting unit is a CPU.
In a fourth aspect, an apparatus for starting an application is provided, including: a processor and a memory coupled to the processor, the memory for storing computer program code, the computer program code comprising computer instructions that, when read by the processor from the memory, cause the apparatus to perform the method as described in the first aspect and any one of its possible implementations.
A fifth aspect provides a chip system, comprising at least one processor and at least one communication interface, wherein when the processor executes instructions, the processor performs the processing functions as described in the first aspect and any one of the possible implementations, and the at least one communication interface is configured to implement the communication functions as described in the above aspect and any one of the possible implementations. For example, the communication interface is used to acquire a first image and a second image.
A sixth aspect provides a computer-readable storage medium comprising computer instructions which, when executed on a terminal, cause the terminal to perform the method as described in the first aspect and any one of its possible implementations.
A seventh aspect provides a computer program product for causing a computer to perform the method as described in the first aspect and any one of its possible implementations described above when the computer program product runs on the computer.
In an eighth aspect, there is provided an apparatus including the apparatus as described in the second aspect and any one of its possible implementations, and a first camera. Optionally, the apparatus further comprises a second camera. Optionally, the apparatus further comprises an ISP for processing images acquired by the first camera or the second camera. Optionally, the apparatus further comprises a memory for storing computer program code for driving the processor to operate. The memory may refer to the description of the sixth aspect.
Drawings
Fig. 1A is a schematic structural diagram of a terminal according to an embodiment of this application;
Fig. 1B is a schematic structural diagram of another terminal according to an embodiment of this application;
Fig. 2 is a schematic flowchart of a method for starting an application according to an embodiment of this application;
Fig. 3A is a schematic diagram of a graphical interface of a terminal according to an embodiment of this application;
Fig. 3B is a schematic diagram of graphical interfaces of some terminals according to an embodiment of this application;
Fig. 4 is a schematic flowchart of another method for starting an application according to an embodiment of this application;
Fig. 5 is a schematic diagram of an apparatus according to an embodiment of this application;
Fig. 6 is a schematic structural diagram of an apparatus according to an embodiment of this application.
Detailed Description
Generally, the application, or the specific function within an application, that a user wants to open is strongly related to the user's current environment. Therefore, images of the surrounding environment can be captured by a camera of the terminal and analyzed, and the application or specific function the user wants to start can then be determined from the analysis result. Specifically, the embodiments of this application propose capturing a low-definition image with a low-power camera and making a preliminary judgment about the surrounding environment. When the judgment result meets a preset condition, a high-definition image is captured for further confirmation or processing, and the corresponding application or application function is then started. Using the low-definition images captured by the low-power camera for the preliminary judgment helps reduce the terminal's overall power consumption, while the secondary judgment on the high-definition image improves the accuracy of the analysis result and ensures that the application or function the terminal starts automatically matches the user's expectation. In addition, because the terminal starts the application or function automatically based on captured images, the user is spared the tedious operations of manually finding the application and then searching for the desired function within it, which improves the efficiency of interaction between the user and the terminal. The technical solutions provided by the embodiments of this application are described in detail below with reference to the accompanying drawings.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
The technical solutions provided in the embodiments of this application can be applied to a terminal equipped with a camera. The terminal may be, for example, a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted device, a smart car, a smart speaker, or a robot; the specific form of the terminal is not particularly limited in this application.
Fig. 1A shows a schematic structural diagram of the terminal 100. The terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a gravity sensor 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the terminal 100. In other embodiments of this application, the terminal 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to complete the control of instruction fetching and instruction execution. Wherein the application processor is also called Central Processing Unit (CPU).
In some embodiments of this application, the processor 110 may include a sensor hub, a first processor (for example, a low-power NPU), and a CPU. The CPU, also called the application processor, is subsequently referred to as the second processor; optionally, the second processor further includes the sensor hub. The first processor has lower power consumption than the second processor. The sensor hub is a microcontroller unit that aggregates data from different sensors (for example, the camera 193 and the gravity sensor 180) and processes that data, for example performing control scheduling and lightweight computations. Of course, if no sensor hub is employed, the CPU itself may take over its function. The following description mainly takes the case where the second processor includes both a CPU and a sensor hub as an example.
For example, fig. 1B is a schematic diagram of another hardware structure of the terminal 100. The sensor hub of the terminal 100 is connected to two cameras 193: a first camera and a second camera. The first camera is a low-power camera and the second camera is a high-definition camera; both the power consumption of the first camera and the resolution of the images it captures are lower than those of the second camera. The sensor hub can keep the first camera in a normally open state so that it captures low-definition images in real time to monitor the user's environment. The low-definition images captured by the first camera are sent to the first processor, which runs inference on them to determine the user's current environment. When the inference result for the low-definition images meets a first preset condition, the sensor hub starts the second camera to capture high-definition images, which are in turn sent to the first processor to confirm the user's environment and/or to be processed further. It can be understood that the first camera may be always on and capture low-definition images in real time, or may be triggered into an always-on state by a preset condition or a specific action. Alternatively, the first camera may capture low-definition images periodically, for example one low-definition image per collection period, and may be off or in a low-power state while not capturing, which is not limited in this embodiment.
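The two capture policies described here, always-on real-time capture versus periodic capture with the camera idle in between, can be sketched as follows; the period value and the idle-control callback are illustrative assumptions.

```kotlin
// Sketch of the two low-definition capture policies; the period is an assumed value.
enum class CapturePolicy { ALWAYS_ON, PERIODIC }

fun captureLoop(
    policy: CapturePolicy,
    captureLowRes: () -> ByteArray,
    setCameraIdle: (Boolean) -> Unit, // off / low-power state between periodic captures
    onFrame: (ByteArray) -> Unit,
    periodMs: Long = 1_000,
) {
    while (true) { // runs for as long as the monitoring function is enabled
        onFrame(captureLowRes())
        if (policy == CapturePolicy.PERIODIC) {
            setCameraIdle(true)       // save power until the next collection period
            Thread.sleep(periodMs)
            setCameraIdle(false)
        }
    }
}
```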
When the inference result for the high-definition image meets a second preset condition, the sensor hub sends the high-definition image, or the result obtained by processing it, to the CPU; alternatively, the first processor sends the result of further processing the high-definition image, for example a recognition result, to the CPU. The CPU then performs subsequent processing, for example starting the corresponding application or a corresponding function within it.
In other embodiments, other sensors, such as a gravity sensor, may also be connected to the sensor hub of the terminal 100. The terminal 100 may first detect its own posture, motion, and the like based on the data from these sensors, and start the first camera to capture the first image and monitor the user's current environment in real time only when the detected posture or action meets a specific condition. The specific technical solutions are described in detail below.
A memory, i.e., an internal memory, may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal 100, and may also be used to transmit data between the terminal 100 and peripheral devices. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other terminals, such as AR devices, etc.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the terminal 100. In other embodiments of the present application, the terminal 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the terminal 100. The charging management module 140 may also supply power to the terminal through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication and the like applied to the terminal 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the terminal 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the terminal 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160 so that the terminal 100 can communicate with a network and other devices through a wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The terminal 100 implements a display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The terminal 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the terminal 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
In some embodiments, the terminal 100 may include two cameras: one low-power camera for capturing low-definition images, and one high-definition camera for capturing high-definition images. The two cameras may be located on the same side of the terminal 100 or on opposite sides of it.
In other embodiments, the terminal 100 may include a single camera with which it captures both low-definition and high-definition images; capturing a low-definition image consumes less power than capturing a high-definition one, but also yields a lower resolution. For example, a processor of the terminal 100 used for generating captured images (for example, the ISP) may control the camera to capture images at a first frequency, producing low-definition images, or at a second frequency, producing high-definition images, where the first frequency is lower than the second. As another example, that processor may apply a first processing to the images captured by the camera to obtain low-definition images, and a second processing to obtain high-definition images, where the first processing is less complex than the second. In either case, the power consumed by the camera or the ISP to obtain low-definition images is lower than that consumed to obtain high-definition images. The ISP's specific processing of an image includes, but is not limited to, calibration, white balance, denoising, or sharpening, which is not limited in this embodiment.
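A sketch of this single-camera variant, assuming an illustrative mode switch between a low frame rate with simplified processing and a high frame rate with the full ISP pipeline; the frame rates and the placeholder processing bodies are assumptions, not values from this application.

```kotlin
// One camera, two modes; frame rates and processing bodies are illustrative only.
data class CameraMode(val frameRateHz: Int, val fullIspPipeline: Boolean)

val lowPowerMode = CameraMode(frameRateHz = 1, fullIspPipeline = false) // first frequency / first processing
val highDefMode = CameraMode(frameRateHz = 30, fullIspPipeline = true)  // second frequency / second processing

fun processRawFrame(raw: ByteArray, mode: CameraMode): ByteArray =
    if (mode.fullIspPipeline) {
        raw // placeholder: calibration, white balance, denoising, sharpening -> high-definition image
    } else {
        raw.copyOf(raw.size / 4) // placeholder: cheap downscale only -> low-definition image
    }
```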
The digital signal processor is used to process digital signals, including digital image signals and other digital signals. For example, when the terminal 100 selects a frequency bin, the digital signal processor is configured to perform a Fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The terminal 100 may support one or more video codecs. In this way, the terminal 100 can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can implement applications such as intelligent recognition of the terminal 100, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
In some embodiments of this application, the terminal 100 may include a low-power NPU. The low-power NPU can perform image recognition on the low-definition image, for example to determine whether it contains an image of a specific recognition type, such as a two-dimensional code, a barcode, food, text, or a payment device. It can likewise perform image recognition on the high-definition image for the same recognition types. The recognition algorithm the low-power NPU uses for the low-definition image and the one it uses for the high-definition image may be the same or different. The specific recognition methods are described in detail below.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the terminal 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the terminal 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the terminal 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The terminal 100 can implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The technical solutions in the following embodiments can be implemented in the terminal 100 with the above hardware architecture. Fig. 2 is a schematic flowchart of a method for starting an application according to an embodiment of this application. The method includes the following steps. S201: the terminal starts a first camera to capture a first image. In some embodiments, a sensor hub in the terminal may keep the first camera in a normally open state so that it captures the first image in real time and can be used to monitor the user's current environment in real time. The resolution of the first image is below a first threshold; its image quality may be low, and it may even be a black-and-white image. In some examples, the first camera is a low-power camera that captures raw images at a lower resolution, so the terminal's power consumption for acquiring the first image is greatly reduced. In other examples, the first camera may be an ordinary camera; the sensor hub may then control the first camera to capture raw images at a first frequency (lower than the second frequency used for capturing high-definition images), and/or control the image processor to apply a first processing to the captured raw images to obtain the first image. The first image then has low resolution and modest image quality, and acquiring it consumes relatively little power; the first processing is less complex than the subsequent processing used to obtain the high-definition second image.
In other embodiments, the terminal may instead start the first camera only in specific scenarios, capturing the first image to detect the user's current environment in real time.
For example, the terminal may provide a switch for turning on or off the function, provided in the embodiments of this application, of quickly starting an application based on images captured by a camera; this may be referred to as the "direct service" function for short. As shown in fig. 3A, the switch may be placed in the system "Settings"; as shown in fig. 3B, it may also be placed in the settings of a specific application (for example, "Camera"). That is, after the "direct service" function is enabled, the sensor hub starts the first camera to capture the first image in real time and analyzes it to monitor the user's current environment in real time; after the "direct service" function is disabled, the first camera is turned off.
As another example, the terminal may monitor its own state with another sensor and turn on the first camera to capture the first image only when a trigger condition is met. For example, the terminal may detect its own posture using a gravity sensor or the like; if the sensor hub determines, from the data collected by the gravity sensor, that a preset action has occurred, it starts the first camera to capture the first image. Take a mobile phone as an example: if the sensor hub determines from the built-in gravity sensor's data that the phone has been flipped over, it starts the first camera to capture the first image; or, if the first camera is the phone's rear camera and the sensor hub determines that the phone has been shaken, it starts the first camera to capture the first image. Of course, the condition that triggers the terminal to start the first camera may also be set based on the specific application scenario, terminal capability, and so on, which is not specifically limited in the embodiments of this application.
S202: the terminal controls the first processor to process the first image to obtain a first analysis result. In some embodiments, after the first image is acquired, the sensor hub may invoke the first processor to perform image analysis on it and obtain the first analysis result. The first processor is a low-power processing unit with an image analysis function. For example, the first processor is a low-power NPU, which consumes less power than the CPU while still meeting the heavy computation requirements of image analysis; alternatively, a higher-power NPU may be used, which is not limited in this embodiment. Because the first camera captures first images continuously and the first processor must analyze them frequently, a first processor with low power consumption helps reduce the terminal's overall power consumption.
For example, the first processor may process the first image with a neural network algorithm model (for example, an image classification algorithm), and the processing result (for example, the classification result) is the first analysis result. It should be noted that, because the resolution of the first image is low, the image classification algorithm can only classify the first image coarsely: the classification accuracy is limited, and the first processor may be unable to analyze detail in the first image. For example, the first processor may fail to distinguish objects with similar features in the first image; or it may recognize that the first image contains a two-dimensional code but be unable to recognize the code's content.
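This limitation is exactly what motivates the second stage, and can be made concrete: on the low-definition image a model can at best answer whether a two-dimensional code is present, while decoding its content requires the high-definition image. In the sketch below both models are caller-supplied functions, since their internals are not specified here.

```kotlin
// Presence check on the low-res frame; content decoding only on the high-res frame.
fun decodeIfQrPresent(
    lowRes: ByteArray,
    fetchHighRes: () -> ByteArray,
    looksLikeQrCode: (ByteArray) -> Boolean, // preliminary classification (stage one)
    decodeQrCode: (ByteArray) -> String?,    // content recognition (stage two)
): String? = if (looksLikeQrCode(lowRes)) decodeQrCode(fetchHighRes()) else null
```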
Of course, the first processor may also process the first image using other, non-neural-network algorithm models, depending on the actual application scenario; the image processing method used for the first image is not limited in this embodiment of the application.
S203: if the first analysis result of the first image meets a first preset condition, the terminal starts a second camera to capture a second image. In some embodiments, when the sensor hub determines from the first analysis result that the first image contains an image of a preset recognition type, it considers the first preset condition met. The preset recognition types include, but are not limited to, a two-dimensional code, a barcode, food, text, and a payment device. That is, after the sensor hub preliminarily determines from the first analysis result that the first image meets the first preset condition, it starts the second camera to capture the second image in order to further confirm the user's current environment or to extract more information through further processing. The resolution of the second image is greater than or equal to the first threshold, and its image quality is higher than that of the first image.
In some examples, the second camera is a camera different from the first: a high-definition camera whose second image has higher resolution and higher image quality. In other examples, the second camera is the same camera as the first. In that case, when step S201 is performed, the camera captures images at the first (lower) frequency and applies the first (simpler, lower-power) processing, including but not limited to simpler ISP operations, yielding the lower-resolution, lower-quality first image; when this step is performed, the camera captures images at the second (higher) frequency and applies the second (more complex) processing, including but not limited to more accurate but higher-power ISP operations, yielding the higher-resolution, higher-quality second image.
S204, the terminal controls the first processor to process the second image to obtain a second analysis result. Similarly, after the second image is acquired, the sensor hub may invoke the first processor to perform image analysis on the second image to obtain the second analysis result. The processing applied by the first processor to the second image may be the same as or different from that applied to the first image.
In one example, the first processor performs image parsing on the first image and the second image using the same neural network algorithm model. Because the resolution of the first image is low while the resolution of the second image is high, the second analysis result obtained by processing the second image is more accurate than the first analysis result obtained by processing the first image, which improves the reliability of the data subsequently reported to the CPU on the basis of the second analysis result.
In another example, the neural network model used in the first processor to process the first image and the one used to process the second image may be the same type of algorithm model, but with different parameters or different analysis accuracy. Because the resolution of the second image is higher than that of the first image, the second analysis result obtained by parsing the second image is more accurate than the first analysis result obtained by processing the first image, improving the reliability of the data reported to the CPU based on the second analysis result.
In yet another example, the neural network model used in the first processor to process the first image and the one used to process the second image may be different types of algorithm models. Since the resolution of the second image is higher than that of the first image, the second analysis result obtained by parsing the second image may contain more detailed content. For example, the model used to process the first image is an image classification model, while the model used to process the second image is a text detection model or a translation model. In that case, the first processor first determines, using the image classification model, whether the first image includes text; after determining that it does, the first processor recognizes the text in the second image using the text detection model, or translates the detected text using the translation model.
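A sketch of such stage-dependent model selection is below. The model objects and their detect, translate, and predict methods are assumed interfaces for illustration, not APIs defined by this application.

```python
def second_stage_parse(frame, coarse_label: str, models: dict) -> dict:
    """Pick the second-stage model from what stage 1 found (one possible mapping)."""
    if coarse_label == "text":
        words = models["text_detector"].detect(frame)  # assumed interface
        return {"text": words,
                "translation": models["translator"].translate(words)}
    if coarse_label in ("qr_code", "barcode"):
        # Reuse a classifier with stricter parameters to verify stage 1; actual
        # decoding of the code content is left to the CPU after reporting.
        return {"verified": models["classifier"].predict(frame) == coarse_label}
    return {"verified": False}
```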
S205, if the second analysis result of the second image meets a second preset condition, the terminal controls the CPU to start, according to the second image, a first application program function corresponding to the second image. In some embodiments, when the sensor hub determines from the second parsing result that the second image includes an image of the preset identification type, the second parsing result is considered to satisfy the second preset condition. The image of the preset identification type includes, but is not limited to, a two-dimensional code, a barcode, food, text, a payment device, and the like. The first preset condition and the second preset condition may be the same or different. That is, after the sensor hub further determines from the second analysis result that the second image meets the second preset condition, it wakes the CPU to start the first application program function based on the second image. The first application program function may be the complete set of functions of the first application program, or a single function within it.
For example, if the sensor hub determines that the second image includes a two-dimensional code or a barcode, it wakes the CPU to recognize the content of the code based on the second image or the second parsing result, and to start the corresponding application program, or the corresponding function in that application program, based on that content: for example, opening the "shared bicycle" application, or opening the payment function in the "WeChat" or "Alipay" application.
For another example, if the sensor hub determines that the second image includes food, it wakes the CPU to open a food-related application, such as an application for calculating calories, based on the second image or the second parsing result.
For another example, if the sensor hub determines that the second image includes text, it wakes the CPU to open an application associated with text, such as an editing application or a translation application. Alternatively, the first processor recognizes the text from the second image and reports it to the sensor hub; the sensor hub then sends the recognition result to the CPU, and the CPU invokes the text-related application to further process the recognized text.
For another example, if the sensor hub determines that the second image includes a payment device (e.g., a code scanner, a mobile payment terminal, etc.), it wakes the CPU to open the payment function of a payment-related application, such as the "Alipay" or "WeChat" application.
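Taken together, the examples above amount to a mapping from recognized category to application function. One way to express it, with purely illustrative application names and a stand-in launch call, neither of which comes from this application:

```python
def launch(app: str, action: str, data) -> None:
    # Stand-in for the CPU-side call that opens an application function directly.
    print(f"starting {app}: {action}")

DISPATCH = {
    "qr_code":        lambda img: launch("bike_share_app", "scan_unlock", img),
    "barcode":        lambda img: launch("payments_app", "pay", img),
    "food":           lambda img: launch("calorie_app", "estimate", img),
    "text":           lambda img: launch("translator_app", "translate", img),
    "payment_device": lambda img: launch("payments_app", "show_pay_code", img),
}

def on_second_condition_met(category: str, second_image) -> None:
    DISPATCH[category](second_image)  # the CPU opens the app function directly
```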
In conclusion, because the first camera and the first processor are both low-power devices, using the first camera to capture the low-definition first image in real time, and the first processor to perform a preliminary identification of the user's surroundings from that low-definition image, helps reduce the overall power consumption of the terminal. Only after the low-definition first image meets a certain condition is the second camera started to acquire the high-definition second image for a second identification. Because the embodiment of the application performs two rounds of identification, with the second round on a high-definition image, the accuracy of image identification is improved; consequently, the application program or application program function opened according to the high-definition second image is more accurate and better matches the user's expectation.
The following describes the technical solution provided in the embodiment of the present application by taking the case where the terminal 100 is a mobile phone as an example, in combination with specific application scenarios.
Scenario 1: the mobile phone captures an image containing a two-dimensional code, a barcode, or the like. In general, the specifications (e.g., pixel count) of a phone's front camera are lower than those of its rear camera. The front camera can therefore serve as the first camera for acquiring the first image with the first (low) resolution, and the rear camera as the second camera for acquiring the second image with the second (high) resolution. Of course, based on the user's usage habits, a lower-specification camera among the rear cameras may serve as the first camera and a higher-specification one as the second camera. The first camera and the second camera may also be selected according to the specific scenario, the phone's specific configuration, and so on, which is not limited in the embodiment of the present application.
Please refer to fig. 4, which is a schematic diagram of a method for opening an application according to an embodiment of the present application; the method includes the following steps. S401, the mobile phone judges whether a trigger condition is met. The phone can determine, from its configured sensors, whether the user has performed a preset operation. If it determines that the user has performed the preset operation, the phone starts the first camera to acquire the first image, monitoring the user's current environment in real time. Otherwise, it continues to monitor for the preset operation.
For example, if the first camera of the mobile phone is the front camera and the second camera is the rear camera, the user may flip the phone so that the front camera points at the object to be captured, such as the two-dimensional code on a shared bicycle. In this scenario, the preset operation is the flipping of the phone. That is, when the phone determines from a configured sensor (for example, a gravity sensor) that it has been flipped, it automatically starts the front camera, acquires the first image, and sends it to the first processor for image analysis.
For another example, the user may control the phone to open the first camera and acquire the first image by shaking the phone, performing a specific air gesture, or issuing a voice command. The embodiment of the present application is not limited in this respect.
It should be noted that this step is optional. In some examples, the phone may keep the first camera always on, or open it automatically once the "direct service" function is enabled, so as to monitor the user's environment in real time.
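As an illustration of the flip trigger in S401, the sketch below assumes a gravity-sensor reading whose z component changes sign when the phone is turned over; the sensor interface and the threshold value are both hypothetical.

```python
def is_flip(prev_gz: float, curr_gz: float, threshold: float = 7.0) -> bool:
    # Roughly +9.8 m/s^2 when the phone lies face up; a sign change past the
    # threshold between consecutive readings suggests the phone was turned over.
    return prev_gz > threshold and curr_gz < -threshold

def trigger_loop(gravity_sensor, start_first_camera) -> None:
    prev = gravity_sensor.read_z()  # assumed sensor interface
    while True:
        curr = gravity_sensor.read_z()
        if is_flip(prev, curr):
            start_first_camera()    # proceed to S402: low-power capture
            return
        prev = curr
```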
S402, the mobile phone calls the first camera to acquire the first image. S403, the mobile phone parses the first image using the first processor. S404, the mobile phone determines whether to call the second camera to acquire the second image.
For example, the mobile phone determines, from the first processor's parsing result for the first image, whether the first image contains a specific image, such as a two-dimensional code or a barcode. If it determines that the first image contains the specific image, it calls the second camera to acquire a high-resolution second image. Otherwise, execution continues from step S401.
S405, the mobile phone calls the second camera to acquire the second image. S406, the mobile phone uses the first processor to verify or further parse the second image. S407, the mobile phone judges whether to report to the CPU.
In some examples, the mobile phone may use the first processor to parse the second image, determine whether the second image contains the specific image, and thereby check the result of parsing the first image. If the check succeeds, the phone reports the second image, or its parsing result, to the CPU so that the CPU can perform subsequent processing accordingly. If the check fails, execution continues from step S401.
S408, the mobile phone uses the CPU to call the corresponding application program. In some examples, the CPU may further parse the content of the two-dimensional code, barcode, or the like contained in the second image, and determine the application program, or the specific function within it, that corresponds to that code.
For example, if the two-dimensional code in the second image belongs to the "Alipay" application, the phone automatically starts "Alipay". For another example, if it is the two-dimensional code of an account in the "WeChat" application, the phone automatically starts WeChat's add-friend function and adds the account corresponding to the code as a friend. For another example, if it is the code of a WeChat mini program, the phone automatically starts the corresponding mini program. For another example, if it is the two-dimensional code on a shared bicycle, the phone automatically starts the corresponding shared-bicycle application, opens the scan-to-unlock function, and parses the code in the second image to determine whether to unlock the bicycle. It may be noted that after the phone automatically starts the corresponding application program, or the corresponding function within it, the application can process the second image directly. That is, once the application is started, the phone does not need to use the application's scan function to capture and parse a new image.
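Putting S401 through S408 together, the control flow reads roughly as below. All objects (cam1, cam2, npu, cpu, wait_for_trigger) are assumed stand-ins for the hardware blocks described above, and PRESET_TYPES is the illustrative set from the earlier sketch; none of this is code from the application itself.

```python
def direct_service_pipeline(cam1, cam2, npu, cpu, wait_for_trigger) -> None:
    """Illustrative control flow for S401 to S408."""
    while True:
        if not wait_for_trigger():               # S401: e.g. the flip check
            continue
        low = cam1.capture()                     # S402: low-resolution frame
        stage1 = npu.parse(low)                  # S403: coarse classification
        if stage1["label"] not in PRESET_TYPES:  # S404: first condition unmet
            continue
        high = cam2.capture()                    # S405: high-resolution frame
        stage2 = npu.parse(high)                 # S406: verify / refine
        if stage2["label"] == stage1["label"]:   # S407: check passed, report
            cpu.open_app_for(high, stage2)       # S408: start the app function
            return
```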
Therefore, the method for starting an application program provided in the embodiment of the present application can automatically open the corresponding function according to the image acquired by the camera. The user no longer has to look up the application icon, find the corresponding function within the application, and capture an image using the "Scan" feature. The method makes the application-starting process more intelligent and improves the efficiency of interaction between the user and the phone.
Scenario 2: the mobile phone captures an image containing food. In this scenario, the steps executed by the phone follow the description of the corresponding steps in Scenario 1; Scenario 2 differs from Scenario 1 in how the first processor parses the first and second images, and in how the CPU processes the second image.
If the phone determines, from the first processor's parsing result for the first image, that the first image contains food, for example fruit, cakes and pastries, or vegetables, it calls the second camera to acquire the second image. The first processor continues by parsing the second image to verify whether it contains food. If, according to the first processor's parsing result for the second image, the second image does contain food, the second image may be reported to the CPU. The CPU starts a food-related application, such as one that calculates the calories of food; that application can use the second image directly to calculate the calories and other values of the food it shows.
Alternatively, the phone may open a camera application in the course of acquiring the first and second images. If that camera application is configured with a computation model for analyzing food, the phone can directly use that model to process the second image and present the corresponding result.
Scenario 3: the mobile phone captures an image containing text. In this scenario, the steps executed by the phone follow the description of the corresponding steps in Scenario 1; Scenario 3 differs from Scenario 1 in how the first processor parses the first and second images, and in how the CPU processes the second image.
If the phone determines, from the first processor's parsing result for the first image, that the first image includes text, for example Chinese, English, or formulas, it calls the second camera to acquire the second image. The first processor continues by parsing the second image to check whether it contains text. If, according to the first processor's parsing result for the second image, the second image does contain text, the second image may be reported to the CPU. The CPU opens an application related to word processing, such as a document editor or translation software; that application can use the second image directly to recognize or translate the text in it.
Alternatively, if the first processor is also configured with a text recognition model or a text translation model, then upon determining that the second image includes text, it may directly use the text recognition model to recognize the text in the second image so that the recognized text can be edited or copied, or use the text translation model to translate the recognized text into another language, into speech, or the like.
Scenario 4: the mobile phone captures an image containing a payment device. In this scenario, the steps executed by the phone follow the description of the corresponding steps in Scenario 1; Scenario 4 differs from Scenario 1 in how the first processor parses the first and second images, and in how the CPU processes the second image.
If the phone determines, from the first processor's parsing result for the first image, that the first image includes a payment device, for example a code scanner, a mobile payment terminal, or a card reader, it calls the second camera to acquire the second image. The first processor continues by parsing the second image to check whether it contains a payment device. If, according to the first processor's parsing result for the second image, the second image does contain a payment device, the second image may be reported to the CPU. The CPU opens an application associated with the payment function, for example the payment code of the "WeChat" or "Alipay" application. It should be noted that before using the "direct service" function provided in the embodiment of the present application, the user may set the default payment method to be opened.
An embodiment of the present application further provides an apparatus for starting an application program. As shown in fig. 5, the apparatus includes an obtaining unit 501, an identifying unit 502, and a starting unit 503. The obtaining unit 501 is configured to obtain a first image acquired by a first camera, the first image having a first resolution, and, when a first recognition result meets a first preset condition, to obtain a second image acquired by the first camera or a second camera, the second image having a second resolution greater than the first resolution. The identifying unit 502 is configured to identify the first image to obtain the first recognition result, and to identify the second image to obtain a second recognition result. The starting unit 503 is configured to start at least part of the functions of the first application program corresponding to the second image when the second recognition result meets a second preset condition. The first preset condition and the second preset condition may be the same or different. For all relevant details of the steps in the above method embodiment, refer to the functional description of the corresponding functional module; they are not repeated here. The above units may be implemented in software, hardware, or a combination of the two. The hardware includes, but is not limited to, various types of circuits such as digital circuits, analog circuits, arithmetic circuits, or discrete circuits, and the related circuits may take the form of chips, processors, or programmable logic circuits. The software includes computer program code and may be executed by various types of processors.
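A rough rendering of this unit split in Python, assuming the camera and model interfaces from the earlier sketches; this is one possible software shape for fig. 5, not the apparatus itself.

```python
class ObtainingUnit:
    def __init__(self, first_camera, second_camera):
        self.first_camera = first_camera
        self.second_camera = second_camera
    def first_image(self):
        return self.first_camera.capture()   # low-resolution capture
    def second_image(self):
        return self.second_camera.capture()  # high-resolution capture

class IdentifyingUnit:
    def __init__(self, model):
        self.model = model
    def identify(self, image) -> str:
        return self.model.predict(image)     # first or second recognition result

class StartingUnit:
    def start(self, app_function, second_image) -> None:
        app_function(second_image)           # open at least part of the app
```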
An embodiment of the present application further provides an apparatus for starting an application program, where the apparatus is included in a terminal and has the function of implementing the terminal behavior in any one of the above embodiments. The function may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes at least one module or unit corresponding to the above function. As shown in fig. 6, the apparatus comprises a first processing module or unit 601, a second processing module or unit 602, a sensor control module or unit 603, and at least one camera module or unit 604. Optionally, the apparatus further comprises a sensor module or unit 605 and the like. The first processing module or unit 601 corresponds to the first processor, or related software, mentioned in the previous embodiments. The second processing module or unit 602 corresponds to the CPU or related software. The sensor control module or unit 603 corresponds to the sensor hub or related software. The at least one camera module or unit 604 corresponds to the one or more cameras mentioned in the previous embodiments. The sensor module or unit 605 corresponds to other types of sensors, such as the aforementioned gravity sensor. For the specific implementation, refer to the previous embodiments; this embodiment places no limitation here.
Embodiments of the present application further provide a computer-readable storage medium, which includes computer instructions, and when the computer instructions are executed on a terminal, the terminal is caused to execute any one of the methods in the foregoing embodiments.
The embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to execute any one of the methods in the above embodiments.
It should be understood that, to realize the above functions, the terminal and the like include corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality differently for each particular application, but such implementation decisions should not be regarded as going beyond the scope of the embodiments.
In the embodiment of the present application, the terminal and the like may be divided into functional modules according to the above method examples. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of modules in the embodiment of the present application is schematic and is merely a logical function division; other division manners are possible in actual implementation.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network-side device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

  1. A method for opening an application, the method comprising:
    acquiring a first image acquired by a first camera, wherein the first image has a first resolution;
    identifying the first image to obtain a first identification result;
    when the first recognition result meets a first preset condition, acquiring a second image acquired by the first camera or the second camera, wherein the second image has a second resolution, and the second resolution is greater than the first resolution;
    recognizing the second image to obtain a second recognition result;
    and when the second identification result meets a second preset condition, starting at least part of functions of the first application program corresponding to the second image, wherein the first preset condition and the second preset condition are the same or different.
  2. The method of claim 1, wherein prior to said acquiring a second image captured by a second camera, the method further comprises: and starting the second camera.
  3. The method of claim 2, wherein said acquiring a first image captured by a first camera comprises: and acquiring the first image acquired by the first camera in real time or periodically.
  4. The method according to any one of claims 1 to 3, wherein the first preset condition and the second preset condition comprise a preset identification type, and the preset identification type comprises any one or more of a two-dimensional code, a barcode, food, text, and a payment device.
  5. The method of any of claims 1-4, wherein prior to said acquiring the first image acquired by the first camera, the method further comprises:
    determining whether a preset action occurs on a terminal comprising the first camera; and
    when it is determined that the preset action occurs on the terminal, starting the first camera.
  6. The method according to claim 5, wherein the first camera and the second camera are cameras on different sides of the terminal, and the preset action is a flipping action.
  7. The method of any of claims 1-6, wherein the identifying the first image or the second image comprises: a first processor identifies the first image or the second image by running a neural network model;
    the opening at least part of the functions of the first application program corresponding to the second image comprises: the second processor starts at least part of functions of the first application program corresponding to the second image.
  8. An apparatus for opening an application, comprising:
    an image interface for acquiring a first image acquired by a first camera and for acquiring a second image acquired by the first camera or a second camera, the first image having a first resolution and the second image having a second resolution, the second resolution being greater than the first resolution;
    the processor is used for identifying the first image to obtain a first identification result; when the first identification result meets a first preset condition, further controlling the image interface to acquire the second image, and identifying the second image to obtain a second identification result; and when the second identification result meets a second preset condition, starting at least part of functions of the first application program corresponding to the second image, wherein the first preset condition and the second preset condition are the same or different.
  9. The apparatus of claim 8, wherein before the processor controls the image interface to obtain the second image, the processor is further configured to activate the second camera; the image interface is used for acquiring a second image acquired by the second camera after the second camera is started.
  10. The device according to claim 9, wherein the image interface is configured to acquire the first image captured by the first camera in real time or periodically.
  11. The apparatus of any of claims 8-10, wherein the processor comprises a first processor and a second processor;
    the first processor is specifically configured to identify the first image to obtain the first identification result, and identify the second image to obtain the second identification result;
    the second processor is specifically configured to further control the image interface to obtain the second image when the first recognition result meets a first preset condition; and when the second recognition result meets a second preset condition, starting at least part of functions of the first application program corresponding to the second image.
  12. The apparatus of claim 11, wherein the second processor is further configured to activate the second camera before the processor controls the image interface to obtain the second image.
CN202080006930.1A 2020-04-27 2020-04-27 Method, device and terminal for starting application program Pending CN114127686A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/087316 WO2021217367A1 (en) 2020-04-27 2020-04-27 Method and apparatus for starting application program, and terminal

Publications (1)

Publication Number Publication Date
CN114127686A true CN114127686A (en) 2022-03-01

Family

ID=78332285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080006930.1A Pending CN114127686A (en) 2020-04-27 2020-04-27 Method, device and terminal for starting application program

Country Status (2)

Country Link
CN (1) CN114127686A (en)
WO (1) WO2021217367A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115080276B (en) * 2022-07-20 2022-12-09 Beijing Jutongda Technology Co., Ltd. Application program function dynamic switching method and device, storage medium and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9137488B2 (en) * 2012-10-26 2015-09-15 Google Inc. Video chat encoding pipeline
CN109086095A (en) * 2018-06-20 2018-12-25 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Quick opening method and device for application program, terminal and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117256A (en) * 2015-08-31 2015-12-02 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN110516488A (en) * 2018-05-22 2019-11-29 Vivo Mobile Communication Co., Ltd. Barcode scanning method and mobile terminal
CN110517034A (en) * 2018-05-22 2019-11-29 Vivo Mobile Communication Co., Ltd. Object identification method and mobile terminal
CN108989668A (en) * 2018-06-29 2018-12-11 Vivo Mobile Communication Co., Ltd. Camera operating method and mobile terminal

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023236801A1 (en) * 2022-06-07 2023-12-14 Huawei Technologies Co., Ltd. Graphic code recognition method and electronic device
CN116301362A (en) * 2023-02-27 2023-06-23 Honor Device Co., Ltd. Image processing method, electronic device and storage medium
CN116301362B (en) * 2023-02-27 2024-04-05 Honor Device Co., Ltd. Image processing method, electronic device and storage medium
CN116405749A (en) * 2023-03-30 2023-07-07 Zhejiang Dessmann Technology Intelligent Co., Ltd. Door lock monitoring device, door lock system and implementation method for low-power-consumption continuous video recording
CN116405749B (en) * 2023-03-30 2023-11-28 Zhejiang Dessmann Technology Intelligent Co., Ltd. Door lock monitoring device, door lock system and implementation method for low-power-consumption continuous video recording

Also Published As

Publication number Publication date
WO2021217367A1 (en) 2021-11-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination