CN114554069A - Terminal, task running method and device thereof, and storage medium - Google Patents

Terminal, task running method and device thereof, and storage medium

Info

Publication number
CN114554069A
CN114554069A (application CN202011334793.2A)
Authority
CN
China
Prior art keywords
camera
terminal
image
shielding
shielded
Prior art date
Legal status
Pending
Application number
CN202011334793.2A
Other languages
Chinese (zh)
Inventor
Himanshu Singh
Current Assignee
Oneplus Technology Shenzhen Co Ltd
Original Assignee
Oneplus Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Oneplus Technology Shenzhen Co Ltd
Priority to CN202011334793.2A
Publication of CN114554069A
Legal status: Pending

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N 23/60: Control of cameras or camera modules
              • H04N 23/61: Control of cameras or camera modules based on recognised objects
              • H04N 23/62: Control of parameters via user interfaces
              • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
        • H04M: TELEPHONIC COMMUNICATION
          • H04M 1/00: Substation equipment, e.g. for use by subscribers
            • H04M 1/02: Constructional features of telephone sets
              • H04M 1/0202: Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
                • H04M 1/026: Details of the structure or mounting of specific components
                  • H04M 1/0264: Details of the structure or mounting of specific components for a camera module assembly
          • H04M 2250/00: Details of telephonic subscriber devices
            • H04M 2250/52: Details of telephonic subscriber devices including functional features of a camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a terminal, a task running method and device thereof, and a storage medium, which can improve operation convenience when the terminal is held with one hand. The terminal comprises at least a first camera and a second camera, and the method comprises the following steps: detecting whether the first camera of the terminal is occluded while the first camera is in a working state; and running a specified task when it is detected that the first camera's line of sight is at least partially occluded, wherein the specified task includes controlling the second camera to perform a shooting operation. Because the specified task is run when the first camera is occluded, it becomes easier to trigger a task while holding the terminal with one hand; and because the task is triggered through the terminal's own first camera, no additional sensor is required and hardware cost does not increase.

Description

Terminal, task running method and device thereof, and storage medium
Technical Field
The application relates to the technical field of terminals, in particular to a terminal, a task running method and device thereof and a storage medium.
Background
Most tasks on a terminal such as a smartphone are operated by touching the screen. As technology develops, terminal screens tend to grow larger; while this gives the user a better display, a user holding the terminal with one hand generally interacts with it using the thumb. When the screen is too big, some regions are hard for the thumb to reach, which reduces the convenience of operation.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the above, the application provides a terminal, a task running method and device thereof, and a storage medium, which can improve operation convenience when the terminal is held with one hand.
In a first aspect, a task running method for a terminal is provided, where the terminal includes at least a first camera, and the method includes the following steps:
under the condition that a first camera of the terminal is in a working state, detecting whether the first camera is shielded or not;
and when the first camera is detected to be at least partially shielded, running a specified task.
In one embodiment, the terminal further includes a second camera, and when it is detected that the first camera is at least partially occluded, the step of running the specified task includes: and controlling the second camera to execute shooting operation under the condition that the first camera is detected to be at least partially shielded.
In one embodiment, while a first camera of the terminal is in the working state, an obstruction image captured by the first camera is acquired; when the obstruction in the obstruction image is detected to be a predetermined object, and/or when the distance between the obstruction and the first camera is smaller than a preset distance threshold, it is determined that the first camera is at least partially occluded.
In one embodiment, the step of controlling the second camera to perform a shooting operation when it is detected that the first camera is at least partially occluded comprises: when detecting that the first camera is at least partially shielded, and the shielding mode is a sliding contact type shielding mode or a non-contact type shielding mode, controlling a second camera to execute shooting operation, wherein the second camera is a front camera, and the shooting operation comprises self-shooting operation.
In one embodiment, the occlusion mode is determined to be a sliding contact occlusion mode when the distance between the obstruction and the first camera is detected to be zero, the duration for which the first camera is continuously occluded reaches a preset time, and the number of occluded pixels in the obstruction images captured by the first camera is detected to first increase and then decrease, and/or to only decrease.
In one embodiment, the method further comprises:
constructing a machine learning model, and acquiring data sets of an occluded image and an unoccluded image;
training the machine learning model with the data set, wherein the data set of occluded images comprises occlusion images of a contact occlusion mode and occlusion images of a non-contact occlusion mode, and the obstruction in the occlusion images is a predetermined object;
and acquiring a captured image from the first camera, inputting the captured image into the trained machine learning model, and determining that the first camera is at least partially occluded when the captured image is recognized as an occluded image.
In one embodiment, before the step of detecting whether the first camera is occluded, the method further includes: and controlling the first camera to enter a working state under the condition that the second camera is detected to be opened and/or according to input information of a user.
In one embodiment, the method further comprises performing the shooting operation according to the obstruction image captured by the first camera: controlling the second camera to perform a corresponding shooting operation according to the size of the occluded area in the obstruction image, the duration of the occlusion, the moving direction of the obstruction relative to the first camera, the distance of the obstruction from the first camera, and/or the number of times the first camera is occluded within a predetermined time.
In a second aspect, a task execution device of a terminal is provided, where the terminal includes at least a first camera, and the device includes:
the detection module is used for detecting whether a first camera of the terminal is shielded or not under the condition that the first camera is in a working state;
and the specified task running module is used for running the specified task when the first camera is detected to be at least partially shielded.
In one embodiment, the terminal further includes a second camera, and the designated task running module includes a shooting module, configured to control the second camera to perform a shooting operation when it is detected that the first camera is at least partially occluded.
In a third aspect, a terminal is provided, which includes a memory, a processor, and a first camera, where the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to perform the steps of the method described in any of the above embodiments.
In a fourth aspect, one or more non-transitory readable storage media are presented storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of a method as described in any of the embodiments above.
With the terminal, the task running method and device thereof, and the storage medium described above, a specified task is run upon detecting that the first camera is occluded, which makes it easier to run a task while holding the terminal with one hand; and because the task is triggered through the terminal's own first camera, no additional sensor is required and hardware cost does not increase.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic hardware structure diagram of a terminal implementing various embodiments of the present application;
fig. 2 is a flowchart illustrating a task execution method of a terminal according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a task execution method of a terminal according to another embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an effect of self-timer triggering of a rear camera in one embodiment of the present application;
FIG. 5 is a schematic diagram of a finger occluding the rear camera at a distance (without contact) in an embodiment of the present application;
fig. 6 is a schematic flow chart of a self-timer task execution method of a terminal in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a task execution device of a terminal according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a task execution device of a terminal according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal in another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. The following embodiments and their technical features may be combined with each other without conflict.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present application, the terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal configuration shown in fig. 1 is not intended to be limiting, and that the terminal may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes the various components of the terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the terminal can help the user receive and send e-mail, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals. The a/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sound into audio data. In a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to the ambient light, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the terminal 100 is moved to the ear. As a motion sensor, the accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary, and can be used for applications that recognize the phone's posture (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection); other sensors that can be configured on the phone, such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal 100 or may be used to transmit data between the terminal 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system.
Although not shown in fig. 1, the terminal 100 further includes at least one camera, specifically a self-timer camera (e.g., a front camera) and a rear camera, which are not described here again.
The terminal described in the present application may include devices such as a smart phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like.
The following describes a task running method of the terminal according to the present application, taking a smart phone as an example.
Referring to fig. 2, a flowchart of a task execution method of a terminal according to an embodiment of the present application is shown, where the method includes steps 202 to 204:
step 202, detecting whether a first camera of the terminal is shielded or not under the condition that the first camera is in a working state;
step 204, running a specified task when the first camera is detected to be at least partially occluded.
According to the task running method of the terminal in this embodiment, a specified task is run when the first camera is detected to be occluded, which makes it easier to run a task while holding the terminal with one hand; and because the task is triggered through the terminal's own first camera, no additional sensor is required and hardware cost does not increase.
Please refer to fig. 3, a flowchart of a task running method of a terminal in another embodiment of the present application, in which the terminal further includes a second camera. Here, the step of running the specified task when the first camera is detected to be at least partially occluded includes step 2042: controlling the second camera to perform a shooting operation when it is detected that the first camera is at least partially occluded.
This embodiment makes it more convenient to control the shooting operation of the second camera while holding the terminal with one hand: the self-timer is operated through the first camera, without adding an extra sensor. For example, fig. 4 shows the effect in a specific example in which the second camera is a front camera and the first camera is a rear camera; when the first camera is occluded, the second camera is triggered to take a selfie. This improves the convenience of performing a self-timer task while the terminal is held with one hand, the selfie being triggered via the rear camera without any additional sensor.
The first camera is any camera that is easily occluded by part of the hand when the terminal is held with one hand. Specifically, the first camera includes a rear camera, which is generally easy to occlude with the index finger or middle finger of the holding hand. For a terminal with side-mounted cameras, the first camera may also include those side cameras, which are correspondingly occluded by the corresponding fingers when the terminal is held with one hand. The front camera may also serve as the first camera if it is easily occluded by the holding hand. There may be one first camera or two or more; when any one or more first cameras are detected to be at least partially occluded, the specified task is run.
The first camera being in a working state means that its image capture function is turned on.
When the first camera is occluded, it cannot capture a complete image of the scene behind the obstruction. The occlusion mode may be contact occlusion or non-contact occlusion. Contact occlusion means the obstruction touches the lens of the first camera, and specifically includes tapping or touching the first camera. Non-contact occlusion means the obstruction blocks the first camera's line of sight at a distance, for example a finger hovering over the first camera. Fig. 5 is a schematic view of a finger occluding the rear camera at a distance in an embodiment of the present application.
A specified task is any task that can be operated through human-computer interaction; it may be an application or a preset function in the terminal, for example shooting (such as taking a selfie), page turning, game operations, audio/video playback, or turning functions on/off, which is not limited here.
In some possible implementations of step 202, before the step of detecting whether the first camera is occluded, the method may further include: controlling the first camera of the terminal to enter the working state when the specified task is detected to be opened (for example, when the second camera is opened) and/or according to input information from the user. Bringing the first camera into the working state only under such a trigger condition reduces the power the first camera consumes while in the working state.
The user input information may be at least one of input duration, touch pressure, number of inputs, gesture input, voice input, facial input information, and the like. In a specific implementation, the user provides this input information when a specified task needs to be run based on the occlusion state of the first camera.
Taking a smartphone as an example, in a specific implementation, when the specified task is opened it is displayed on the terminal's screen, waiting to respond to occlusion of the first camera. Specifically, the user opens the second camera through the "camera application" to start a shooting task, for example opening the front camera to start a self-timer task; when the first camera is occluded, the second camera is triggered to take the selfie. As another example, for an audio/video playback task, the user opens the audio/video, and when the first camera is occluded, operations such as play/fast-forward/rewind are executed.
In other embodiments, while the first camera of the terminal is in the working state, the first camera may be controlled to enter a sleep or off state (i.e., its image capture function is turned off) when the specified task is closed, and/or when turn-off information input by the user is received, and/or when no occlusion of the first camera has been detected within a preset period after the first camera entered the working state. This further reduces the power consumed while the first camera is in the working state. In still other embodiments, if it is detected that the user inputs photographing information in the photographing application, the first camera is controlled to capture an image in response to that information, without the aforementioned limitation.
In step 202, in some possible embodiments of determining whether a first camera is occluded, when there are multiple first cameras, the images they capture synchronously are acquired; when the captured image of at least one first camera changes while the captured images of the other first cameras do not, it is determined that an obstruction is present in front of that first camera, as sketched below.
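As a minimal illustration of this cross-camera check (not code from the patent), the sketch below assumes each first camera can be sampled as a grayscale NumPy array; the frame-difference threshold is a hypothetical value that would need per-device tuning.

```python
import numpy as np

# Illustrative threshold: mean absolute per-pixel change (on a 0-255 scale)
# above which a camera's view is considered to have changed. Not from the patent.
CHANGE_THRESHOLD = 12.0

def frame_changed(prev: np.ndarray, curr: np.ndarray) -> bool:
    """True if the camera's view changed noticeably between two frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(diff.mean()) > CHANGE_THRESHOLD

def occluded_camera_index(prev_frames, curr_frames):
    """Cross-check several first cameras sampled at the same instant.

    If exactly one camera's image changed while the others stayed the same,
    assume an obstruction appeared in front of that camera and return its
    index; otherwise return None.
    """
    changed = [frame_changed(p, c) for p, c in zip(prev_frames, curr_frames)]
    return changed.index(True) if changed.count(True) == 1 else None
```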
In other possible embodiments, computer vision and image recognition techniques may be employed to determine whether the first camera is occluded. The method specifically comprises the following steps:
constructing a machine learning model, which may comprise a neural network architecture with several convolutional layers and fully connected layers; the architecture may be a pretrained VGGNet or AlexNet network, both of which are deep convolutional neural networks; acquiring data sets of occluded and unoccluded images and assigning "occluded" and "unoccluded" labels respectively; and training the machine learning model with these data sets, where the occluded-image data set includes occlusion images of various occlusion modes, such as the contact and non-contact occlusion described above, and may further consist of occlusion images containing the predetermined object. A captured image from the first camera is subsequently acquired and input into the trained machine learning model to recognize whether it is an occluded image; when it is recognized as occluded, the first camera is determined to be at least partially occluded. Preprocessing such as image resizing may be performed before input to the machine learning model.
Specifically, the input image size of the machine learning model may be set to 224×224; the input passes through a Conv2D convolutional layer with 3×3 kernels, then an MBConv convolutional layer with 3×3 kernels, and finally a fully connected layer whose output is 1 or 0: an output of 1 means the input image is occluded, and 0 means it is unoccluded.
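Since the paragraph above only names the building blocks, the following is a minimal PyTorch sketch of such a binary occlusion classifier, under stated assumptions: the MBConv block is simplified (expansion, 3×3 depthwise convolution, projection, without squeeze-and-excitation), the layer widths are illustrative, and in practice one would likely fine-tune a pretrained backbone such as VGGNet as the text suggests.

```python
import torch
import torch.nn as nn

class MBConv(nn.Module):
    """Simplified mobile inverted bottleneck block: expand -> depthwise 3x3 -> project."""
    def __init__(self, c_in, c_out, expand=4, stride=2):
        super().__init__()
        c_mid = c_in * expand
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_mid, 1, bias=False), nn.BatchNorm2d(c_mid), nn.SiLU(),
            nn.Conv2d(c_mid, c_mid, 3, stride=stride, padding=1, groups=c_mid, bias=False),
            nn.BatchNorm2d(c_mid), nn.SiLU(),
            nn.Conv2d(c_mid, c_out, 1, bias=False), nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        return self.block(x)

class OcclusionClassifier(nn.Module):
    """224x224 RGB frame in, probability that the frame is occluded out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),  # Conv2D, 3x3 kernels
            nn.BatchNorm2d(32), nn.SiLU(),
            MBConv(32, 64),                                        # MBConv, 3x3 depthwise kernels
            MBConv(64, 128),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)                              # fully connected layer

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))                         # > 0.5 -> "occluded" (label 1)

# Inference on one preprocessed frame (resized to 224x224, scaled to [0, 1]):
model = OcclusionClassifier().eval()
frame = torch.rand(1, 3, 224, 224)   # stand-in for a real captured image
occluded = model(frame).item() > 0.5
```

During training, the binary labels described above (1 for occluded, 0 for unoccluded) would be fitted with a binary cross-entropy loss such as `torch.nn.BCELoss`.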
When an obstruction is present in front of the first camera, it can be further judged whether the occlusion is of the kind that should trigger the specified task, so as to reduce the probability of false triggering. In some possible embodiments, after the first camera of the terminal is controlled to enter the working state in step 204, the obstruction image captured by the first camera is acquired; when the obstruction in the image is detected to be a predetermined object, and/or the distance between the obstruction and the first camera is detected to be smaller than a preset distance threshold, the first camera is determined to be at least partially occluded. This indicates an occlusion intended to trigger the specified task rather than the first camera being occluded by an arbitrary object, and only then is the specified task run, which reduces the probability of falsely triggering it.
The predetermined object may be part of the user's hand, such as a finger, or another specific object defined or preset by the user or manufacturer for triggering the specified task by occluding the first camera. The preset distance threshold may be preset by the user or manufacturer and may be any value in the range of 1 mm to 100 mm.
In a specific implementation, a predetermined object for occlusion can be preset, and its features acquired and stored in advance; if the captured obstruction image contains the features of the predetermined object, the obstruction is determined to be that object. For example, the user's index finger may be preset as the predetermined object, and a feature of it, such as a fingerprint feature, acquired; if a subsequent image captured by the first camera contains that fingerprint feature, the obstruction is determined to be the user's index finger.
Specifically, the distance between the obstruction and the first camera can be estimated from the color of the obstruction image or from the lateral spacing between specific features, such as texture features: generally, the darker the image and the wider the lateral spacing between features, the closer the obstruction is to the first camera. For example, the closer a finger is to the first camera, the greater the spacing between adjacent fingerprint ridges in the image.
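The patent only gestures at this heuristic, so the sketch below is one illustrative reading of it, assuming ridge-like texture (such as a fingerprint) whose peak spacing in pixels grows as the obstruction approaches the lens; the calibration constant is hypothetical and would need per-device tuning.

```python
import numpy as np

# Hypothetical calibration: ridge spacing (in pixels) observed when the finger
# rests on the lens; spacing at or above this is treated as touching.
TOUCH_SPACING_PX = 40.0

def ridge_spacing_px(gray_row: np.ndarray) -> float:
    """Mean distance in pixels between intensity peaks along one image row,
    a crude stand-in for the lateral spacing between texture features."""
    row = np.convolve(gray_row.astype(float), np.ones(5) / 5, mode="same")  # smooth noise
    peaks = [i for i in range(1, len(row) - 1) if row[i] > row[i - 1] and row[i] > row[i + 1]]
    return float(np.mean(np.diff(peaks))) if len(peaks) >= 2 else 0.0

def is_touching(gray_image: np.ndarray) -> bool:
    """Wider feature spacing implies the obstruction is closer to the lens;
    here it is sampled along the middle row of the frame."""
    return ridge_spacing_px(gray_image[gray_image.shape[0] // 2]) >= TOUCH_SPACING_PX
```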
When the first camera is detected to be at least partially occluded and the occlusion mode is further detected to be a sliding contact occlusion mode or a non-contact occlusion mode, the specified task is run. These two occlusion modes cause little terminal shake, which improves the stability of task execution. In particular, when the second camera is a front camera used for selfies and the specified task includes a self-timer operation, restricting the trigger to these occlusion modes avoids image blur caused by terminal shake from an unsuitable occlusion gesture, improving the stability of the terminal and thus the sharpness of the selfie.
Specifically, if a nonzero distance between the obstruction and the first camera is detected, the occlusion is determined to be non-contact.
A sliding contact occlusion mode is one in which the obstruction slides across the lens of the first camera while remaining in contact with it, i.e., at zero distance throughout the slide. Concretely, the occlusion mode is determined to be sliding contact when the distance between the obstruction and the first camera is detected to be zero, the duration for which the first camera is continuously occluded reaches a preset time, and the number of occluded pixels in the obstruction images captured by the first camera is detected to first increase and then decrease, and/or to only decrease.
In this embodiment, the preset time may be equal to the time required to slide across the camera, but is not limited thereto. For example, if the user's finger needs 1 second to slide past the first camera and the first camera is continuously occluded during that second, the preset time may be set to 1 second. In this embodiment, when the user's index finger slides across the whole first camera, for example, the camera is continuously occluded and the number of occluded pixels in the images gradually increases and then decreases. As another example, when the index finger starts its slide from the middle of the camera, the occluded pixel count starts at its maximum and then gradually decreases. More specifically, when the terminal includes one first camera, the occlusion mode is determined to be sliding contact when either of these patterns occurs; when the terminal includes two or more first cameras, it is so determined when at least one of them exhibits such a pattern.
Similarly, following the earlier embodiment for estimating the distance between the obstruction and the first camera, when the lateral spacing between specific features of the obstruction image, such as texture features, exceeds a preset value, the distance may be determined to be zero.
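Putting the conditions of this sliding-contact test together, a minimal sketch might look as follows; it assumes an upstream step already reports, per frame, the number of occluded pixels and whether the obstruction is in contact (zero distance), and the 1-second preset mirrors the example above.

```python
def rises_then_falls(counts) -> bool:
    """True if the sequence increases to a single interior peak, then decreases."""
    counts = list(counts)
    if len(counts) < 3:
        return False
    peak = counts.index(max(counts))
    rising = all(a <= b for a, b in zip(counts[:peak], counts[1:peak + 1]))
    falling = all(a >= b for a, b in zip(counts[peak:], counts[peak + 1:]))
    return 0 < peak < len(counts) - 1 and rising and falling

def is_sliding_contact(occluded_px_per_frame, in_contact: bool,
                       occluded_seconds: float, preset_seconds: float = 1.0) -> bool:
    """Sliding-contact occlusion per the text: the obstruction touches the lens
    (distance zero), occlusion lasts about the time a finger needs to slide
    across (the preset), and the occluded-pixel count either rises then falls
    (a full swipe) or only falls (a swipe starting over the lens center)."""
    counts = list(occluded_px_per_frame)
    only_falls = len(counts) >= 2 and all(a >= b for a, b in zip(counts, counts[1:]))
    return (in_contact and occluded_seconds >= preset_seconds
            and (rises_then_falls(counts) or only_falls))
```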
In step 204, in some possible embodiments of running the specified task, when the first camera is detected to be at least partially occluded, the captured obstruction image is acquired and the specified task is run according to that image.
In some embodiments, the step of running the specified task from the obstruction image comprises: running different functions of the specified task according to the size of the occluded area in the obstruction image, the duration of the occlusion, the moving direction of the obstruction relative to the first camera, the distance of the obstruction from the first camera, and/or the number of times the first camera is occluded within a predetermined time, but the method is not limited to these.
The following description takes the case where the obstruction is the user's index finger, the first camera is the rear camera, and the terminal is a smartphone. Research shows that when a smartphone is held with one hand, the index finger moves most freely and with the greatest flexibility, while the other four fingers grip the edges of the phone.
In some specific embodiments, the specified task includes an audio/video task, and different functions of the audio/video application can be run according to the movement direction of the obstruction relative to the first camera: fast-forward, rewind, volume-up, and volume-down are executed when the obstruction is detected to slide right, left, up, and down relative to the first camera, respectively, which matches user habits. When the duration for which the obstruction occludes the first camera reaches a first preset time or a second preset time, the pause and switch functions are executed, respectively. The first and second preset times differ, and their difference can further be made larger than a preset margin to keep them distinguishable. When the occluded area is detected to be larger than a preset area threshold, single-track repeat is enabled. When the distance of the obstruction from the first camera is detected to be larger than a preset threshold, the audio/video is downloaded. When the number of times the first camera is occluded within a predetermined time exceeds a preset count, the audio/video is likewise downloaded.
It should be understood that these audio/video operations are not limited to the above pairings; any pairing that does not create mutual exclusion falls within the scope of the present application. For example, the audio/video can be paused when the distance of the obstruction from the first camera exceeds the preset distance threshold, or downloaded when the occluded area exceeds the preset area threshold, and so on, which will not be repeated here.
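A compact way to express such a mapping is a dispatch table from occlusion gestures to player actions, sketched below; the gesture set mirrors the example above, while the player object and its method names (`fast_forward`, `pause`, and so on) are hypothetical stand-ins for a real media-player API, and the preset durations are illustrative.

```python
from enum import Enum, auto

class Gesture(Enum):
    SLIDE_RIGHT = auto()
    SLIDE_LEFT = auto()
    SLIDE_UP = auto()
    SLIDE_DOWN = auto()

# Illustrative mapping from occlusion gestures to audio/video actions,
# mirroring the example in the text; the method names are hypothetical.
GESTURE_ACTIONS = {
    Gesture.SLIDE_RIGHT: "fast_forward",
    Gesture.SLIDE_LEFT: "rewind",
    Gesture.SLIDE_UP: "volume_up",
    Gesture.SLIDE_DOWN: "volume_down",
}

def dispatch(player, gesture=None, occluded_seconds=0.0,
             first_preset=1.0, second_preset=3.0):
    """Route one occlusion event on the first camera to a player function.

    Slide gestures map through the table above; a steady occlusion maps to
    pause or track switching depending on how long it lasted."""
    if gesture is not None:
        getattr(player, GESTURE_ACTIONS[gesture])()
    elif occluded_seconds >= second_preset:
        player.switch_track()   # longer hold: switch to the next track
    elif occluded_seconds >= first_preset:
        player.pause()          # shorter hold: pause playback
```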
In some embodiments, when the specified task is a shooting task of the second camera, the second camera may be controlled to perform the corresponding shooting operation according to the size of the occluded area in the obstruction image, the duration of the occlusion, the moving direction of the obstruction relative to the first camera, the distance of the obstruction from the first camera, and/or the number of occlusions within a predetermined time. Specifically, a single sliding or non-contact occlusion within the preset time triggers a single shot; several consecutive sliding or non-contact occlusions can trigger burst selfies; sliding the obstruction right, left, up, or down relative to the first camera can run beauty-mode selection, such as the original-image style or the fresh style; and, for example, the shooting magnification can be increased when the occluded area exceeds a preset area threshold or when the distance of the obstruction from the first camera exceeds a preset distance threshold. This is especially useful when the second camera is a front camera and the specified task is a selfie: the user may need to strike a pose and usually holds the terminal with one hand.
When the first camera comprises several rear cameras, different functions of the specified task can be run according to which rear camera is detected to be occluded. For example, with four rear cameras: occluding the top-left camera triggers a single selfie, occluding the top-right camera triggers burst selfies, occluding the bottom-left camera runs beauty-mode selection, and occluding the bottom-right camera turns on the fill-light mode.
It is understood that the operations of the second camera are likewise not limited to the above pairings; any non-mutually-exclusive pairing falls within the scope of the present application. For example, the second camera can be controlled to turn on the beauty mode when the distance of the obstruction from the first camera exceeds the preset distance threshold, or to take a single shot when the occluded area exceeds the preset area threshold, and so on, which will not be repeated here.
Further, when the first camera is detected to be at least partially occluded, two or more specified tasks may be run simultaneously, such as a self-timer task and an audio/video playback task; different functions of the same specified task may also be run, such as triggering the self-timer mode and turning on the fill-light mode. To avoid confusion from executing the wrong function or the wrong task, in some embodiments, when the first camera is controlled to enter the working state and the specified task is opened, prompt information is displayed to guide the user to trigger the function corresponding to the specified task with the corresponding occlusion gesture. The prompt may be text displayed on the operation interface or voice information, which is not limited here.
Taking the specified task to be a self-timer task as an example, please refer to fig. 6, a schematic flow chart of a self-timer task running method of a terminal in an embodiment of the present application. The method includes: step 502, detecting that a self-timer camera such as the front camera is opened; step 504, detecting that the rear camera is opened; step 506, detecting whether the rear camera is occluded; if so, executing step 508, taking the selfie in response to the obstruction; if not, responding to touch information/voice information from the screen and executing the normal selfie flow.
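The flow of fig. 6 can be summarized as a small polling loop, sketched below under clearly assumed interfaces: the camera objects with their `open`/`capture`/`shoot` methods, the `is_occluded` predicate (for example, the classifier sketched earlier), and the timeout are illustrative stand-ins rather than a platform API.

```python
import time

def run_selfie_flow(front_cam, rear_cam, is_occluded, timeout_s=30.0) -> bool:
    """Fig. 6 as a polling loop; returns True if an occlusion-triggered selfie fired."""
    front_cam.open()                            # step 502: front (self-timer) camera opened
    rear_cam.open()                             # step 504: rear camera enters working state
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_occluded(rear_cam.capture()):     # step 506: occlusion check on the rear camera
            front_cam.shoot()                   # step 508: selfie in response to the obstruction
            return True
    return False  # no occlusion: fall back to the normal touch/voice selfie flow
```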
It should be understood that, although the steps in the flowcharts of the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It should be noted that step numbers such as 202 and 204 are used herein to describe the corresponding content more clearly and briefly and do not constitute a substantive limitation on their order; in a specific implementation, those skilled in the art may perform step 204 before step 202, and such variations still fall within the protection scope of the present application.
Fig. 7 is a block diagram illustrating a task running device 600 of a terminal according to an embodiment, where the terminal comprises at least a first camera. Referring to fig. 7, the device includes:
the detecting module 610 is configured to detect whether a first camera of the terminal is shielded or not when the first camera is in a working state;
a designated task running module 620, configured to run a designated task when it is detected that the first camera is at least partially occluded.
In one embodiment, the terminal further includes a second camera, please refer to fig. 8, and the designated task running module 620 includes a shooting module 622, configured to control the second camera to perform a shooting operation when it is detected that the first camera is at least partially blocked.
The division of each module in the task execution device of the terminal is only used for illustration, and in other embodiments, the task execution device of the terminal may be divided into different modules as needed to complete all or part of the functions of the task execution device of the terminal.
For specific limitations of the task execution device of the terminal, reference may be made to the above limitations on the task execution method of the terminal, which are not described herein again. The respective modules in the task execution means of the terminal described above may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The implementation of each module in the task execution device of the terminal provided in the embodiment of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or the server. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
Referring to fig. 9, the terminal 700 includes a memory 710, a processor 720, and a first camera 730, where the memory stores a computer program, and the computer program, when executed by the processor, causes the processor 720 to execute the steps of the method described in the embodiments of the present application. Referring to fig. 10, the terminal 700 may further include a second camera 740, and the processor 720 is configured to control the second camera 740 to perform a shooting operation when it is detected that the first camera 730 is at least partially occluded.
An embodiment of the application also provides a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the task running method of the terminal.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform a method of task execution for a terminal.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above-mentioned embodiments express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A task running method of a terminal, the terminal comprising at least a first camera and a second camera, characterized in that the method comprises the following steps:
under the condition that a first camera of the terminal is in a working state, detecting whether the first camera is shielded or not;
and controlling the second camera to execute shooting operation under the condition that the first camera is detected to be at least partially shielded.
2. The method according to claim 1, characterized in that, while the first camera of the terminal is in a working state, an obstruction image captured by the first camera is acquired; and when the obstruction in the obstruction image is detected to be a predetermined object, and/or when the distance between the obstruction and the first camera is smaller than a preset distance threshold, it is determined that the first camera is at least partially occluded.
3. The method according to claim 2, characterized in that when the first camera is detected to be at least partially occluded and the occlusion mode is a sliding contact occlusion mode or a non-contact occlusion mode, the second camera is controlled to perform a shooting operation, the second camera being a front camera and the shooting operation comprising a self-timer operation.
4. The method according to claim 3, characterized in that the occlusion mode is determined to be a sliding contact occlusion mode when the distance between the obstruction and the first camera is detected to be zero, the duration for which the first camera is continuously occluded reaches a preset time, and the number of occluded pixels in the obstruction images captured by the first camera is detected to first increase and then decrease, and/or to only decrease.
5. The method of claim 1, further comprising:
constructing a machine learning model, and acquiring data sets of an occluded image and an unoccluded image;
training the machine learning model with the data set, wherein the data set of occluded images comprises occlusion images of a contact occlusion mode and occlusion images of a non-contact occlusion mode, and the obstruction in the occlusion images is a predetermined object;
and acquiring a captured image from the first camera, inputting the captured image into the trained machine learning model, and determining that the first camera is at least partially occluded when the captured image is recognized as an occluded image.
6. The method of claim 2, characterized in that, before the step of detecting whether the first camera is occluded, the method further comprises: controlling the first camera to enter the working state when the second camera is detected to be opened and/or according to input information from the user.
7. The method of claim 1, characterized in that controlling the second camera to perform the shooting operation further comprises performing the shooting operation according to the obstruction image captured by the first camera: controlling the second camera to perform a corresponding shooting operation according to the size of the occluded area in the obstruction image, the duration of the occlusion, the moving direction of the obstruction relative to the first camera, the distance of the obstruction from the first camera, and/or the number of times the first camera is occluded within a predetermined time.
8. A task execution device of a terminal, wherein the terminal includes at least a first camera and a second camera, the device comprising:
the detection module is used for detecting whether a first camera of the terminal is shielded or not under the condition that the first camera is in a working state;
and the assigned task running module is used for controlling the second camera to execute shooting operation under the condition that the first camera is detected to be at least partially shielded.
9. A terminal, comprising a memory and a processor, and a first camera and a second camera, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 7.
10. One or more non-transitory readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-7.
CN202011334793.2A | Filed 2020-11-24 | Priority 2020-11-24 | Terminal, task running method and device thereof, and storage medium | Pending | Published as CN114554069A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011334793.2A | 2020-11-24 | 2020-11-24 | Terminal, task running method and device thereof, and storage medium


Publications (1)

Publication Number | Publication Date
CN114554069A (en) | 2022-05-27

Family

Family ID: 81659105

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202011334793.2A | Terminal, task running method and device thereof, and storage medium | 2020-11-24 | 2020-11-24 | Pending

Country Status (1)

Country | Link
CN | CN114554069A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104580687A * | 2013-10-11 | 2015-04-29 | LG Electronics Inc. | Mobile terminal and controlling method thereof
CN104735340A * | 2013-12-24 | 2015-06-24 | Sony Corporation | Spare camera function control
KR20150133466A * | 2014-05-20 | 2015-11-30 | LG Electronics Inc. | Mobile terminal and method for controlling the same
CN106101529A * | 2016-06-07 | 2016-11-09 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | A kind of camera control method and mobile terminal
CN107800968A * | 2017-11-06 | 2018-03-13 | Vivo Mobile Communication Co., Ltd. | A kind of image pickup method and mobile terminal
CN107835370A * | 2017-11-30 | 2018-03-23 | Gree Electric Appliances Inc. of Zhuhai | A kind of camera switching method, device and electronic equipment
CN108184057A * | 2017-12-28 | 2018-06-19 | Nubia Technology Co., Ltd. | Flexible screen terminal taking method, flexible screen terminal and computer readable storage medium
CN109218527A * | 2018-08-31 | 2019-01-15 | Nubia Technology Co., Ltd. | Screen brightness control method, mobile terminal and computer readable storage medium



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination