CN115695599A - Method for prompting camera state and electronic equipment - Google Patents


Info

Publication number
CN115695599A
Authority
CN
China
Prior art keywords
area
camera
electronic device
region
display screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110859099.0A
Other languages
Chinese (zh)
Inventor
徐荣涛
吴同刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110859099.0A priority Critical patent/CN115695599A/en
Priority to PCT/CN2022/108373 priority patent/WO2023005999A1/en
Publication of CN115695599A publication Critical patent/CN115695599A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82 Protecting input, output or interconnection devices
    • G06F21/85 Interconnection devices, e.g. bus-connected or in-line devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485 Scrolling or panning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces with means for local support of applications that increase the functionality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces with means for adapting the functionality of the device according to context-related or environment-related conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725 Cordless telephones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56 Cameras or camera modules provided with illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a method for prompting a camera state and an electronic device, relating to the technical field of terminals. The method includes: detecting the state of a first camera; and if the first camera is detected to be in an open state, displaying a first prompt signal in a first area of a display screen, where the first area is adjacent to a second area in which a second camera is located, and the first prompt signal prompts that the first camera is currently in the open state. The technical solution provided by the application can prompt the camera state conspicuously, improving the prompting effect.

Description

Method for prompting camera state and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to a method for prompting a camera state and electronic equipment.
Background
With the development and popularization of terminal technologies, electronic devices integrate more and more functions. An electronic device such as a mobile phone is usually provided with a camera, and an application program on the device may call the camera to capture images. In practice, such shooting operations may leak the user's privacy, so the state of the camera needs to be prompted in time.
In the prior art, when detecting that the camera is in an open state, the electronic device may display a yellow light spot in the upper right corner of the display screen, so that a user can determine whether the camera is open by checking for the light spot. However, this prompting manner is not conspicuous enough: the user may easily overlook the light spot, and the prompting effect is poor.
Disclosure of Invention
In view of this, the present application provides a method for prompting a state of a camera and an electronic device, which can significantly prompt the state of the camera and improve a prompting effect.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a method for prompting a camera status, including:
detecting the state of a first camera;
and if the first camera is detected to be in the open state, displaying a first prompt signal in a first area of a display screen, wherein the first area is adjacent to a second area where a second camera is located, and the first prompt signal is used for prompting that the first camera is in the open state at present.
The first area and the second area being adjacent means that, in at least one use form of the electronic device, the distance between the two areas is less than or equal to a specific distance. In some embodiments, if a partial area of the display screen lies between the first area and the second area, that partial area is not used to display any notification messages and does not include any objects (such as application icons) for interacting with the user. If a part of the body shell of the electronic device lies between the first area and the second area, that part does not include any exposed components, such as a speaker or a microphone. The closer the first area is to the second area, and the fewer elements, notification messages, or icons between them that might interfere with the user's perception, the stronger the association between the signal displayed in the first area and the camera in the second area, so the user can more readily associate the prompt signal with the camera and the prompting effect is better.
In this embodiment, when detecting that the first camera is in an open state, the electronic device may display the first prompt signal in a first area of the display screen adjacent to the second area. Since the second area is where the second camera is located, a user who sees the first prompt signal in the first area can easily associate it with the camera, and thus determine that the content prompted is that the first camera is currently open. Compared with displaying a pop-up window on the display screen, this prompting process does not interrupt the user's other operations, giving a better user experience. Compared with displaying a yellow light spot in the upper right corner of the display screen, it more accurately prompts the user that the camera, rather than another function such as positioning or recording, is currently being called; the prompting manner is more conspicuous, and the user's learning cost is reduced. In addition, when the user perceives the first prompt signal and wants to turn off the first camera, the electronic device can quickly turn off the opened first camera based on the first area or an additional touch sensing area, providing better privacy protection.
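The detect-then-prompt flow of this embodiment can be sketched as follows; the names `CameraMonitor` and `on_state_change` are illustrative, not from the application:

```python
# Hypothetical sketch of the first-aspect flow; all identifiers are
# illustrative names, not taken from the patent application.

CAMERA_OPEN = "open"
CAMERA_CLOSED = "closed"

class CameraMonitor:
    """Watches a camera's state and shows a prompt signal in the
    display area (the "first area") adjacent to the camera cutout
    (the "second area") while the camera is open."""

    def __init__(self, prompt_area):
        self.prompt_area = prompt_area   # (x, y, w, h) of the first area
        self.prompt_visible = False

    def on_state_change(self, state):
        # Display the first prompt signal on open, hide it otherwise.
        self.prompt_visible = (state == CAMERA_OPEN)
        return self.prompt_visible

monitor = CameraMonitor(prompt_area=(500, 0, 40, 40))
```

The sketch only models the state transition; on a real device the state change would come from the operating system's camera service.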
Optionally, the first cue signal comprises a first light signal.
Any color attribute of the first light signal may be fixed, or may change over the prompting duration. A light signal whose color attribute changes over the prompting duration can achieve a better prompting effect. In some embodiments, the hue and saturation of the first light signal may be constant while the brightness varies cyclically from low to high and back to low. In other embodiments, the saturation and brightness of the first light signal may be constant while the hue varies cyclically in a preset manner.
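A minimal sketch of the cyclic low-to-high-to-low brightness variation, assuming a raised-cosine pulse; the waveform and default parameters are illustrative choices, not specified by the application:

```python
import math

def pulse_brightness(t, period=2.0, lo=0.2, hi=1.0):
    """Brightness of the prompt light at time t (seconds): hue and
    saturation stay constant while brightness cycles smoothly from
    lo up to hi and back down once per period.
    Waveform and defaults are illustrative assumptions."""
    phase = (t % period) / period                       # 0..1 within a cycle
    level = 0.5 - 0.5 * math.cos(2 * math.pi * phase)   # 0 at ends, 1 at mid-cycle
    return lo + (hi - lo) * level
```

Sampling this function at the display refresh rate would yield the pulsing effect described above.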
Optionally, the first prompt signal includes an application identifier of a first application program, where the first application program is an application program that requests to open the first camera.
Characters and icons can be displayed in the first area to prompt more information related to the camera state; for example, the application identifier indicates which application program opened the camera. This further improves the prompting effect.
Optionally, the first region surrounds the second region.
Wherein, if the second area is not in the same plane as the first area, the first area surrounding the second area may include the first area surrounding a projection of the second area on a plane where the first area is located.
If the first area surrounds the second area, the first prompt signal can surround the second camera, and the relevance between the first prompt signal and the camera is improved.
Optionally, the first region and the second region are in the same plane.
The first region and the second region are in the same plane, and the relevance between the first prompt signal and the camera is stronger, so that the prompt effect is improved.
Optionally, the first region is a circular region, a rectangular region, or a sector region.
In some embodiments, the shape of the first region may be similar to the shape of the second region.
Optionally, before the displaying the first prompt signal in the first area of the display screen, the method further includes:
determining the second region based on preset screen cutting region parameters;
determining the size of the minimum touch block of the display screen;
determining the first area based on the second area and the minimum touch block size.
The minimum touch block may be the smallest area of the display screen that can unambiguously receive a user's touch operation; its size, or the manner of determining it, may be preconfigured on the electronic device by technicians. In some embodiments, the electronic device may obtain the stored numbers of touch panel (TP) rows and TP columns of the display screen: dividing the number of horizontal pixels by the number of TP columns gives the width of the minimum touch block, and dividing the number of vertical pixels by the number of TP rows gives its height. In other embodiments, the electronic device may obtain a stored minimum touch block size directly.
The electronic equipment can automatically match the screen cutting area parameters and the minimum touch block size to obtain the first area, and the efficiency of determining the first area is improved.
In some embodiments, the minimum touch block size may be replaced with a preset size.
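The TP-grid arithmetic described above can be illustrated as follows; the screen resolution and grid dimensions in the example are hypothetical:

```python
def min_touch_block(px_width, px_height, tp_columns, tp_rows):
    """Smallest unambiguous touch block, per the description:
    screen pixels divided by the touch panel (TP) grid
    in each direction."""
    return px_width / tp_columns, px_height / tp_rows

# Hypothetical device: a 1080 x 2400 screen with an 18 x 40 TP grid
# yields 60 x 60 pixel minimum touch blocks.
block = min_touch_block(1080, 2400, 18, 40)
```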
Optionally, the method further comprises:
updating the position of the first area based on a first dragging operation on the first area; and/or,
updating the size of the first area based on a first scaling operation on the first area.
In practical application, the electronic device is various in types, and the first region obtained by automatic adaptation may not be perfectly matched with the second region of the electronic device, so that the electronic device may update the position or the size of the first region obtained by automatic adaptation based on a first dragging operation or a first zooming operation of a user, thereby accurately adjusting the first region and improving the accuracy of the first region.
In some embodiments, the electronic device may display a third frame on the display screen, where the shape of the third frame may be preset, and its position and size may be random or preset values. The electronic device may receive a second dragging operation from the user on the third frame and determine the position of the first area based on the second dragging operation; and receive a second zooming operation from the user on the third frame and determine the zoomed size as the size of the first area. That is, the electronic device may determine the first area purely from user operations, so that the first area can be determined even when it is difficult to obtain through automatic matching (for example, when the screen cutting area parameters cannot be obtained), improving the reliability of determining the first area.
Optionally, the method further comprises:
if a first trigger operation is received based on a third area, the first camera is closed, and the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
When the third area is a touch sensing area adjacent to the first area, the electronic device may determine the third area in a manner similar to that of the first area. In some embodiments, if the third area is not the same area as the first area, the first area and the third area may be in the same plane. In some embodiments, if the third area is not the same area as the first area, the third area may surround the first area. In some embodiments, the shape of the third area may be a figure similar to the shape of the first area.
Optionally, the method further comprises:
if a first trigger operation is received based on a third area, and the first camera detects that the sensitivity is reduced and the numerical variation of the reduction is larger than or equal to a first sensitivity threshold, closing the first camera, wherein the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
Optionally, the method further comprises:
if a first trigger operation is received based on a third area, and the first camera detects that the sensitivity is reduced and the reduced value is smaller than a second sensitivity threshold value, closing the first camera, wherein the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
Optionally, the first trigger operation is a sliding operation in which the third area points to the second area and at least a part of the sliding track is in the third area.
In some embodiments, the electronic device closes the first camera when it receives the first trigger operation based on the third area and the first camera detects that its light sensitivity has decreased by a variation greater than or equal to the first sensitivity threshold, or has decreased to a value less than the second sensitivity threshold. The user slides from the third area toward the second area, gradually approaching and blocking the first camera during the slide so that its sensitivity decreases, and finally the first camera is closed. On one hand, this matches the operation logic of a user turning the first camera off after perceiving that it is on; on the other hand, it reduces the possibility of misoperation.
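A sketch of the close decision combining the trigger operation with the sensitivity change; the use of lux as the sensitivity unit and both threshold values are assumptions for illustration:

```python
def should_close_camera(trigger_in_third_area, lux_before, lux_after,
                        drop_threshold=50.0, low_threshold=5.0):
    """Close decision for the first camera: the swipe (first trigger
    operation) in the third area must coincide with the camera's
    light sensitivity dropping as the finger covers it. The lux
    unit and threshold values are illustrative assumptions."""
    if not trigger_in_third_area:
        return False
    drop = lux_before - lux_after
    # Either the drop itself meets the first threshold, or the
    # remaining reading falls below the second threshold.
    return drop >= drop_threshold or lux_after < low_threshold
```

Requiring both the gesture and the sensitivity change, as the description notes, makes accidental closure less likely than a gesture alone.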
Optionally, the method further comprises:
if a second trigger operation is received based on a third area, the first camera is determined to be kept started, and the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
In some embodiments, the second trigger operation may be a sliding operation in which the second region points to the third region and at least a part of the sliding track is in the third region; in other embodiments, the second trigger operation may be a sliding operation in which the third area points to the first edge, and at least part of the sliding track is located in the third area, and the first edge is an edge of the display screen closest to the third area.
Optionally, the display duration of the first prompt signal is a first preset duration.
Optionally, the method further comprises:
if the first trigger operation is not received based on a third area within the first preset duration, the first camera is kept started, and the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
Optionally, the third area is a touch sensing area adjacent to the first area, and the third area surrounds the first area.
Optionally, the method further comprises:
and if the first camera is determined to be kept on, updating the first optical signal displayed in the first area into a second optical signal.
The second prompt signal can be used for prompting that the first camera is in an open state currently, and the open state is allowed by a user. In some embodiments, the second prompt signal may be used to provide a fill-in light function.
That is, once the electronic device determines to turn off the first camera (for example, the first trigger operation is received based on the third area within the first preset duration) or determines to keep it on (for example, no first trigger operation is received within the first preset duration, or the second trigger operation is received), the user's feedback has been obtained based on the first prompt signal, or the signal has sufficiently served its prompting purpose even without feedback. The display of the first prompt signal may therefore be stopped (which can also be understood as hiding the first prompt signal). In some embodiments, if it is determined to keep the first camera on, the electronic device may stop displaying the first prompt signal in the first area and display the second prompt signal there instead, that is, update the first prompt signal displayed in the first area to the second prompt signal.
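The timeout logic above can be condensed into a small decision function; the outcome labels are illustrative, not taken from the claims:

```python
def resolve_prompt(elapsed, preset_duration, trigger_received):
    """Outcome of the first prompt signal after `elapsed` seconds of
    display; string labels are illustrative names."""
    if trigger_received:
        return "close_camera"          # user asked to shut the camera
    if elapsed >= preset_duration:
        return "keep_on_hide_prompt"   # timeout: camera stays on
    return "keep_prompting"            # still waiting for feedback
```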
In some embodiments, if the electronic device receives a fifth trigger operation based on the third area, the electronic device may jump to open the first application program of the first camera.
Similarly, the electronic device may prompt the state of the second camera in a manner similar to the manner in which the state of the first camera is prompted, and may also control the second camera in a manner similar to the manner in which the state of the first camera is controlled.
In some embodiments, if the electronic device detects that the second camera is in the open state, the third prompt signal may be displayed in the first area. In some embodiments, the third cue signal may include at least one of a third light signal and an application identification of a second application, wherein the second application is an application requesting opening of the second camera.
It should be noted that if the first camera and the second camera are the same camera, the third prompt signal is the first prompt signal, or can be understood as identical to it. Whether or not the two cameras are the same, the third prompt signal may be either distinct from or identical to the first prompt signal. If the two signals are the same, the prompting manner is simpler and the demands on device capability are lower; if they differ, the user can be prompted more precisely as to which camera is currently open.
In some embodiments, the electronic device may turn off the second camera when the first trigger operation is received based on the third area and the second camera detects a sensitivity decrease whose variation is greater than or equal to the third sensitivity threshold, or a sensitivity decrease to a value less than the fourth sensitivity threshold.
In some embodiments, the display duration of the third prompt signal may be a second preset duration.
In some embodiments, if the second camera is currently in an on state and the electronic device receives a fourth trigger operation based on the third area, the second camera may be kept on.
In some embodiments, if the fourth trigger operation is not detected based on the third area within the second preset time period, it may be determined to keep the second camera turned on.
That is, if the electronic device determines to turn off the second camera or determines to keep turning on the second camera, the electronic device may stop displaying the third prompt signal. In some embodiments, if it is determined that the second camera is kept turned on, the electronic device may stop displaying the third prompt signal in the first area and display the fourth prompt signal in the first area, that is, update the third prompt signal displayed in the first area to the fourth prompt signal.
The fourth prompt signal may be used to prompt that the second camera is currently in an open state and that this open state has been permitted by the user. In some embodiments, the fourth prompt signal may provide a fill-in light function.
In some embodiments, if the electronic device receives the fifth trigger operation based on the third area, the electronic device may jump to open the second application program of the second camera.
Optionally, if it is detected that the first camera is in an open state, displaying a first prompt signal in a first area of a display screen, including:
and if a first calling message is detected, displaying the first prompt signal in the first area of the display screen, wherein the first calling message is used for indicating that the first camera is called currently and is in an open state.
Optionally, the first camera and the second camera are the same camera.
Optionally, the first camera is a front camera.
Optionally, the first camera is a rear camera, and the second camera is a front camera.
Optionally, before the step of displaying the first prompt signal in the first area of the display screen if it is detected that the first camera is in the open state, the method further includes:
and receiving a first setting operation, wherein the first setting operation is used for opening a camera prompting function.
In a second aspect, an embodiment of the present application provides a method for prompting a camera state, including:
detecting the state of the first camera;
if the first camera is detected to be in the open state, a first prompt signal is displayed through at least one light assembly in a first area, wherein the first area surrounds a second area where a second camera is located, and the first prompt signal is used for prompting that the first camera is in the open state at present.
The second aspect has effects similar to those of the first aspect. The difference is that in the first aspect the first area is a partial area of the display screen, and the electronic device displays the prompt signal by drawing a user interface, the signal being the superposition of pixels in multiple different display states within the first area. In the second aspect, the first area need not be part of the display screen; it may instead comprise the area of at least one light assembly, which may include a flash lamp or a light strip module. Accordingly, the prompt signal may consist only of a light signal, and the electronic device does not need to draw a user interface. In the subsequent process, the electronic device can decide whether to close the camera according to the change in the camera's light sensitivity, no longer relying on the user's gesture interaction.
Optionally, the light assembly comprises a flash lamp or a light strip module.
Optionally, if the light component comprises the light strip module, the light strip module is a ring-shaped light strip module surrounding the second area.
Optionally, the first optical signal is an optical signal with a first power, and the first power is smaller than a rated power of the light assembly or a preset first power threshold.
Optionally, the method further comprises:
and if the second camera is detected to be in the open state, displaying a third light signal through at least one light assembly in the first area.
Optionally, the third optical signal is an optical signal with a second power, and the second power is smaller than the rated power of the light assembly or a preset first power threshold.
Since the first power or the second power is smaller than the rated power of the light assembly or smaller than the preset first power threshold, the light assembly can be regarded as operating in a low-power output mode when emitting the first or third light signal. The user can thus distinguish these signals from the light the assembly emits under other conditions, improving the prompting effect.
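As an illustration, the low-power constraint might be satisfied by driving the assembly at a fraction of the tighter of the two limits; the 0.5 factor is an arbitrary illustrative choice, not from the application:

```python
def prompt_drive_power(rated_power, power_threshold, factor=0.5):
    """Drive power for the first or third light signal: strictly
    below both the assembly's rated power and the preset first
    power threshold, so the signal reads as a distinct low-power
    output. The default factor of 0.5 is illustrative."""
    return factor * min(rated_power, power_threshold)
```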
In a third aspect, an embodiment of the present application provides an apparatus for prompting a camera state, where the apparatus is disposed in an electronic device, and the electronic device is configured to execute the method of any one of the foregoing first aspect or second aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory for storing a computer program and a processor, where the processor is configured to perform the method of any one of the above first aspect when the computer program is invoked.
In a fifth aspect, an embodiment of the present application provides a chip system, where the chip system includes a processor, the processor is coupled with a memory, and the processor executes a computer program stored in the memory to implement the method of any one of the above first aspects.
The chip system can be a single chip or a chip module formed by a plurality of chips.
In a sixth aspect, the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method of any one of the above first aspects.
In a seventh aspect, an embodiment of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to perform the method of any one of the above first aspects.
It is to be understood that, for the beneficial effects of the third aspect to the seventh aspect, reference may be made to the description of the first aspect or the second aspect, and details are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
Fig. 2 is a block diagram of a software structure of an electronic device provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of a method for prompting a camera state provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of a display interface provided by an embodiment of the present application;
Fig. 5 is a schematic flowchart of another method for prompting a camera state provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of another display interface provided by an embodiment of the present application;
Fig. 7 is a schematic flowchart of yet another method for prompting a camera state provided by an embodiment of the present application;
Fig. 8 is a schematic diagram of yet another display interface provided by an embodiment of the present application;
Fig. 9 is a flowchart of a method for determining a first area provided by an embodiment of the present application;
Fig. 10 is a schematic view of a first region provided by an embodiment of the present application;
Fig. 11 is a schematic view of another first region provided by an embodiment of the present application;
Fig. 12 is a flowchart of a method for detecting a camera state provided by an embodiment of the present application;
Fig. 13 is a schematic diagram of yet another display interface provided by an embodiment of the present application;
Fig. 14 is a flowchart of a method for turning off a camera provided by an embodiment of the present application;
Fig. 15 is a partially enlarged view of yet another display interface provided by an embodiment of the present application;
Fig. 16 is a schematic diagram of a user operation provided by an embodiment of the present application;
Fig. 17 is a partially enlarged view of yet another display interface provided by an embodiment of the present application;
Fig. 18 is a schematic diagram of another user operation provided by an embodiment of the present application;
Fig. 19 is a partially enlarged view of yet another display interface provided by an embodiment of the present application;
Fig. 20 is a schematic view of yet another first region provided by an embodiment of the present application;
Fig. 21 is a schematic view of yet another first region provided by an embodiment of the present application;
Fig. 22 is a schematic view of yet another first region provided by an embodiment of the present application;
Fig. 23 is a schematic view of yet another first region provided by an embodiment of the present application;
Fig. 24 is a schematic diagram of yet another user operation provided by an embodiment of the present application;
Fig. 25 is a schematic view of a first region and a third region provided by an embodiment of the present application;
Fig. 26 is a partially enlarged view of yet another display interface provided by an embodiment of the present application;
Fig. 27 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The method for prompting a camera state provided by the embodiments of the present application can be applied to electronic devices such as a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the embodiments of the present application place no limit on the specific type of the electronic device.
Fig. 1 is a schematic structural diagram of an example of an electronic device 100 according to an embodiment of the present disclosure. Electronic device 100 may include processor 110, memory 120, communication module 130, display 140, camera 150, sensor 160, and the like.
Processor 110 may include one or more processing units, and memory 120 may be used to store program code and data. In this embodiment, processor 110 may execute computer-executable instructions stored in memory 120 to control and manage the actions of electronic device 100.
The communication module 130 may be used for communication between internal modules of the electronic device 100, for communication between the electronic device 100 and other external electronic devices, and the like. For example, if the electronic device 100 communicates with other electronic devices through a wired connection, the communication module 130 may include an interface such as a USB interface, where the USB interface may be an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It may also be used to connect an earphone and play audio through the earphone. The interface may further be used to connect other electronic devices, such as AR devices.
Alternatively, the communication module 130 may include an audio device, a radio frequency circuit, a bluetooth chip, a wireless fidelity (Wi-Fi) chip, a near-field communication (NFC) module, and the like, and may implement interaction between the electronic device 100 and other electronic devices in many different ways.
The display screen 140 is used to display images, video, and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 140, N being a positive integer greater than 1.
The camera 150 is used to capture still images or video. In some embodiments, electronic device 100 may include 1 or N cameras 150, N being a positive integer greater than 1. For example, when the electronic device 100 is a mobile phone, it may include a front camera disposed on one side of the display screen 140 and a plurality of rear cameras disposed on the opposite side of the display screen 140.
The front camera may be the camera that faces the user when the user uses the electronic device 100. Taking the electronic device 100 being a mobile phone as an example, when the user uses the mobile phone, the side provided with the display screen 140 faces the user; a camera disposed on this side is a front camera, and a camera disposed on the opposite side is a rear camera.
The sensors 160 may include pressure sensors, touch sensors, and light sensors.
The pressure sensor is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor may be disposed on the display screen 140. When a touch operation acts on the display screen 140, the electronic device 100 detects the intensity of the touch operation through the pressure sensor. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
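The threshold-based dispatch described above can be sketched as follows. The intensity scale, the threshold value, and the instruction names are all illustrative assumptions, not values from the disclosure.

```python
# Assumed normalized touch intensity in [0, 1]; the real sensor scale and
# threshold are device-specific.
FIRST_PRESSURE_THRESHOLD = 0.5

def dispatch_sms_icon_touch(intensity: float) -> str:
    """Map a touch on the short message icon to an operation instruction
    based on the detected touch intensity."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"  # lighter press: view the short message
    return "new_sms"       # press at or above the threshold: new message
```

The same position thus yields different instructions purely as a function of press intensity.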
The touch sensor is also known as a "touch panel". The touch sensor may be disposed on the display screen 140, and the touch sensor and the display screen 140 together form a touch screen. The touch sensor is used to detect a touch operation acting on or near it, and may pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 140. In other embodiments, the touch sensor may be disposed on a surface of the electronic device 100 at a position different from that of the display screen 140.
The light sensor can be used to sense ambient brightness, and can further be used to adaptively adjust the screen brightness, automatically adjust the white balance during photographing, detect whether an object is close to the light sensor, and the like.
Optionally, the electronic device 100 may further comprise peripheral devices (not shown in the figure), such as a mouse, a keyboard, a speaker, a microphone, etc.
It should be understood that, beyond listing the components and modules shown in fig. 1, the embodiments of the present application do not specifically limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Fig. 2 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the electronic device 100 may include an access layer, a control layer, and an underlying middle layer.
The access layer may be configured to present various interfaces to a user, and interact with the user based on the presented interfaces, including setting a control switch in the interface, position selection of the first area and the third area in this embodiment of the application, and touch interaction in the first area and the third area.
The control layer may be used to control the display or hiding of the cue signal and the interaction logic with the first area and the third area in the embodiments of the present application.
The control layer can be divided into a foreground and a background.
The foreground comprises a coordinate control module, an event registration module and a camera control module. The coordinate control module may be configured to determine a position (e.g., coordinates) and a size of the first region or the third region according to a screen cut region (notch) parameter and a TP parameter stored in the electronic device. The event registration module may be configured to register various events related to the embodiments of the present application, such as a camera open event, a touch slide event, a camera exposure change event, and the like, in an application framework (not shown in the drawings). The camera control module can be used for controlling the state of the camera, including opening or closing and the like.
The screen cut-out area may be the area where the camera is located, and the screen cut-out area parameter may be used to indicate the position, size, and the like of the cut-out area. The TP parameters may include a number of rows and a number of columns, where the number of horizontal pixels of the display screen divided by the number of columns is the width of the minimum touch block of the display screen, and the number of vertical pixels divided by the number of rows is the height of the minimum touch block.
The background comprises a position matching module, an event monitoring module, a camera management module and a notification management module. The position matching module can match and determine the first area or the third area in the display screen according to the calculation result of the coordinate control module. The event monitoring module can be used for monitoring various events registered by the aforementioned event registering module, such as a camera opening event, a touch sliding event, and the like. The camera management module can be used for managing the camera, such as opening, closing, front and back jumping and the like. The notification management module may be used for message notification between processes or components, such as passing a message that a camera is opened, and the like, and may also be used for notifying a user of certain information.
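The split between the foreground event registration module and the background event monitoring module can be sketched as a minimal event bus. This is an illustrative sketch only; the class, method, and event names are assumptions and do not correspond to the application framework named in the text.

```python
class EventBus:
    """Minimal register/emit sketch of the foreground-registers,
    background-listens pattern described above."""

    def __init__(self):
        self._listeners = {}

    def register(self, event: str, callback) -> None:
        # Foreground side: register interest in an event type.
        self._listeners.setdefault(event, []).append(callback)

    def emit(self, event: str, *args) -> None:
        # Framework side: notify every registered listener of the event.
        for callback in self._listeners.get(event, []):
            callback(*args)

# Background side: monitor the camera-open event registered by the foreground.
bus = EventBus()
opened_cameras = []
bus.register("camera_open", lambda cam_id: opened_cameras.append(cam_id))
bus.emit("camera_open", "front")
```

After the emit, the background listener has recorded that the front camera was opened, which is the point at which the prompt logic of the later sections would run.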
The base middle layer may be used to provide underlying hardware and software support for the aforementioned access and control layers. In some embodiments, the base middle layer may include one or more of Huawei Mobile Services (HMS), the Window Manager Service (WMS), the Activity Manager Service (AMS), the Package Manager Service (PMS), the structured query language software library (SQLite), broadcast services, and the push kit.
In order to facilitate understanding of the technical solutions in the embodiments of the present application, an application scenario of the embodiments of the present application is first described below.
An electronic device is usually provided with a camera, and a user can call the camera through applications such as the camera app to shoot images and record information. However, the types and number of applications installed in electronic devices keep increasing, and many third-party applications may open the camera and shoot images without the user's knowledge, which may invade the user's privacy. The state of the camera therefore needs to be prompted so that the user can perceive it.
In some embodiments, the prompting of the front camera by the electronic device may be as shown in fig. 3. Referring to fig. 3, the operating system of the electronic device may register a listening event for the front camera in the camera framework. When an application calls the front camera, the camera framework monitors the call event and issues an event notification. When the operating system captures the notification, a popup is displayed at the bottom of the screen as shown in fig. 4; the popup includes the text "some application requests to open the front camera", a disable button, and an allow button, and a countdown is displayed near the disable button. If the electronic device receives a click on the disable button, or receives no operation before the countdown ends, the front camera is kept closed. If the electronic device receives a click on the allow button, the front camera is opened. However, the prompting method shown in fig. 4 is intrusive: while the popup is displayed, the user can continue other operations only after selecting allow or disable.
In some embodiments, the prompting of the front camera by the electronic device may be as shown in fig. 5. Referring to fig. 5, the application requests the camera framework to open the front camera, and the camera framework notifies the status bar that the call state of the front camera is on, so that the status bar displays a yellow (black in the figure) light spot on the display screen as shown in fig. 6. Accordingly, when the camera is turned off, the camera framework can notify the status bar that the call state of the front camera is off, so that the status bar hides the yellow light spot. The user can judge whether the camera is in the open state by observing whether the light spot is present in the upper right corner of the display screen. However, this prompting method is not obvious enough; the user may overlook the light spot, resulting in a poor prompt effect. In addition, the electronic device also displays a light spot of another color, of the same size and at the same position in the upper right corner of the display screen, to indicate that another sensitive function (such as positioning or recording) is in the on state. The user therefore has to spend time and effort memorizing the meanings of light spots of different colors, and the learning cost is high.
In order to solve at least some of the above technical problems, an embodiment of the present application provides another method for prompting a camera status.
In this embodiment of the application, when it is detected that the first camera is in the open state, the electronic device may display the first prompt signal through a first area of the display screen adjacent to the second area. Since the second area is the area where the second camera is located, when the first prompt signal is displayed in the first area, the user can easily associate the first prompt signal with the camera and thus determine that the first prompt signal indicates that the first camera is currently in the open state. Compared with displaying a popup at the bottom of the display screen, this prompt process does not interrupt other operations of the user, giving a better user experience. Compared with displaying a yellow light spot in the upper right corner of the display screen, it prompts the user more precisely that the camera, rather than another function such as positioning or recording, is currently being called; the prompt is more noticeable, and the user's learning cost is also reduced.
In some embodiments, when the user perceives the first prompt signal, the user may wish to turn off the first camera, so the electronic device may quickly turn off the turned-on first camera based on the first area or the additional touch perception area, thereby providing a better privacy protection effect.
It should be noted that the embodiments of the present application may be implemented as an application (e.g., a camera prompting program) or a component (e.g., a prompting component) in an application, and operations performed by the electronic device described below in connection with the embodiments of the present application may actually be performed by the electronic device through the prompting program or the prompting component.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 7 is a flowchart of a method for prompting a camera status according to an embodiment of the present disclosure. It should be noted that the method is not limited by the specific sequence shown in fig. 7 and described below, and it should be understood that in other embodiments, the sequence of some steps in the method may be interchanged according to actual needs, or some steps may be omitted or deleted. The method comprises the following steps:
S701, the electronic device turns on the camera prompting function.
The method for prompting a camera state provided by the embodiments of the present application aims to prompt the user in time when the camera is opened, so as to reduce privacy leaks such as malicious shooting by third-party applications. In practice, however, not every user wants to be prompted, and even the same user may not want to be prompted in some scenarios. The electronic device can therefore provide a corresponding configuration entry so that the user can flexibly configure whether to prompt the camera state and the relevant details of the prompting manner.
In some embodiments, the electronic device may determine to turn on the camera prompting function when receiving the first setting operation submitted by the user, and then may prompt the state of the camera through subsequent steps. Correspondingly, the electronic device may also determine to turn off the camera prompting function when receiving the second setting operation, and then no longer prompt the user no matter what the state of the camera is, or at least no longer prompt according to the manner provided by the embodiment of the present application.
The operation types of the first setting operation and the second setting operation may be determined in advance by the electronic device, and the operation types of the first setting operation and the second setting operation are not particularly limited in the embodiment of the present application.
In some embodiments, the electronic device may provide a system configuration page to the user as shown in fig. 8, the system configuration page including a "camera call reminder" option and a switch corresponding to the option. When the electronic device receives a click operation (i.e., a first setting operation) of a user based on the switch, the camera prompt function may be turned on, and the switch may be switched to an on state. When the electronic device receives the click operation (second setting operation) of the user based on the switch again, the camera prompting function is turned off, and the switch is switched to the off state.
It should be noted that, in practical applications, the electronic device may also determine to turn on or turn off the camera prompting function in other manners, which is not limited to the manner shown in fig. 8, and the manner in which the electronic device determines to turn on or turn off the camera prompting function is not specifically limited in the embodiment of the present application.
It should be further noted that, in some embodiments, the electronic device may also determine whether to turn on the camera prompting function without relying on the user's configuration; for example, the electronic device may decide based on a determination policy preset by a technician. Alternatively, in other embodiments, the electronic device may at any time prompt the user about the camera state in the manner provided by the embodiments of the present application, without the camera prompting function being turned on or off; that is, S701 need not be performed, and S701 is therefore an optional step.
S702, the electronic equipment determines a first area on a display screen.
The electronic device may be provided with a display screen, which always or mostly faces the user when the user uses the electronic device. The electronic device may further include more than one camera, including a second camera disposed in the plane of the display screen (e.g., in the panel on the display-screen side or at a certain position within the display screen), and possibly a first camera disposed at any other position of the electronic device (e.g., on the surface facing away from the display screen). Therefore, when a camera is called and is in the open state, a prompt signal can be emitted in a first area of the display screen close to the second camera. This makes it easy for the user to perceive the prompt signal on the one hand, and to associate the prompt signal with the camera on the other, so that the user more readily determines that the prompt signal indicates that a camera is currently in the open state. Thus, to facilitate subsequently prompting the user in the first area, the electronic device may determine the first area on the display screen.
The electronic device can automatically determine the first area by matching against the position of the second camera; alternatively, it can receive a relevant operation from the user to determine the first area, that is, the user determines the first area manually; or the electronic device can automatically determine a candidate first area and then fine-tune it based on the user's operations to finally determine the first area.
In some embodiments, the manner in which the electronic device determines the first region on the display screen may be as shown in fig. 9 below.
In some embodiments, the first area and the second area may be in the same plane, so that the relevance between the prompt signal displayed in the first area and the camera is stronger, and the prompt effect is improved.
In some embodiments, the first region may be adjacent to a second region where the second camera is located.
The first region and the second region being adjacent means that the distance between them is smaller than or equal to a specific distance when the electronic device is in at least one use form. It should be noted that the size and unit of the specific distance may be determined by the relevant personnel; taking pixels as the unit, the specific distance may be any number of pixels between 100 and 200. The embodiments of the present application do not specifically limit the size of the specific distance.
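The adjacency condition above can be sketched as a simple distance check. Measuring between region centers is a simplifying assumption for illustration, as is the concrete value chosen for the specific distance.

```python
import math

# An assumed value within the 100-200 pixel range stated in the text.
SPECIFIC_DISTANCE = 150

def regions_adjacent(center1, center2, limit=SPECIFIC_DISTANCE) -> bool:
    """The first and second regions count as adjacent when the distance
    between them (here: between their centers) does not exceed the
    specific distance."""
    return math.dist(center1, center2) <= limit
```

A region pair 150 pixels apart is adjacent under this check; one 200 pixels apart is not.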
Taking the electronic device shown in fig. 8 as an example, the electronic device has only one use form, and the second camera is disposed inside the display screen, so that the areas of the display screen and the second area where the second camera is located are all in the same plane. Alternatively, another electronic device is a foldable device, and when the electronic device is in a folded state, as shown in fig. 10, a part of a display area of a display screen is in the same plane as a second area 1001 where a camera is located, so that a first area 1002 can be determined in the part of the display area to prompt a camera state.
In some embodiments, if a partial area of the display screen lies between the first area and the second area, that partial area is not used to display any notification messages and does not include any objects (such as application icons) for interacting with the user. If part of the body shell of the electronic device lies between the first area and the second area, that part of the shell does not include any exposed components, such as a speaker or a microphone. The closer the first area is to the second area, and the fewer the components, notification messages, icons, and other factors between them that might interfere with the user's perception, the stronger the association between the prompt signal displayed in the first area and the camera in the second area, so that the user can better associate the prompt signal with the camera and the prompt effect is better.
In some embodiments, the first region may surround the second region, such that the cue signal may surround the second camera, improving the relevance of the cue signal to the camera.
If the second area is not in the same plane as the first area, the first area surrounding the second area may mean that the first area surrounds the projection of the second area onto the plane in which the first area is located.
In some embodiments, the shape of the first region may be similar to the shape of the second region.
In some embodiments, as shown in fig. 11, the area where the second camera is located is the second area 1001, and the first area 1002 is adjacent to the second area 1001. The first region 1002 may be a circular region, as in a and b of fig. 11; a rectangular region, as in c and d of fig. 11; or a sector region, as in e of fig. 11. Compared with a in fig. 11, in b there is no common boundary between the first region 1002 and the second region 1001, that is, there may be a gap between them, so the first region 1002 in b may also be called an annular region; similarly, compared with c, there is no common boundary between the first region 1002 and the second region 1001 in d. In contrast to a, b, c, and d in fig. 11, in e the first region 1002 is disposed at one side of the second region 1001 instead of surrounding it.
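The annular variant (b of fig. 11), a first region surrounding the circular second region with a gap between them, admits a simple membership test. The radii in the usage note are illustrative assumptions.

```python
import math

def in_annular_first_region(point, center, inner_radius, outer_radius) -> bool:
    """True when `point` falls inside a ring-shaped first region that
    surrounds the circular second region without sharing a boundary with
    it (as in b of fig. 11): strictly outside the inner circle, at or
    inside the outer one."""
    d = math.dist(point, center)
    return inner_radius < d <= outer_radius
```

With an assumed inner radius of 20 and outer radius of 40 pixels around the camera center, a touch 30 pixels out lands in the ring, while points over the camera itself or beyond the ring do not.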
Of course, in actual use, the first region may have another shape for the purpose of appearance, improvement of a presentation effect, or the like.
The first region may include corners, or rounded corners formed by chamfering the corners. For example, the first region may be a rectangular region with four right angles, or a rectangle with four rounded corners obtained by chamfering the four right angles.
It should be noted that the first area is determined based on the second area where the second camera is located. For the same electronic device, since the position and arrangement of the second camera are fixed, that is, the second area (its shape, size, and position) is fixed, the first area may also be fixed. For different electronic devices, the second areas differ because the positions of the second cameras differ, so the corresponding first areas may also differ; different electronic devices may thus include first areas of different forms.
In some embodiments, to control the camera, for example to turn it off, the electronic device may further provide a third area for the user's touch operations. In some embodiments, the third area may be the same area as the first area. In other embodiments, the first area may be used only for displaying the prompt signal, and the electronic device additionally determines a third area as the touch sensing area, where the third area may be adjacent to the first area.
It should be noted that the electronic device may determine the third area in the same or similar manner as the first area.
In some embodiments, if the third region is not the same region as the first region, the first region and the third region may be in the same plane.
In some embodiments, the third region may surround the first region if the third region is not the same region as the first region.
In some embodiments, the shape of the third region may be a similar figure to the shape of the first region.
It should be noted that the electronic device may determine the third area in a manner similar to the determination of the first area, and the position and the size of the determined third area may also be different depending on different electronic devices, which is not described herein again.
In some embodiments, the first camera and the second camera may be the same camera disposed on a plane where a display screen of the electronic device is located, such as the same front-facing camera. In other embodiments, the first camera is disposed on a side facing away from the display screen, i.e., the rear camera, and the second camera is the front camera.
And S703, the electronic equipment detects the state of the camera.
In order to prompt the user in time when the camera is in the open state, the electronic device can detect whether the camera is currently in the open state or the closed state.
In some embodiments, the manner in which the electronic device detects the status of the camera may be as follows in fig. 12. However, in practical applications, the way in which the electronic device detects the state of the camera is not limited to the way shown in fig. 12, and the embodiment of the present application does not specifically limit the way in which the electronic device detects the state of the camera.
S704, if the electronic equipment detects that the camera is in an open state, a prompt signal is displayed in the first area.
After the electronic device determines the first area, which is strongly associated with the camera, and detects that the camera is in the open state, it can prompt the user in the first area in time, so that the user perceives the camera-opened event more promptly and accurately, which facilitates subsequent control operations on the camera.
In some embodiments, the cue signal may include at least one of a light signal, text, and an icon.
The light signal may have three color attributes: hue, saturation, and lightness. Any color attribute of the light signal included in the prompt signal may be fixed or may vary with the prompt duration. When a color attribute of the light signal varies with the prompt duration, a better prompting effect can be achieved.
In some embodiments, the hue and saturation of the light signal may be constant, with the lightness varying cyclically from low to high to low. In some embodiments, the saturation and lightness of the light signal may be constant, with the hue varying cyclically in a preset manner.
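The first variant above, fixed hue and saturation with cyclically varying lightness, can be sketched as a triangle wave over one prompt period. The function name, period, and color values are illustrative assumptions; only the cycling scheme itself comes from the text.

```python
import colorsys

# Sketch: keep hue and saturation fixed and cycle the lightness of the
# light signal from low to high and back over one prompt period.

def breathing_color(t, period=2.0, hue=0.33, sat=1.0,
                    l_min=0.2, l_max=0.8):
    """RGB color (0..1 floats) for time t seconds into the prompt."""
    phase = (t % period) / period          # 0..1 within one cycle
    tri = 1.0 - abs(2.0 * phase - 1.0)     # 0 -> 1 -> 0 triangle wave
    lightness = l_min + (l_max - l_min) * tri
    # colorsys takes HLS in (hue, lightness, saturation) order
    return colorsys.hls_to_rgb(hue, lightness, sat)

# Lightness is lowest at the start of a cycle and highest midway,
# then the pattern repeats every `period` seconds.
dim = breathing_color(0.0)
bright = breathing_color(1.0)   # mid-cycle for period=2.0
```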
In some embodiments, the text and the icon may include an identifier of the application requesting to open the camera, so as to prompt more information related to the camera's state, such as which application program triggered opening the camera, thereby further improving the prompting effect.
In practical applications, the prompt signal is not limited to the light signal, the text, and the icon. The electronic device may also prompt the user that the camera is in the open state through other types of prompt signals. For example, the electronic device may invert a second image, namely the portion of the picture to be displayed on the current display screen that falls in the first area, to obtain a first image, and then display the first image in the first area as the prompt signal. The type of the prompt signal is not specifically limited in the embodiments of the present application.
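The inversion mentioned above can be sketched as a per-channel complement of the pixels that would otherwise be shown in the first area; 8-bit RGB channels and the nested-list representation are assumptions for illustration, not the claimed implementation.

```python
# Sketch: derive the first image by inverting (complementing) the
# pixels of the second image, i.e. the content of the picture to be
# displayed that falls within the first area.

def invert_image(pixels):
    """pixels: rows of (r, g, b) tuples with 0..255 channels."""
    return [[(255 - r, 255 - g, 255 - b) for (r, g, b) in row]
            for row in pixels]

second_image = [[(0, 0, 0), (255, 255, 255)],
                [(10, 20, 30), (200, 100, 50)]]
first_image = invert_image(second_image)
# Black becomes white and vice versa, so the prompt stands out
# against the underlying picture regardless of its colors.
```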
In some embodiments, the electronic device may determine the type of alert signal based on a third setting operation submitted by the user.
For example, if the electronic device receives a click operation from the user on the switch of the system setting interface shown in fig. 8, it may display the prompt signal configuration options shown in fig. 13. The prompt signal configuration options include three options, "display light signal", "display application name", and "display application icon", each provided with a corresponding switch. For any option, if the electronic device receives a click operation of the user on the switch corresponding to that option, it may determine that the type of the prompt signal includes the type indicated by that option.
In some embodiments, if the electronic device detects that the first camera is in an open state, the first prompt signal may be displayed in the first area. In some embodiments, the first prompt signal may include at least one of a first light signal and an application identifier of a first application, where the first application is the application requesting to open the first camera.
Similarly, if the electronic device detects that the second camera is in the open state, the third prompt signal may be displayed in the first area. In some embodiments, the third prompt signal may include at least one of a third light signal and an application identifier of a second application, where the second application is the application requesting to open the second camera.
It should be noted that, if the first camera and the second camera are the same camera, the third prompt signal is the first prompt signal; in other words, the third prompt signal is the same as the first prompt signal. Regardless of whether the first camera and the second camera are the same camera, the third prompt signal may be either distinguished from or identical to the first prompt signal. If the two are the same, the prompting manner is simpler and the requirement on device capability is reduced; if they differ, the user can be prompted more accurately as to which camera is currently in the open state.
S705, the electronic equipment detects a preset control event.
Since the opening of the camera may not have been allowed by the user, or the camera may have been opened through a misoperation, the user may currently want to turn off the camera quickly. Therefore, the electronic device may detect a preset control event for triggering turning off of the camera.
In some embodiments, the preset control events may include a first preset control event corresponding to the first camera and a second preset control event corresponding to the second camera. The first preset control event is used for triggering the closing of the first camera, and the second preset control event is used for triggering the closing of the second camera.
In some embodiments, the first preset control event may include a first touch event and a first sensitivity-down event.
The first touch event may be an event in which the third area receives the first trigger operation.
As can be seen from the foregoing, the first area has a strong association with the camera, so a prompt signal indicating that the camera is on can be displayed there intuitively; and the third area, being either the first area itself or a touch sensing area adjacent to it, also has a strong association with the camera. Controlling the camera based on the first touch event therefore conforms to the thinking habits of most users and improves the convenience of operation.
It should be noted that the operation type of the first trigger operation may be determined in advance by the electronic device. In some embodiments, the first trigger operation may be a sliding operation that slides in the first direction and at least part of the sliding track is in the third area.
It should also be noted that the first direction may be determined in advance by the electronic device. In some embodiments, the first direction may be a direction pointing from the third region to the second region.
The first sensitivity drop event may be an event in which the sensitivity detected by the first camera decreases by an amount greater than or equal to a first sensitivity threshold, or an event in which the sensitivity detected by the first camera decreases to a value less than a second sensitivity threshold.
Note that the first sensitivity threshold or the second sensitivity threshold may be determined in advance by the electronic device.
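The two formulations of a sensitivity drop event given above can be sketched as two small predicates: one compares the size of the decrease against a delta threshold, the other checks whether the level has dropped below an absolute floor. The function names, units, and threshold values are illustrative assumptions.

```python
# Sketch of the two "sensitivity drop" formulations: (a) the detected
# level falls by at least a threshold amount, or (b) it falls to
# below an absolute threshold.

def drop_event_by_delta(prev_level, cur_level, delta_threshold):
    """Formulation (a): the decrease itself is large enough."""
    return (prev_level - cur_level) >= delta_threshold

def drop_event_by_floor(prev_level, cur_level, floor_threshold):
    """Formulation (b): the level dropped and is now below a floor."""
    return cur_level < prev_level and cur_level < floor_threshold

# A finger covering the camera: the level plunges from 180 to 5
# (arbitrary units), which both formulations would catch.
covered = drop_event_by_delta(180, 5, delta_threshold=100)
still_bright = drop_event_by_floor(180, 150, floor_threshold=20)
```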
In some embodiments, the second preset control event may include a first touch event and a second sensitivity down event.
Similarly, the second sensitivity drop event may be an event in which the sensitivity detected by the second camera decreases by an amount greater than or equal to a third sensitivity threshold, or an event in which the sensitivity detected by the second camera decreases to a value less than a fourth sensitivity threshold.
Note that the third sensitivity threshold or the fourth sensitivity threshold may be determined in advance by the electronic apparatus. In some embodiments, the third threshold of sensitivity may be the same as the first threshold of sensitivity, and the fourth threshold of sensitivity may be the same as the second threshold of sensitivity.
It should be further noted that, if the first camera and the second camera are the same camera, the second preset control event is the first preset control event.
It should be noted that the type and number of the first preset control event (or the second preset control event) may be preset by the electronic device.
In some embodiments, the electronic device may register a preset control event before performing S705. In some embodiments, the electronic device may register a first preset control event after determining that the first camera is in an open state; after determining that the second camera is in the open state, registering a second preset control event.
For example, the electronic device may register the first touch event from the touch-related component and register the first sensitivity-decreasing event from the camera-related component (e.g., a camera frame) through the prompting component.
And S706, if the electronic equipment detects a preset control event, closing the camera in the open state.
The electronic device may control the camera to be closed through the camera driver. In some embodiments, the electronic device may turn off the first camera if it detects the first preset control event, and may turn off the second camera if it detects the second preset control event.
It should be noted that, as can be seen from the foregoing, the electronic device may set the type or the number of the first preset control events (or the second preset control events) in advance. Therefore, taking the first preset control events as an example, if more than one is preset, the electronic device may close the camera in the open state when detecting some of them (for example, at least one), or only when detecting all of them.
In some embodiments, the electronic device may detect a plurality of preset control events in a preset detection order: when one preset control event is detected, it continues to detect the next one, until detection is finished and the corresponding camera is closed. If the next preset control event is not detected, the camera may be kept in the open state. Correspondingly, the user can close the camera through an operation sequence matching the preset detection order, which reduces the possibility of closing the camera through misoperation and improves the accuracy of controlling the camera.
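The ordered detection described above amounts to checking that the preset sequence of control events occurs, in order, within the stream of observed events. A minimal sketch, with event names chosen for illustration:

```python
# Sketch: close the camera only when all preset control events are
# observed in the preset detection order; any break in the sequence
# keeps the camera open.

def should_close(observed_events, preset_order):
    """True iff preset_order occurs as a subsequence of observed_events."""
    it = iter(observed_events)
    # Membership tests on an iterator consume it, so each expected
    # event must appear after the previously matched one.
    return all(expected in it for expected in preset_order)

preset = ["first_touch_event", "first_sensitivity_drop_event"]

# Swipe toward the camera, then cover it: both events, in order.
should_close(["first_touch_event", "first_sensitivity_drop_event"], preset)
# Camera covered without the swipe first: sequence incomplete.
should_close(["first_sensitivity_drop_event"], preset)
```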
In some embodiments, the manner in which the electronic device turns off the camera may be as described below with reference to fig. 14.
In some embodiments, the display duration of the first prompt signal may be a first preset duration. If the electronic device does not detect at least one or all of the specified first preset control events within the first preset time period, it may be determined to keep the first camera turned on. Similarly, the display duration of the third prompt signal may be a second preset duration. If the electronic device does not detect at least one or all of the specified second preset control events within the second preset time period, it may be determined to keep the second camera turned on. That is, within a first preset time period after the electronic device sends the first prompt signal, if the user does not manually turn off the first camera, the electronic device may keep the first camera turned on; or, within a second preset time period after the electronic device sends the third prompt signal, if the user does not manually turn off the second camera, the electronic device may keep the second camera turned on.
It should be noted that the first preset time period or the second preset time period may be determined in advance by the electronic device. The embodiment of the application does not specifically limit the manner of determining the first preset time length and the second preset time length and the sizes of the first preset time length and the second preset time length.
It should be further noted that, if the first camera and the second camera are the same camera, the second preset time length is the first preset time length.
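The timeout behavior above can be sketched as a small decision rule: a preset control event is honored only if it arrives within the corresponding preset duration after the prompt appears, and otherwise the camera is kept on. The function name, return values, and durations are illustrative assumptions.

```python
# Sketch: after the prompt signal is shown, wait up to a preset
# duration for the control event; close the camera only if the event
# arrives in time, otherwise keep the camera on.

def camera_action(event_time, preset_duration):
    """event_time: seconds after the prompt appeared at which the
    preset control event was detected, or None if it never was."""
    if event_time is not None and event_time <= preset_duration:
        return "close"
    return "keep_open"

camera_action(1.5, 5.0)    # event within the window: close the camera
camera_action(None, 5.0)   # no event: keep the camera on
camera_action(7.0, 5.0)    # too late: keep the camera on
```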
In some embodiments, if the first camera is currently in an on state and the electronic device receives the second trigger operation based on the third area, the first camera may be kept on. Similarly, if the second camera is currently in an on state, and the electronic device receives a fourth trigger operation based on the third area, the second camera may be kept on. That is, the user may actively keep the first camera turned on through the second trigger operation, or actively keep the second camera turned on through the fourth trigger operation.
The operation type of the second trigger operation or the fourth trigger operation may be determined in advance by the electronic device. Taking the second trigger operation as an example, in some embodiments it may be a sliding operation in the direction from the second area toward the third area, with at least part of the sliding track in the third area; in other embodiments it may be a sliding operation in the direction from the third area toward a first edge, with at least part of the sliding track in the third area, where the first edge is the edge of the display screen closest to the third area.
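Since the first trigger operation slides toward the camera's second area and the second trigger operation slides away from it, the two intents can be sketched by the sign of the slide's displacement along the axis joining the areas. A top-mounted camera is assumed here, so "toward the camera" means decreasing y; the names and threshold are illustrative assumptions.

```python
# Sketch: distinguish the "close" swipe (toward the camera's second
# area) from the "keep open" swipe (away from it) using the vertical
# displacement of the slide; y grows downward, camera at the top.

def classify_swipe(start_y, end_y, min_distance=10):
    """Return the inferred intent, or None for a tap or jitter."""
    dy = end_y - start_y
    if abs(dy) < min_distance:
        return None
    return "toward_camera" if dy < 0 else "away_from_camera"

classify_swipe(120, 40)   # upward swipe, toward a top-mounted camera
classify_swipe(40, 120)   # downward swipe, away from it
classify_swipe(50, 55)    # too short to count as a slide
```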
It should be noted that, if the first camera and the second camera are the same camera, the fourth triggering operation is the second triggering operation.
In some embodiments, if the electronic device does not detect the second trigger operation based on the third area within the first preset time period, it may be determined to keep the first camera turned on. Similarly, if the fourth trigger operation is not detected based on the third area within the second preset time period, it may be determined to keep the second camera turned on.
In some embodiments, if the electronic device determines to turn off the first camera (for example, the first trigger operation is received in the third area within the first preset time period) or determines to keep the first camera on (for example, within the first preset time period the first trigger operation is not received in the third area, or the second trigger operation is received), then either feedback from the user has been obtained based on the first prompt signal, or the signal has sufficiently served its prompting purpose even without feedback. In either case, the electronic device may stop displaying the first prompt signal, which may also be understood as hiding it. Similarly, if the electronic device turns off the second camera or determines to keep it on, it may stop displaying the third prompt signal.
In some embodiments, if it is determined that the first camera is to be kept turned on, the electronic device may stop displaying the first prompt signal in the first area and display the second prompt signal in the first area, that is, update the first prompt signal displayed in the first area to the second prompt signal. Similarly, if the electronic device determines to keep the second camera turned on, it may stop displaying the third prompt signal in the first area and display the fourth prompt signal in the first area, that is, update the third prompt signal displayed in the first area to the fourth prompt signal.
The second prompt signal can be used for prompting that the first camera is in an open state at present, and the open state is allowed by a user; the fourth prompt signal may be used to prompt that the second camera is currently in an open state, and the open state is allowed by the user.
It should be noted that the second prompt signal may include a second optical signal and may also include the application identifier of the first application program; the fourth prompt signal may include a fourth optical signal and may also include the application identifier of the second application program.
In some embodiments, the second optical signal or the fourth optical signal may also provide a fill light function. For example, the second optical signal or the fourth optical signal may be a white optical signal.
In some embodiments, if the first camera and the second camera are the same camera, the fourth optical signal is the second optical signal. Of course, the fourth prompt signal may be different from the second prompt signal or the same as the second prompt signal regardless of whether the first camera and the second camera are the same camera.
Take the electronic device shown in fig. 15 as an example. The first camera and the second camera are the same camera located in the second area 1001, the first area 1002 and the third area are the same area, and the electronic device sends out the first optical signal in the first area 1002.
Based on fig. 15, suppose the electronic device receives the sliding operation from position 1 to position 2 shown in fig. 16, where position 1 is outside the first area 1002 and position 2 is inside it. Accordingly, the electronic device may determine that the first touch event is detected. In practice, the user does not stop at position 2 but continues sliding to position 3 in the second area 1001. When the finger reaches position 3, the first camera located in the second area 1001 is shielded and the sensitivity it detects decreases, so the electronic device also detects the first sensitivity drop event. Having detected both the first touch event and the first sensitivity drop event, the electronic device may turn off the first camera and stop displaying the first optical signal in the first area 1002, as shown in fig. 17.
It should be noted that, in fig. 16, the user shields the first camera with a finger (i.e., gesture interaction) so that the sensitivity detected by the first camera decreases. In other embodiments, the user may instead decrease the detected sensitivity by bringing the side of the electronic device on which the first camera is disposed close to an obstacle, for example, by turning the electronic device over on a desktop so that the side bearing the first camera faces the desktop.
Based on fig. 15 again, suppose the electronic device receives the sliding operation from position 4 to position 5 shown in fig. 18, where position 4 is inside the first area 1002 and position 5 is outside it, at the edge of the display screen closest to the first area 1002. The electronic device may then keep the first camera in the second area 1001 turned on. When keeping the first camera on, the electronic device may stop displaying the first optical signal in the first area 1002, as shown in fig. 17, or update the first optical signal displayed in the first area 1002 to the second optical signal, as shown in fig. 19, so that the second optical signal supplements the light and improves the shooting effect of the first camera.
In some embodiments, the electronic device may jump to the first application program or the second application program if the fifth trigger operation is received based on the third area.
It should be noted that the type of the fifth trigger operation may be determined in advance by the electronic device. In some embodiments, the fifth trigger operation may comprise a press operation or a double-click operation. Of course, in practical application, the fifth trigger operation may also be other types of operations, and the operation type of the fifth trigger operation is not specifically limited in this embodiment of the application.
In the embodiments of the present application, when detecting that the first camera is in the open state, the electronic device may display the first prompt signal in a first area of the display screen adjacent to the second area. Since the second area is the area where the second camera is located, when the first prompt signal is displayed in the first area the user can easily associate it with the camera and thus determine that it indicates the first camera is currently open. Compared with displaying a pop-up window at the bottom of the display screen, this prompting process does not interrupt other operations of the user, giving a better user experience. Compared with displaying a yellow dot at the upper right corner of the display screen, it more accurately prompts the user that the camera, rather than another capability such as positioning or recording, is currently being invoked; the prompting is more conspicuous and the user's learning cost is reduced. In addition, when perceiving the first prompt signal the user may want to turn the first camera off, so the electronic device can quickly close the opened first camera based on the first area or the additional touch sensing area, providing a better privacy protection effect.
Fig. 9 is a flowchart of a method for determining a first area according to an embodiment of the present disclosure. It should be noted that the method is not limited by the specific sequence shown in fig. 9 and described below, and it should be understood that in other embodiments, the sequence of some steps in the method may be interchanged according to actual needs, or some steps may be omitted or deleted. The method comprises the following steps:
S901, the electronic device determines a second area where the second camera is located.
In some embodiments, the second camera is disposed at the edge of or inside the display screen, and the display screen is correspondingly provided with a screen cutting area for placing the second camera; such a display screen may be a "notch screen", a "water-drop screen", a "hole-punch screen", or the like. The electronic device may store a screen cutting area parameter in advance, so that it can obtain the parameter and determine the second area based on it.
Wherein the screen cutting area parameter may be used to indicate the screen cutting area. In some embodiments, the screen cut region parameters may include a height, a width, and a distance from an edge of the display screen of the screen cut region. In some embodiments, the screen cut region parameters may include coordinate locations of a plurality of pixel points on an edge of the screen cut region. It should be noted that, in practical application, the screen cutting area parameter may be determined according to factors such as the shape of the screen cutting area, and the form of the screen cutting area parameter is not specifically limited in the embodiment of the present application.
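For the second parameter form above, coordinates of pixel points on the cutting area's edge, the second area can be recovered as the bounding box of those points. A minimal sketch; the point values model a hypothetical hole-punch cutout and are illustrative assumptions.

```python
# Sketch: recover the second area from stored coordinates of pixel
# points on the edge of the screen cutting area by taking their
# bounding box.

def bounding_box(edge_points):
    """edge_points: iterable of (x, y); returns (left, top, width, height)."""
    xs = [x for x, _ in edge_points]
    ys = [y for _, y in edge_points]
    left, top = min(xs), min(ys)
    return (left, top, max(xs) - left, max(ys) - top)

# Hypothetical samples along a hole-punch cutout's edge (pixels).
edge = [(60, 20), (140, 20), (140, 100), (60, 100), (100, 15)]
second_region = bounding_box(edge)
```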
In some embodiments, the second camera is embedded in the interior space below the display screen. The electronic device may determine the second area based on a pre-stored area parameter used to indicate the second area, or may detect a second area drawn by the user on the display screen.
It should be noted that, for any electronic device in which the second camera is disposed at the edge of or inside the display screen, such as the aforementioned "notch screen", "water-drop screen", and "hole-punch screen", as well as other shaped screens different from these, the electronic device may determine the second area by detecting a region drawn by the user on the display screen.
It should be noted that, in practical applications, the electronic device may not need to perform S901 to determine the second area, but directly perform the subsequent steps, for example, in some embodiments, the second camera is disposed on a panel of the electronic device near the display screen, and the second area cannot be determined according to the screen cutting area, so S901 is an optional step.
S902, the electronic equipment determines the size of the minimum touch block of the display screen.
In order to display signals to the user through the prompt area and to interact with the user through the prompt area, so as to implement other operations for the camera, such as turning off, etc., the width of the prompt area may be determined based on the minimum touch block size of the display screen.
The minimum touch block may be the smallest area of the display screen that can clearly and unambiguously receive a touch operation of the user; its size, or the manner of determining it, may be submitted to the electronic device by related technicians in advance. In some embodiments, the electronic device may obtain the stored touch panel (TP) row number and TP column number of the display screen; the horizontal pixel count of the display screen divided by the TP column number gives the width of the minimum touch block, and the vertical pixel count divided by the TP row number gives its height. In other embodiments, the electronic device may obtain a stored minimum touch block size directly.
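The division described above can be sketched directly; the resolution and TP grid values below are illustrative assumptions, not figures from the application.

```python
# Sketch: divide the display's pixel resolution by the touch panel's
# (TP) sensing grid to obtain the smallest area that can unambiguously
# receive a touch operation.

def min_touch_block(h_pixels, v_pixels, tp_cols, tp_rows):
    """Return (width, height) of the minimum touch block in pixels."""
    return h_pixels // tp_cols, v_pixels // tp_rows

# Hypothetical 1080x2340 panel with an 18x36 TP sensing grid.
block_w, block_h = min_touch_block(1080, 2340, 18, 36)
```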
It should be noted that S902 may be an optional step, and in the subsequent step, the electronic device may determine the size of the prompt area in other manners.
S903, the electronic device determines the first area based on the second area and the minimum touch block size.
Since the second area is the area where the second camera is located, in order to help the user perceive the prompt signal and associate it with the camera, and to improve the prompting effect, the first area may be disposed adjacent to the second area; that is, the second area may be used to determine the position of the first area. In addition, the minimum touch block may be used to determine the width of the first area. Therefore, the first area can be determined based on the second area and the minimum touch block size.
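For the symmetric case where the second area is a rectangle inside the display, this determination can be sketched as expanding the second area outward by the minimum touch block's dimensions. The names and pixel values are illustrative assumptions.

```python
# Sketch: expand the second area on all sides by the minimum touch
# block's width and height to obtain a rectangular first area.

def first_region_rect(second_rect, block_w, block_h):
    """second_rect = (left, top, width, height); expand on all sides."""
    left, top, w, h = second_rect
    return (left - block_w, top - block_h,
            w + 2 * block_w, h + 2 * block_h)

second = (500, 100, 80, 80)                 # hypothetical hole-punch region
first = first_region_rect(second, 60, 65)   # surrounds the camera cutout
```

For a camera at the screen's edge, the expansion would instead extend only toward the interior of the display, as in the notch-screen example below.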
It is understood that, for different electronic devices, due to the different positions of the second cameras, the second areas are different, and the corresponding first areas may be different, so that it appears that different electronic devices may include first areas with different forms.
Take the electronic device shown in fig. 20 as an example. The display screen of the electronic device is a hole-punch screen, and the second camera is disposed in a second area 1001 inside the display screen. The second area 1001 is a rectangular area; with it as the center, its width is extended toward both ends by the width of the minimum touch block 1003 and its height is extended toward both ends by the height of the minimum touch block 1003, so that a first area 1002, also rectangular, can be obtained.
Further, take the electronic device shown in fig. 21 as an example. The display screen of the electronic device is a "notch screen", and the second camera is disposed in a second area 1001 at the edge of the display screen. The second area 1001 is a rectangular area; with it as the center, its width is extended toward both ends by the width of the minimum touch block 1003 and its height is extended, toward the end away from the screen edge, by the height of the minimum touch block 1003, so that a first area 1002, also rectangular, can be obtained.
Further, the electronic apparatus shown in fig. 22 is taken as an example. The display screen of the electronic equipment is a special-shaped screen, and the second camera is arranged in a second area 1001 at the upper right corner of the edge of the display screen. The second region 1001 may have an irregular shape, and the first region 1002 may have a circular shape. The width of any of the first areas 1002 may be greater than or equal to the width of the smallest touch block 1003, and the height of any of the first areas 1002 may be greater than or equal to the height of the smallest touch block 1003.
Further, the electronic apparatus shown in fig. 23 is taken as an example. The second camera is arranged on the front panel of the electronic device and close to the top of the display screen, and the second area 1001 is circular. The first region 1002 may be semicircular, and one side of the straight line of the first region 1002 coincides with the top of the display screen. The first area 1002 may include at least one minimum touch block 1003.
In some embodiments, if the electronic device omits S901, that is, the second area is not determined, the electronic device may display a first frame on the display screen, where a shape of the first frame may be preset, and a size of the first frame may be determined based on the minimum touch block size. The electronic device may receive a second drag operation of the user based on the first frame, and determine a position of the first area based on the second drag operation. That is, the position of the first area is manually set by the user.
For example, for the electronic device shown in fig. 10, if S901 is not executed, the electronic device may execute S903 as shown in fig. 24. The electronic device displays a first frame 1004 on the side of the display screen close to the second area 1001 where the second camera is located; the width of the first frame 1004 is the width of the minimum touch block 1003, and its height is a random value. The electronic device re-determines the position of the first frame 1004 based on the second drag operation received from the user on the first frame 1004, and finally obtains the first area 1002 shown in fig. 10.
In some embodiments, if the electronic device omits S902, that is, the minimum touch block size is not determined, the electronic device may display a second frame on the display screen, where the shape of the second frame may be preset, its position may be determined based on the second area, and its size may be a random or preset size. In still other embodiments, the electronic device may further receive a second zoom operation of the user on the second frame and determine the size of the second frame after zooming as the size of the first area. In other words, the minimum touch block size is replaced with a random or preset size, and the size of the first area is manually set by the user.
In some embodiments, if the electronic device performs neither S901 nor S902, a third frame may be displayed on the display screen, where the shape of the third frame may be preset and its position and size may be random or preset values. The electronic device may receive a second drag operation of the user on the third frame and determine the position of the first area based on it, and may receive a second zoom operation of the user on the third frame and determine the size of the third frame after zooming as the size of the first area. That is, the position and size of the first area are manually set by the user, so that the first area can be determined even when it is difficult to match automatically, improving the reliability of determining the first area.
Through the foregoing S901-S903, the electronic device may determine the first area through automatic adaptation. In practical applications, however, electronic devices vary widely, and the first area obtained through automatic adaptation may not perfectly match the second area of a given device. Therefore, to improve the accuracy of the first area, whenever at least one of the position and the size of the first area is obtained through automatic adaptation, the electronic device may further perform the following optional step S904, allowing the user to manually fine-tune the automatically adapted first area.
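As an illustrative sketch only (all names and the rectangle convention are hypothetical, not the patent's actual implementation), the automatic adaptation of S901-S903 can be read as placing a touch rectangle of at least the minimum touch block size adjacent to the camera (second) region:

```python
def adapt_first_area(second_area, min_touch, screen_w):
    """Place the first area on the side of the screen adjacent to the
    second (camera) area, sized to the minimum touch block.

    second_area: (x, y, width, height) of the camera region
    min_touch:   (width, height) of the minimum touch block
    screen_w:    display width, used to clamp the result on-screen
    """
    sx, sy, sw, sh = second_area
    tw, th = min_touch
    # Center the first area horizontally on the camera region,
    # clamped so it stays fully on the display.
    fx = min(max(sx + sw / 2 - tw / 2, 0), screen_w - tw)
    fy = sy + sh  # directly adjacent to (below) the camera region
    return (fx, fy, tw, th)
```

If S901 or S902 is skipped, the corresponding input would simply be a preset or user-chosen value, as described above.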
S904, the electronic device updates the first area based on the received first update operation.
Wherein the first updating operation is used for updating at least one of the position and the size of the first area.
In some embodiments, the first update operation is a first drag operation on the first region, and the electronic device may update the position of the first region based on the first drag operation.
In some embodiments, the first update operation is a first zoom operation on the first region, and the electronic device may update the size of the first region based on the first zoom operation.
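A minimal sketch of S904 (hypothetical names; one possible reading of the drag/zoom update, not the actual implementation) might dispatch on the operation type:

```python
def apply_update(area, op):
    """Update the first area from a first update operation.

    area: (x, y, width, height)
    op:   a dict describing the user's operation
    """
    x, y, w, h = area
    if op["type"] == "drag":   # first drag operation: move the area
        dx, dy = op["delta"]
        return (x + dx, y + dy, w, h)
    if op["type"] == "zoom":   # first zoom operation: resize the area
        return (x, y, op["width"], op["height"])
    return area                # unknown operation: leave the area unchanged
```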
In some embodiments, if the third region is not the same region as the first region, the electronic device may also determine the third region in a manner similar to S901-S904.
For example, if the electronic device shown in fig. 15 determines a touch sensing area different from the first area as the third area, the result may be as shown in fig. 25. Compared with fig. 15, fig. 25 further includes a third region 1005, which surrounds the first region 1002 and the second region 1001; the electronic device displays a prompt signal in the first region 1002, as shown in fig. 26. In the display manner shown in fig. 15, the first region 1002 may be used both for displaying the prompt signal and for subsequently receiving a relevant operation of the user to control the camera in the open state. In the display manner shown in fig. 26, the first area 1002 may be used only for displaying the prompt signal, while the third area 1005 outside the first area 1002 may be used for subsequently receiving the relevant operation of the user to control the camera in the open state, which facilitates the user's operation.
In some embodiments, if the third area and the first area are not the same area, the size of the first area may not be limited by the size of the minimum touch block.
In the embodiment of the application, the electronic device can determine the second area where the second camera is located and the minimum touch block size of the display screen, and automatically match the second area and the minimum touch block size to obtain the first area, so that the efficiency of determining the first area is improved. In addition, the electronic device can also update and adjust the automatically matched first region based on the first update operation, so that a user can manually adjust the first region obtained through automatic matching, and the accuracy of the first region is improved.
Referring to fig. 12, which is a flowchart illustrating a method for detecting a status of a camera according to an embodiment of the present disclosure, an electronic device may complete detection of the status of the camera through interaction among a prompt component, a camera frame, and a camera driver. It should be noted that the method is not limited to the specific sequence shown in fig. 12 and described below, and it should be understood that in other embodiments, the sequence of some steps in the method may be interchanged according to actual needs, or some steps may be omitted or deleted. The method comprises the following steps:
S1201, the prompt component registers a camera listening event with the camera frame.
Wherein the camera listening event may be used to sense a camera open event from the camera frame.
The electronic device can send an event registration message to the camera framework through the prompting component, wherein the event registration message carries the registered event type. When the camera framework receives the event registration message, an event of the event type may be registered based on the event registration message.
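The registration in S1201 resembles an ordinary publish/subscribe pattern. The sketch below is a hypothetical illustration (the class and method names are invented, not the actual camera framework API): the prompt component registers a callback for an event type, and the framework later dispatches events of that type to it.

```python
class CameraFramework:
    """Toy stand-in for the camera framework's event registry."""

    def __init__(self):
        self._listeners = {}

    def register(self, event_type, callback):
        # The event registration message carries the registered event type.
        self._listeners.setdefault(event_type, []).append(callback)

    def notify(self, event_type, payload):
        # Deliver the event to every component registered for this type.
        for cb in self._listeners.get(event_type, []):
            cb(payload)
```

For instance, the prompt component would register for a "camera open" event and be invoked whenever the framework learns that a camera has been opened.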
S1202, the camera driver sends a camera call interrupt message to the camera framework.
The camera call interrupt message is used for notifying the camera framework that a camera is starting to be opened, and indicates whether the camera being opened is the first camera or the second camera.
In some embodiments, when an application program needs to call a camera, the electronic device may send a camera opening instruction to the camera driver through the application program, and correspondingly, if the electronic device receives the camera opening instruction through the camera driver, the electronic device may send a camera call interrupt message to the camera frame and open the camera.
In some embodiments, the electronic device may also notify the camera framework, through the camera driver, of an application identification of an application requesting opening of the camera.
Wherein the application identification can be used to identify the application. In some embodiments, the application identification may include at least one of an application package name, an application name, a process ID of a corresponding process, and an icon of the application. Of course, in practical applications, the application identifier may also include other information that can be used to identify the application program, and the embodiment of the present application is not particularly limited to the type of the application identifier.
It should be noted that the electronic device may carry the application identifier of the application program that invokes the camera in the call interrupt message through the camera driver, or may separately send the application identifier to the camera framework through the camera driver. The embodiment of the present application does not specifically limit the manner in which the electronic device provides the application identifier to the camera framework through the camera driver.
S1203, the camera framework sends a camera calling message to the prompting component.
The call message may be used to indicate that the camera is called and is in an open state, and correspondingly, if the electronic device detects the call message through the prompt component, it may be determined that the camera is currently in the open state.
In some embodiments, the electronic device can also notify the prompt component of the application identification of the application program that invoked the camera via the camera framework.
It should be noted that the electronic device may carry the application identifier of the application program that invokes the camera in the invocation message through the camera frame, or may separately send the application identifier to the prompt component through the camera frame. The embodiment of the application does not specifically limit the way in which the electronic device notifies the prompt component of the application identifier through the camera frame.
In some embodiments, the electronic device may send a first call message to the prompting component through the camera framework, where the first call message is used to indicate that the first camera is currently called and in an open state, and accordingly, if the electronic device detects the first call message through the prompting component, it may be determined that the first camera is currently called and in an open state.
In some embodiments, the electronic device may send an application identification of the application requesting opening of the first camera to the prompting component through the camera frame.
Similarly, the electronic device may send a second call message to the prompt component through the camera frame, where the second call message is used to indicate that the second camera is currently called and in an open state, and correspondingly, if the electronic device detects the second call message through the prompt component, it may be determined that the second camera is currently called and in an open state. In addition, the electronic device can send an application identification of the application requesting opening of the second camera to the prompt component through the camera framework.
It should be noted that, if the first camera and the second camera are the same camera, the second call message is the first call message.
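One way to picture the framework's dispatch in S1203 (a hedged sketch with invented names, not the actual message format) is a function mapping the opened camera onto first/second call messages, honoring the note that when both cameras are the same camera the second call message is simply the first call message:

```python
def dispatch_call_message(camera_id, first_cam, second_cam, app_id):
    """Build the call message(s) the framework sends to the prompt component.

    camera_id:  the camera that was just opened
    first_cam:  identifier of the first camera
    second_cam: identifier of the second camera
    app_id:     application identifier of the requesting application
    """
    msgs = []
    if camera_id == first_cam:
        msgs.append({"msg": "first_call", "app": app_id})
    # When first and second camera are the same camera, the second call
    # message *is* the first call message, so no extra message is sent.
    if camera_id == second_cam and second_cam != first_cam:
        msgs.append({"msg": "second_call", "app": app_id})
    return msgs
```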
In the embodiment of the application, by registering the camera listening event, the electronic device can sense from the camera framework which camera has been opened and which application program opened it, so that the user can be prompted in a timely manner.
Fig. 14 is a flowchart of a method for turning off a camera of an electronic device according to an embodiment of the present disclosure. It should be noted that the method is not limited by the specific sequence shown in fig. 14 and described below, and it should be understood that in other embodiments, the sequence of some steps in the method may be interchanged according to actual needs, or some steps may be omitted or deleted. It is further noted that fig. 14 provides a method of turning off the first camera upon detection of the first preset control event. The first preset control event comprises a first touch event and a first sensitivity reduction event, and the first camera and the second camera are the same camera, such as a front-facing camera. The method comprises the following steps:
S1401, the electronic device detects a first touch event.
The electronic device can detect the position and the operation type of any touch operation performed by the user on the display screen, and if a first trigger operation is received based on the third area, the electronic device may determine that the first touch event is detected.
S1402, if the electronic device detects the first touch event, the electronic device detects a first sensitivity reduction event.
The electronic device may detect the current sensitivity of the first camera and compare it against the first sensitivity threshold or the second sensitivity threshold. In some embodiments, it may be determined that the first sensitivity-reduction event is detected if the detected sensitivity decreases and the amount of the decrease is greater than or equal to the first sensitivity threshold. In some embodiments, it may be determined that the first sensitivity-reduction event is detected if the detected sensitivity decreases and the value after the decrease is less than the second sensitivity threshold.
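The two alternative criteria can be sketched as follows (hypothetical function and parameter names; a simplified reading of the text, not the actual detection code): criterion (a) checks the drop amount against the first threshold, criterion (b) checks the post-drop value against the second threshold.

```python
def sensitivity_drop_event(prev, curr, first_threshold=None, second_threshold=None):
    """Return True if a first sensitivity-reduction event is detected.

    prev/curr: sensitivity before and after the latest measurement.
    Either threshold may be None if that embodiment is not in use.
    """
    if curr >= prev:
        return False  # sensitivity did not decrease
    # Embodiment (a): the amount of the decrease reaches the first threshold.
    if first_threshold is not None and (prev - curr) >= first_threshold:
        return True
    # Embodiment (b): the value after the decrease is below the second threshold.
    if second_threshold is not None and curr < second_threshold:
        return True
    return False
```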
S1403, if the electronic device detects the first sensitivity drop event, the electronic device turns off the first camera.
Since the first touch event and the first sensitivity down event have been detected, the first camera may be turned off.
In this embodiment of the application, when the first camera is in the open state, the first touch event and the first sensitivity-reduction event may be detected in sequence, and when both are detected, the first camera is turned off. As described above, the first touch operation may be a sliding operation along the direction from the third area to the second area, with at least part of the sliding track located in the third area, and the first camera and the second camera are the same camera located in the second area. The user may therefore start sliding from the third area toward the second area to trigger the first touch event, gradually approaching and blocking the first camera during the slide so that its sensitivity decreases and the first sensitivity-reduction event is triggered. In time order, the electronic device first displays the first prompt signal in the first area upon detecting that the first camera is turned on (a first condition), and then sequentially detects the first touch event (a second condition) and the first sensitivity-reduction event (a third condition). Only when these three conditions are satisfied in sequence does the electronic device turn off the first camera. On the one hand, this matches the operation logic of the user first perceiving that the first camera is on and then turning it off; on the other hand, it reduces the possibility of accidental operation.
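The ordered three-condition check behaves like a small state machine. The following sketch is illustrative only (the class, event names, and ignore-out-of-order policy are assumptions, not the patent's implementation):

```python
class CloseCameraFlow:
    """Advance through the three conditions in order; out-of-order
    events are ignored, which reduces accidental closing."""

    ORDER = ("prompt_shown", "touch_event", "sensitivity_drop")

    def __init__(self):
        self._stage = 0

    def observe(self, event):
        # Only the next expected condition advances the flow.
        if self._stage < len(self.ORDER) and event == self.ORDER[self._stage]:
            self._stage += 1
        # True once all three conditions were met in sequence => close camera.
        return self._stage == len(self.ORDER)
```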
It should be noted that, in the embodiment of the present application, the case that the electronic device turns off the first camera is described only in a sequence of detecting the first touch event and then detecting the first sensitivity drop event, and the detection sequence of detecting the preset control event is not limited.
In addition, in the foregoing embodiment, the first area is a partial area in the display screen, and the electronic device may display a prompt signal in a manner of drawing a User Interface (UI), where the prompt signal is a result of overlapping pixels in a plurality of different display states in the first area. In other embodiments, the first area may not be a partial area of the display screen, but may include at least one light assembly, and the light assembly may include a flashlight or a light strip module. In the subsequent process, the electronic device can determine whether to close the camera according to the sensitivity change of the camera without depending on the gesture interaction of the user.
In some embodiments, if the light component is a light strip module, the light strip module may be a ring-shaped light strip module surrounding the second area.
In some embodiments, the first optical signal may be an optical signal output by the electronic device at a first power by controlling a flash or a light strip module in the first region. Similarly, the third light signal may be a light signal output by the electronic device at the second power by controlling a flashlight or light strip module in the first area.
The first power and the second power may be the rated power of the light assembly, or may be a preset first power threshold, so that when the light assembly outputs a light signal at the first power, the light assembly can be considered to be in a low-power output mode. This allows the user to distinguish the first light signal and the third light signal from light emitted by the light assembly under other conditions, improving the prompt effect.
It should be noted that, if the first camera and the second camera are the same camera, the first power and the second power are the same. Of course, even if the first camera and the second camera are not the same camera, the first power and the second power may be the same.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Based on the same inventive concept, the embodiment of the application also provides the electronic equipment. Fig. 27 is a schematic structural diagram of an electronic device 2700 according to an embodiment of the present application, and as shown in fig. 27, the electronic device according to the embodiment includes: a memory 2710 and a processor 2720, the memory 2710 being configured to store computer programs; processor 2720 is configured to perform the methods of the above-described method embodiments when the computer program is invoked.
The electronic device provided by this embodiment may perform the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Based on the same inventive concept, the embodiment of the application also provides a chip system. The chip system comprises a processor coupled to a memory, the processor executing a computer program stored in the memory to implement the method of the above-described method embodiments.
The chip system can be a single chip or a chip module consisting of a plurality of chips.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the method described in the above method embodiments.
The embodiment of the present application further provides a computer program product which, when run on an electronic device, causes the electronic device to implement the method described in the foregoing method embodiments.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable storage medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunication signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (20)

1. A method for prompting camera state is characterized by comprising the following steps:
detecting the state of the first camera;
and if the first camera is detected to be in the open state, displaying a first prompt signal in a first area of a display screen, wherein the first area is adjacent to a second area where a second camera is located, and the first prompt signal is used for prompting that the first camera is in the open state at present.
2. The method of claim 1, wherein the first cue signal comprises a first light signal.
3. The method according to claim 1 or 2, wherein the first prompt signal comprises an application identifier of a first application, the first application being an application requesting to open the first camera.
4. A method according to any of claims 1-3, wherein the first region surrounds the second region.
5. The method according to any one of claims 1-4, further comprising, prior to said displaying the first prompt signal in the first area of the display screen:
determining the second region based on preset screen cutting region parameters;
determining the size of the minimum touch block of the display screen;
determining the first area based on the second area and the minimum touch block size.
6. The method of claim 5, further comprising:
updating the position of the first area based on a first dragging operation on the first area; and/or,
updating the size of the first region based on a first scaling operation on the first region.
7. The method of any of claims 1-6, further comprising:
if a first trigger operation is received based on a third area, the first camera is closed, and the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
8. The method of any of claims 1-6, further comprising:
if a first trigger operation is received based on a third area, and the first camera detects that the sensitivity is reduced and the reduced numerical variation is greater than or equal to a first sensitivity threshold, closing the first camera, wherein the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
9. The method of any of claims 1-6, further comprising:
if a first trigger operation is received based on a third area, and the first camera detects that the sensitivity is reduced and the reduced value is smaller than a second sensitivity threshold value, closing the first camera, wherein the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
10. The method according to any one of claims 7 to 9, wherein the first triggering operation is a sliding operation in a direction from the third area toward the second area, at least part of a sliding trajectory of the sliding operation being located in the third area.
11. The method of any of claims 1-10, further comprising:
if a second trigger operation is received based on a third area, the first camera is determined to be kept started, and the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
12. The method according to any one of claims 1-11, wherein the first prompt signal is displayed for a first predetermined duration.
13. The method of claim 12, further comprising:
if the first trigger operation is not received based on a third area within the first preset duration, the first camera is kept started, and the third area is the first area, or the third area is a touch sensing area adjacent to the first area.
14. The method according to any one of claims 7-11 and 13, wherein the third area is a touch sensing area adjacent to the first area, and the third area surrounds the first area.
15. The method of any of claims 1-14, further comprising:
and if the first camera is determined to be kept on, updating the first optical signal displayed in the first area into a second optical signal.
16. The method according to any one of claims 1 to 15, wherein displaying a first prompt signal in a first area of a display screen if it is detected that the first camera is in an open state comprises:
and if a first calling message is detected, displaying the first prompt signal in the first area of the display screen, wherein the first calling message is used for indicating that the first camera is called currently and is in an open state.
17. The method of any of claims 1-16, wherein the first camera and the second camera are the same camera.
18. The method according to any one of claims 1 to 17, wherein before the step of displaying the first prompt signal in the first area of the display screen if the first camera is detected to be in the open state, the method further comprises:
and receiving a first setting operation, wherein the first setting operation is used for opening a camera prompting function.
19. An electronic device, comprising: a memory for storing a computer program and a processor; the processor is adapted to perform the method of any of claims 1-18 when the computer program is invoked.
20. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-18.
CN202110859099.0A 2021-07-28 2021-07-28 Method for prompting camera state and electronic equipment Pending CN115695599A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110859099.0A CN115695599A (en) 2021-07-28 2021-07-28 Method for prompting camera state and electronic equipment
PCT/CN2022/108373 WO2023005999A1 (en) 2021-07-28 2022-07-27 Method for prompting state of camera, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110859099.0A CN115695599A (en) 2021-07-28 2021-07-28 Method for prompting camera state and electronic equipment

Publications (1)

Publication Number Publication Date
CN115695599A true CN115695599A (en) 2023-02-03

Family

ID=85057741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110859099.0A Pending CN115695599A (en) 2021-07-28 2021-07-28 Method for prompting camera state and electronic equipment

Country Status (2)

Country Link
CN (1) CN115695599A (en)
WO (1) WO2023005999A1 (en)


Also Published As

Publication number Publication date
WO2023005999A1 (en) 2023-02-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination