CN116709018A - Zoom bar segmentation method and electronic equipment

Publication number: CN116709018A
Application number: CN202211260332.4A
Authority: CN (China)
Prior art keywords: zoom, preset, target, variable focus, bar
Other languages: Chinese (zh)
Other versions: CN116709018B (granted publication)
Inventors: 牛思月, 黄雨菲, 易婕
Assignee: Honor Device Co Ltd
Legal status: Granted; Active

Abstract

The application provides a zoom bar segmentation method and an electronic device. The method can be applied to an electronic device including a camera and includes: determining that the shooting mode of the camera application is a target mode; determining the target zoom factor corresponding to the target mode according to the target mode and a preset correspondence between shooting modes and zoom factors; determining an initial zoom bar according to the target zoom factor; and segmenting the initial zoom bar according to a preset segmentation rule to obtain a plurality of segments of different length proportions, each subdivided according to the division ratio corresponding to the preset segmentation rule.

Description

Zoom bar segmentation method and electronic equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a zoom bar segmentation method and an electronic device.
Background
Currently, electronic devices such as mobile phones are equipped with cameras, and users can shoot images through the cameras. For example, users may photograph people, record games, or shoot distant scenery.
In some scenes, users want to shoot images of distant subjects, or to create photographic works by shooting distant scenery, which requires adjusting the zoom bar of the camera application on the electronic device to zoom the lens and achieve the shooting purpose.
However, the way the zoom bar in a camera application is divided between the individual zoom points is inconvenient for the user to operate.
Disclosure of Invention
The zoom bar segmentation method and the electronic device provided by the application address the problems that, during shooting with an electronic device, the user needs to adjust parameters manually, the operation is complex, a certain amount of shooting experience is required, and the threshold for photographic creation is therefore high.
In order to achieve the above purpose, the application adopts the following technical scheme:
In a first aspect, the present application provides a zoom bar segmentation method applicable to an electronic device including a camera. The method includes: determining that the shooting mode of the camera application is a target mode; determining the target zoom factor corresponding to the target mode according to the target mode and a preset correspondence between shooting modes and zoom factors; determining an initial zoom bar according to the target zoom factor; and segmenting the initial zoom bar according to a preset segmentation rule to obtain a plurality of segments of different length proportions, each subdivided according to the division ratio corresponding to the preset segmentation rule.
The following is a detailed description of several cases:
The target zoom factor may include a target maximum zoom factor and a target minimum zoom factor. The target maximum zoom factor corresponding to the target mode may be determined according to the target mode and a preset correspondence between shooting modes and maximum zoom factors, and the target minimum zoom factor according to a preset correspondence between shooting modes and minimum zoom factors. The length of the initial zoom bar is then determined by comparing the target maximum zoom factor with a first preset factor.
Specifically, if the target maximum zoom factor is greater than the first preset factor, the initial zoom bar is determined to be the span from the target minimum zoom point, which corresponds to the target minimum zoom factor, to a first preset zoom point.
If the target maximum zoom factor is less than or equal to the first preset factor, the initial zoom bar may be determined to be the span from the target minimum zoom point to a second preset zoom point. The initial zoom bar is then segmented.
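For illustration, the following is a minimal Kotlin sketch of the branch described above. The identifiers and the ZoomRange type are assumptions made for exposition; the application does not name them or fix their values.

```kotlin
// Minimal sketch of the initial-zoom-bar selection described above.
// All names and values are illustrative assumptions.
data class ZoomRange(val minZoom: Float, val maxZoom: Float)

fun initialZoomBar(
    targetMinZoom: Float,      // target minimum zoom factor of the mode
    targetMaxZoom: Float,      // target maximum zoom factor of the mode
    firstPresetFactor: Float,  // threshold compared against targetMaxZoom
    firstPresetZoom: Float,    // upper end when targetMaxZoom exceeds the threshold
    secondPresetZoom: Float    // upper end otherwise
): ZoomRange =
    if (targetMaxZoom > firstPresetFactor)
        ZoomRange(targetMinZoom, firstPresetZoom)   // target min zoom point to first preset zoom point
    else
        ZoomRange(targetMinZoom, secondPresetZoom)  // target min zoom point to second preset zoom point
```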
By comparing the target maximum zoom factor with the first preset factor, the initial zoom bar may be divided according to different preset rules and division ratios.
In some implementations, when the maximum zoom factor corresponding to the target mode is greater than the first preset factor, the initial zoom bar may be divided into a plurality of segments with different length ratios according to a preset first division rule, and each segment may then be divided according to the division ratio corresponding to the first division rule, to obtain a first zoom bar.
Further, the zoom point determined according to the user's operation, that is, the first zoom point, may be displayed together with its corresponding zoom magnification on the first zoom bar, so that the user knows the current zoom magnification.
In some implementations, when the maximum zoom factor corresponding to the target mode is greater than the first preset factor, an intermediate zoom point may also be displayed on the initial zoom bar, and the first zoom factor may be compared with a second preset factor to determine whether the intermediate zoom point is a first-multiple zoom point or a second-multiple zoom point.
Specifically, if the first zoom factor is less than the second preset factor, the intermediate zoom point may be determined to be the first-multiple zoom point; if the first zoom factor is greater than or equal to the second preset factor, the intermediate zoom point may be determined to be the second-multiple zoom point. After the intermediate zoom point is determined, the span of the initial zoom bar between the target minimum zoom point and the intermediate zoom point may be divided according to a preset first ratio, and the span between the intermediate zoom point and the first preset zoom point according to a preset second ratio, to obtain a second zoom bar.
In some implementations, when the target maximum zoom factor is less than or equal to the first preset factor, the initial zoom bar may be divided into a plurality of segments with different length ratios according to a preset second division rule, and each segment may then be equally divided according to the division ratio corresponding to the second division rule, to obtain a third zoom bar.
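As a concrete illustration of these division rules, the Kotlin sketch below models a rule as a list of segments, each pairing a share of the bar's length with a zoom sub-range and a number of equal subdivisions. The Segment type, all names, and the example weights, ranges, and tick counts are assumptions; the application leaves the concrete rules and ratios as presets.

```kotlin
// A segment couples a share of the bar's length with a zoom sub-range and a
// number of equal subdivisions, so a commonly used zoom range can occupy more
// of the bar and carry more ticks than a rarely used one.
data class Segment(
    val lengthWeight: Float,  // relative length of this segment on the bar
    val zoomFrom: Float,      // zoom factor at the start of the segment
    val zoomTo: Float,        // zoom factor at the end of the segment
    val ticks: Int            // equal subdivisions inside the segment
)

// Returns (position on the bar in 0..1, zoom factor) for every tick.
fun splitZoomBar(segments: List<Segment>): List<Pair<Float, Float>> {
    val total = segments.sumOf { it.lengthWeight.toDouble() }.toFloat()
    val out = mutableListOf(0f to segments.first().zoomFrom)
    var pos = 0f
    for (s in segments) {
        val segLen = s.lengthWeight / total
        for (k in 1..s.ticks) {
            out += (pos + segLen * k / s.ticks) to
                   (s.zoomFrom + (s.zoomTo - s.zoomFrom) * k / s.ticks)
        }
        pos += segLen
    }
    return out
}

// Assumed example: 1x-5x occupies 70% of the bar in 8 fine steps, while
// 5x-10x occupies the remaining 30% in 2 coarse steps.
val firstZoomBar = splitZoomBar(listOf(
    Segment(lengthWeight = 7f, zoomFrom = 1f, zoomTo = 5f, ticks = 8),
    Segment(lengthWeight = 3f, zoomFrom = 5f, zoomTo = 10f, ticks = 2)
))
```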
In a second aspect, the present application provides an electronic device comprising: a memory and a processor;
one or more computer programs are stored in the memory, the one or more computer programs comprising instructions; the instructions, when executed by the processor, cause the electronic device to perform the method of any of the first aspects.
In a third aspect, the present application provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of the first aspects.
In a fourth aspect, the application provides a computer program product comprising instructions; the instructions, when executed by an electronic device, cause the electronic device to perform the method of any of the first aspects.
According to the technical scheme, the application has the following beneficial effects:
The application provides a zoom bar segmentation method applicable to an electronic device including a camera. The method compares the maximum zoom factor with the first preset factor, determines the preset segmentation rule for each case, and segments the initial zoom bar according to the determined rule to obtain a plurality of segments of different length proportions, each subdivided according to the corresponding division ratio. In this way, the zoom region a user commonly uses can be segmented finely while rarely used regions are segmented coarsely, which meets user needs and makes operation convenient.
It should be appreciated that the description of technical features, aspects, benefits or similar language in the present application does not imply that all of the features and advantages may be realized with any single embodiment. Conversely, it should be understood that the description of features or advantages is intended to include, in at least one embodiment, the particular features, aspects, or advantages. Therefore, the description of technical features, technical solutions or advantageous effects in this specification does not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantageous effects described in the present embodiment may also be combined in any appropriate manner. Those of skill in the art will appreciate that an embodiment may be implemented without one or more particular features, aspects, or benefits of a particular embodiment. In other embodiments, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
Fig. 1 is a schematic diagram of the composition of an electronic device according to an embodiment of the present application;
Fig. 2 is a diagram of the software structure of an electronic device according to an embodiment of the present application;
Fig. 3A is a schematic diagram of a user opening a camera application according to an embodiment of the present application;
Fig. 3B is a schematic diagram of a camera shooting preview interface according to an embodiment of the present application;
Fig. 3C is a schematic diagram of zoom points displayed in the shooting preview interface of a camera application according to an embodiment of the present application;
Fig. 3D is a schematic diagram of a zoom bar displayed in the shooting preview interface of a camera application according to an embodiment of the present application;
Fig. 3E is a schematic diagram of a countdown during photographing in a camera application according to an embodiment of the present application;
Fig. 3F is a schematic diagram of a shooting mode according to an embodiment of the present application;
Fig. 3G is a schematic diagram of switching shooting modes in a camera application according to an embodiment of the present application;
Fig. 4A is a schematic diagram of a camera shooting preview scene according to an embodiment of the present application;
Fig. 4B is a schematic diagram of segmenting an initial zoom bar according to an embodiment of the present application;
Fig. 4C is a schematic diagram of a detailed segmentation of an initial zoom bar according to an embodiment of the present application;
Fig. 4D is a schematic diagram of displaying a first zoom point on a first zoom bar according to an embodiment of the present application;
Fig. 4E is a schematic diagram of another segmentation of an initial zoom bar according to an embodiment of the present application;
Fig. 4F is a schematic diagram of determining the zoom point positions on an initial zoom bar according to an embodiment of the present application;
Fig. 4G is a schematic diagram of another detailed segmentation of an initial zoom bar according to an embodiment of the present application;
Fig. 4H is a schematic diagram of a further detailed segmentation of an initial zoom bar according to an embodiment of the present application;
Fig. 4I is a schematic diagram of yet another segmentation of an initial zoom bar according to an embodiment of the present application;
Fig. 5 is a flowchart of a zoom bar segmentation method according to an embodiment of the present application.
Detailed Description
The terms "first", "second", "third", and the like in the description, the claims, and the drawings are used to distinguish between different objects, not to imply a particular order.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or given as an example should not be taken as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
For clarity and conciseness in describing the following embodiments, a brief introduction to the related art will be given first:
The shooting modes of a mobile phone can be divided into several types, such as portrait mode, night mode, ordinary photo mode, and video mode, and users have different image magnification needs in different modes. A zoom bar can therefore be provided in the camera application of the electronic device for the user to select the magnification. As shown in fig. 3D, the zoom bar is composed of a plurality of zoom points, and the user can tap the zoom control 309 or drag the zoom component 309A within the zoom control 309 to select a magnification that meets the need.
Currently, the zoom bar in camera applications is divided equally between zoom points, which is inconvenient for the user to operate.
In view of this, the zoom bar segmentation method and electronic device provided by the application determine that the shooting mode of the camera application is a target mode, determine the target zoom factor corresponding to the target mode according to the target mode and a preset correspondence between shooting modes and zoom factors, determine an initial zoom bar according to the target zoom factor, and then segment the initial zoom bar according to a preset segmentation rule to obtain a plurality of segments of different length proportions, each subdivided according to the corresponding division ratio. In this way, the zoom regions a user commonly uses are segmented finely, while rarely used regions are segmented coarsely, meeting user needs and making operation convenient.
In some embodiments, the electronic device may be a mobile phone, tablet, desktop, laptop, notebook, ultra mobile personal computer (Ultra-mobile Personal Computer, UMPC), handheld computer, netbook, personal digital assistant (Personal Digital Assistant, PDA), wearable electronic device, smart watch, etc., and the application is not limited in particular to the specific form of the above-described electronic device. In this embodiment, the structure of the electronic device may be shown in fig. 1, and fig. 1 is a schematic diagram of the composition of the electronic device according to the embodiment of the present application.
As shown in fig. 1, the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the configuration illustrated in this embodiment does not constitute a specific limitation on the electronic apparatus. In other embodiments, the electronic device may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can be a neural center and a command center of the electronic device. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc., respectively, through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing function of the electronic device. The processor 110 and the display screen 194 communicate via a DSI interface to implement the display functionality of the electronic device.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge an electronic device, or may be used to transfer data between the electronic device and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied on an electronic device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. for application on an electronic device. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, the antenna 1 and the mobile communication module 150 of the electronic device are coupled, and the antenna 2 and the wireless communication module 160 are coupled, so that the electronic device can communicate with the network and other devices through wireless communication technology. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device implements display functions via a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 194, N being a positive integer greater than 1.
A series of graphical user interfaces (GUIs) may be displayed on the display 194 of the electronic device, each of which is a home screen of the electronic device. Generally, the size of the display 194 of an electronic device is fixed, and only limited controls can be displayed on it. A control is a GUI element: a software component contained within an application program that controls all data processed by the application program and the interactive operations on that data. The user can interact with a control by direct manipulation to read or edit information about the application program. In general, controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
The electronic device may implement shooting functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter opens and the ISP can perform algorithm optimization on the noise, brightness, and skin color of the image. The ISP can also optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; besides digital image signals, it can process other digital signals. For example, when the electronic device selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs. In this way, the electronic device may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of electronic devices can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device may listen to music, or to hands-free conversations, through speaker 170A.
A receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the electronic device answers a call or plays a voice message, the receiver 170B can be placed close to the ear to hear the voice.
Microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak near the microphone 170C to input a sound signal into it. The electronic device may be provided with at least one microphone 170C. In other embodiments, the electronic device may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device may be provided with three, four, or more microphones 170C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording functions, etc.
The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are various types of pressure sensor 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device detects the intensity of the touch operation through the pressure sensor 180A. The electronic device may also calculate the location of the touch from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the Messages application icon, an instruction to view the message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the Messages application icon, an instruction to create a new message is executed.
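The pressure-dependent dispatch in that example can be sketched as follows in Kotlin; the threshold comparison follows the text above, while the function names are assumptions for illustration.

```kotlin
// Sketch of the pressure-dependent dispatch described above; the names are
// illustrative assumptions, only the threshold logic comes from the text.
fun onMessageIconTouch(intensity: Float, firstPressureThreshold: Float) {
    if (intensity < firstPressureThreshold) {
        viewShortMessage()    // lighter press: view the message
    } else {
        createShortMessage()  // firmer press: create a new message
    }
}

fun viewShortMessage() { /* open the message for reading */ }
fun createShortMessage() { /* open the new-message editor */ }
```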
The gyro sensor 180B may be used to determine a motion gesture of the electronic device. In some embodiments, the angular velocity of the electronic device about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device through the reverse motion, thereby realizing anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronics calculate altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device may use the magnetic sensor 180D to detect the opening and closing of a flip holster. In some embodiments, when the electronic device is a flip phone, it may detect the opening and closing of the flip according to the magnetic sensor 180D. Features such as automatic unlocking upon flip-open can then be set according to the detected open or closed state of the holster or flip.
The acceleration sensor 180E may detect the magnitude of the electronic device's acceleration in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device is stationary. It can also be used to recognize the posture of the electronic device, for applications such as landscape/portrait switching and pedometers.
The distance sensor 180F is used to measure distance. The electronic device may measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device can use the distance sensor 180F to measure distance and achieve fast focusing.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device emits infrared light outwards through the light emitting diode. The electronic device uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that an object is in the vicinity of the electronic device. When insufficient reflected light is detected, the electronic device may determine that there is no object in the vicinity of the electronic device. The electronic device may detect that the user holds the electronic device near the ear to talk using the proximity light sensor 180G, so as to automatically extinguish the screen for power saving purposes. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device can adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect if the electronic device is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic equipment can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access the application lock, fingerprint photographing, fingerprint incoming call answering and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device heats the battery 142 to prevent a low temperature from causing an abnormal shutdown. In other embodiments, when the temperature is below a further threshold, the electronic device boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194; together they form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device at a location different from that of the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset to form a bone conduction headset. The audio module 170 may parse out the voice signal from the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, to implement a voice function. The application processor may parse heart rate information from the blood pressure pulsation signal acquired by the bone conduction sensor 180M, to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into or removed from the SIM card interface 195 to make contact with or separate from the electronic device. The electronic device may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device employs an eSIM, i.e., an embedded SIM card, which can be embedded in the electronic device and cannot be separated from it.
In addition, an operating system runs on these components, such as the iOS operating system developed by Apple Inc., the Android open-source operating system developed by Google, or the Windows operating system developed by Microsoft. Applications can be installed on the operating system.
The operating system of the electronic device may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, an Android system with a layered architecture is taken as an example, and the software structure of the electronic equipment is illustrated.
Fig. 2 is a software configuration block diagram of an electronic device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the framework layer, Android Runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages. As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The framework layer includes some predefined functions. As shown in fig. 2, the framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device, such as management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without user interaction, for example notifications that a download is complete, message alerts, and so on. The notification manager may also present notifications in the top system status bar in the form of a chart or scrolling text, such as notifications for applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light blinks.
Android Runtime includes core libraries and a virtual machine. Android Runtime is responsible for the scheduling and management of the Android system.
The core libraries consist of two parts: one part comprises the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of common audio and video formats, as well as still image files, etc. The media libraries may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
Although the Android system is taken as an example for explanation, the basic principle of the embodiment of the application is also applicable to electronic devices based on iOS, windows and other operating systems.
The electronic device provided by the embodiment of the application takes pictures through the camera. For example, the electronic device may take a photograph through a single camera path or through multiple camera paths, which is not limited here. Taking a photograph with a single path may mean the electronic device shoots with a single camera 193; taking a photograph with multiple paths may mean the electronic device shoots with multiple cameras 193 (for example two cameras, which may be a main camera and a portrait camera, or a wide-angle camera and a telephoto camera, etc.). The embodiment of the application does not specifically limit single-path or multi-path shooting; those skilled in the art can configure the electronic device to select a single camera or multiple cameras according to the actual scene.
When a user performs a touch operation on the touch sensor 180K, the touch sensor 180K may acquire the touch operation of the user and report the touch operation to the processor 110, and after receiving the touch operation transmitted by the touch sensor 180K, the processor 110 may respond to the touch operation and start an application corresponding to the touch operation.
As shown in fig. 3A, fig. 3A shows a schematic diagram of a user opening a camera application, for example, a user touch operation is to open a camera, the touch sensor 180K may receive a touch operation of the user on the camera icon 301 and report the touch operation of the camera icon 301 to the processor 110, after receiving the touch operation of the camera icon 301, the processor 110 may start an application corresponding to the camera icon 301 (may be simply referred to as a camera application) in response to the touch operation, and display a shooting preview interface of the camera on the display screen 194.
In addition, in the embodiment of the present application, the electronic device may also enable the camera application to be started in other manners, and display a shooting preview interface of the camera on the display screen 194. For example, the user may send a voice command to turn on the camera, the electronic device may report the received voice command to turn on the camera sent by the user to the processor 110 through the microphone 170C, and after receiving the voice command to turn on the camera sent by the user, the processor 110 may start the camera application in response to the voice command of the user, and display a shooting preview interface of the camera application on the display screen 194. Still alternatively, the user may store a shortcut instruction to open the camera in the electronic device in advance, for example, the shortcut instruction to open the camera may be set as an operation to touch the screen lower end of the electronic device to slide upward. When the user triggers a shortcut command for opening the camera, the touch sensor 180K may receive the shortcut command for opening the camera from the user and report the shortcut command to the processor 110, and after receiving the shortcut command, the processor 110 may start an application corresponding to the camera icon 301 in response to the shortcut command, and display a shooting preview interface of the camera on the display screen 194.
In some embodiments, the camera application may automatically enter the photographing mode after the processor 110 starts the camera application. As shown in fig. 3B, various functional controls, such as a mode selection control 302, a camera setup control 303, and the like, may be included on the capture preview interface of the camera application. The touch sensor 180K may receive a touch operation of the user on the functional control, and report the touch operation to the processor 110, so that the processor 110 controls the camera application to enter the corresponding interface in response to the touch operation. In some examples, the user may trigger the camera application to enter a different shooting mode, such as a night view mode, a tele mode, etc., and a rainy mode, a snowy mode, etc., via the mode selection control 302. The functions such as a filter, watermark, panorama and the like can be started through the camera setting control 303. For example, the user may set an initial mode of the camera application to a photographing mode through the camera setting control 303, and when the user reopens the camera application again, the camera application automatically enters the photographing mode. The initial mode of the camera application may also be set to a record mode by the camera setup control 303, and the camera application automatically enters the record mode when the user reopens the camera application again.
In some embodiments, after the camera application automatically enters the photographing mode, the camera application may display zoom points in the photographing preview interface, as in the zoom point diagram of the photographing preview interface shown in fig. 3C. Each zoom point represents a zoom multiple: for example, clicking the numeral 1 in the camera application displays 1× in the photographing preview interface, representing zoom by a factor of 1; clicking the numeral 5 displays 5× in the photographing preview interface, representing zoom by a factor of 5; and so on. The touch sensor 180K may acquire the user-triggered operation of clicking a zoom point and report it to the processor 110, and the processor 110 may control the camera application to display the corresponding zoom point according to that operation, thereby implementing zooming of the camera application. In some embodiments, as shown in fig. 3D, which schematically shows the zoom bar displayed in the shooting preview interface of the camera application, the touch sensor 180K may also acquire the operation of the user long-pressing the zoom control 309 or pulling the zoom component 309A in the zoom control 309, and report it to the processor 110, and the processor 110 may control the camera application to display a zoom bar composed of zoom points. Meanwhile, the camera application can be controlled to display the adjusted zoom multiple according to the user's pulling of the zoom component 309A in the zoom control 309: for example, pulling the zoom component 309A can adjust the zoom multiple from 1 to 1.6 times, or from 1.6 to 2 times, and so on, so that the user knows the current zoom multiple of the camera application. The camera application may also automatically enter a video mode, in which the user may likewise adjust the zoom multiple of the camera application.
In some embodiments, after determining the zoom multiple of the camera application, the user may trigger a photo-taking operation by touching the photographing key 304 in the photographing preview interface of the camera application. When the user triggers the photo-taking operation, the touch sensor 180K may acquire the operation and report it to the processor 110, and the processor 110 may control the camera application to take a photo and may acquire the photographing time and photographing place of the operation triggered by the user. The photographing time may be the time at which the user touches the photographing key 304 to trigger the photographing operation, and the photographing place may be the geographical position of the electronic device at that moment. Further, the user may also set a delayed photographing time for the camera application through the camera setting control 303; for example, the delay may be set to 3 seconds, 5 seconds, 10 seconds, and so on, where the delayed photographing time means that the photo is taken a certain time after the photographing operation of the camera application is triggered.
In the following, the delay time set to 3 seconds is taken as an example. First, the user can set the delayed photographing time of the electronic device to 3 seconds through the camera setting control 303; the electronic device can acquire this setting operation on its screen through the touch sensor 180K and transmit it to the processor 110, and the processor 110 can store the 3-second delayed photographing time set by the user. When the user triggers the photo-taking operation, the touch sensor 180K may acquire the operation and report it to the processor 110, and the processor 110 may control the camera application to count down. As shown in fig. 3E, which shows a countdown diagram of the camera application during photographing, the processor 110 may control the countdown number to be displayed on the screen of the electronic device so that the user can intuitively follow the countdown. When the processor 110 determines that the 3-second countdown is over, that is, that the countdown meets the delay time set by the user, the processor 110 may control the camera application to take the photo. By setting the delay time in this way, the problem that no one else is available to take the photo in some scenes can be solved, where no one else refers to a person who does not need to appear in the photo.
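For ease of understanding, the delayed capture flow described above can be sketched in a few lines of Kotlin using the standard Android Handler API; the function name takePhoto, the helper name scheduleDelayedCapture, and the fixed 3000ms default are illustrative assumptions and are not named by this embodiment:

import android.os.Handler
import android.os.Looper

// Schedules the actual capture after the user-configured delay.
// In the full flow the processor 110 would also update the on-screen
// countdown once per second; that part is omitted here for brevity.
fun scheduleDelayedCapture(delayMillis: Long = 3000L, takePhoto: () -> Unit) {
    Handler(Looper.getMainLooper()).postDelayed({ takePhoto() }, delayMillis)
}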
In some embodiments, in addition to triggering the photo-taking operation by touching the photographing key 304, the camera application may trigger it by gesture photographing, voice photographing, and the like. Gesture photographing means that the camera application triggers the photo-taking operation by recognizing a photographing gesture set by the user, and voice photographing means that the camera application triggers it by recognizing a sound set by the user. Specifically, as shown in the photographing mode setting diagram of fig. 3F, after the user clicks the camera setting control 303, the photographing mode control 305 may be displayed in the camera application, and clicking the photographing mode control 305 may display gesture photographing and voice photographing. Further, the user may set, as needed, the photographing gesture and/or the user sound for triggering the photographing operation. Taking gesture photographing with a specific gesture set by the user as an example: the user may set a specific gesture in the camera application to trigger the photographing operation; when the user needs to take a photo, the user makes the specific gesture, the front or rear camera in the camera 193 transmits the captured light through the lens to the camera's photosensitive element, the photosensitive element converts the light signal into an electrical signal and transmits it to the ISP for processing, and the ISP transmits the processed data to the processor 110, so that the processor 110 can determine that the user has triggered the photographing operation and control the camera application to take the photo.
Taking voice photographing with a user-set recognition word as an example: when the user needs to take a photo, the user can speak the photographing word; the microphone 170C can acquire the user's voice, which is transmitted to the processor 110 after signal processing, and the processor 110 can determine that the user has triggered the photographing operation and control the camera application to take the photo.
In order to prevent the function keys on the screen from blocking the scene image displayed by the camera application during photographing, which degrades the user experience, the touch sensor 180K can acquire an operation triggered by the user and report it to the processor 110, and the processor 110 can hide the mode selection control 302 of the camera application during photographing according to the user's operation, as shown in fig. 4A. In some examples, while the camera displays the preview scene image, the user's operation may be a touch on the screen; the touch sensor 180K acquires the touch operation and reports it to the processor 110, and the processor 110 hides the mode selection control 302 during photographing so that the image is displayed unobstructed on the screen of the electronic device, improving the user experience.
Currently, the zoom bar in a camera application is divided equally between the individual zoom points, so a user cannot quickly adjust the zoom multiple to the value the user needs, which is inconvenient to operate.
In order to make the technical scheme of the application clearer and easier to understand, the zoom bar segmentation method provided by the embodiment of the application is described below with reference to the embodiments and the corresponding drawings. The method can be implemented on an electronic device with the structure shown in fig. 1, where the electronic device includes a camera, a microphone, and a display screen. See the flowchart of the zoom bar segmentation method shown in fig. 5. As shown in fig. 5, the zoom bar segmentation method provided by the embodiment of the present application may include:
S501: The camera application is started.
In some examples, when a user needs to use a camera application of the electronic device (e.g., take a photograph or record a video, etc.), the user may click on the camera icon 301 shown in fig. 3A, and after the electronic device detects that the user clicks on the camera icon 301, the electronic device starts the camera application. In other examples, the user may cause the electronic device to launch the camera application in other ways as well. For example, by voice commands or other preset gestures, etc., embodiments of the present application are not limited in this regard.
S502: a photographing mode of the camera application is determined.
After the electronic device starts the camera application, it can enter a preset photographing mode. Taking the normal photographing mode as an example, the zoom multiple of the normal photographing mode can default to 1, and a video stream can be acquired through the camera to display a preview scene image on the photographing preview interface. In some embodiments, the camera of the electronic device may be a single camera, and the electronic device may default to a single-path photographing mode, that is, the electronic device obtains a single video stream through the single camera; in other embodiments, the electronic device may have multiple cameras, and the default photographing mode is a multi-path photographing mode, in which multiple video streams are acquired through the multiple cameras. For example, the electronic device may obtain a first video stream through a first camera and a second video stream through a second camera, and of course may also obtain a third video stream through a third camera. Taking a multi-path configuration with two cameras as an example, the first camera may be a main camera, and the second camera may be a wide-angle camera or a camera with another function, which is not limited in the embodiment of the application.
In some embodiments, the electronic device may further receive a switching operation triggered by the user, where the switching operation is used to instruct the electronic device to switch the photographing mode of the camera. For example, when the camera is in the single-path photographing mode, the user may click the camera photographing mode in control 305 of fig. 3G to trigger the electronic device to display control 307; when the user clicks the multi-path photographing mode control in control 307, the electronic device is triggered to display control 308, and the user may then trigger the switching operation by touching control 308, whereupon the electronic device switches the single-path photographing mode of the camera application to the multi-path photographing mode in response to the switching operation. For another example, when the camera is in the multi-path photographing mode, the user may trigger a switching operation by clicking the camera mode in control 305 of fig. 3G, and the electronic device switches the multi-path photographing mode of the camera application to the single-path photographing mode in response. The multi-path photographing mode refers to taking photos through the cooperation of multiple cameras, and may include a main-camera extended zoom mode, a main-camera extended wide-angle mode, a main-camera extended portrait mode, and the like; the electronic device may also switch freely among the multi-path photographing modes in response to user-triggered switching operations to meet the user's needs.
S503: The target zoom multiple is determined according to the photographing mode.
The zoom factor refers to the processor 110 of the electronic device enlarging or reducing the area of each pixel within the preview scene image displayed in the preview interface, thereby enlarging or reducing the image. It can be understood that the preview scene image can be enlarged or reduced according to the zoom factor: a larger zoom factor increases the area of each pixel within the preview scene image by a corresponding multiple, and a smaller zoom factor decreases it by a corresponding multiple. The maximum zoom factor refers to the maximum multiple by which each pixel area within the preview scene image is allowed to be enlarged.
In some embodiments, after the photographing mode is determined, the target zoom multiple may be determined according to the correspondence between the photographing mode and the target zoom multiple, where the target zoom multiple may include a target maximum zoom multiple and a target minimum zoom multiple. Taking the normal photographing mode in the single-path photographing mode as an example, the zoom range of the normal photographing mode may be preset to 1-10 times; that is, the correspondence may preset the maximum zoom multiple of the normal mode to 10 and the minimum zoom multiple to 1. When the photographing mode is determined to be the normal photographing mode, the maximum and minimum zoom multiples of the camera application in the normal mode can thus be determined. For another example, taking the video mode in the single-path photographing mode as an example, the zoom range of the video mode may be preset to 1-15 times, that is, the maximum zoom multiple of the video mode is preset to 15 and the minimum to 1; when the photographing mode is determined to be the video mode, the maximum zoom multiple of the camera application can be determined to be 15 and the minimum to be 1 according to the correspondence of the video mode.
In some embodiments, in order to meet user requirements, the minimum zoom multiple may be preset to be less than 1. For example, if the user needs to take a wider picture in the wide-angle mode, the zoom range of the wide-angle mode may be preset to 0.5-1 times; when the photographing mode is determined to be the wide-angle mode, the maximum zoom multiple is determined to be 1 and the minimum zoom multiple 0.5.
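As a minimal sketch of the correspondence between photographing mode and zoom range described above, the following Kotlin snippet models it as a simple lookup table; the mode names, the ZoomRange type, and the set of preset ranges are illustrative assumptions drawn from the examples in this embodiment (normal: 1-10, video: 1-15, wide-angle: 0.5-1):

// Illustrative photographing modes and their preset zoom ranges.
enum class PhotoMode { NORMAL, VIDEO, WIDE_ANGLE }

data class ZoomRange(val min: Float, val max: Float)

val presetZoomRanges: Map<PhotoMode, ZoomRange> = mapOf(
    PhotoMode.NORMAL to ZoomRange(1.0f, 10.0f),
    PhotoMode.VIDEO to ZoomRange(1.0f, 15.0f),
    PhotoMode.WIDE_ANGLE to ZoomRange(0.5f, 1.0f),
)

// Returns the target maximum and minimum zoom multiples for a target mode.
fun targetZoomRange(mode: PhotoMode): ZoomRange = presetZoomRanges.getValue(mode)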
S504: The initial zoom bar length is determined according to the target zoom multiple, and the initial zoom bar is divided.
The maximum zoom multiple corresponding to the target mode can be compared with a first preset multiple, so that under different conditions, the initial zoom bar is divided into a plurality of segments with different length ratios according to preset division rules, and each segment is divided according to the division ratio corresponding to the preset division rules. Wherein the first preset multiple may be 2.
The following description will be made in different cases.
Case one:
in some embodiments, when the maximum zoom multiple is greater than the first preset multiple of 2 and less than or equal to 10, the distance between the 1-fold zoom point and the 10-fold zoom point may be determined, thereby determining the length of the initial zoom bar. For example, the distance between the 1-fold and 10-fold zoom points may be preset to 180dp, so that the initial zoom bar may be determined with the 1-fold and 10-fold zoom points at its two ends and 180dp as its length, where the preset distance is expressed in device-independent pixels (dp).
Further, as shown in fig. 4B, the zoom bar may be divided according to a preset first division rule, according to the scale in the drawing, and the positions of the 2-fold and 5-fold zoom points may be determined on the initial zoom bar. Specifically, the initial zoom bar may be divided into three sections: the first section length, that is, the distance between the 1-fold zoom point and the 2-fold zoom point, may be 0.38 times the total length of the initial zoom bar; the second section length, that is, the distance between the 2-fold zoom point and the 5-fold zoom point, may be 0.37 times the total length; and the third section length, that is, the distance between the 5-fold zoom point and the 10-fold zoom point, may be 0.25 times the total length. After the positions of the 2-fold and 5-fold zoom points are determined, the initial zoom bar may be further divided according to the division ratio corresponding to the first division rule, as shown in fig. 4C. The distance between the 1-fold and 2-fold zoom points may be referred to as the first distance; its zoom span is 1, so it may be equally divided into 10 parts in units of 0.1, each part occupying 0.1 times the first distance, with a division line marked every 0.1. Each division line represents one zoom multiple, and it can be understood that the zoom multiples of every two adjacent division lines within the first distance differ by 0.1. Meanwhile, the distance between the 2-fold and 5-fold zoom points may be referred to as the second distance; its zoom span is 3, so it may be equally divided into 12 parts in units of 0.25, with a division line marked every 0.25, the zoom multiples of every two adjacent division lines within the second distance differing by 0.25. The distance between the 5-fold and 10-fold zoom points may be referred to as the third distance; its zoom span is 5, so it may be equally divided into 10 parts in units of 0.5, each part occupying 0.1 times the third distance, with a division line marked every 0.5, the zoom multiples of every two adjacent division lines within the third distance differing by 0.5. The division of the initial zoom bar is thus completed, determining the divided zoom bar shown in the camera application. When the user needs to adjust the zoom multiple, the touch sensor 180K may acquire the operation of the user long-pressing the zoom control 309 or pulling the zoom component 309A in the zoom control 309 and report it to the processor 110, and the processor 110 may control the camera application to display, on the divided zoom bar, the zoom point determined according to the user operation. For example, as shown in fig. 4D, if the user adjusts the zoom multiple to 3.5, the processor 110 receives the user operation reported by the touch sensor 180K and controls the camera application to display the zoom multiple 3.5 on the divided zoom bar. In this embodiment, the division of the zoom bar is completed first, and the zoom multiple reported according to the user operation is then displayed on the divided zoom bar.
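Under the stated preset values (a 180dp bar, section ratios of 0.38/0.37/0.25, and tick steps of 0.1/0.25/0.5), the mapping from a zoom multiple to its position on the divided bar of case one can be sketched in Kotlin as follows; all names are illustrative, and the linear interpolation within each section is an assumption consistent with the equal subdivision described above:

// One section of the divided bar: its zoom endpoints, its share of the
// bar length, and the zoom difference between adjacent division lines.
data class Section(
    val fromZoom: Float,
    val toZoom: Float,
    val lengthRatio: Float,
    val tickStep: Float,
)

val caseOneSections = listOf(
    Section(1f, 2f, 0.38f, 0.1f),   // first distance: 10 parts of 0.1
    Section(2f, 5f, 0.37f, 0.25f),  // second distance: 12 parts of 0.25
    Section(5f, 10f, 0.25f, 0.5f),  // third distance: 10 parts of 0.5
)

// Maps a zoom multiple to its offset in dp from the 1-fold end of the bar.
fun zoomToOffsetDp(zoom: Float, barLengthDp: Float = 180f): Float {
    var offset = 0f
    for (s in caseOneSections) {
        val sectionLen = s.lengthRatio * barLengthDp
        if (zoom <= s.toZoom) {
            val t = (zoom - s.fromZoom) / (s.toZoom - s.fromZoom)
            return offset + t * sectionLen  // linear within the section
        }
        offset += sectionLen
    }
    return offset  // zoom equals the maximum zoom multiple
}

For example, zoomToOffsetDp(3.5f) falls halfway through the second section, matching the 3.5-fold display position in fig. 4D.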
Alternatively, the zoom multiple reported according to the user operation may be acquired first, and the zoom bar may then be divided according to the reported zoom multiple, as described in the embodiment of case two below.
And a second case:
in some embodiments, when the maximum zoom multiple is greater than the first preset multiple of 2 and less than or equal to 10, the distance between the 1-fold zoom point and the 10-fold zoom point is determined to be the length of the initial zoom bar; for example, this distance may be preset to 180dp. As shown in fig. 4E, the zoom bar may be preliminarily divided according to the golden-section rule, according to the scale in the figure, and the positions of the 2-fold and 5-fold zoom points may be determined on the initial zoom bar. Specifically, the initial zoom bar may be divided into three sections: the first section length, that is, the distance between the 1-fold zoom point and the 2-fold zoom point, may be 0.25 times the total length of the initial zoom bar; the second section length, that is, the distance between the 2-fold zoom point and the 5-fold zoom point, may be 0.425 times the total length; and the third section length, that is, the distance between the 5-fold zoom point and the 10-fold zoom point, may be 0.325 times the total length. After the positions of the 2-fold and 5-fold zoom points are determined, the initial zoom bar may be further divided, as shown in fig. 4F: the second section is divided equally into three parts, so that the positions of the 3-fold and 4-fold zoom points on the initial zoom bar can be determined, and the third section is divided equally into five parts, so that the positions of the 6-fold, 7-fold, 8-fold, and 9-fold zoom points can be determined. These positions are used for comparison with the reported zoom multiple. When the user needs to adjust the zoom multiple, the processor 110 may receive the user operation reported by the touch sensor 180K and control the camera application to display, on the preliminarily divided zoom bar, the zoom point determined according to the user operation. The two application scenarios of case two are described in detail below.
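The preliminary golden-section division of case two can likewise be sketched; the ratios 0.25/0.425/0.325 and the equal spacing of the integer zoom points within the second and third sections follow the description above, while the function and variable names are illustrative assumptions:

// Computes the dp offsets of the integer zoom points (1x-10x) on a bar
// preliminarily divided by the golden-section rule of case two.
fun preliminaryZoomPointOffsets(barLengthDp: Float = 180f): Map<Int, Float> {
    val p2 = 0.25f * barLengthDp        // 2-fold point: end of first section
    val p5 = p2 + 0.425f * barLengthDp  // 5-fold point: end of second section
    val p10 = barLengthDp               // 10-fold point: end of the bar
    val offsets = mutableMapOf(1 to 0f, 2 to p2, 5 to p5, 10 to p10)
    // The second section (2x-5x) is trisected to place the 3x and 4x points.
    for (z in 3..4) offsets[z] = p2 + (z - 2) * (p5 - p2) / 3f
    // The third section (5x-10x) is divided into five to place 6x-9x.
    for (z in 6..9) offsets[z] = p5 + (z - 5) * (p10 - p5) / 5f
    return offsets
}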
The first application scenario of case two:
in some embodiments, the zoom points displayed on the zoom bar may be determined based on the zoom point determined by the user operation, which may also be referred to as the first zoom point. Specifically, when the reported zoom point, that is, the zoom point determined according to the user operation, is smaller than a second preset multiple (4 is taken as an example here, though other multiples may also be used), only the minimum zoom point, the maximum zoom point, the intermediate zoom point, and the reported zoom point may be displayed on the zoom bar. Specifically, if the first zoom point is determined to be smaller than the second preset multiple, the intermediate zoom point is determined to be the first-multiple zoom point; if the first zoom point is determined to be greater than or equal to the second preset multiple, the intermediate zoom point is determined to be the second-multiple zoom point. The first-multiple zoom point may be 5 and the second-multiple zoom point may be 4, for example. Further, the length of the distance between the minimum zoom point and the intermediate zoom point on the initial zoom bar may be divided according to a preset first ratio, and the length of the distance between the intermediate zoom point and the first preset zoom point on the initial zoom bar may be divided according to a preset second ratio, where the first preset zoom point may be 10, the first ratio may be 0.25, and the second ratio may be 0.5. That is, the section between the minimum zoom point and the intermediate zoom point may be divided in units of 0.25, and the section between the intermediate zoom point and the maximum zoom point in units of 0.5. As shown in fig. 4G, taking a reported zoom point of 2, a minimum zoom point of 1, and a maximum zoom point of 10 as an example, the 1-fold, 2-fold, 5-fold, and 10-fold zoom points can be displayed on the zoom bar, where the zoom span between the 1-fold and 5-fold zoom points is 4 and the zoom span between the 5-fold and 10-fold zoom points is 5. The span between the 1-fold and 5-fold zoom points is divided in units of 0.25, every 0.25, and the span between the 5-fold and 10-fold zoom points is divided in units of 0.5, every 0.5, completing the division of the zoom bar.
Second application scenario of case two:
in some embodiments, when the reported zoom point, that is, the zoom point determined according to the user operation, is greater than or equal to the second preset multiple of 4, only the minimum zoom point, the maximum zoom point, the intermediate zoom point, and the reported zoom point may be displayed on the zoom bar. The section between the minimum zoom point and the intermediate zoom point can be divided in units of 0.25, and the section between the intermediate zoom point and the maximum zoom point in units of 0.5; the intermediate zoom point can be set as the 4-fold zoom point, or as a zoom point of another multiple. As shown in fig. 4H, taking a reported zoom point of 4, a minimum zoom point of 1, and a maximum zoom point of 10 as an example, the zoom bar can display the 1-fold, 4-fold, and 10-fold zoom points; to help the user grasp the current zoom multiple more intuitively, the 2-fold zoom point can also be displayed. The zoom span between the 1-fold and 4-fold zoom points is 3, and the zoom span between the 4-fold and 10-fold zoom points is 6. The former may be divided in units of 0.25, every 0.25, and the latter in units of 0.5, every 0.5, completing the division of the zoom bar.
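The rule shared by the two application scenarios above — choose the intermediate zoom point from the reported zoom point, then tick the bar in units of 0.25 below it and 0.5 above it — can be sketched as follows; the constants are the sample values from the text (second preset multiple 4, first-multiple point 5, second-multiple point 4) rather than fixed properties of the method:

const val SECOND_PRESET_MULTIPLE = 4f
const val FIRST_MULTIPLE_POINT = 5f   // intermediate point when reported < 4
const val SECOND_MULTIPLE_POINT = 4f  // intermediate point when reported >= 4

fun intermediateZoomPoint(reported: Float): Float =
    if (reported < SECOND_PRESET_MULTIPLE) FIRST_MULTIPLE_POINT
    else SECOND_MULTIPLE_POINT

// Tick marks in zoom multiples: every 0.25 up to the intermediate point,
// every 0.5 from the intermediate point up to the maximum zoom point.
fun tickMarks(minZoom: Float, maxZoom: Float, intermediate: Float): List<Float> {
    val ticks = mutableListOf<Float>()
    var z = minZoom
    while (z < intermediate) { ticks += z; z += 0.25f }
    z = intermediate
    while (z <= maxZoom) { ticks += z; z += 0.5f }
    return ticks
}

With a reported zoom point of 2, intermediateZoomPoint returns 5 and the bar is ticked as in fig. 4G; with a reported zoom point of 4, it returns 4, matching fig. 4H.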
And a third case:
in some embodiments, when the maximum zoom multiple is less than or equal to the first preset multiple of 2, the distance between the 1-fold zoom point and a second preset zoom point is determined to be the length of the initial zoom bar, where the second preset zoom point may be the 2-fold zoom point, and the initial zoom bar is divided into a plurality of segments with different length ratios according to a preset second division rule;
for example, the distance between the 1-fold zoom point and the 2-fold zoom point may be preset to 0.7 times the distance between the 1-fold zoom point and the 10-fold zoom point in the above-described embodiments. The length of the initial zoom bar may be divided into two parts, as shown in fig. 4I: the first distance may be the distance between the 1-fold zoom point and the zoom point corresponding to the middle scale line, and the second distance may be the distance between the zoom point corresponding to the middle scale line and the 2-fold zoom point, where the first distance may be 0.572 of the length of the initial zoom bar and the second distance may be 0.428 of that length. The middle scale line may be marked as 1.5, or as another number. The zoom bar may then be further divided: each of the plurality of segments is equally divided according to the division ratio corresponding to the second division rule, obtaining a third zoom bar. For example, the first distance and the second distance may each be equally divided into 10 parts to complete the division of the zoom bar.
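A sketch of the case-three division, assuming the values stated above (a bar from the 1-fold to the 2-fold zoom point that is 0.7 times the 180dp bar of the earlier cases, split 0.572/0.428 around the middle scale line, each part then divided into 10 equal segments); the function name is illustrative:

// Returns the dp offsets of all division lines on the case-three bar.
fun caseThreeTickOffsets(): List<Float> {
    val barLen = 0.7f * 180f        // 126dp between the 1x and 2x points
    val firstLen = 0.572f * barLen  // 1x point to the middle scale line
    val secondLen = 0.428f * barLen // middle scale line to the 2x point
    val ticks = mutableListOf<Float>()
    for (i in 0..10) ticks += i * firstLen / 10f
    for (i in 1..10) ticks += firstLen + i * secondLen / 10f
    return ticks
}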
Case four:
in some embodiments, for example when the minimum zoom multiple is less than 1 in the wide-angle mode, the length of the distance between the zoom point corresponding to the minimum zoom multiple and the 1-fold zoom point may be determined. For example, taking a minimum zoom multiple of 0.5 and a distance of 50dp between it and the 1-fold zoom point, the 50dp between the 0.5-fold and 1-fold zoom points may be equally divided into 5 parts of 10dp each, completing the division of the zoom bar between the minimum zoom multiple and the 1-fold zoom point.
Case five:
in some embodiments, for example in the tele mode where the maximum zoom multiple is greater than 10, the length of the distance between the 10-fold zoom point and an intermediate zoom point, and the length of the distance between the intermediate zoom point and the maximum zoom point, may be determined. For example, taking the intermediate zoom point as the 50-fold zoom point and the maximum zoom point as the 100-fold zoom point, with the distance between the 10-fold and 50-fold zoom points being 50dp and the distance between the 50-fold and 100-fold zoom points being 44dp, the 50dp between the 10-fold and 50-fold zoom points is equally divided into 10 parts, and the 44dp between the 50-fold and 100-fold zoom points is equally divided into 10 parts, completing the division of the zoom bar between the 10-fold zoom point and the maximum zoom point.
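Cases four and five both reduce to equal subdivision of an extra section of the bar, which can be sketched with one helper; the helper name and the use of dp offsets starting at 0 are illustrative assumptions, and the sample lengths (50dp into 5 parts for 0.5x-1x; 50dp and 44dp each into 10 parts for 10x-50x and 50x-100x) are the values given above:

// Returns parts + 1 equally spaced dp offsets over one section of the bar.
fun equalTicks(startDp: Float, lengthDp: Float, parts: Int): List<Float> =
    (0..parts).map { startDp + it * lengthDp / parts }

val wideAngleTicks = equalTicks(0f, 50f, 5)    // case four: 0.5x to 1x
val teleLowTicks = equalTicks(0f, 50f, 10)     // case five: 10x to 50x
val teleHighTicks = equalTicks(50f, 44f, 10)   // case five: 50x to 100x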
According to the embodiments provided by the application, the target zoom multiple corresponding to the target mode can be determined according to the determined target mode and the preset correspondence between the photographing mode and the zoom multiple, and the initial zoom bar is determined according to the target zoom multiple. The initial zoom bar is then divided according to the preset division rule corresponding to the different conditions of the different modes, obtaining a plurality of segments with different length ratios, each divided according to the division ratio corresponding to the preset division rule. In this way, the zoom regions commonly used by the user can be divided finely while rarely used zoom regions are divided coarsely, meeting user needs and facilitating user operation.
In the above-described embodiments, after the zoom bar is divided, the size of the zoom bar, the transparency gradation specification of the zoom bar, the color of the scale line in the zoom bar, the format of the text in the zoom bar, and the like may be adjusted according to the operation of the user.
The technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods described in the respective embodiments. The aforementioned storage medium includes: flash memory, a removable hard disk, read-only memory, random access memory, a magnetic disk, an optical disc, and the like.
The foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, but any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A zoom bar segmentation method applied to an electronic device, comprising:
determining a photographing mode in which a camera application is located as a target mode;
determining a target zoom multiple corresponding to the target mode according to the target mode and a preset corresponding relation between a photographing mode and the zoom multiple;
determining an initial zooming bar according to the target zooming multiple;
the initial zoom bar is segmented according to a preset segmentation rule, wherein the preset segmentation rule is used for segmenting the initial zoom bar into a plurality of segments with different length ratios, and each segment is segmented according to a segmentation ratio corresponding to the preset segmentation rule.
2. The method according to claim 1, wherein the target zoom factor includes a target maximum zoom factor and a target minimum zoom factor, and the determining the target zoom factor corresponding to the target mode according to the target mode and a preset correspondence between the photographing mode and the zoom factor includes:
Determining a target maximum zoom multiple corresponding to the target mode according to the target mode and a preset corresponding relation between a photographing mode and the maximum zoom multiple;
and determining a target minimum zoom multiple corresponding to the target mode according to the target mode and the preset corresponding relation between the photographing mode and the minimum zoom multiple.
3. The method of claim 2, wherein determining an initial zoom bar based on the target zoom factor comprises:
if it is determined that the target maximum zoom multiple is greater than a first preset multiple, determining the length of the distance between the target minimum variable focus corresponding to the target minimum zoom multiple and a first preset variable focus as the initial zoom bar;
if it is determined that the target maximum zoom multiple is less than or equal to the first preset multiple, determining the length of the distance between the target minimum variable focus corresponding to the target minimum zoom multiple and a second preset variable focus as the initial zoom bar.
4. The method of claim 2, wherein the segmenting the initial zoom bar according to a preset segmentation rule comprises:
responding to the fact that the maximum zoom multiple corresponding to the target mode is larger than a first preset multiple, and dividing the initial zoom bar into a plurality of segments with different length ratios according to a preset first dividing rule;
And dividing each of the plurality of segments according to a dividing ratio corresponding to a preset first dividing rule to obtain a first zoom bar.
5. The method of claim 4, wherein after the obtaining the first zoom bar, the method further comprises:
acquiring a first variable focus; the first variable focus is determined according to user operation;
and displaying the zooming multiple corresponding to the first zooming point on a first zooming bar.
6. The method according to claim 2, wherein the method further comprises:
acquiring a first variable focus and displaying an intermediate variable focus on the initial zoom bar; the first variable focus is determined according to user operation;
if the first variable focus is determined to be smaller than the second preset multiple, determining that the middle variable focus is the first multiple variable focus;
and if the first variable focus is determined to be larger than or equal to the second preset multiple, determining the middle variable focus to be the second multiple variable focus.
7. The method of claim 6, wherein the segmenting the initial zoom bar according to a preset segmentation rule comprises:
and in response to the maximum zoom multiple corresponding to the target mode being greater than a first preset multiple, dividing the length of the distance between the target minimum variable focus and the middle variable focus on the initial zoom bar according to a preset first proportion, and dividing the length of the distance between the middle variable focus and the first preset variable focus on the initial zoom bar according to a preset second proportion, so as to obtain a second zoom bar.
8. A method according to claim 3, wherein the splitting the initial zoom bar according to a preset splitting rule comprises:
responding to the fact that the maximum zoom multiple of the target is smaller than or equal to a first preset multiple, and dividing the initial zoom bar into a plurality of segments with different length ratios according to a preset second dividing rule;
and equally dividing each of the plurality of segments according to the dividing proportion corresponding to the second dividing rule to obtain a third zoom bar.
9. An electronic device, comprising: the device comprises a camera, a processor and a memory;
the camera is used for collecting video streams;
wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions; the instructions, when executed by the processor, cause the electronic device to perform the method of any of claims 1-8.
10. A computer storage medium comprising computer instructions which, when run on an electronic device, perform the method of any of claims 1-8.
11. A computer program product, characterized in that, when the computer program product runs on a computer, the computer performs the method according to any one of claims 1-8.
CN202211260332.4A 2022-10-14 2022-10-14 Zoom bar segmentation method and electronic equipment Active CN116709018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211260332.4A CN116709018B (en) 2022-10-14 2022-10-14 Zoom bar segmentation method and electronic equipment


Publications (2)

Publication Number Publication Date
CN116709018A true CN116709018A (en) 2023-09-05
CN116709018B CN116709018B (en) 2024-04-09

Family

ID=87842067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211260332.4A Active CN116709018B (en) 2022-10-14 2022-10-14 Zoom bar segmentation method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116709018B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104756475A (en) * 2012-10-28 2015-07-01 谷歌公司 Camera zoom indicator in mobile devices
US20160353012A1 (en) * 2015-05-25 2016-12-01 Htc Corporation Zooming control method for camera and electronic apparatus with camera
CN107924113A (en) * 2016-06-12 2018-04-17 苹果公司 User interface for camera effect
CN110248081A (en) * 2018-10-12 2019-09-17 华为技术有限公司 Image capture method and electronic equipment
US20190394388A1 (en) * 2018-06-21 2019-12-26 Canon Kabushiki Kaisha Accessory device, imaging apparatus, and methods for controlling the same
CN111010506A (en) * 2019-11-15 2020-04-14 华为技术有限公司 Shooting method and electronic equipment
US20200267318A1 (en) * 2019-02-19 2020-08-20 Samsung Electronics Co., Ltd. Electronic device and image control method of the electronic device
US20210112206A1 (en) * 2019-10-10 2021-04-15 Samsung Electronics Co., Ltd. Electronic device and method for adjusting camera magnification
CN115002338A (en) * 2021-12-10 2022-09-02 荣耀终端有限公司 Shooting parameter control method and device


Also Published As

Publication number Publication date
CN116709018B (en) 2024-04-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant