CN117714877A - Zoom method, electronic device, and readable medium

Info

Publication number
CN117714877A
Authority
CN (China)
Prior art keywords
reflecting prism, target position, camera, module, shake
Legal status
Pending
Application number
CN202311015228.3A
Other languages
Chinese (zh)
Inventors
冯帅 (Feng Shuai); 杨阳 (Yang Yang)
Assignee
Honor Device Co Ltd (original and current)
Application filed by Honor Device Co Ltd; priority to CN202311015228.3A

Abstract

The application provides a zooming method, an electronic device, and a readable storage medium. The zooming method is applied to the electronic device and includes the following steps: displaying a camera preview interface on which a first object is displayed at a first size; receiving a first operation of a user on the camera preview interface directed at the first object; and, in response to the first operation, displaying on the camera preview interface a picture in which the first object changes from the first size to a second size, where the second size is larger than the first size and the first object stays in the central area of the picture throughout the change. In this way the user can designate a close-up object, and the electronic device automatically tracks the designated object and captures an image centered on it.

Description

Zoom method, electronic device, and readable medium
Technical Field
The present disclosure relates to the field of photographing technologies, and in particular, to a zooming method, an electronic device, a computer program product, and a computer readable storage medium.
Background
In a shooting scene, a user may need to zoom in on the picture of a shooting object (including a person), for example to take a close-up of a distant object, particularly a moving one.
A user can capture a close-up picture through a zooming operation. The electronic device includes a multi-camera module supporting wide-angle and telephoto cameras: the electronic device first frames the scene with the wide-angle camera module, then switches from the wide-angle camera module to the telephoto camera module to capture the close-up picture, thereby realizing zooming. However, this approach relies on the shooting object being at the center of the picture both before and after the camera-module switch; if it is not, the user must manually move the device to recompose the image, which is inconvenient.
Disclosure of Invention
The application provides a zooming method, an electronic device, a computer program product, and a computer-readable storage medium, which allow a user to designate a close-up object while the electronic device automatically tracks the designated object and captures an image centered on it.
In order to achieve the above object, the present application provides the following technical solutions:
in a first aspect, the present application provides a zooming method applied to an electronic device, the zooming method including: displaying a camera preview interface, where the camera preview interface displays a first object at a first size; receiving a first operation of a user on the camera preview interface directed at the first object; and, in response to the first operation, displaying on the camera preview interface a picture in which the first object changes from the first size to a second size, or displaying the first object at the second size and positioned in the central area; the second size is larger than the first size, and the first object stays in the central area of the picture while it changes from the first size to the second size.
From the above it can be seen that the user inputs a first operation on a first object in the camera preview interface, and the camera preview interface displays the first object at the enlarged size, positioned in the central area of the picture. The user thus designates the close-up object, and the electronic device automatically tracks the designated object and captures an image centered on it.
In one possible embodiment, the electronic device includes a first camera module including a rotatable reflecting prism, and before the camera preview interface displays the picture of the first object changing from the first size to the second size, the method further includes: determining a target position of the reflecting prism; and driving the reflecting prism to rotate to the target position.
In this possible implementation, rotating the reflecting prism to the target position keeps the first object in the central area of the picture while the electronic device zooms.
In one possible embodiment, determining the target position of the reflecting prism includes: determining the target position of the reflecting prism according to position information of the first operation and position information of the reflecting prism.
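As a concrete illustration of this determination, the following is a minimal C++ sketch that maps the coordinates of the first operation to a target prism orientation, assuming a linear pixel-to-angle model. The field-of-view constants and all identifiers are assumptions made for illustration; only the ±24°/±7° travel limits echo the illustrative ranges given in the hardware description later in this document.

```cpp
// Illustrative only: maps a tap position on the preview to a target prism
// orientation under an assumed linear pixel-to-angle model.
#include <algorithm>
#include <cstdio>

struct PrismPosition {
    double xzDeg;  // rotation in the xz plane, degrees
    double yzDeg;  // rotation in the yz plane, degrees
};

PrismPosition computeTargetPosition(double tapX, double tapY,
                                    double frameW, double frameH,
                                    const PrismPosition& current) {
    // Offset of the tap from the frame center, normalized to [-0.5, 0.5].
    const double dx = tapX / frameW - 0.5;
    const double dy = tapY / frameH - 0.5;

    // Assumed horizontal/vertical field of view of the tele module, degrees.
    const double kHFovDeg = 10.0;
    const double kVFovDeg = 7.5;

    PrismPosition target;
    target.xzDeg = current.xzDeg + dx * kHFovDeg;
    target.yzDeg = current.yzDeg + dy * kVFovDeg;

    // Clamp to the prism's mechanical travel (illustrative ±24° / ±7° ranges
    // from the hardware description later in this document).
    target.xzDeg = std::clamp(target.xzDeg, -24.0, 24.0);
    target.yzDeg = std::clamp(target.yzDeg, -7.0, 7.0);
    return target;
}

int main() {
    PrismPosition cur{0.0, 0.0};
    PrismPosition tgt = computeTargetPosition(1600, 300, 1920, 1080, cur);
    std::printf("target: xz=%.2f deg, yz=%.2f deg\n", tgt.xzDeg, tgt.yzDeg);
}
```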
In one possible embodiment, driving the reflecting prism to rotate to the target position includes: driving the reflecting prism to rotate to the target position at the output moment of the image bound to the target position, the bound image being collected and output by the first camera module.
In this possible embodiment, driving the reflecting prism to rotate at the output moment of the image bound to the target position allows the rotation to take place during the output of a specific image.
In one possible embodiment, the first camera module further includes a brake component, where: before driving the reflecting prism to rotate to the target position, the method further includes: driving the brake component to unlock; and after driving the reflecting prism to rotate to the target position, the method further includes: driving the brake component to lock.
In this possible embodiment, the camera module includes a brake component that is unlocked before the reflecting prism rotates and locked once the rotation ends, which helps the reflecting prism stabilize quickly after reaching its position.
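A minimal sketch of this unlock, rotate, lock sequence is given below, assuming hypothetical BrakeIc and ScanMotor interfaces; the patent does not define this API, so everything here is illustrative.

```cpp
#include <cstdio>

struct BrakeIc {
    void unlock() { std::puts("brake: unlocked"); }  // release position lock
    void lock()   { std::puts("brake: locked"); }    // re-engage lock
};

struct ScanMotor {
    void rotateTo(double xzDeg, double yzDeg) {
        std::printf("prism -> (%.1f, %.1f) deg\n", xzDeg, yzDeg);
    }
};

// Unlock before rotation, lock after, so the prism settles quickly.
void moveWithBrake(BrakeIc& brake, ScanMotor& motor,
                   double xzDeg, double yzDeg) {
    brake.unlock();
    motor.rotateTo(xzDeg, yzDeg);
    brake.lock();
}

int main() {
    BrakeIc brake;
    ScanMotor motor;
    moveWithBrake(brake, motor, 12.0, -3.0);
}
```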
In one possible embodiment, the first camera module further includes a focusing component, and the method further includes: driving the focusing component to focus on the first object while the reflecting prism rotates to the target position.
In this possible embodiment, the camera module includes a focusing component, and driving it to focus on the first object while the reflecting prism rotates to the target position further ensures the sharpness of the first object in the image captured by the camera module.
In one possible embodiment, before driving the focusing component to perform focusing, the method further includes: compensating the focusing parameter using the target position of the reflecting prism; driving the focusing component to perform focusing while the reflecting prism rotates to the target position then includes: driving the focusing component to focus on the first object with the compensated focusing parameter during the rotation.
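The sketch below illustrates this compensation under an assumed linear model: the focusing parameter (an AF motor code) is corrected with the prism's target position before focusing is driven. The correction term and all names are illustrative, not the patent's actual compensation model.

```cpp
#include <cstdio>

struct FocusParam { int afCode; };  // AF motor position code

FocusParam compensate(FocusParam base, double prismTargetXzDeg) {
    // Assumption: the effective optical path varies slightly with prism
    // angle, so the AF code gets a small angle-dependent correction.
    const double kCodesPerDegree = 2.0;  // illustrative constant
    base.afCode += static_cast<int>(prismTargetXzDeg * kCodesPerDegree);
    return base;
}

void driveAf(int afCode) { std::printf("AF motor code: %d\n", afCode); }

int main() {
    FocusParam base{500};
    // Compensate first, then focus with the compensated parameter while
    // the prism rotates toward its target position.
    driveAf(compensate(base, 12.0).afCode);
}
```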
In one possible embodiment, the first camera module further includes an anti-shake component, where: while the reflecting prism rotates to the target position, the anti-shake component does not operate, and after the reflecting prism has rotated to the target position, the anti-shake component operates to perform optical anti-shake.
In this possible implementation, keeping the anti-shake component inactive while the reflecting prism rotates and activating it only after the rotation ends avoids ineffective anti-shake operation during the rotation and the extra power consumption it would cause.
In one possible embodiment, the method further includes: periodically acquiring and storing the position information of the reflecting prism.
In one possible embodiment, the anti-shake component updates the anti-shake parameters using the position information of the reflecting prism and operates with the updated anti-shake parameters.
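A minimal sketch combining these embodiments is shown below: the OIS state is held while the prism rotates, the prism position is sampled periodically, and anti-shake resumes afterwards with parameters updated from the stored position. All types and names are assumed for illustration (the HOLDON/OISON states correspond to the firmware states described later in this document).

```cpp
#include <cstdio>

enum class OisState { HoldOn, OisOn };  // HOLDON / OISON in the text below

struct AntiShake {
    OisState state = OisState::OisOn;
    double prismXzDeg = 0.0, prismYzDeg = 0.0;  // latest stored position

    void hold()   { state = OisState::HoldOn; }  // no anti-shake while rotating
    void resume() { state = OisState::OisOn; }

    // Update anti-shake parameters from the periodically acquired prism
    // position, since the compensation depends on the prism orientation.
    void updateParams(double xzDeg, double yzDeg) {
        prismXzDeg = xzDeg;
        prismYzDeg = yzDeg;
    }
};

int main() {
    AntiShake ois;
    ois.hold();                    // prism starts rotating
    ois.updateParams(12.0, -3.0);  // periodic position sample during rotation
    ois.resume();                  // rotation finished: OIS operates again
    std::printf("OIS %s at (%.1f, %.1f) deg\n",
                ois.state == OisState::OisOn ? "on" : "held",
                ois.prismXzDeg, ois.prismYzDeg);
}
```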
In one possible implementation, the electronic device includes a second camera module; after the camera preview interface displays the picture of the first object changing from the first size to the second size, the method further includes: receiving a second operation of the user on the camera preview interface directed at the first object; and displaying on the camera preview interface a picture in which the first object changes from the second size back to the first size, the image of the first object at the first size being collected by the second camera module; or displaying the first object at the first size on the camera preview interface, the image of the first object at the first size being collected by the second camera module.
In one possible implementation, the hardware abstraction layer of the operating system of the electronic device includes an algorithm module and a control module for rotation of the reflecting prism, and the kernel layer of the operating system includes a driver for rotation of the reflecting prism. Determining the target position of the reflecting prism includes: the algorithm module determines the target position of the reflecting prism. Driving the reflecting prism to rotate to the target position includes: the control module for rotation of the reflecting prism controls the operation of the driver for rotation of the reflecting prism so as to drive the reflecting prism to rotate to the target position. The control module for rotation of the reflecting prism refers to the Scan control module, and the driver for rotation of the reflecting prism refers to the Scan driver.
In one possible implementation, the hardware abstraction layer of the operating system of the electronic device includes an anti-shake control module, and the kernel layer of the operating system includes an anti-shake driver. The algorithm module is also used to generate anti-shake parameters, and the electronic device controls the anti-shake component to perform optical anti-shake through the anti-shake control module and the anti-shake driver. The anti-shake control module refers to the OIS control module, and the anti-shake driver refers to the OIS driver.
In one possible implementation, the hardware abstraction layer of the operating system of the electronic device includes an algorithm module and a control module for rotation of the reflecting prism, and the kernel layer of the operating system includes a driver for rotation of the reflecting prism. Determining the target position of the reflecting prism includes: the algorithm module determines the target position of the reflecting prism. Driving the reflecting prism to rotate to the target position includes: the control module for rotation of the reflecting prism controls the operation of the driver for rotation of the reflecting prism so as to drive the reflecting prism to rotate to the target position. Driving the brake component to unlock or lock includes: the controller that controls rotation of the reflecting prism drives the brake component to unlock or lock.
In a second aspect, the present application provides an electronic device including: one or more processors, a memory, a first camera module, and a display screen; the memory, the first camera module, and the display screen are coupled to the one or more processors; the memory is used to store computer program code including computer instructions which, when executed by the one or more processors, cause the electronic device to perform the zooming method according to any one of the first aspects.
In a third aspect, the present application provides a computer program product which, when run on an electronic device, causes the electronic device to carry out the zooming method according to any one of the first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed, carries out the zooming method according to any one of the first aspects.
Drawings
Fig. 1 is an interface diagram of a user performing a zooming operation in a photographing mode according to an embodiment of the present application;
Fig. 2 is another interface diagram of a user performing a zooming operation in a photographing mode according to an embodiment of the present application;
Fig. 3 is another interface diagram of a user performing a zooming operation in a photographing mode according to an embodiment of the present application;
Fig. 4 is a hardware configuration diagram of an electronic device according to an embodiment of the present application;
Fig. 5 is a schematic software structure diagram of an electronic device according to an embodiment of the present application;
Fig. 6 is a diagram of the operation sequence of the modules executing the zooming process according to an embodiment of the present application;
Fig. 7 is a timing chart of a zooming process according to an embodiment of the present application;
Fig. 8 is a timing chart of another zooming process according to an embodiment of the present application;
Fig. 9 is a flowchart for implementing a focusing operation according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. The terminology used in the following embodiments is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification and the appended claims, the singular forms "a," "an," and "the" are intended to include expressions such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the embodiments of the present application, "a plurality of" means two or more. It should be noted that, in the description of the embodiments of the present application, the terms "first," "second," and the like are used only to distinguish between descriptions and are not to be understood as indicating or implying relative importance or order.
In an image shooting scene, a user may zoom in on the picture of a shooting object as required, for example to take a close-up of a distant object, particularly a moving one. The user can control the electronic device to zoom in order to capture the close-up picture.
The electronic device may include a multi-camera module supporting wide-angle + telephoto. In a close-up scene, the electronic device first frames with the wide-angle camera module and then, in response to a user operation, switches from the wide-angle camera module to the telephoto camera module to capture the close-up picture. However, this approach relies on the shooting object being at the center of the picture both before and after the camera-module switch; if it is not, the user must manually move the device to recompose the image, which is inconvenient. Moreover, the manual recomposition may introduce shake that blurs the image and degrades image quality.
Therefore, for a close-up of a distant moving object, the electronic device cannot offer both convenient operation and a high-quality image. Based on this, an embodiment of the present application provides a shooting method in which the user only needs to designate a close-up object; the electronic device automatically tracks the designated object and automatically captures a high-quality image centered on it through composition, framing, imaging, and similar actions.
The shooting method provided by the embodiment of the present application can be applied in the photographing mode or video recording mode of a camera: the user inputs a zooming operation on a shooting object to designate a close-up object, and the camera module of the electronic device automatically zooms in on the object.
The following describes an application scenario of the photographing method provided in the embodiment of the present application with reference to fig. 1 to 3.
The user can perform a zooming operation on the shot object, and the camera module of the electronic device then automatically zooms in on the object.
Fig. 1 illustrates interface diagrams of an image-capturing scene, taking a mobile phone as an example. As shown in fig. 1 (a), the mobile phone starts the photographing mode of the camera application and displays a camera preview interface 101, which shows the picture acquired by the camera module of the mobile phone; the people in the picture are far from the phone. A user who wants to zoom in on the person 102 can input a zooming operation by tapping (e.g., a single tap or a double tap) the person 102. In response, the mobile phone rotates the reflecting prism of the camera module to track the person 102 and obtain an enlarged image of the person 102. As shown in fig. 1 (b), in the camera preview interface 103 displayed by the mobile phone, the person 102 is displayed enlarged and the other two people are no longer displayed.
In the zoom scenario illustrated in fig. 1, the mobile phone performs the zooming operation based on the object or area designated by the user, such as the person 102 in fig. 1 (a). That is, as the camera preview interface transitions from fig. 1 (a) to fig. 1 (b), the person 102 stays at the center of the picture.
In some embodiments, the camera may provide a switch for the object-tracking function. With the switch turned on, as shown in fig. 1 (a), the user taps the person 102 to input a tracking-and-zoom operation, and the mobile phone responds by rotating the reflecting prism of the camera module to track the person 102 and obtain an enlarged image of the person 102. With the switch off, when the user taps the person 102 to input the tracking-and-zoom operation shown in fig. 1 (a), the mobile phone does not respond to the operation.
The user can also perform another zooming operation on the shot object, upon which the camera module of the electronic device automatically zooms out on the object. This zooming operation can be understood as an operation that exits the enlarged-object picture mode.
As shown in fig. 2 (a), the camera preview interface 103 displayed on the mobile phone shows a person 104 enlarged. A user who wants to zoom out on the displayed person 104 can input another zooming operation by tapping (e.g., a single tap or a double tap) the person 104. In response, the mobile phone generates the camera preview interface using an image captured by another camera module of the mobile phone, so that the person 104 is reduced. As shown in fig. 2 (b), in the camera preview interface 101 displayed by the mobile phone, the person 104 is displayed reduced and the other two people are displayed.
The zooming operation input by the user is not limited to tapping the shooting object; other gestures are also possible. For example, moving the thumb and index finger toward each other can indicate zooming out on an object, and moving the thumb and index finger apart can indicate zooming in on an object, as the sketch below illustrates.
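A minimal sketch of classifying this two-finger gesture from the change in finger distance is given below; it is pure geometry and does not assume any platform gesture API.

```cpp
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

static double dist(Point a, Point b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Returns +1 for zoom in (fingers moved apart), -1 for zoom out (fingers
// moved toward each other), 0 if the change is within a small dead zone.
int pinchDirection(Point beforeA, Point beforeB, Point afterA, Point afterB) {
    const double kDeadZonePx = 2.0;  // illustrative threshold
    const double delta = dist(afterA, afterB) - dist(beforeA, beforeB);
    if (delta > kDeadZonePx) return +1;
    if (delta < -kDeadZonePx) return -1;
    return 0;
}

int main() {
    int dir = pinchDirection({100, 100}, {200, 200}, {120, 120}, {180, 180});
    std::printf("direction: %d\n", dir);  // -1: fingers moved closer, zoom out
}
```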
As shown in fig. 3 (a), the camera preview interface 103 displayed on the mobile phone shows a person 104 enlarged. The user inputs another zooming operation by moving the thumb and index finger toward each other in the display area of the person 104, and the mobile phone responds by displaying a reduced image of the person 104. As shown in fig. 3 (b), in the camera preview interface 101 displayed by the mobile phone, the person 104 is displayed reduced and the other two people are displayed.
The zooming method provided by the embodiments of the present application is applicable to touchscreen electronic devices such as mobile phones, tablet computers, personal digital assistants (Personal Digital Assistant, PDA), desktop computers, laptop computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, wearable devices, and the like.
Taking a mobile phone as an example, fig. 4 shows an example composition of an electronic device provided in an embodiment of the present application. As shown in fig. 4, the electronic device 100 may include a processor 110, an internal memory 120, a camera 130, a display screen 140, a mobile communication module 150, a wireless communication module 160, an audio module 170, a sensor module 180, a camera module 190, and the like.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, a smart sensor hub (sensor hub) and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves system efficiency.
The internal memory 120 may be used to store computer-executable program code that includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 120. The internal memory 120 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 120 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 120 and/or instructions stored in a memory provided in the processor.
In some embodiments, the internal memory 120 stores instructions for performing a zooming process in the photographing method. The processor 110 may implement a photographing mode or a video recording mode in the camera by executing instructions stored in the internal memory 120, and control the image capturing module 190 to automatically zoom in or out on the object with respect to a zoom operation input to the photographed object by the user.
The electronic device may implement some shooting functions through the ISP, the camera 130, the video codec, the GPU, the display screen 140, the application processor, and the like. The photographing function can be understood as a conventional photographing function.
The ISP is used to process the data fed back by the camera 130. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene.
The camera 130 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device may include 1 or N cameras 130, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs. In this way, the electronic device may play or record video in a variety of encoding formats, such as: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of electronic devices can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The electronic device implements display functions through a GPU, a display screen 140, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 140 and the application processor. GPUs are used for image rendering by performing mathematical and geometric calculations. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 140 is used to display images, video, interfaces, and the like. The display screen 140 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a quantum dot light-emitting diode (quantum dot light-emitting diodes, QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 140, N being a positive integer greater than 1.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit the processed signal to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor and convert it into electromagnetic waves radiated through the antenna 1.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (IR), etc. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate and amplify it, and convert it into electromagnetic waves radiated through the antenna 2.
The electronic device may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device may listen to music, or to hands-free conversations, through speaker 170A.
A receiver 170B, also referred to as an "earpiece," is used to convert an audio electrical signal into a sound signal. When the electronic device answers a call or plays a voice message, the voice can be heard by placing the receiver 170B close to the ear.
Microphone 170C, also referred to as a "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can sound near the microphone 170C through the mouth, inputting a sound signal to the microphone 170C. The electronic device may be provided with at least one microphone 170C. In other embodiments, the electronic device may be provided with two microphones 170C, and may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device may also be provided with three, four, or more microphones 170C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording functions, etc.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be a USB interface, a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
In the sensor module 180, the pressure sensor 180A is configured to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 140. There are many types of pressure sensor 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 140, the electronic device detects the intensity of the touch operation through the pressure sensor 180A. The electronic device may also calculate the location of the touch from the detection signal of the pressure sensor 180A.
The touch sensor 180B, also referred to as a "touch device". The touch sensor 180B may be disposed on the display screen 140, and the touch sensor 180B and the display screen 140 form a touch screen, which is also referred to as a "touch screen". The touch sensor 180B is used to detect a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display screen 140. In other embodiments, the touch sensor 180B may also be disposed on the surface of the electronic device at a different location than the display 140.
The acceleration sensor 180C may detect the magnitude of acceleration of the electronic device in various directions (typically three axes). The gravity and direction can be detected when the electronic equipment is static, and the gravity and direction can be used for identifying the gesture of the electronic equipment.
The gyro sensor 180D may be used to determine a motion gesture of the electronic device. In some embodiments, the angular velocity of the electronic device about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 180D.
In some embodiments, the electronic device implements other shooting functions through the ISP, the camera module 190, the video codec, the GPU, the display screen 140, the application processor, and the like. Other shooting functions may refer to: in the photographing mode or video recording mode of the camera, the user performs a zooming operation on a shooting object, and the electronic device automatically zooms in or out on the object.
In some embodiments, the camera module 190 includes: a lens, a photosensitive element (Sensor), an OIS IC, and a Scan IC. The camera module 190 may further include an AF IC, not shown.
In some embodiments, the camera module 190 is a telephoto (tele) module.
In the processor 110, the ISP is also configured to process the data fed back by the camera module 190. For example, when photographing, the shutter is opened, light is transmitted to the Sensor through the lens, the optical signal is converted into an electrical signal, and the Sensor transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to the naked eye. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene.
The lens of the camera module 190 includes a reflecting prism located in the incident light path of the Sensor, which reflects light onto the Sensor. In some embodiments, the reflecting prism supports rotation in the xz plane, illustratively in the range of ±24 degrees, and rotation in the yz plane, illustratively in the range of ±7 degrees, where the xz plane and the yz plane are intersecting planes. Rotation of the reflecting prism can be understood as rotation of the camera module lens, which makes it possible to keep the shooting object at the center of the picture while the object is being enlarged or reduced.
The Scan IC may be understood as a controller that controls the rotation of the reflecting prism, for example a chip with logic control capability. The Scan motor receives control commands from the Scan IC and drives the reflecting prism to rotate. In some embodiments, there are two Scan motors: one drives the rotation of the reflecting prism in the xz plane, and the other drives the rotation of the reflecting prism in the yz plane, as in the sketch below.
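A minimal sketch, with assumed interfaces, of this two-motor arrangement: a target orientation is split between one motor per rotation plane, each limited to its own illustrative travel range.

```cpp
#include <algorithm>
#include <cstdio>

struct AxisMotor {
    const char* axis;
    double limitDeg;  // mechanical travel limit for this axis
    void driveToDeg(double deg) {
        deg = std::clamp(deg, -limitDeg, limitDeg);
        std::printf("%s motor -> %.1f deg\n", axis, deg);
    }
};

int main() {
    AxisMotor xz{"xz", 24.0};  // illustrative ±24° range from the text above
    AxisMotor yz{"yz", 7.0};   // illustrative ±7° range from the text above
    // A single target orientation is decomposed into one command per motor.
    xz.driveToDeg(12.0);
    yz.driveToDeg(-3.0);
}
```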
In some embodiments, a brake component is used to lock and unlock the lens position: it is unlocked before the reflecting prism rotates and locked when the rotation ends, which helps the lens stabilize quickly after reaching its position. In some embodiments, the brake component comprises a brake IC and an actuator, and the brake IC receives control instructions from the Scan IC to lock or unlock the actuator. The brake IC may be understood as a controller, such as a chip with logic control capability.
In some embodiments, an OIS IC (Optical Image Stabilizer IC) is used for anti-shake; the OIS IC may control the OIS motor to counteract shake. In some embodiments, the OIS motor does not perform anti-shake while the Scan motor rotates the reflecting prism; accordingly, the Scan IC may keep the OIS motor idle while the Scan motor operates and let it operate after the Scan motor finishes. The OIS IC and the OIS motor may be collectively referred to as the anti-shake component.
In some embodiments, the AF motor is used for focusing, and the OIS IC may perform algorithm compensation to control the AF motor to focus. The AF motor may also be referred to as a focusing member.
The electronic device also runs an operating system on top of the hardware components shown in fig. 4, for example an Android operating system or another operating system. An application program such as a camera application can be installed and run on the operating system.
Fig. 5 is a schematic software structure of an electronic device according to an embodiment of the present application.
The layered architecture divides the operating system of the electronic device into several layers, each with its own role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the operating system of the electronic device is an Android system. The Android system can be divided into five layers: from top to bottom, an application (APP) layer, an application framework layer (FWK for short), a system library, a hardware abstraction layer (Hardware Abstraction Layer, HAL), and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 5, the application package may include applications such as cameras and calls.
The application framework layer provides an application programming interface (application programming interface, API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 5, the application framework layer may include a window management service, a content provider, a phone manager, a view system, a resource manager, and a camera service, etc.
The window management service is used to manage window programs. The window management service can realize the addition, deletion, display, hiding control and the like of the window. The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc. The telephony manager is for providing communication functions of the electronic device. Such as the management of call status (including on, hung-up, etc.). The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
In some embodiments, the camera service may also be referred to as the camera framework; it receives video recording requests, image capturing requests, and the like from the camera application, maintains the business logic that circulates these requests internally, and sends the final result of a request back to the camera application.
Android Runtime includes a core library and virtual machines. Android Runtime is responsible for scheduling and management of the Android system. In some embodiments of the present application, a cold-started application runs in the Android Runtime; the Android Runtime obtains the optimized-file state parameter of the application from the start of running, determines from this parameter whether the optimized file is outdated due to a system upgrade, and returns the determination result to the application management and control module.
The core library consists of two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules, for example: a surface manager (surface manager), media libraries (Media Libraries), a three-dimensional graphics processing library (e.g., OpenGL ES), and a two-dimensional graphics engine (e.g., SGL).
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications. Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc. The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The two-dimensional graphics engine is a drawing engine for 2D drawing.
The hardware abstraction layer is software located between the operating system kernel and the hardware circuitry, and is generally used to abstract the hardware to implement the interaction between the operating system and the hardware circuitry at the logic layer.
In some embodiments, the hardware abstraction layer includes a camera Hal, which includes the image link Pipeline and CamxHal3. The camera Hal is mainly responsible for building and linking all modules in the image link and for interaction control, etc.; CamxHal3 interacts with upper-layer modules to open or close the camera Hal and transmits data to the camera Hal.
In some embodiments, the image link Pipeline comprises a module control module, an ISP control module, and an algorithm module. The module control module is responsible for management of the camera module and comprises a Sensor control module, a Scan control module, an OIS control module, and a motor (Actuator) control module.
In some embodiments, the Sensor control module is configured to interface with the image Sensor, and the Sensor control module may control the operation of the Sensor through a Sensor driver and a Sensor Firmware.
In some embodiments, a Scan control module is used to interface the Scan motor to effect rotation of the reflective prism, and the Scan control module may control the operation of the Scan motor via a Scan drive and a Scan Firmware.
In some embodiments, the OIS control module is configured to implement anti-shake processing during rotation of the camera module; the OIS control module may control the operation of the OIS motor through the OIS driver and OIS Firmware. In other embodiments, the Scan control module may control the operation of the OIS motor via the Scan driver, Scan Firmware, and control of OIS Firmware.
In some embodiments, the Actuator control module is configured to perform AF focusing during rotation of the camera module. The Actuator control module can control the operation of the AF motor through the AF driver, OIS Firmware, and AF Firmware.
The ISP control module is used for receiving the image data sent by the Sensor, processing the image data and then sending the processed image data to the algorithm module. In some embodiments, sensor refers to the Sensor of camera module 190.
The algorithm module is used for receiving the image data sent by the ISP control module, analyzing the image data to obtain the operation strategy of the lower module and feeding back the operation strategy to the module in the module control module. In some embodiments, the algorithm module obtains a rotation strategy of the reflecting prism, and feeds the rotation strategy back to the Scan control module.
The kernel layer is a layer between hardware and software. The kernel layer receives commands from the control layer and converts them into actual device operation commands, including power-on and power-off control and the generation and issuing of device-mode register commands. The register control commands of the driver layer are passed to the firmware layer via the device bus. The kernel layer includes at least the Sensor driver, the Scan driver, the OIS driver, the AF driver, and the like.
The Firmware layer refers to the software systems integrated inside the components of the camera module, including Sensor Firmware, Scan Firmware, OIS Firmware, AF Firmware, and brake Firmware. In some embodiments, the Firmware layer may not include brake Firmware.
In some embodiments, sensor Firmware is a software system running inside the Sensor, responsible for Sensor exposure and image data output. Scan Firmware is a software system running inside a Scan IC and used for controlling rotation of a camera module lens and converting codes issued by an upper layer into control signals so as to drive the camera module lens to rotate, thereby achieving the focus-tracking and view-finding purposes. The brake Firmware is a software system running inside the brake IC and used for locking the position of the camera module, so that the camera module can be quickly and stably reached to the target position. OIS Firmware is a software system running inside the OIS IC for anti-shake. AF Firmware is a software system running inside an AF IC and used for tracking focus and is responsible for converting a target position issued by an upper layer into motor current to drive an AF motor to focus.
Under the above five-layer architecture, the electronic device also has a hardware layer, which may include the aforementioned hardware components of the electronic device. By way of example, fig. 5 shows the Sensor, the Scan IC, the OIS IC, the AF motor, and the brake component.
It should be noted that the embodiments of the present application are described taking the Android system as an example, but the basic principle is equally applicable to electronic devices running other operating systems.
The following describes the operation sequence of the zooming flow in the shooting method provided in the embodiment of the present application with reference to the modules in the software structure of the electronic device illustrated in fig. 5.
When the camera application starts and displays the camera preview interface and the user inputs a zooming operation on that interface, the camera application sends per-frame requests to the image link Pipeline through the camera framework to trigger execution of the zooming process; each request carries the coordinates of the zoom operation position.
Fig. 6 shows the operation timing of the zooming process. As shown in fig. 6, the image link Pipeline receives the request and executes step 1: calls the algorithm module to process the request. In some embodiments, the algorithm module analyzes the coordinates of the zoom operation position together with the position information of the reflecting prism to obtain the target position of the reflecting prism; since the Scan motor drives the reflecting prism, the target position obtained by the algorithm module is usually an operation parameter of the Scan motor, expressed as a code. In some embodiments, the algorithm module may also obtain the operation parameters of the OIS motor according to the coordinates of the zoom operation position and the position information of the OIS motor, likewise expressed as a code. In some embodiments, the algorithm module may also calculate the focusing parameters of the AF motor, again expressed as a code.
After calculating the code of the Scan motor, the algorithm module executes step 2: writes the code into the Metadata Pool.
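A minimal sketch of this Metadata Pool interaction is given below: the algorithm module writes motor codes under tags and each control module polls ("listens") for the tag it cares about. The map-based pool and the tag names are assumptions for illustration, not the actual CamX metadata mechanism.

```cpp
#include <cstdio>
#include <map>
#include <optional>
#include <string>

class MetadataPool {
public:
    void write(const std::string& tag, int code) { pool_[tag] = code; }

    std::optional<int> read(const std::string& tag) const {
        auto it = pool_.find(tag);
        if (it == pool_.end()) return std::nullopt;
        return it->second;
    }

private:
    std::map<std::string, int> pool_;
};

int main() {
    MetadataPool pool;
    pool.write("scan_motor_code", 1234);           // step 2: algorithm module writes
    if (auto code = pool.read("scan_motor_code"))  // step 3-1: Scan module listens
        std::printf("Scan control module got code %d\n", *code);
}
```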
The Scan control module executes step 3-1: listens for data written to the Metadata Pool; the OIS control module executes step 3-2: listens for data written to the Metadata Pool; and the Actuator control module executes step 3-3: listens for data written to the Metadata Pool.
When the Scan control module detects that the code of the Scan motor has been written to the Metadata Pool, step 4-1 is executed: the Scan control module checks the code and issues it to the Scan driver through the CSL bus and the V4L2 bus.
The Scan driver converts the code into a register instruction and transmits the instruction to Scan Firmware through a bus such as I2C; under the control of Scan Firmware, step 5-1 brake unlocking, step 5-2 rotation of the reflecting prism, and step 5-3 brake locking are carried out.
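The following sketch illustrates this conversion of a motor code into a register write sent over I2C. The device address, register number, and the bus helper are assumptions; a real kernel driver would use its platform's I2C or regmap API instead of the stand-in shown here.

```cpp
#include <cstdint>
#include <cstdio>

// Stand-in for a bus write; prints instead of touching hardware.
bool i2cWriteReg(uint8_t devAddr, uint8_t reg, uint16_t value) {
    std::printf("i2c 0x%02X: reg 0x%02X <- 0x%04X\n", devAddr, reg, value);
    return true;
}

bool issueScanCode(uint16_t motorCode) {
    const uint8_t kScanIcAddr   = 0x1C;  // assumed I2C address of the Scan IC
    const uint8_t kTargetPosReg = 0x10;  // assumed target-position register
    return i2cWriteReg(kScanIcAddr, kTargetPosReg, motorCode);
}

int main() { issueScanCode(1234); }
```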
When the OIS control module detects that the code of the OIS motor has been written to the Metadata Pool, it executes step 4-2: issues an OIS command to the OIS driver through the CSL bus and the V4L2 bus; the command may carry the code so as to drive the OIS driver to perform anti-shake.
The OIS driver can send the code to OIS Firmware through a bus such as I2C, so as to drive OIS Firmware to perform anti-shake processing.
According to the register instruction, Scan Firmware controls the brake Firmware to execute step 6-1: the brake is unlocked before the reflecting prism rotates and locked after the rotation ends. Scan Firmware also executes step 6-2: controls the rotation of the reflecting prism.
Scan Firmware further executes step 7-1: updates TMR data to OIS Firmware in real time; and step 7-2: controls OIS Firmware to enter HOLDON while it controls the rotation of the reflecting prism, and controls OIS Firmware to enter OISON after the rotation ends. In some embodiments, the TMR data indicates the position of the reflecting prism.
OIS Firmware responds to the control of Scan Firmware and executes step 8-1: stays in HOLDON during the rotation and switches to OISON after the rotation ends; step 8-2: receives the TMR data and updates the anti-shake parameters accordingly; and step 8-3: synchronously receives information from the OIS driver. While OIS Firmware is in HOLDON in response to Scan Firmware, it does not execute OIS-driver commands for anti-shake.
Scan Firmware may also periodically acquire TMR data and execute step 9-1: reports the TMR data to the Scan driver. OIS Firmware may also periodically acquire HALL data and execute step 9-2: reports the HALL data, which indicates the position of the OIS motor.
The Scan driver executes step 10-1: reports the TMR data to the Scan control module over the V4L2 bus and the CSL bus. The OIS driver executes step 10-2: reports the HALL data to the OIS control module over the V4L2 bus and the CSL bus.
The Scan control module executes step 11-1: reports the TMR data to the NCS. The OIS control module executes step 11-2: reports the HALL data to the NCS.
The NCS executes step 12: summarizes the data reported by the Scan control module and the OIS control module and reports the summarized data to the algorithm module. The algorithm module stores the summarized data.
The Actuator control module can detect the code of the AF motor written to the Metadata Pool and issue the code downward; step 13 is then executed: the AF motor code is transmitted to OIS Firmware, which performs position compensation to obtain a new code and sends the new code to AF Firmware of the AF IC to execute focusing, thereby ensuring imaging sharpness.
When the electronic device executes the zooming flow, the Scan driver has two execution modes:
Mode one: the motor code issued by the Scan control module takes effect directly.
Mode two: the motor code issued by the Scan control module is bound to a request id and takes effect when the request id is matched.
The zoom procedures implemented in the two modes are described below with reference to fig. 7 and fig. 8: mode one corresponds to the embodiment shown in fig. 7, and mode two corresponds to the embodiment shown in fig. 8.
As shown in fig. 7, a zoom procedure of an electronic device provided in an embodiment of the present application includes:
S701, the user inputs a zoom operation on the camera preview interface.
In some embodiments, the user starts the camera application, the camera 130 captures images to produce a camera preview interface, and the camera application displays the camera preview interface on the display screen. On the camera preview interface, the user can select an operating mode of the camera, input a zoom operation, and so on. Fig. 1 to 3 illustrate a user inputting a zoom operation on the camera preview interface.
In some embodiments, the camera application receives an input operation of the user, and may also acquire an operation position of the user.
As shown in fig. 1 (a), 2 (a), or 3 (a), the user inputs a zoom operation by clicking an object in a camera preview interface, and the camera application receives the zoom operation of the user and acquires the operation position of the user.
In some embodiments, the user's operation position may be understood as the coordinates, within the image displayed on the camera preview interface, of the position at which the user's input was made.
S702, the camera application issues a request to the camera service, the request carrying coordinates of the zoom operation position.
After receiving the zoom operation input by the user and obtaining the user's operation position, the camera application issues a request to the camera service. The request carries the user's operation position, that is, the coordinates of the operation position, and is used to request execution of the zoom procedure.
S703, the camera service sends a request to the algorithm module, the request carrying the coordinates of the zoom operation position.
In some embodiments, the manner in which the camera service sends the request to the algorithm module includes:
the camera service issues a request to CamxHal3, the request carrying the coordinates of the zoom operation position. That is, after receiving the request sent by the camera application, the camera service issues the request to CamxHal3; after CamxHal3 receives the request, it sends the request to the Pipeline; and after the Pipeline receives the request, it sends the request to the algorithm module so that the algorithm module is invoked to process the request.
In some embodiments, after the user inputs the tracking object and zooms, the CamxHal3 receives a request sent by the camera service, and sends the request to the Pipeline corresponding to the camera module 190.
It should be noted that when the user turns on the camera, the camera 130 of the electronic device (which may be understood as the main camera) operates to collect images, and the functional modules of the Pipeline corresponding to the camera 130 run so that the display screen displays the camera preview interface, which shows the images captured by the camera 130.
When the user inputs a zoom operation on the camera preview interface, the functional modules of the Pipeline corresponding to the camera module 190 are driven to run, and the images collected by the camera module 190 can be displayed on the display screen to maintain the camera preview interface. The camera module 190 is a telephoto module, so the subject in the images it collects is larger, which magnifies the picture of the shooting object. In some embodiments, the Pipeline corresponding to the camera 130 may keep running or stop running; this is not limited.
S704, the algorithm module analyzes the coordinates and the position information of the reflecting prism to obtain the target position of the reflecting prism.
In some embodiments, the algorithm module stores data such as position information of the reflecting prism and status of the brake element. In some embodiments, the position information of the reflecting prism may be a detection value of the TMR sensor, abbreviated as TMR data.
After receiving the request, the algorithm module acquires the coordinates carried by the request and the position information of the reflecting prism, and obtains the position reached after the reflecting prism rotates, namely the target position, by utilizing the coordinates carried by the request and the position information of the reflecting prism. In some embodiments, the target location may be a code.
In some embodiments, the algorithm module may store calibration data, where the calibration data includes a rotation angle of the reflecting prism corresponding to a coordinate operated by a user, and the algorithm module may obtain a target position of the reflecting prism by combining the position information of the reflecting prism and the rotation angle, where the target position may be understood as a position to be reached after the reflecting prism rotates.
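As an illustrative sketch only, the mapping from the user's operation coordinates to the target position of the reflecting prism might take the following form in C++. The structure names (PrismState, CalibrationTable), the linear pixel-to-code mapping, and all constants are assumptions of this description; the patent does not disclose the actual calibration format.

```cpp
// Hypothetical sketch of the S704 target-position calculation: the offset
// of the tapped point from the screen center is converted, via calibration
// data, into a delta in motor code units and added to the current position.
#include <cstdint>

struct PrismState {
    int32_t tmr_code;  // current position of the reflecting prism (TMR data)
};

struct CalibrationTable {
    // Assumed linear mapping from pixels of offset to motor code units.
    double code_per_pixel_x;
    double code_per_pixel_y;
};

int32_t ComputeTargetCode(const PrismState& prism, const CalibrationTable& cal,
                          int tap_x, int tap_y, int center_x, int center_y) {
    double dx = static_cast<double>(tap_x - center_x);
    double dy = static_cast<double>(tap_y - center_y);
    // A single rotation axis is assumed here; a real module would resolve
    // the offset onto the prism's actual rotation axes.
    double delta = dx * cal.code_per_pixel_x + dy * cal.code_per_pixel_y;
    return prism.tmr_code + static_cast<int32_t>(delta);
}
```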
S705, the algorithm module writes the code in Metadata Pool.
In some embodiments, the target position of the reflecting prism obtained by the algorithm module, i.e., the code, may be written into the data pool Metadata Pool for other modules to monitor changes of the data in the pool. In some embodiments, the Metadata Pool includes a plurality of metadata, each metadata storing a value for a different function. The algorithm module writes the code into the metadata corresponding to the zoom function.
In some embodiments, the algorithm module also stores anti-shake parameters of the OIS motor; in some embodiments, the anti-shake parameters of the OIS motor are detection values of HALL sensors, abbreviated as HALL data. After the algorithm module receives the request, the instruction value of the anti-shake policy of the OIS motor can be obtained according to the coordinates carried by the request and the anti-shake parameters of the OIS motor; in some embodiments, the anti-shake policy includes an operation mode and operation parameters of the OIS.
In some embodiments, the instruction value of the anti-shake policy of the OIS motor obtained by the algorithm module may also be written into the Metadata Pool for other modules to monitor the change of the data in the data pool. In some embodiments, this instruction value is written into the metadata corresponding to the anti-shake function.
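The write-and-listen interaction around the Metadata Pool in steps S705 and S706 can be pictured as a small publish/subscribe structure. The following C++ sketch is purely illustrative: the MetadataPool class, the Tag enumeration, and the callback interface are assumptions of this description, not structures disclosed by the patent.

```cpp
// Hypothetical publish/subscribe sketch: the algorithm module writes a
// tagged value, and every control module that registered a listener for
// that tag (Scan, OIS, Actuator) is notified of the change.
#include <cstdint>
#include <functional>
#include <map>
#include <vector>

enum class Tag { kZoomCode, kAntiShakeCommand, kFocusCode };  // assumed tags

class MetadataPool {
public:
    using Listener = std::function<void(int32_t)>;

    // A control module registers for the tag it owns (steps 3-1 to 3-3).
    void Listen(Tag tag, Listener cb) { listeners_[tag].push_back(std::move(cb)); }

    // The algorithm module writes a value; listeners of the tag fire (S705/S706).
    void Write(Tag tag, int32_t value) {
        values_[tag] = value;
        for (auto& cb : listeners_[tag]) cb(value);
    }

private:
    std::map<Tag, int32_t> values_;
    std::map<Tag, std::vector<Listener>> listeners_;
};
```

With such a structure, writing the code into the metadata corresponding to the zoom function automatically triggers the monitoring logic of the Scan control module.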
S706, the Scan control module monitors data of the Metadata Pool.
S707, the Scan control module monitors that the Metadata Pool has been written with the code, and after checking the code, issues a control command to the Scan driver, wherein the control command carries the code.
In some embodiments, before the Scan control module issues a code to the Scan driver, the code is checked, which can be understood as checking whether the target position of the reflecting prism indicated by the code is a legal position. If the check is passed, the code is issued to the Scan driver.
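A plausible form of this legality check is a simple range test against the mechanical travel limits of the reflecting prism; the limit values and the function name below are invented for illustration.

```cpp
// Hypothetical legality check run by the Scan control module before the
// code is issued to the Scan driver (S707): the target position must lie
// within the assumed mechanical travel range of the reflecting prism.
#include <cstdint>

constexpr int32_t kPrismCodeMin = -2048;  // assumed lower travel limit
constexpr int32_t kPrismCodeMax = 2047;   // assumed upper travel limit

bool IsLegalPrismCode(int32_t code) {
    return code >= kPrismCodeMin && code <= kPrismCodeMax;
}
```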
In some embodiments, after the algorithm module obtains the target position of the reflecting prism, it may also be directly sent to the Scan control module.
S708, the Scan driver controls Scan Firmware to execute brake unlocking, reflecting prism rotation, and brake locking.
After the Scan driver receives the target position of the reflecting prism, it may control Scan Firmware to perform brake unlocking, reflecting prism rotation, and brake locking.
In some embodiments, the Scan driver translates the target position of the reflecting prism into a register instruction that is passed to the Scan Firmware over an I2C or other bus. Scan Firmware executes the following steps S709, S710, S712 and S713 according to the register instruction.
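The conversion into register writes might look like the sketch below. The register addresses, the write sequence, and the I2cWrite helper are entirely hypothetical, since the patent does not define the register map of the Scan IC.

```cpp
// Hypothetical sketch of the Scan driver turning the target code into
// register instructions pushed to Scan Firmware over I2C (steps 5-1/5-2).
#include <cstdint>

constexpr uint8_t kRegBrake    = 0x10;  // assumed: 0 = unlock, 1 = lock
constexpr uint8_t kRegTargetLo = 0x20;  // assumed: target code, low byte
constexpr uint8_t kRegTargetHi = 0x21;  // assumed: target code, high byte
constexpr uint8_t kRegStart    = 0x22;  // assumed: write 1 to start the move

void I2cWrite(uint8_t reg, uint8_t value) { /* platform-specific I2C transaction */ }

void IssueScanMove(int32_t target_code) {
    I2cWrite(kRegBrake, 0);  // brake unlocking before the rotation
    I2cWrite(kRegTargetLo, static_cast<uint8_t>(target_code & 0xFF));
    I2cWrite(kRegTargetHi, static_cast<uint8_t>((target_code >> 8) & 0xFF));
    I2cWrite(kRegStart, 1);  // start rotating the reflecting prism
    // Brake locking follows once Scan Firmware reports the move complete;
    // the completion handshake is omitted from this sketch.
}
```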
In the embodiment of the application, the Scan driver receives the control command carrying the code and directly responds to it, controlling Scan Firmware to execute brake unlocking, rotation of the reflecting prism, and brake locking. The reflecting prism can thus quickly rotate to the target position, so that the zoom operation object is quickly moved to the center of the picture and magnified.
The execution order of step S709 and step S710 is not limited to that shown in fig. 7, and in some embodiments, Scan Firmware may execute step S709 and step S710 in parallel.
While the reflecting prism is rotating, any anti-shake processing performed by the electronic device is ineffective, so the electronic device does not perform anti-shake during the rotation and resumes the anti-shake process once the rotation ends. Based on this, Scan Firmware controls OIS Firmware to enter HOLD ON at the moment the reflecting prism starts to rotate, and controls OIS Firmware to enter OIS ON at the moment the rotation ends. HOLD ON may be understood as an anti-shake suspension state, and OIS ON may be understood as an anti-shake start state.
S709, Scan Firmware sends a HOLD ON instruction to OIS Firmware.
The HOLD ON instruction is used to control OIS Firmware to suspend anti-shake processing and enter the HOLD ON state. After OIS Firmware enters the HOLD ON state, control instructions issued by the OIS driver are only buffered and are not directly executed.
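The buffer-while-suspended behavior can be sketched as a two-state machine. The class below is illustrative only; the Command type and the method names are assumptions of this description.

```cpp
// Hypothetical OIS Firmware state machine: in HOLD ON, commands from the
// OIS driver are buffered; on OIS ON, anti-shake resumes and the buffered
// commands are replayed.
#include <deque>

struct Command { /* anti-shake mode and parameters (placeholder) */ };

class OisFirmwareModel {
public:
    void EnterHoldOn() { hold_on_ = true; }  // S709: suspend anti-shake

    void EnterOisOn() {                      // S718: resume anti-shake
        hold_on_ = false;
        while (!pending_.empty()) {          // replay what was buffered
            Execute(pending_.front());
            pending_.pop_front();
        }
    }

    void OnDriverCommand(const Command& cmd) {
        if (hold_on_) pending_.push_back(cmd);  // buffer, do not execute
        else Execute(cmd);
    }

private:
    void Execute(const Command& /*cmd*/) { /* drive the OIS motor */ }

    bool hold_on_ = false;
    std::deque<Command> pending_;
};
```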
S710, Scan Firmware controls the brake Firmware to execute brake unlocking.
S711, brake Firmware executes brake unlocking.
The brake Firmware controls the brake component to execute brake unlocking.
S712, scan Firmware controls the reflecting prism to rotate to the position indicated by the code.
In some embodiments, Scan Firmware controls the Scan motor to rotate the reflecting prism until the reflecting prism reaches the target position, that is, the position indicated by the code.
For the zooming operation illustrated in fig. 1 to 3, after the reflecting prism rotates to the target position, the display screen displays the interface shown in fig. 1 (b), fig. 2 (b) or fig. 3 (b), and the user can view the zoomed image through the display screen.
In this embodiment of the present application, the algorithm module analyzes the coordinates of the zoom operation and the position information of the reflecting prism in step S704 to obtain the target position of the reflecting prism. The target position may be understood as the position at which the zoom operation object lies at the center of the picture after the zoom operation is executed. Scan Firmware controls the Scan motor to rotate, driving the reflecting prism until it reaches the target position, so that after the user performs a zoom operation on an object, the object is automatically located at the center of the picture displayed on the display screen without any further user operation. Because no user operation is needed, the problem of blurred images caused by the user moving the device to track the subject is also avoided.
S713, the Scan Firmware controls the brake Firmware to execute brake locking.
S714, brake Firmware executes brake locking.
The brake Firmware controls the brake components to perform brake locking.
S715, the Scan control module sends a query command to the Scan driver to query TMR data.
The Scan control module has a timer and periodically reads the TMR data and the brake status, which it transmits to the NCS; the NCS passes them to the algorithm module for deciding the next operation mode and code of the Scan motor, forming closed-loop Scan control.
In some embodiments, a timer may also be present in the Scan driver. The Scan driver periodically reads TMR data and brake status and reports to the algorithm module via NCS.
In some embodiments, the OIS control module may also have a timer to periodically read HALL data from the OIS driver, which is passed to the NCS, which in turn passes the data to the algorithm module for anti-shake.
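The timer-driven closed loop described above reduces to a simple polling pattern; the 10 ms period and the query/report interfaces in this sketch are assumptions, not values from the patent.

```cpp
// Hypothetical polling loop of the Scan control module: each period it
// queries TMR data and the brake status from the Scan driver and forwards
// them to the NCS, which relays them to the algorithm module (S715-S724).
#include <chrono>
#include <cstdint>
#include <thread>

struct ScanStatus { int32_t tmr_code; int32_t brake_state; };

ScanStatus QueryScanDriver() { return {0, 0}; }  // stub: real code goes over V4L2/CSL
void ReportToNcs(const ScanStatus&) {}           // stub: real code forwards to the NCS

void ScanControlPollLoop(const bool& running) {
    using namespace std::chrono_literals;
    while (running) {
        ScanStatus status = QueryScanDriver();
        ReportToNcs(status);
        std::this_thread::sleep_for(10ms);  // assumed polling period
    }
}
```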
S716, the Scan driver sends a query command to Scan Firmware to query TMR data.
The Scan driver sends the query command to Scan Firmware to query TMR data either upon receiving the query command from the Scan control module or when triggered by its own timer.
S717, scan Firmware reads TMR data.
Scan Firmware receives the query command and reads TMR data from the TMR sensor.
S718, the Scan Firmware sends TMR data and OIS ON instructions to the OIS Firmware.
After the reflecting prism rotates to the target position, Scan Firmware transmits the TMR data and an OIS ON instruction to OIS Firmware. OIS Firmware enters the OIS ON state and updates the anti-shake parameters according to the TMR data, controlling the OIS motor to restore the state it was in before HOLD ON, or to the buffered state.
In some embodiments, Scan Firmware passes the OIS ON instruction to OIS Firmware only after the reflecting prism has rotated to the target position, whereas the TMR data may be sent to OIS Firmware whenever it is read, independent of whether the reflecting prism has reached the target position.
S719, OIS Firmware updates the anti-shake parameters according to the TMR data.
In some embodiments, the OIS Firmware may update the anti-shake parameters according to the TMR data before the anti-shake start, so as to ensure the anti-shake effect of the Scan motor at different positions.
S720, the Scan Firmware reads the brake state value from the brake Firmware.
The brake status value indicates an operational status of the brake element.
S721, the Scan Firmware reports TMR data and brake status values to the Scan driver.
After the Scan Firmware reads the TMR data and the brake status value, the TMR data and the brake status value may be reported to the Scan driver.
S722, reporting TMR data and a brake state value to the Scan control module by the Scan driver.
After receiving the TMR data and the brake state value, the Scan driver reports the TMR data and the brake state value to the Scan control module.
S723, the Scan control module reports TMR data and brake status values to the NCS.
After receiving the TMR data and the brake status value, the Scan control module reports the TMR data and the brake status value to the NCS.
As shown in fig. 6, the NCS (Non-Camera Sensor Service) is located at the hardware abstraction layer and manages the data of auxiliary devices other than the image sensor.
The NCS may pass TMR data to an algorithm module. In some embodiments, the algorithm module registers TMR data subscription, after which the TMR data may be passed from the Scan control module to the NCS, which in turn passes the data to the algorithm module.
S724, NCS sends TMR data and brake status values to the algorithm module.
The algorithm module receives TMR data and a brake status value, which may be updated and stored.
In some embodiments, the Scan control module may send TMR data and brake status values directly to the algorithm module or via other modules, and is not limited to the manner in which steps S723 and S724 of the present embodiment are performed.
As shown in fig. 8, a zoom procedure of an electronic device provided in an embodiment of the present application includes:
S801, the user inputs a zoom operation on the camera preview interface.
The specific implementation manner of step S801 may be referred to the content of step S701 in the foregoing embodiment, which is not described herein.
S802, the camera application issues a request to the camera service, the request carrying coordinates of the zoom operation position.
The specific implementation manner of step S802 can be referred to the content of step S702 in the foregoing embodiment, which is not described herein.
S803, the camera service issues a request to the algorithm module, wherein the request carries the coordinates of the zoom operation position.
The specific implementation manner of step S803 can be referred to the content of step S703 in the foregoing embodiment, which is not described herein.
S804, the algorithm module analyzes the coordinates and the position information of the reflecting prism to obtain a code.
The specific implementation manner of step S804 can be referred to the content of step S704 in the foregoing embodiment, which is not described herein.
In one shooting scene, the user inputs a zoom operation for an object on the camera preview interface, and the electronic device displays, on the camera preview interface, a picture in which the object is gradually enlarged. In this shooting scene, the algorithm module analyzes the coordinates and the position information of the reflecting prism to obtain a plurality of codes, which indicate a plurality of position points on the movement track of the reflecting prism; the movement track may be understood as the track between the current position of the reflecting prism and the target position indicated by the zoom operation. The algorithm module may also write the plurality of codes into the Metadata Pool in turn.
It will be appreciated that each code corresponds to a frame of image of the camera module 190, and that a plurality of codes correspond to consecutive frames of images of the camera module 190.
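One simple way to produce such a per-frame sequence is linear interpolation between the current position and the target; the interpolation itself is an assumption, since the patent only states that the codes describe the movement track.

```cpp
// Hypothetical sketch: split the move from the current code to the target
// code into one step per image frame, yielding the gradual centering and
// enlarging effect described above.
#include <cstdint>
#include <vector>

std::vector<int32_t> BuildTrajectoryCodes(int32_t current_code,
                                          int32_t target_code, int frames) {
    std::vector<int32_t> codes;
    codes.reserve(frames);
    for (int i = 1; i <= frames; ++i) {
        int64_t step = static_cast<int64_t>(target_code - current_code) * i / frames;
        codes.push_back(current_code + static_cast<int32_t>(step));
    }
    return codes;  // written to the Metadata Pool in turn (S805)
}
```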
S805, the algorithm module writes the code in Metadata Pool.
The specific implementation manner of step S805 may be referred to the content of step S705 in the foregoing embodiment, which is not described herein.
S806, the Scan control module monitors data of the Metadata Pool.
The specific implementation manner of step S806 may refer to the content of step S706 in the foregoing embodiment, which is not described herein.
S807, the Scan control module monitors that the Metadata Pool has been written with the code, and after checking the code, issues a control command to the Scan driver, wherein the control command carries the code.
The specific implementation manner of step S807 can be referred to the content of step S707 in the foregoing embodiment, which is not described herein.
S808, the Scan driver writes the code into the CRM.
After receiving the control command carrying the code, the Scan driver does not respond to it immediately; it waits for the moment at which the Sensor of the camera module 190 outputs an image frame before controlling the reflecting prism to rotate, so that the frame output of the camera module 190 and the rotation of the reflecting prism are synchronized.
In the shooting scenario proposed in the foregoing, the Scan driver writes a plurality of codes, each corresponding to one frame image, into the CRM.
The CRM (Camera Request Manager) is a module for managing camera application requests. Requests issued by the modules of the hardware abstraction layer are first transmitted to the CRM; the CRM matches them against the frame-output timing of the sensor and then issues the configuration to the corresponding modules under an N+1 or N+2 effectivity mechanism. Data interrupts for the frames output by the underlying sensor are reported to the modules of the hardware abstraction layer through the CRM.
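The matching behavior of the CRM can be pictured as a map from frame number to a pending code, consumed as frame interrupts arrive. This C++ sketch is an illustration under assumed interfaces; it is not the actual CRM implementation.

```cpp
// Hypothetical CRM sketch: a code registered against a frame number is
// delivered to its driver when the interrupt for that frame arrives.
#include <cstdint>
#include <functional>
#include <map>

class CameraRequestManagerModel {
public:
    using Deliver = std::function<void(int32_t /*code*/)>;

    // A driver registers a code that should take effect at a given frame.
    void Register(uint64_t frame, int32_t code, Deliver deliver) {
        pending_[frame] = Entry{code, std::move(deliver)};
    }

    // Called once per frame interrupt reported for the sensor (S809/S810).
    void OnFrameInterrupt() {
        ++frame_count_;
        auto it = pending_.find(frame_count_);
        if (it != pending_.end()) {
            it->second.deliver(it->second.code);  // issue the code to the driver
            pending_.erase(it);
        }
    }

private:
    struct Entry { int32_t code; Deliver deliver; };
    uint64_t frame_count_ = 0;
    std::map<uint64_t, Entry> pending_;
};
```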
S809, IFE receives an interrupt signal corresponding to the image frame.
The IFE (Image Front End) can be understood as a hardware processing unit; the image frames output by the Sensor of the camera module 190 are pre-processed by the IFE. When the IFE receives the interrupt signal corresponding to an image frame, this indicates that the Sensor of the camera module 190 has output the image frame.
S810, the IFE schedules the CRM to send codes to the Scan driver.
Receiving the interrupt signal corresponding to an image frame tells the IFE that the Sensor of the camera module 190 has output an image frame. The IFE therefore schedules the CRM to send the code to the Scan driver, so that the Scan driver controls the rotation of the reflecting prism at the moment the Sensor of the camera module 190 outputs the image frame. In this way, the rotation of the reflecting prism follows the Sensor's frame output, and the two actions are executed synchronously.
In the shooting scene set forth above, each time the Sensor of the camera module 190 outputs an image frame, the IFE schedules the CRM to send the code corresponding to that image frame to the Scan driver, and the Scan driver uses the code to control Scan Firmware to perform brake unlocking, rotation of the reflecting prism, and brake locking. For the plurality of codes, the Scan driver controls Scan Firmware to execute brake unlocking and brake locking synchronously at the moment of each image frame corresponding to each code, so that the rotation of the reflecting prism follows the Sensor images of the camera module 190; that is, the display screen shows the zoom operation object being gradually enlarged while moving from its original position to the center of the picture.
In some embodiments, the IFE may also schedule the CRM to issue codes to the Scan driver when the Sensor outputs a particular image frame.
After the Scan driver receives the code in step S807, the code may be bound to a request id, and the Scan driver may register with the CRM according to the request id, so that the CRM issues the code to the Scan driver when the image frame indicated by the request id is output. The CRM can determine the frame number of the image output by the Sensor by counting how many times it has received the interrupt signal corresponding to an image frame.
It should be noted that, because image frames take effect with a delay, the Scan driver may configure the number of delay frames when registering with the CRM according to the request id, so that the CRM issues the code to the Scan driver that number of frames before the image frame indicated by the request id takes effect.
Illustratively, suppose an image frame takes effect with a delay of 2 frames, and the Scan driver, when registering with the CRM using the request id, registers the rotation of the reflecting prism for the output of the 10th frame image. The CRM then issues the code to the Scan driver after it has received the interrupt signal corresponding to image frames 8 times, ensuring that the reflecting prism rotates at the output moment of the 10th frame image.
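The arithmetic of this example is simply the effect frame minus the pipeline delay; the helper below restates it (the function name is invented for illustration).

```cpp
// For a move registered to take effect at frame 10 with a 2-frame delay,
// the code must be issued at the 8th frame interrupt: 10 - 2 == 8.
#include <cstdint>

uint64_t TriggerInterruptCount(uint64_t effect_frame, uint64_t delay_frames) {
    return effect_frame - delay_frames;
}
```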
S811, the Scan driver controls Scan Firmware to execute brake unlocking, reflecting prism rotation, and brake locking.
The specific implementation manner of step S811 can be referred to the content of step S708 in the foregoing embodiment, which is not described herein.
S812, Scan Firmware sends a HOLD ON instruction to OIS Firmware.
The specific implementation manner of step S812 can be referred to the content of step S709 in the foregoing embodiment, which is not described herein.
S813, Scan Firmware controls the brake Firmware to execute brake unlocking.
The specific implementation manner of step S813 can be referred to the content of step S710 in the foregoing embodiment, which is not described herein.
S814, the brake Firmware executes brake unlocking.
The specific implementation manner of step S814 can be referred to the content of step S711 in the foregoing embodiment, which is not described herein.
S815, Scan Firmware controls the reflecting prism to rotate to the position indicated by the code.
The specific implementation manner of step S815 can be referred to the content of step S712 in the foregoing embodiment, which is not described herein.
In this embodiment of the present application, the algorithm module analyzes the coordinates of the zoom operation and the position information of the reflecting prism in step S804 to obtain the target position of the reflecting prism. The target position may be understood as the position at which the zoom operation object lies at the center of the picture after the zoom operation is executed. Scan Firmware controls the Scan motor to rotate, driving the reflecting prism until it reaches the target position, so that after the user performs a zoom operation on an object, the object is automatically located at the center of the picture displayed on the display screen without any further user operation. Because no user operation is needed, the problem of blurred images caused by the user moving the device to track the subject is also avoided.
S816, Scan Firmware controls the brake Firmware to execute brake locking.
The specific implementation manner of step S816 may be referred to the content of step S713 in the foregoing embodiment, which is not described herein.
S817, brake Firmware executes brake locking.
The specific implementation manner of step S817 can be seen in the content of step S714 in the foregoing embodiment, which is not described herein.
S818, the Scan control module sends a query command to the Scan driver to query TMR data.
The specific implementation manner of step S818 can be referred to the content of step S715 in the foregoing embodiment, which is not repeated here.
S819, scan driver sends a query command to Scan Firmware to query TMR data.
The specific implementation manner of step S819 may refer to the content of step S716 in the foregoing embodiment, which is not described herein.
S820, scan Firmware reads TMR data.
The specific implementation manner of step S820 can be referred to the content of step S717 in the foregoing embodiment, which is not described herein.
S821, scan Firmware sends TMR data and OIS ON instructions to OIS Firmware.
The specific implementation manner of step S821 can be referred to the content of step S718 in the foregoing embodiment, and will not be described again here.
S822, the OIS Firmware updates the anti-shake parameters according to the TMR data.
The specific implementation manner of step S822 can be referred to the content of step S719 in the foregoing embodiment, which is not described herein.
S823, Scan Firmware reads the brake status value.
The specific implementation manner of step S823 may refer to the content of step S720 in the foregoing embodiment, which is not described herein.
S824, the Scan Firmware reports TMR data and brake status values to the Scan driver.
The specific implementation manner of step S824 may be referred to the content of step S721 in the foregoing embodiment, which is not described herein.
S825, the Scan driver reports TMR data and brake status values to the Scan control module.
The specific implementation manner of step S825 can be referred to the content of step S722 in the foregoing embodiment, which is not repeated here.
S826, the Scan control module reports TMR data and brake status values to the NCS.
The specific implementation manner of step S826 can be referred to the content of step S723 in the foregoing embodiment, which is not described herein.
S827, NCS sends TMR data and a brake status value to the algorithm module.
The specific implementation manner of step S827 can be seen in the content of step S724 in the foregoing embodiment, which is not repeated here.
For the zoom operation input by the user, the electronic device can also execute a focusing process to ensure the sharpness of the captured image. The following describes the focusing process executed by the electronic device with reference to fig. 9.
It can be understood that the electronic device may execute the zooming processes illustrated in fig. 7 and fig. 8 in parallel with the focusing process illustrated in the following embodiment.
As shown in fig. 9, a focusing process of the electronic device provided in the embodiment of the present application includes:
S901, the user inputs a zoom operation on the camera preview interface.
The specific implementation manner of step S901 can be referred to the content of step S701 in the foregoing embodiment, which is not described herein.
S902, the camera application issues a request to the camera service, the request carrying coordinates of the zoom operation position.
The specific implementation manner of step S902 can be referred to the content of step S702 in the foregoing embodiment, which is not described herein.
S903, the camera service issues a request to the algorithm module, wherein the request carries the coordinates of the zoom operation position.
The specific implementation manner of step S903 may refer to the content of step S703 in the foregoing embodiment, which is not described herein.
S904, the algorithm module obtains a focusing parameter code.
The focus parameter can be understood to be an operating parameter of the AF motor. In some embodiments, the algorithm module may use a contrast focus algorithm or a phase focus algorithm to obtain the focus parameters. In other embodiments, the algorithm module obtains one path of focusing parameters by using a contrast type focusing algorithm, obtains the other path of focusing parameters by using a phase focusing algorithm, and obtains the focusing parameters to be issued by combining the two paths of focusing parameters.
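One plausible way to combine the two paths is a confidence-weighted blend, sketched below; the weighting scheme and field names are assumptions, as the patent does not specify how the two results are fused.

```cpp
// Hypothetical fusion of the two focusing paths in S904: blend the
// contrast-based and phase-based AF codes by their confidences.
struct FocusParam {
    double af_code;     // candidate position for the AF motor
    double confidence;  // assumed reliability of this result, >= 0
};

FocusParam CombineFocus(const FocusParam& contrast, const FocusParam& phase) {
    double w = contrast.confidence + phase.confidence;
    if (w <= 0.0) return contrast;  // degenerate case: fall back to one path
    return FocusParam{
        (contrast.af_code * contrast.confidence +
         phase.af_code * phase.confidence) / w,
        w / 2.0};
}
```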
S905, the algorithm module writes the code in Metadata Pool.
After obtaining the focusing parameters, the algorithm module can write them into the metadata corresponding to the focusing function in the Metadata Pool.
S906, the Actuator control module monitors data of the Metadata Pool.
S907, the Actuator control module monitors that the Metadata Pool has been written with the code, and after checking the code, issues a control command to the AF driver, wherein the control command carries the code.
Before the Actuator control module sends the code to the AF driver, the code is checked, which can be understood as checking whether the target position of the AF motor indicated by the code is a legal position. If the check passes, the code is issued to the AF driver.
In some embodiments, after the algorithm module obtains the focusing parameters, they may be directly sent to the Actuator control module.
S908, the AF driver writes the code into the CRM.
After the AF driver receives the control command carrying the code, it does not respond to it immediately; it waits for the moment at which the Sensor of the camera module 190 outputs an image frame before controlling focusing, so that the frame output of the camera module 190 and the operation of the focusing motor are synchronized.
S909, IFE receives an interrupt signal corresponding to the image frame.
The specific implementation manner of step S909 may be referred to the content of step S809 in the foregoing embodiment, which is not described herein.
S910, the IFE schedules the CRM to issue codes to the AF driver.
The specific implementation manner of step S910 may be referred to the content of step S810 in the foregoing embodiment, which is not described herein.
S911, the AF driver controls OIS Firmware to execute focusing according to the code.
S912, OIS Firmware compensates focusing parameters according to the target position of the reflecting prism.
In some embodiments, the OIS Firmware stores a correspondence between a plurality of target positions of the reflecting prism and compensation values of the focusing parameters; using the correspondence, the OIS Firmware can determine the compensation value corresponding to the target position of the reflecting prism and compensate the focusing parameters with that compensation value.
In some embodiments, there are two AF motors; according to the target position of the reflecting prism, OIS Firmware compensates the focusing parameters with the two AF motors' compensation values, so as to obtain the compensated focusing parameters of the two AF motors.
In some embodiments, the Scan Firmware periodically sends the position information of the reflecting prism to the OIS Firmware, which may send the position information of the reflecting prism at the latest time by the Scan Firmware as the target position of the reflecting prism.
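The position compensation of S912 can be sketched as a lookup with linear interpolation in a calibration table keyed by the prism position; the table contents, the two-motor layout, and the interpolation are assumptions of this description.

```cpp
// Hypothetical sketch of S912: look up the compensation for the prism's
// target position and apply it to the focusing codes of the two AF motors.
#include <algorithm>
#include <array>
#include <cstdint>

struct CompEntry { int32_t prism_code; int32_t comp[2]; };  // one value per AF motor

// Assumed calibration table, ordered by prism_code.
constexpr std::array<CompEntry, 3> kCompTable{{
    {-1000, {-12, -9}},
    {0, {0, 0}},
    {1000, {12, 9}},
}};

void CompensateFocus(int32_t prism_code, int32_t af_code[2]) {
    auto hi = std::lower_bound(
        kCompTable.begin(), kCompTable.end(), prism_code,
        [](const CompEntry& e, int32_t c) { return e.prism_code < c; });
    if (hi == kCompTable.begin() || hi == kCompTable.end()) {
        // Clamp positions outside the table to the nearest entry.
        const CompEntry& edge = (hi == kCompTable.end()) ? kCompTable.back() : *hi;
        for (int m = 0; m < 2; ++m) af_code[m] += edge.comp[m];
        return;
    }
    auto lo = hi - 1;  // interpolate between the surrounding entries
    double t = static_cast<double>(prism_code - lo->prism_code) /
               static_cast<double>(hi->prism_code - lo->prism_code);
    for (int m = 0; m < 2; ++m)
        af_code[m] += static_cast<int32_t>(lo->comp[m] + t * (hi->comp[m] - lo->comp[m]));
}
```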
S913, OIS Firmware controls AF Firmware to execute focusing according to the compensated focusing parameters.
In some embodiments, the OIS Firmware receives the focusing parameters, and the focusing parameters may be transmitted to the AF Firmware without executing step S912, and the AF Firmware performs focusing according to the focusing parameters.
S914, AF Firmware executes focusing according to the compensated focusing parameters.
The AF Firmware controls the AF motor to operate according to the compensated focusing parameters, so as to realize focusing and ensure the sharpness of the captured image. In some embodiments, there are two AF motors, and the AF Firmware controls both AF motors to operate according to the compensated focusing parameters to realize focusing.
In some embodiments, the AF-target code value may be passed to the Scan IC, which has TMR data inside, and may also support compensation operations.
In some embodiments, the AF code may also be directly transferred to the AF motor for execution without compensation based on TMR data.
Another embodiment of the present application also provides a computer-readable storage medium having instructions stored therein, which when run on a computer or processor, cause the computer or processor to perform one or more steps of any of the methods described above.
The computer readable storage medium may be a non-transitory computer readable storage medium, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Another embodiment of the present application also provides a computer program product comprising instructions. The computer program product, when run on a computer or processor, causes the computer or processor to perform one or more steps of any of the methods described above.

Claims (16)

1. A zoom method, applied to an electronic device, comprising:
displaying a camera preview interface, wherein the camera preview interface displays a first object, and the first object is of a first size;
receiving a first operation of a user on the camera preview interface aiming at the first object;
and in response to the first operation, displaying on the camera preview interface a picture in which the first object changes from the first size to the second size, or the first object at the second size and located in a central area, wherein the second size is larger than the first size, and the first object is located in the central area of the picture during the change from the first size to the second size.
2. The zoom method of claim 1, wherein the electronic device comprises a first camera module comprising a rotatable reflecting prism;
Before the camera preview interface displays the picture of the first object changed from the first size to the second size, the method further comprises:
determining a target position of the reflecting prism;
and driving the reflecting prism to rotate to the target position.
3. The zoom method according to claim 2, wherein the determining the target position of the reflecting prism comprises:
and determining the target position of the reflecting prism according to the position information of the first operation and the position information of the reflecting prism.
4. The zoom method according to claim 2, wherein the driving the reflecting prism to rotate to the target position includes:
and driving the reflecting prism to rotate to the target position at the output moment of the image bound to the target position, wherein the image bound to the target position is collected and output by the first camera module.
5. The zoom method according to any one of claims 2 to 4, wherein the first camera module further comprises a brake component, wherein:
before the driving the reflecting prism to rotate to the target position, the method further comprises: driving the brake component to unlock;
After the driving the reflecting prism to rotate to the target position, the method further comprises: and driving the brake component to lock.
6. The zoom method according to any one of claims 2 to 5, wherein the first camera module further comprises a focusing component, the method further comprising:
and driving the focusing component to focus on the first object in the process of rotating the reflecting prism to the target position.
7. The zoom method according to claim 6, wherein before the driving the focusing component to perform focusing, the method further comprises:
compensating focusing parameters by utilizing the target position of the reflecting prism;
and driving the focusing component to perform focusing in the process of rotating the reflecting prism to the target position, wherein the focusing component comprises the following components:
and driving the focusing component to execute focusing on the first object by the compensated focusing parameter in the process of rotating the reflecting prism to the target position.
8. The zoom method according to any one of claims 2 to 7, wherein the first camera module further comprises an anti-shake component, wherein: while the reflecting prism rotates to the target position, the anti-shake component does not operate, and after the reflecting prism has rotated to the target position, the anti-shake component operates to perform optical anti-shake.
9. The zoom method according to claim 8, further comprising: periodically acquiring and storing the position information of the reflecting prism.
10. The zoom method according to claim 9, wherein the anti-shake component updates an anti-shake parameter using the position information of the reflecting prism and operates with the updated anti-shake parameter.
11. The zoom method according to any one of claims 1 to 10, wherein the electronic device comprises a second camera module; after the camera preview interface displays the picture of the first object changed from the first size to the second size, the method further comprises:
receiving a second operation of a user on the camera preview interface aiming at the first object;
displaying, on the camera preview interface, a picture of the first object changing from the second size to the first size, wherein the image of the first object at the first size is acquired by the second camera module; or displaying the first object at the first size on the camera preview interface, wherein the image of the first object at the first size is collected by the second camera module.
12. The zoom method of claim 2, wherein the hardware abstraction layer of the operating system of the electronic device comprises an algorithm module and a control module for reflecting prism rotation, and the kernel layer of the operating system comprises a driver for reflecting prism rotation;
the determining the target position of the reflecting prism comprises: determining, by the algorithm module, the target position of the reflecting prism;
the driving the reflecting prism to rotate to the target position comprises: controlling, by the control module for reflecting prism rotation, the driver for reflecting prism rotation to operate, so as to drive the reflecting prism to rotate to the target position.
13. The zoom method of claim 12, wherein the hardware abstraction layer of the operating system of the electronic device further comprises an anti-shake control module, and the kernel layer of the operating system comprises an anti-shake driver;
the algorithm module is further configured to generate anti-shake parameters, and the electronic device controls the anti-shake component to perform optical anti-shake through the anti-shake control module and the anti-shake driver.
14. The zoom method of claim 5, wherein the hardware abstraction layer of the operating system of the electronic device comprises an algorithm module and a control module for reflecting prism rotation, and the kernel layer of the operating system comprises a driver for reflecting prism rotation;
the determining the target position of the reflecting prism comprises: determining, by the algorithm module, the target position of the reflecting prism;
the driving the reflecting prism to rotate to the target position comprises: controlling, by the control module for reflecting prism rotation, the driver for reflecting prism rotation to operate, so as to drive the reflecting prism to rotate to the target position;
the driving the brake component to unlock or lock comprises: driving, by the control module for reflecting prism rotation, the brake component to unlock or lock.
15. An electronic device, comprising:
the system comprises one or more processors, a memory, a first camera module and a display screen;
the memory, the first camera module and the display screen being coupled to the one or more processors, the memory being for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the zoom method of any of claims 1 to 14.
16. A computer readable storage medium for storing a computer program, which when executed is adapted to carry out a zoom method according to any one of claims 1 to 14.