CN117528265A - Video shooting method and electronic equipment

Video shooting method and electronic equipment

Info

Publication number: CN117528265A
Application number: CN202210911440.7A
Authority: CN (China)
Prior art keywords: camera, attribute, cameras, image, color
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 艾尚宥, 卞超
Applicant and current assignee: Huawei Technologies Co., Ltd.
Priority: CN202210911440.7A

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04M: Telephonic communication
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72439: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging


Abstract

The application provides a video shooting method and an electronic device, relating to the technical field of shooting. A first device determines compensation values for imaging attributes between its own cameras using a color mosaic image displayed by a second device, thereby calibrating the imaging attribute deviations between the cameras of the first device. While the first device is shooting video, a user operation switches the camera used for shooting from a first camera to a second camera. Because the imaging attributes of the second camera differ from those of the first camera, and to avoid abrupt changes in the captured picture caused by that difference, the first device compensates the images acquired by the second camera using the compensation value of the imaging attribute between the first camera and the second camera. This avoids abrupt picture changes or picture tearing caused by camera switching and improves the quality of the captured video.

Description

Video shooting method and electronic equipment
Technical Field
The application relates to the technical field of photographing, in particular to a video photographing method and electronic equipment.
Background
With the development of mobile terminal technology, the photographing function of mobile terminals has advanced rapidly, and more and more users prefer to shoot video with a mobile terminal. Constrained by the size of the mobile terminal, and in order to achieve the imaging effect of different focal segments, the mobile terminal generally adopts a physical configuration of multiple cameras to realize changes in focal length.
While shooting video, the user can have the mobile terminal switch to the appropriate camera according to shooting requirements. For example, a mobile terminal is configured with a main camera, a wide-angle camera, and a telephoto camera. When the user wants to shoot a distant object during recording, the user can have the mobile terminal switch from the main camera to the telephoto camera, so that the distant object is shot with the telephoto camera's longer focal segment. When the user wants to capture a larger scene, the user can have the mobile terminal switch to the wide-angle camera.
However, when the mobile terminal switches cameras, the picture in the recorded video may change abruptly because of the switch, which degrades the quality of the captured video.
Disclosure of Invention
In view of this, the present application provides a video shooting method and an electronic device that improve the quality of the captured video.
In a first aspect, the present application provides a video shooting method applied to a first device having a plurality of cameras, the method including:
the first device displays a first interface, where the first interface is a shooting interface displayed while the first device records video, and a first image acquired by a first camera of the plurality of cameras is displayed on the first interface;
under the condition that a first operation of the user on the first interface is received, the first device responds to the first operation, switches to a second camera of the plurality of cameras, and obtains a first compensation value of a preset imaging attribute between the second camera and the first camera; the first camera and the second camera are both front cameras or both rear cameras of the first device; the preset imaging attribute influences the imaging effect of an object in an image acquired by a camera; the first compensation value of the preset imaging attribute between the second camera and the first camera is calibrated in advance and stored in the first device;
the first device acquires a second image acquired by the second camera, and compensates the preset imaging attribute of the second image according to the first compensation value of the preset imaging attribute, to obtain a compensated second image;
the first device displays a second interface, where the second interface is a shooting interface displayed while the first device records video, and the second interface displays the compensated second image;
and under the condition that the first device receives a second operation of the user on the second interface, the first device stops recording the video and generates a corresponding video file, where the video file includes the first image and the compensated second image.
The first operation is used for triggering the first device to zoom and switch the camera.
In the embodiment of the application, during video shooting, the user can trigger the first device to zoom and switch cameras according to shooting requirements. Based on a first operation input by the user, the first device switches the camera used for shooting from the first camera to the second camera. Because the second camera and the first camera are cameras of different types, their imaging attributes differ. To prevent the video picture from changing abruptly during the zoom switch because of that difference, the first device uses the deviation of the imaging attributes between the first camera and the second camera, i.e., the first compensation value, to compensate the preset imaging attribute of the second image captured by the second camera. After compensation, the imaging attributes of the second image are consistent with those of the first image captured by the first camera, the imaging deviation is eliminated, the imaging quality of the video is improved, and the user gets a smooth experience when the camera is switched.
In one possible design, the preset imaging attributes include at least one of the following: a field of view attribute, an exposure attribute, a white balance attribute, and a color attribute. The field of view attribute affects the imaging position of the object captured by the camera, i.e., the position of the object in the image acquired by the camera. The exposure attribute affects the exposure level of the subject in the image, the white balance attribute affects the color temperature of the subject, and the color attribute affects the color of the subject.
In this embodiment of the present application, the first device may compensate the field of view attribute of the second image with the first compensation value of the field of view attribute, to avoid an abrupt change in the position of objects in the second image; compensate the exposure attribute of the second image with the first compensation value of the exposure attribute, to avoid an abrupt change in exposure; compensate the white balance attribute with the first compensation value of the white balance attribute, to avoid an abrupt change in color temperature; and compensate the color attribute with the first compensation value of the color attribute, to avoid an abrupt change in color.
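For illustration, a per-frame compensation step matching this design could look like the following Python sketch. The CameraCompensation fields, the linear per-channel models, and the OpenCV/NumPy calls are assumptions, since the patent specifies what is compensated but not how.

```python
# A sketch only: the patent names the four attributes but not an implementation.
from dataclasses import dataclass

import cv2
import numpy as np


@dataclass
class CameraCompensation:
    homography: np.ndarray       # 3x3 field-of-view (projection) matrix
    iso_gain: float              # exposure (ISO) conversion coefficient
    wb_gains: np.ndarray         # per-channel white-balance gains, shape (3,)
    color_slope: np.ndarray      # per-channel color offset coefficients, shape (3,)
    color_intercept: np.ndarray  # per-channel color offset intercepts, shape (3,)


def compensate_frame(frame_bgr: np.ndarray, c: CameraCompensation) -> np.ndarray:
    h, w = frame_bgr.shape[:2]
    # 1. Field of view: warp so objects land where the first camera imaged them.
    out = cv2.warpPerspective(frame_bgr, c.homography, (w, h)).astype(np.float32)
    # 2. Exposure: scale brightness by the ISO conversion coefficient.
    out *= c.iso_gain
    # 3. White balance: per-channel gains to match the first camera's color temperature.
    out *= c.wb_gains
    # 4. Color: per-channel linear correction y = slope * x + intercept.
    out = out * c.color_slope + c.color_intercept
    return np.clip(out, 0, 255).astype(np.uint8)
```

In a real pipeline a step like this would run on each frame between capture and preview/encoding.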
In one possible design, the calibration process of the first compensation value of the preset imaging attribute includes:
the first device photographs a first mosaic image with each of at least two cameras of the plurality of cameras, to obtain first calibration images corresponding to the at least two cameras respectively;
the first device determines a first compensation value of each preset imaging attribute between the cameras of the at least two cameras according to the first calibration images corresponding to the at least two cameras.
In this embodiment of the application, when the first device needs to calibrate the difference in imaging attributes between its cameras, it may first use each camera to photograph the same mosaic image (i.e., the first mosaic image) to obtain a first calibration image for each camera. Then, by comparing the first calibration images corresponding to the cameras, the first device can determine the imaging differences between the cameras, i.e., the first compensation value of each preset imaging attribute between the cameras, thereby calibrating the imaging attribute deviations between the cameras of the first device in a simple manner. Moreover, because the first device only needs to photograph the corresponding mosaic image to calibrate a camera, the user can actively trigger calibration. Even if the camera hardware of the first device has aged and its precision has dropped, the cameras can still be calibrated, so imaging deviation caused by aging is avoided and the quality of the captured video improves. Meanwhile, the first device can calibrate multiple imaging attributes in one pass, improving calibration efficiency. Correspondingly, when the first device shoots video, abrupt picture changes caused by deviations in multiple imaging attributes can be eliminated, ensuring smooth zoom switching of the cameras.
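A high-level sketch of those two steps (capture once per camera, then compare pairwise) is shown below; capture_with and solve_compensation stand in for hooks into the camera pipeline that the patent does not specify.

```python
# Hypothetical orchestration of the calibration flow described above.
from itertools import permutations


def calibrate(cameras, mosaic_info, capture_with, solve_compensation):
    # Step 1: each camera photographs the same first mosaic image once.
    calib_images = {cam: capture_with(cam) for cam in cameras}
    # Step 2: compare the calibration images pairwise to obtain the first
    # compensation values of each preset imaging attribute between cameras.
    return {
        (a, b): solve_compensation(calib_images[a], calib_images[b], mosaic_info)
        for a, b in permutations(cameras, 2)
    }
```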
In one possible design, the first mosaic image may be a color mosaic image, so that calibration of a specific imaging attribute (e.g., color attribute) may be achieved.
In one possible design, the first mosaic image may be displayed by a second device, and the preset imaging attribute includes a field of view attribute. Correspondingly, when a first compensation value of a preset imaging attribute is calibrated, the first device receives image information of the first mosaic image sent by the second device; the image information includes position coordinate values of corner points of respective color patches in the first mosaic image. The first device determines a first compensation value of each preset imaging attribute between the cameras according to the image information of the first mosaic image and the first calibration image corresponding to each camera.
In this embodiment of the application, since determining the first compensation value of the field of view attribute requires the image information of the original first mosaic image, the second device may send that image information to the first device. The first device can then determine the first compensation value of the preset imaging attribute between the cameras, including the field of view attribute, by using the image information of the first mosaic image together with the first calibration image corresponding to each camera, thereby successfully calibrating the deviation of the field of view attribute.
In one possible design, the at least two cameras include the first camera and the second camera. After the first compensation value of a preset imaging attribute between the first camera and the second camera is obtained, whether the first compensation value is accurate can be verified. The verification process includes the following steps:
the first device photographs a second mosaic image with the first camera and the second camera respectively, to obtain a second calibration image corresponding to the first camera and a second calibration image corresponding to the second camera;
the first device determines a second compensation value of each preset imaging attribute between the first camera and the second camera according to a second calibration image corresponding to the first camera and a second calibration image corresponding to the second camera;
for each preset imaging attribute, calculating a difference value between a first compensation value of the preset imaging attribute and a second compensation value of the preset imaging attribute between the first camera and the second camera to obtain a compensation value error of the preset imaging attribute between the first camera and the second camera;
if the compensation value error of a preset imaging attribute is greater than or equal to the preset error value of that preset imaging attribute, the process returns to the step in which the first device photographs the first mosaic image with at least two cameras of the plurality of cameras;
if the compensation value error of every preset imaging attribute is smaller than the preset error value of the corresponding preset imaging attribute, the first device may store the first compensation value of each preset imaging attribute.
In this embodiment of the application, for two cameras of the first device (e.g., the first camera and the second camera), the first device may use a second mosaic image to determine a second compensation value of each preset imaging attribute between the two cameras, and use the second compensation value to test whether the corresponding first compensation value is accurate, i.e., to verify whether the deviation calibration of each imaging attribute is accurate. This catches cases where the determined first compensation value is inaccurate for some reason (e.g., the first device shook while the first mosaic image was being photographed), thereby ensuring the accuracy of the deviation calibration of the preset imaging attributes between the cameras of the first device. When the first compensation value of each imaging attribute is verified as accurate, the first device can store the first compensation value of each preset imaging attribute between the two cameras, guaranteeing the accuracy of the stored values. When a first compensation value of a preset imaging attribute is inaccurate, the first device can use the first mosaic image again to recalibrate the deviation of the imaging attributes between the two cameras, achieving iterative calibration that gradually reduces the calibration error.
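A minimal sketch of that accept-or-recalibrate decision follows; the dictionary layout and the norm-based error measure are illustrative assumptions, not taken from the patent.

```python
import numpy as np


def verify_and_store(first_vals, second_vals, max_errors, store):
    """first_vals / second_vals map attribute name -> compensation value."""
    for attr, first in first_vals.items():
        # For matrix-valued attributes (e.g., the projection matrix),
        # compare with a norm; for scalars this reduces to |difference|.
        error = np.linalg.norm(np.asarray(first, dtype=float)
                               - np.asarray(second_vals[attr], dtype=float))
        if error >= max_errors[attr]:
            return False  # error too large: reshoot the first mosaic image
    store(first_vals)     # every attribute passed: keep the first values
    return True
```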
In one possible design, the field of view attribute includes a projection matrix, where the projection matrix is determined according to the position coordinate values of each color block in the mosaic image displayed by the second device and the position coordinate values of each color block in the calibration image obtained by photographing the mosaic image with a camera of the first device;
the exposure attribute includes a sensitivity (ISO) conversion coefficient, where the ISO conversion coefficient is determined according to the ISO value used by a camera of the first device when photographing the mosaic image and the brightness value of the calibration image obtained when that camera photographs the mosaic image;
the color attribute includes a color offset coefficient and a color offset intercept, where the color offset coefficient and the color offset intercept are determined according to the color values of the color blocks in the calibration image obtained by photographing the mosaic image with a camera of the first device;
the white balance attribute includes a color temperature deviation coefficient and a color temperature deviation intercept, where the color temperature deviation coefficient and the color temperature deviation intercept are determined according to the color temperature of each white color block in the calibration image obtained by photographing the mosaic image with a camera of the first device.
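These quantities map onto standard fitting routines. The sketch below shows plausible computations: a homography estimated from patch-corner correspondences for the projection matrix, and a linear fit for the coefficient/intercept pairs. The ISO formula in particular is an assumption, since the patent names the inputs but not the relation; the same color_offset fit could be applied to the white patches' color temperatures for the white balance attribute.

```python
# Illustrative computations for the compensation values named above.
import cv2
import numpy as np


def fov_projection_matrix(captured_corners, displayed_corners):
    # Homography mapping patch corners found in a camera's calibration image
    # onto their coordinates in the mosaic image shown by the second device.
    # Composing one camera's matrix with the inverse of another's yields a
    # camera-to-camera field-of-view compensation.
    H, _ = cv2.findHomography(np.float32(captured_corners),
                              np.float32(displayed_corners), cv2.RANSAC)
    return H


def iso_conversion_coefficient(iso_a, brightness_a, iso_b, brightness_b):
    # Assumed relation between two cameras' ISO settings and the measured
    # brightness of their calibration images of the same mosaic.
    return (iso_a * brightness_a) / (iso_b * brightness_b)


def color_offset(patch_colors_ref, patch_colors_cam):
    # Linear fit patch_colors_ref ~= slope * patch_colors_cam + intercept
    # over matching color patches (one call per channel).
    slope, intercept = np.polyfit(patch_colors_cam, patch_colors_ref, deg=1)
    return slope, intercept
```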
In one possible design, the first mosaic image and the second mosaic image are displayed by a second device. Correspondingly, when imaging attribute deviation between cameras is required to be calibrated, the first device sends an image display request to the second device, and the image display request is used for triggering the second device to display mosaic images, so that the first device can achieve calibration of the imaging attribute deviation by shooting the mosaic images displayed by the second device.
In one possible design, the first mosaic image and the second mosaic image are color mosaic images, so that calibration of deviation of specific imaging properties (such as color properties) can be achieved.
In one possible design, the second device displays the first mosaic image based on a first screen brightness value; the first screen brightness value is randomly generated by the second device or selected by the second device from preset screen brightness values, so that the first device can better calibrate the imaging attribute deviation.
In a second aspect, the present application provides a calibration system comprising a first device and a second device; the first device includes a plurality of cameras; at least two rear cameras and/or at least two front cameras exist in the plurality of cameras; the second device is provided with a display screen; the display screen is used for displaying mosaic images, and the camera of the first device can shoot the complete mosaic images displayed by the second device.
In a third aspect, the present application provides an electronic device comprising a display screen, a plurality of cameras, a memory, and one or more processors; the display screen, the plurality of cameras, the memory and the processor are coupled; the plurality of cameras are used for acquiring images, the display screen is used for displaying the images generated by the processor and the images acquired by the cameras, and the memory is used for storing computer program codes, and the computer program codes comprise computer instructions; the computer instructions, when executed by the processor, cause the electronic device to perform the method as described above.
In a fourth aspect, the present application provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform a method as described above.
In a fifth aspect, the present application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the method as described above.
It will be appreciated that the advantages achieved by the calibration system according to the second aspect, the electronic device according to the third aspect, the computer readable storage medium according to the fourth aspect, and the computer program product according to the fifth aspect may refer to the advantages of the first aspect and any of the possible designs thereof, and are not repeated herein.
Drawings
Fig. 1 is a schematic diagram of zoom magnification adjustment according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an abrupt picture change according to an embodiment of the present application;
Fig. 3 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
Fig. 4 is a schematic diagram of rear cameras according to an embodiment of the present application;
Fig. 5 is a first flowchart of a video shooting method according to an embodiment of the present application;
Fig. 6 is a second flowchart of a video shooting method according to an embodiment of the present application;
Fig. 7 is a third flowchart of a video shooting method according to an embodiment of the present application;
Fig. 8 is a fourth flowchart of a video shooting method according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a mosaic image according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a shooting control procedure according to an embodiment of the present application;
Fig. 11 is a schematic diagram of mosaic image capturing according to an embodiment of the present application;
Fig. 12 is a fifth flowchart of a video shooting method according to an embodiment of the present application;
Fig. 13 is a schematic diagram of a shooting interface according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
A camera with a variable focal length is generally large; configuring such a camera on an electronic device would make the device bulky and inconvenient for the user to carry. Therefore, the electronic device can adopt a physical configuration of multiple cameras with different focal segments (i.e., focal lengths), achieving the imaging effects of different focal lengths while avoiding an oversized device.
Different cameras differ in physical properties such as sensor properties (e.g., the size of the complementary metal oxide semiconductor (CMOS) sensor), lens properties (e.g., the light transmittance of the lens), and physical position (i.e., where the camera sits on the electronic device), so their imaging attributes differ and the imaging effects of different cameras are not identical. During shooting, the user can adjust the focal length of the electronic device according to shooting requirements, and the electronic device responds to the adjustment operation by switching from the current camera to the corresponding camera. For example, suppose the electronic device is configured with a main camera and a telephoto camera. While recording video, the electronic device displays a shooting interface and currently uses the main camera. The shooting interface may display a plurality of zoom factors (e.g., 1x, 3x, 10x, 100x, as included in the zoom factor control 20 of fig. 1 and fig. 2). When the user wants to photograph a distant subject, the user can input an adjustment operation on the shooting interface to set the magnification, i.e., the zoom magnification, to 9.4x (as shown in fig. 1). In response to the zoom magnification adjustment operation, the electronic device switches from the main camera to the telephoto camera and shoots with the longer focal length. However, because the imaging attributes of the main camera and the telephoto camera differ, during the zoom switch the images acquired by the telephoto camera within a certain period after it starts shooting may exhibit picture tearing (e.g., the position of the photographed object is dislocated) or an abrupt picture change. The abrupt picture change includes an abrupt change in the position of the photographed object in the captured image (e.g., comparing the object 10 in fig. 2 with the object 10 in fig. 1), an abrupt color change (i.e., the color of the object changes), an abrupt white balance change (i.e., the color temperature of the object changes), and an abrupt change in exposure (e.g., the object turns white). This lowers the quality of the captured video and degrades the user experience.
The imaging attributes include at least one of the following attributes: a field of view attribute, an exposure attribute, a white balance attribute, and a color attribute. The field of view attribute affects the position, in the image acquired by the camera, of the object being photographed; the exposure attribute affects the exposure level of the object; the white balance attribute affects the color temperature of the object; and the color attribute affects the color of the object.
In some embodiments, to improve the quality of video captured by the cameras of an electronic device, the cameras are calibrated with professional high-precision equipment before the device leaves the factory. But this requires specialized equipment, and calibration can only happen before shipment. As the service life of the electronic device increases, its camera hardware may age and lose precision, and the user cannot actively trigger recalibration. Consequently, the video shot during a zoom switch may still exhibit abrupt picture changes or picture tearing, reducing video quality and user experience.
In other embodiments, only a single type of imaging attribute is calibrated: the field of view attribute of the camera is calibrated using a motor driving module and a calibration target. Other types of abrupt change (such as color, white balance, or exposure changes) may still appear in the video shot by the electronic device, lowering its quality. Moreover, the motor driving module itself must be calibrated, which increases the difficulty of camera calibration.
Therefore, in view of the above problems, the present application proposes a video shooting scheme. First, the first device calibrates the deviation of imaging attributes between its cameras: it controls each of its cameras to photograph a color mosaic image displayed by the second device and, by comparing the imaging differences between cameras, determines the compensation values of the imaging attributes (such as the field of view attribute, exposure attribute, white balance attribute, and color attribute) between the cameras. This calibrates the imaging attribute deviation and thus enables smooth multi-camera zooming. Moreover, calibration only requires a second device capable of displaying a color mosaic image; the process is simple, needs no professional equipment, reduces calibration difficulty, and lets the user actively trigger calibration of the first device's cameras. Later, while shooting video with the first device, the user can input a related operation on the shooting interface displayed by the electronic device according to shooting requirements; the operation triggers the first device to zoom and switch cameras. In response, the first device switches from the currently used first camera to a second camera, where the first camera and the second camera are both rear cameras or both front cameras among its cameras. The first device compensates the imaging attributes of each object in the image shot by the second camera using the compensation value of the imaging attribute between the second camera and the first camera, so that the compensated imaging attributes are consistent with those of the object as shot by the first camera. This avoids abrupt picture changes or picture tearing in the images shot during the zoom switch, achieves a smoothly changing video picture during the switch, and improves the quality of the captured video.
For example, the first device may be an electronic device capable of capturing video, such as a mobile phone, a tablet computer, a wearable device, a personal digital assistant (personal digital assistant, PDA), etc., and the specific form of the electronic device is not limited in the embodiments of the present application. The second device may be a device with a display screen, such as a mobile phone, a tablet computer, a wearable device, a computer, or the like.
Fig. 3 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfaces between the modules illustrated in the embodiments of the present application are only illustrative, and do not constitute a structural limitation of the electronic device 100.
The charge management module 140 is configured to receive a charge input from a charger. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code divisionmultiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (longterm evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the camera, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image by a camera and projects the optical image to a photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include N cameras 193, N being a positive integer greater than 1.
Illustratively, the N cameras 193 may include: at least two rear cameras. For example, the handset shown in fig. 4 includes three rear cameras, respectively rear cameras 301, 302, and 303. Of course, the number of rear cameras shown in fig. 4 is only an example, and the number of rear cameras may be other numbers, which is not limited in this application.
For example, the N cameras 193 may include: at least two front cameras.
It should be appreciated that, among the N cameras of the electronic device, at least two rear cameras are of different types and/or at least two front cameras are of different types. The camera types include a main camera, a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, a macro camera, a fisheye camera, an infrared camera, a depth camera, and a black-and-white camera.
(1) Main camera.
The main camera has the characteristics of a large amount of incoming light, high resolution, and a centered field of view. The main camera is typically the default camera of an electronic device such as a mobile phone. That is, in response to the user launching the "Camera" application, the electronic device (e.g., a mobile phone) may start the main camera by default and display the image captured by the main camera on the preview interface. The field of view of a camera is determined by its field angle (field of view, FOV): the larger the FOV of the camera, the larger its field of view.
(2) Telephoto camera.
The telephoto camera has a longer focal length and is suitable for photographing objects far from the mobile phone (i.e., distant objects). However, the amount of light entering the telephoto camera is small. Using the telephoto camera to capture images in a dim scene may affect image quality due to insufficient incoming light. In addition, the field of view of the telephoto camera is small, so it is not suitable for photographing larger scenes, i.e., larger subjects (such as buildings or landscapes).
(3) Wide angle camera.
The wide-angle camera has a larger field of view and is suitable for photographing larger subjects (such as buildings or landscapes). However, the resolution of the wide-angle camera is lower. In addition, images obtained with a wide-angle camera are prone to distortion, i.e., the photographed subject easily appears deformed.
(4) Ultra-wide angle camera.
The ultra-wide-angle camera may be the same camera as the wide-angle camera, or it may have a wider field of view than the wide-angle camera described above.
(5) Macro camera.
The macro camera is a special camera for macro photography, mainly used for photographing very fine subjects such as flowers and insects. Using the macro camera to photograph fine natural scenes makes it possible to capture microscopic scenes that people generally cannot see.
(6) Fish-eye camera.
The fisheye camera is an auxiliary camera whose focal length is 16 mm or less and whose field angle is close to or equal to 180°. A fisheye camera may be considered an extreme wide-angle camera. The front lens of such a camera has a very short diameter and projects in a parabolic shape toward the front of the camera, quite like a fish's eye, hence the name. The image captured by a fisheye camera differs greatly from the real-world image in the human eye, so the fisheye camera is generally used to obtain special shooting effects.
(7) An infrared camera.
The infrared camera has the characteristic of large spectrum range. For example, an infrared camera may sense not only visible light but also infrared light. Under a dim light scene (namely weak light), the characteristic that the infrared camera can sense infrared light is utilized, and the infrared camera is used for shooting images, so that the image quality can be improved.
(8) Depth camera.
Depth cameras include time-of-flight (ToF) cameras, structured-light cameras, and the like. Taking a ToF camera as an example of a depth camera: the ToF camera can accurately acquire depth information of the photographed object and is suitable for scenarios such as face recognition.
(9) Black and white cameras.
A black-and-white camera has no color filter. Therefore, compared with a color camera, the amount of light entering the black-and-white camera is larger. However, images acquired by the black-and-white camera can only present different levels of gray and cannot present the true color of the photographed object. The main camera, the telephoto camera, the wide-angle camera, and the like are all color cameras.
It will be appreciated that different cameras sit at different positions on the mobile phone. Therefore, even when the mobile phone is fixed in a viewing environment, the fields of view of different cameras may differ, and the images the phone acquires in that viewing environment may differ accordingly.
Of course, the factors affecting a camera's field of view include not only the camera's position on the electronic device but also its hardware parameters (e.g., field angle). The position of the camera on the electronic device affects where its field of view lies (e.g., left, right, up, or down), as described above, while the hardware parameters (such as the field angle) affect the size of the field of view.
In the process of capturing video with the electronic device 100 (such as a mobile phone), given the above characteristics of the various cameras (such as their position on the phone and their hardware parameters), at least two cameras may be used interchangeably based on different shooting requirements, as the sketch below illustrates. For example, when the user wants to shoot a distant object with the mobile phone, the user can switch to the telephoto camera; when the user wants to photograph a large subject, the wide-angle camera can be used.
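As a toy illustration of mapping the shooting requirement (expressed as a zoom magnification) onto a camera, something like the following could be used; the breakpoints are invented for the example and are not taken from this document.

```python
def select_camera(zoom: float) -> str:
    # Hypothetical zoom breakpoints; real devices tune these per camera module.
    if zoom < 1.0:
        return "wide_angle"  # larger field of view for big scenes
    if zoom < 3.0:
        return "main"        # default camera, centered field of view
    return "telephoto"       # longer focal length for distant subjects
```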
The digital signal processor is used to process digital signals; besides digital image signals, it can process other digital signals. For example, when the electronic device 100 performs frequency point selection, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. The NPU can implement applications such as intelligent cognition of the electronic device 100.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
The keys 190 include a power-on key, a volume key, etc. The motor 191 may generate a vibration cue. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195 to enable contact and separation with the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1.
In an embodiment of the present application, the first device includes a plurality of cameras, the plurality of cameras including at least two rear cameras of different types and/or at least two front cameras of different types. To improve the quality of video shot by its cameras, the first device photographs a mosaic image with each camera and, by comparing the imaging differences between cameras, determines the compensation value of the imaging attribute between cameras, thereby calibrating the imaging attribute deviation between the cameras and enabling smooth multi-camera zooming. When the cameras are switched, the first device can compensate the images acquired by the post-switch camera according to the compensation value of the imaging attribute between the cameras. This improves the consistency of imaging attributes between different cameras, reduces the imaging differences between them, and resolves or alleviates the abrupt picture changes or picture tearing caused by zoom switching, improving the quality of the captured video and the user experience.
The video shooting method provided in the embodiments of the present application is described through two embodiments: the first embodiment describes the process by which the first device determines the deviation of at least one imaging attribute between cameras, and the second embodiment describes the process by which the first device uses that deviation to adjust the images captured by the camera it switches to.
Example 1
The embodiment of the application provides a video shooting method. In this embodiment, the first device calibrates the deviation, i.e., the compensation value, of the imaging attributes between its cameras using the mosaic image displayed by the second device. The compensation value of the imaging attribute can eliminate the imaging difference between the cameras, thereby avoiding abrupt picture changes or picture tearing in the captured video when the first device switches cameras. Specifically, as shown in fig. 5, the process of determining the compensation value provided in this embodiment is as follows:
s501, the first device shoots a first mosaic image based on each camera of the first device respectively, and a first calibration image corresponding to each camera is obtained.
The first mosaic image may be any mosaic image, or the first mosaic image may be displayed by the second device.
The first mosaic image may be a black-and-white mosaic image or a color mosaic image. The color blocks of a black-and-white mosaic image use only black and white, while the color blocks of a color mosaic image may use at least three colors, e.g., red, white, and green. For example, when the first mosaic image is displayed by the second device, the second device may display a mosaic image that matches the calibration requirements of the first device's imaging attributes. Illustratively, to better calibrate the compensation value of the color attribute between the cameras of the first device, the first mosaic image may be a color mosaic image.
In some embodiments, the second device may be configured to display the first mosaic image under the trigger of the first device. Specifically, as shown in fig. 6, S10, the first device sends an image display request to the second device, where the image display request is used to trigger the second device to display a mosaic image. S11, the second device receives the image display request and responds to the image display request to display a first mosaic image for shooting by the first device.
In order for the second device to communicate with the first device, so that the second device can receive the request sent by the first device, the second device needs to first establish a connection with the first device. In one case, a first device receives a first trigger operation entered by a user, the first trigger operation for triggering the first device to calibrate imaging properties between cameras of the first device. The first device responds to the first triggering operation and sends a calibration request to the second device. The second device receives the calibration request and establishes a connection with the first device in response to the calibration request.
In another case, the second device may receive a second trigger operation input by the user, where the second trigger operation is used to trigger the second device to assist the first device in calibrating the imaging attribute deviation between the cameras, and then the second device responds to the second trigger operation to generate a connection request, and sends the connection request to the first device. The first device receives the connection request and establishes a connection with the second device in response to the connection request.
Illustratively, the first device and the second device are each provided with a preset calibration application. The first trigger operation may be an operation input by the user in the preset calibration application running on the first device. For example, when the deviation of imaging attributes between the rear cameras of the first device needs to be calibrated, the user may click a first button in the preset calibration application on the first device; the operation of clicking the first button is the first trigger operation. Likewise, the user may click a second button in the preset calibration application on the second device; the operation of clicking the second button is the second trigger operation.
In other embodiments, the second device may actively display the first mosaic image. Specifically, the second device may display the first mosaic image in response to a display operation input by the user after receiving the display operation. The display operation may be a preset gesture (e.g., an S-gesture) input by the user in the relevant interface in the preset calibration application, or may be an operation of clicking a relevant button in the relevant interface in the preset calibration application by the user.
In some embodiments, the connection manner of the second device and the first device includes at least one of Wi-Fi, near field communication (near field communication, NFC), bluetooth, cellular network, and the like.
In some embodiments, the first mosaic image displayed by the second device may be randomly generated by the second device, or may be acquired by the second device from a preset storage location (for example, a gallery on the second device), where a plurality of mosaic images are stored. Illustratively, the second device, in response to the image display request, first randomly generates a mosaic image and uses the mosaic image as the first mosaic image. Thereafter, the second device displays the first mosaic image.
When randomly generating the first mosaic image, the second device may generate it based on a random algorithm. The random algorithm may be any random function with any random distribution property.
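For illustration, the following is a minimal Python sketch of how a coordinating device might randomly generate such a color mosaic image; the grid size, the three-color palette, and all names here are assumptions for illustration, not details fixed by this application.

import numpy as np

def generate_mosaic(rows: int, cols: int, block_px: int, seed=None) -> np.ndarray:
    """Randomly generate a color mosaic image as an (H, W, 3) uint8 array."""
    rng = np.random.default_rng(seed)
    # Example palette: red, white, green (a color mosaic needs >= 3 colors).
    palette = np.array([[255, 0, 0], [255, 255, 255], [0, 255, 0]], dtype=np.uint8)
    cells = rng.integers(0, len(palette), size=(rows, cols))  # one color per block
    img = palette[cells]  # (rows, cols, 3)
    # Expand each block to block_px x block_px pixels.
    return np.repeat(np.repeat(img, block_px, axis=0), block_px, axis=1)

mosaic = generate_mosaic(rows=6, cols=8, block_px=120, seed=42)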
In some embodiments, the second device may first determine a first screen brightness when displaying the first mosaic image, where the first screen brightness indicates a screen brightness when the second device displays the first mosaic image. Then, the second device adjusts the screen brightness of the second device to the first screen brightness. Thereafter, the second device displays the first mosaic image.
The first screen brightness may be randomly generated by the second device, or may be selected from preset screen brightness values by the second device. For example, the second device may first randomly generate a screen brightness in response to the image display request, and take the generated screen brightness as the first screen brightness. Then, the second device displays the first mosaic image based on the first screen brightness.
In some embodiments, the first mosaic image is displayed by a second device. Because the first device needs to use the related information of the original first mosaic image when calibrating the imaging attribute (such as the field of view attribute) deviation between the cameras, the second device can send the first mosaic image to the first device, so that the first device can acquire the required image information according to the first mosaic image.
In other embodiments, the second device may determine the image information of the first mosaic image and send the image information of the first mosaic image to the first device, so that the first device may directly utilize the image information of the first mosaic image when calibrating the imaging attribute deviation between the cameras, without determining from the first mosaic image, thereby improving the calibration efficiency.
The image information includes the position coordinate values of the corner points (i.e., the second corner points) of each color block in the first mosaic image, the color value of each color block, the size of the color blocks, the number of color blocks (e.g., the total number of color blocks in the first mosaic image and the number of color blocks of each color), and so on.
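A plausible shape for this image information payload is sketched below; the field names and example values are hypothetical, chosen only to mirror the items listed above.

image_info = {
    "second_corner_points": [(0.0, 0.0), (120.0, 0.0), (240.0, 0.0)],  # in the first preset order
    "block_colors": [(255, 0, 0), (255, 255, 255), (0, 255, 0)],       # color value of each block
    "block_size_px": 120,                                              # size of a color block
    "num_blocks_total": 48,
    "num_blocks_per_color": {"red": 15, "white": 17, "green": 16},
}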
In some embodiments, the second device may further send the first screen brightness to the first device, so that the first device records the first screen brightness.
It can be understood that the placement positions of the first device and the second device are not limited, and the first device only needs to be able to shoot the complete mosaic image displayed by the second device.
In this embodiment of the present application, the first device photographs the first mosaic image with each of its cameras, obtains the image captured by each camera, and uses each captured image as the first calibration image corresponding to that camera. Here, the respective cameras are the rear cameras or the front cameras of the first device.
In some embodiments, before capturing the first mosaic image with each camera, the first device may set the aperture value of each camera to a preset aperture value and the shutter time to a preset shutter time. The rear cameras of the first device may include a wide-angle camera, a main camera, and a tele camera. When calibrating the imaging attribute deviations between the rear cameras, the first device photographs the first mosaic image with the wide-angle camera, the main camera, and the tele camera respectively, obtaining three first calibration images: the first calibration image corresponding to the wide-angle camera, the first calibration image corresponding to the main camera, and the first calibration image corresponding to the tele camera. The aperture values of the three cameras when photographing the first mosaic image are the preset aperture value, and their shutter times are the preset shutter time.
In one case, the first device may capture the first mosaic image based on a related operation input by the user. Specifically, the rear cameras of the first device include a tele camera and a main camera. The first device displays a shooting interface; the camera currently used by the first device is the main camera, and the shooting interface includes a shooting button and a zoom magnification adjustment control. In response to the user clicking the shooting button, the first device photographs the first mosaic image with the main camera. Thereafter, the user adjusts the zoom magnification through the zoom magnification adjustment control, for example from 1x to 9.4x. In response to the zoom magnification increasing operation, the first device switches the camera used for shooting from the main camera to the tele camera and photographs the first mosaic image with the tele camera.
In another case, the first device may capture the first mosaic image under the control of the second device. Specifically, after displaying the first mosaic image, the second device sends a control instruction to the first device, where the control instruction is used to trigger the first device to use each camera to capture the first mosaic image. The first device receives the control instruction and responds to the control instruction, and each camera is used for shooting the first mosaic image.
S502, the first device determines a first compensation value of at least one imaging attribute among cameras according to the first calibration images corresponding to the cameras, wherein the imaging attribute comprises at least one of the following attributes: a field of view attribute, an exposure attribute, a white balance attribute, and a color attribute.
In one case, when the calibrated imaging attributes do not include a specific imaging attribute (such as the field of view attribute), meaning that the imaging attributes calibrated by the first device do not require the image information of the original mosaic image, the first device may determine the first compensation value of each imaging attribute between the cameras directly from the first calibration images corresponding to the respective cameras. For example, when the calibrated imaging attributes include the white balance attribute and the color attribute, the first device may determine the deviation of the white balance attribute (i.e. a first compensation value) and the deviation of the color attribute between the cameras directly from the first calibration images corresponding to the respective cameras, thereby calibrating the white balance attribute and color attribute deviations.
In another case, the first mosaic image is displayed by the second device. When the calibrated imaging attributes include a specific imaging attribute (such as the field of view attribute), meaning that the first device needs to use the image information of the original image displayed by the second device, the first device receives the image information of the first mosaic image sent by the second device and determines the first compensation value of each imaging attribute between the cameras by computing over the first calibration images corresponding to the cameras together with that image information. Alternatively, the first device receives the first mosaic image sent by the second device and determines the image information of the first mosaic image from it; then, the first device determines the first compensation value of each imaging attribute between the cameras according to that image information and the first calibration images corresponding to the respective cameras.
In some embodiments, the first device may determine a deviation of the imaging attribute between any two cameras in the respective cameras of the first device, that is, the first compensation value, to achieve calibration of the imaging attribute deviation between the cameras.
In other embodiments, the cameras are switched in a sequence. For example, the rear cameras of the first device include a main camera, a wide-angle camera, and a tele camera. The camera initially used when the first device shoots is generally the main camera, and the user can, according to shooting requirements, switch the camera used by the first device from the main camera to another camera (such as the wide-angle camera or the tele camera). Therefore, the first device can take the main camera as the reference camera and determine the first compensation value of each imaging attribute between the main camera and each of the other cameras respectively, thereby calibrating the imaging attributes of the cameras, that is, calibrating the imaging attribute deviations between the cameras.
The method for determining the first compensation value of an imaging attribute between cameras is described here by taking as examples the first compensation value of the imaging attribute from the wide-angle camera to the main camera and the first compensation value of the imaging attribute from the main camera to the tele camera, as determined by the first device.
In one case, the imaging attribute includes the field of view attribute. The field of view attribute affects the position of the photographed object in the captured image, i.e., the imaging position of the photographed object. The field of view attribute may be expressed by the projection matrix of the rear camera used to capture the photographed object (e.g., the display screen of the second device). When calibrating the field of view attribute of the rear cameras, the first device first identifies the corner points (i.e., the first corner points) of each color block in the first calibration image corresponding to the wide-angle camera, and sequentially determines the position coordinates of the first corner points of each color block in a first preset order (such as left to right, then top to bottom), obtaining the position coordinates of the first corner points corresponding to the wide-angle camera. Similarly, the first device determines the position coordinates of the first corner points corresponding to the main camera and those corresponding to the tele camera.
Then, the first device may determine the projection matrix from the first mosaic image displayed by the second device (i.e., the display screen of the second device) to the wide-angle camera according to the position coordinates of the first corner points corresponding to the wide-angle camera and the position coordinates, contained in the image information, of the corner points of the color blocks in the first mosaic image (i.e., the second corner points), obtaining the projection matrix H1 corresponding to the wide-angle camera. The position coordinates of the second corner points of each color block in the first mosaic image are likewise determined by the second device in the first preset order.

Illustratively, the parameters of H1 (e.g. h11, h22) can be determined according to H1 = N × P[M], where P[M] is the Moore-Penrose generalized inverse of M, M is assembled from the position coordinates (x1, y1), (x2, y2), … (xn, yn) of the second corner points, and N from the position coordinates (u1, v1), (u2, v2), … (un, vn) of the first corner points; the first corner point corresponding to (un, vn) and the second corner point corresponding to (xn, yn) indicate the same corner. H1 is then assembled from the resulting parameters.
The position coordinates of the second corner points may be the coordinates of the color block corners in the first mosaic image in the screen coordinate system (i.e., the image physical coordinate system). The position coordinates of the first corner points corresponding to the wide-angle camera may be the coordinates of the color block corners in the first calibration image shot by the wide-angle camera in the image coordinate system (i.e., the pixel coordinate system).
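As a concrete sketch, the projection matrix can be estimated with the Moore-Penrose generalized inverse as follows, assuming the common 8-parameter homography with h33 fixed to 1; the exact row layout of M and N is an assumption, since the application's own matrices are not reproduced in this text, and depending on the stacking convention the product may be written as P[M] × N rather than N × P[M].

import numpy as np

def estimate_projection(screen_pts: np.ndarray, image_pts: np.ndarray) -> np.ndarray:
    """Estimate the 3x3 projection H mapping screen corners (x, y) to image
    corners (u, v), with h33 = 1. Both inputs are (n, 2) arrays of matched
    corner coordinates listed in the same preset order."""
    x, y = screen_pts[:, 0], screen_pts[:, 1]
    u, v = image_pts[:, 0], image_pts[:, 1]
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    # Two linear equations per corner correspondence (assumed layout).
    M = np.vstack([
        np.column_stack([x, y, ones, zeros, zeros, zeros, -x * u, -y * u]),
        np.column_stack([zeros, zeros, zeros, x, y, ones, -x * v, -y * v]),
    ])
    N = np.concatenate([u, v])
    h = np.linalg.pinv(M) @ N  # Moore-Penrose generalized inverse of M
    return np.append(h, 1.0).reshape(3, 3)

# Toy correspondences: image corners are screen corners scaled and shifted.
screen = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
image = screen * 2.0 + 1.0
H1 = estimate_projection(screen, image)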
Similarly, the first device may determine the projection matrix H2 from the display screen of the second device to the main camera, that is, the projection matrix H2 corresponding to the main camera, and the projection matrix H3 from the display screen of the second device to the tele camera, that is, the projection matrix H3 corresponding to the tele camera.
Then, the first device may calculate, from the projection matrix H2 corresponding to the main camera and the projection matrix H1 corresponding to the wide-angle camera, the first compensation value H12 of the projection matrix from the wide-angle camera to the main camera. When the camera used for shooting is switched from the wide-angle camera to the main camera, the first device can use this compensation value of the field of view attribute to compensate the field of view attribute of images shot by the main camera, so that the field of view attribute of the main camera is consistent with that of the wide-angle camera, avoiding abrupt position changes in the image.
Similarly, the first device may calculate, from the projection matrix H2 corresponding to the main camera and the projection matrix H3 corresponding to the tele camera, the first compensation value H23 of the projection matrix from the main camera to the tele camera (i.e. a conversion matrix), thereby determining the first compensation value of the field of view attribute from the main camera to the tele camera and calibrating the field of view attribute deviation between them. When the camera used for shooting by the first device is switched from the main camera to the tele camera, the corresponding compensation value of the field of view attribute can be used to compensate the field of view attribute of images shot by the tele camera, so that the field of view attribute of the tele camera is consistent with that of the main camera, avoiding abrupt position changes in images shot by the tele camera.
Illustratively, the first compensation value H12 of the projection matrix from the wide-angle camera to the main camera can be determined according to H12 = H1 × H2^(-1). Similarly, the first compensation value H23 of the projection matrix from the main camera to the tele camera can be determined according to H23 = H2 × H3^(-1).
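Illustratively, the two compensation matrices could be computed as below; the toy projection matrices are made up, and applying the result to a frame with a perspective warp (e.g. OpenCV's warpPerspective) is our assumed implementation of the compensation step.

import numpy as np

def fov_compensation(H_src: np.ndarray, H_dst: np.ndarray) -> np.ndarray:
    """Compensation matrix from camera src to camera dst, per H_sd = H_s x H_d^(-1)."""
    return H_src @ np.linalg.inv(H_dst)

# Toy projection matrices standing in for calibrated H1, H2, H3:
H1 = np.eye(3)
H2 = np.diag([1.2, 1.2, 1.0])
H3 = np.diag([2.0, 2.0, 1.0])
H12 = fov_compensation(H1, H2)  # wide-angle -> main
H23 = fov_compensation(H2, H3)  # main -> tele
# A switched-to tele frame could then be warped with H23, e.g.:
# compensated = cv2.warpPerspective(tele_frame, H23, (width, height))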
It should be understood that the projection matrix is only one expression of the field of view attribute, and the field of view attribute may be expressed by other types of parameters, which is not limited in this application.
In another case, the imaging attribute includes the exposure attribute. The compensation mode of the exposure attribute can be expressed by a sensitivity (ISO) conversion coefficient. When calibrating the exposure attribute of the rear cameras, the first device may first acquire the ISO value used when the wide-angle camera captured the first mosaic image, and use it as the first ISO value corresponding to the wide-angle camera; the first device also determines the brightness value of the first calibration image corresponding to the wide-angle camera. Similarly, the first device may determine the first ISO value corresponding to the main camera and the brightness value of the first calibration image corresponding to the main camera, as well as the first ISO value corresponding to the tele camera and the brightness value of the first calibration image corresponding to the tele camera.
Then, the first device may determine the ISO conversion coefficient K1 corresponding to the wide-angle camera according to the first ISO value corresponding to the wide-angle camera and the brightness value of the first calibration image corresponding to the wide-angle camera. Similarly, the first device may determine the ISO conversion coefficient K2 corresponding to the main camera according to the first ISO value corresponding to the main camera and the brightness value of the first calibration image corresponding to the main camera, and the ISO conversion coefficient K3 corresponding to the tele camera according to the first ISO value corresponding to the tele camera and the brightness value of the first calibration image corresponding to the tele camera.
Illustratively, K1 = Ω1/I1, where Ω1 is the brightness value of the first calibration image corresponding to the wide-angle camera and I1 is the first ISO value corresponding to the wide-angle camera. Similarly, K2 = Ω2/I2, where Ω2 is the brightness value of the first calibration image corresponding to the main camera and I2 is the first ISO value corresponding to the main camera; and K3 = Ω3/I3, where Ω3 is the brightness value of the first calibration image corresponding to the tele camera and I3 is the first ISO value corresponding to the tele camera.
Then, the first device can determine the compensation value (i.e. a first compensation value) k12 of the ISO conversion coefficient from the wide-angle camera to the main camera according to the ISO conversion coefficient K1 corresponding to the wide-angle camera and the ISO conversion coefficient K2 corresponding to the main camera, thereby determining the first compensation value of the exposure attribute from the wide-angle camera to the main camera and calibrating the exposure attribute deviation between them. When the camera used for shooting by the first device is switched from the wide-angle camera to the main camera, the first device can compensate the exposure attribute of images shot by the main camera with the corresponding compensation value, so that the exposure attribute of the main camera is consistent with that of the wide-angle camera, avoiding abrupt exposure changes in the image.
Similarly, the first device may determine the compensation parameter k23 of the ISO conversion coefficient from the main camera to the tele camera according to the ISO conversion coefficient K3 corresponding to the tele camera and the ISO conversion coefficient K2 corresponding to the main camera, thereby determining the first compensation value of the exposure attribute from the main camera to the tele camera and calibrating the exposure attribute deviation between them. When the camera used for shooting by the first device is switched from the main camera to the tele camera, the first device can compensate the exposure attribute of images shot by the tele camera with the corresponding compensation value, so that the exposure attribute of the tele camera is consistent with that of the main camera, avoiding abrupt exposure changes in the image.
Illustratively, the compensation parameter k12 of the ISO conversion coefficient from the wide-angle camera to the main camera can be determined according to K1 × k12 = K2. The compensation parameter k23 of the ISO conversion coefficient from the main camera to the tele camera can be determined according to K2 × k23 = K3.
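A small sketch of the exposure calibration, assuming the brightness value Ω is the mean gray level of the calibration image (one plausible choice; the application does not fix how Ω is computed):

import numpy as np

def iso_conversion_coefficient(calib_gray: np.ndarray, iso: float) -> float:
    """K = Omega / I, with Omega approximated as the mean gray level."""
    return float(calib_gray.mean()) / iso

# Toy calibration captures (uniform gray images) and ISO values:
K1 = iso_conversion_coefficient(np.full((4, 4), 120.0), iso=200.0)  # wide-angle
K2 = iso_conversion_coefficient(np.full((4, 4), 132.0), iso=200.0)  # main
K3 = iso_conversion_coefficient(np.full((4, 4), 150.0), iso=250.0)  # tele

k12 = K2 / K1  # from K1 * k12 = K2: wide-angle -> main
k23 = K3 / K2  # from K2 * k23 = K3: main -> tele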
In some embodiments, in order to calibrate the relationship between the sensitivity conversion coefficients of the rear cameras more accurately, the first device may control the aperture size and the shutter time of the rear cameras in the process of capturing the first mosaic image.
It should be understood that the above-described ISO conversion factor is only one expression of an exposure property, which may also be expressed by other types of parameters, and the present application is not limited thereto.
In another case, the imaging attribute includes the color attribute. The first mosaic image may be a color mosaic image. The compensation mode of the color attribute can be expressed by a color deviation coefficient and a color deviation cutoff. When calibrating the color attribute of the rear cameras, the first device may first obtain the color value of each color block in the first calibration image corresponding to the wide-angle camera, the color value of each color block in the first calibration image corresponding to the main camera, and the color value of each color block in the first calibration image corresponding to the tele camera.
For example, when obtaining the color values of the color blocks in the first calibration image corresponding to the wide-angle camera, the first device may determine them sequentially in a second preset order (e.g. left to right, then top to bottom), where the color value of the i-th color block in the first calibration image corresponding to the wide-angle camera may be denoted xi, and N denotes the total number of color blocks in the first calibration image. Similarly, the first device may determine, in the second preset order, the color values of the color blocks in the first calibration image corresponding to the main camera and those corresponding to the tele camera. The i-th color block in the first calibration image corresponding to the tele camera, the i-th color block in the first calibration image corresponding to the main camera, and the i-th color block in the first calibration image corresponding to the wide-angle camera indicate the same color block.
After that, the first device can determine the color deviation coefficient and the color deviation cutoff from the wide-angle camera to the main camera according to the color values of the color blocks corresponding to the wide-angle camera and those corresponding to the main camera, thereby determining the color deviation between the wide-angle camera and the main camera, that is, the first compensation value of the color attribute between them. When the camera used by the first device is switched to the main camera, the first device can compensate the color attribute of images shot by the main camera with this first compensation value, eliminating the color deviation between the two cameras, keeping the color attribute of the main camera consistent with that of the wide-angle camera, and avoiding abrupt color changes in the image, i.e., changes in the color of the photographed object.

Likewise, the first device can determine the color deviation coefficient and the color deviation cutoff from the main camera to the tele camera according to the color values of the color blocks corresponding to the main camera and those corresponding to the tele camera, thereby determining the color deviation between the main camera and the tele camera, that is, the first compensation value of the color attribute between them. When the camera used by the first device is switched to the tele camera, the first device can compensate the color attribute of images shot by the tele camera with this first compensation value, eliminating the color deviation between the main camera and the tele camera, keeping their color attributes consistent and avoiding abrupt color changes in the image.
When determining the color deviation coefficient and the color deviation cutoff, that is, when calculating the color difference of the same color block shot by different cameras, the first device may determine them based on a preset color deviation calculation model. Taking a linear color deviation model as the preset color deviation calculation model, the process by which the first device determines the color deviation coefficient and cutoff between the wide-angle camera and the main camera, and between the main camera and the tele camera, is described below.
Assume the color deviation calculation model between the wide-angle camera and the main camera is y = a·x + b. The parameters a and b can be determined by least squares, i.e.

a = Σi(xi − x̄)(yi − ȳ) / Σi(xi − x̄)², b = ȳ − a·x̄,

where xi is the color value of the i-th color block in the first calibration image corresponding to the wide-angle camera, yi is the color value of the i-th color block in the first calibration image corresponding to the main camera, x̄ is the mean of the color values of the color blocks in the first calibration image corresponding to the wide-angle camera, and ȳ is the mean of the color values of the color blocks in the first calibration image corresponding to the main camera.
Thus, the first device can determine the color deviation calculation model from the wide-angle camera to the main camera as y = a·x + b, where a and b are respectively the color deviation coefficient and the color deviation cutoff from the wide-angle camera to the main camera. In the same way, the first device may determine the color deviation calculation model from the main camera to the tele camera, whose parameters are respectively the color deviation coefficient and the color deviation cutoff from the main camera to the tele camera.
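The least-squares fit above is straightforward to express in code; a sketch, assuming the color values have been reduced to one scalar channel per block (the text also allows matrix-valued a and b for multi-channel formats):

import numpy as np

def fit_color_deviation(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Least-squares fit of y = a*x + b over matched color blocks:
    x holds the per-block color values from the source camera, y those
    from the target camera, in the same second preset order."""
    xm, ym = x.mean(), y.mean()
    a = np.sum((x - xm) * (y - ym)) / np.sum((x - xm) ** 2)
    b = ym - a * xm
    return a, b

# Toy per-block values (e.g. one CIE channel) for wide-angle vs main:
x = np.array([0.31, 0.42, 0.28, 0.55, 0.47])
y = np.array([0.33, 0.45, 0.29, 0.58, 0.50])
a_wm, b_wm = fit_color_deviation(x, y)  # deviation coefficient and cutoff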
The color values of the color blocks here are color values in CIE format; of course, the color values may also be in other color formats (such as RGB or YUV), in which case they may be converted into the corresponding CIE-format color values. Moreover, since it is mainly the differences between cameras that are being determined, if the color values of the color blocks are in another color format (such as RGB or YUV), the first device may instead replace a and b with corresponding matrices or column vectors, and the least squares result can be calculated in the same way.
In another case, the imaging attribute includes the white balance attribute. The compensation mode of the white balance attribute can be expressed by a color temperature deviation coefficient and a color temperature deviation cutoff. When calibrating the white balance attribute of the rear cameras, the first device may first obtain the color value of each white color block in the first calibration image corresponding to the wide-angle camera, the color value of each white color block in the first calibration image corresponding to the main camera, and the color value of each white color block in the first calibration image corresponding to the tele camera.
For example, when the first device obtains the color values of the respective white color blocks in the first calibration image corresponding to the wide-angle camera, the first device may sequentially determine the color values of the respective white color blocks in the first calibration image according to a third preset sequence (for example, a sequence from left to right and then from top to bottom). Similarly, the first device may determine, according to the third preset sequence, a color value of each white color block in the first calibration image corresponding to the main camera and a color value of each white color block in the first calibration image corresponding to the tele camera.
Then, according to a preset color temperature formula, the first device sequentially determines the color temperature of each white color block corresponding to the wide-angle camera from the color value of each white color block corresponding to the wide-angle camera. Similarly, the first device sequentially determines the color temperature of each white color block corresponding to the main camera from the color values of the white color blocks corresponding to the main camera, and the color temperature of each white color block corresponding to the tele camera from the color values of the white color blocks corresponding to the tele camera.
The color values may be CIE-format color values, and the preset color temperature formula is

CCT = 437×n³ + 3601×n² + 6861×n + 5517, with n = (x − 0.3320)/(0.1858 − y),

where CCT is the color temperature of a white color block and (x, y) are its CIE-format color values.
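In code, the formula reads as follows (the quadratic coefficient is taken as 3601, our reading of the garbled source text; the result lands near 6500 K for the D65 white point, which supports that reading):

def cct_from_cie_xy(x: float, y: float) -> float:
    """Color temperature from CIE (x, y) per the preset formula."""
    n = (x - 0.3320) / (0.1858 - y)
    return 437 * n**3 + 3601 * n**2 + 6861 * n + 5517

cct = cct_from_cie_xy(0.3127, 0.3290)  # approx. 6500 K for D65 white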
After that, the first device can determine the color temperature deviation coefficient and the color temperature deviation cutoff from the wide-angle camera to the main camera according to the color temperatures of the white color blocks corresponding to the wide-angle camera and those corresponding to the main camera, thereby determining the color temperature deviation between the wide-angle camera and the main camera, that is, the first compensation value of the white balance attribute between them. When the camera used by the first device is switched to the main camera, the first device can compensate the white balance attribute of images shot by the main camera with this first compensation value, eliminating the difference in white balance attribute between the two cameras, keeping the white balance attribute of the main camera consistent with that of the wide-angle camera, and avoiding abrupt white balance changes in the image (i.e., white balance distortion, in which the picture colors are distorted). Moreover, the calibration of the white balance attribute places no restriction on the illumination intensity of the environment where the first device is located, so a single calibration of the white balance attribute allows the first device to resolve abrupt white balance changes under various illumination intensities.

Likewise, the first device can determine the color temperature deviation coefficient and the color temperature deviation cutoff from the main camera to the tele camera according to the color temperatures of the white color blocks corresponding to the main camera and those corresponding to the tele camera, thereby determining the color temperature deviation between the main camera and the tele camera, that is, the first compensation value of the white balance attribute between them. When the camera used by the first device is switched to the tele camera, the first device can compensate the white balance attribute of images shot by the tele camera with this first compensation value, eliminating the difference in white balance attribute between the two cameras, keeping the white balance attributes of the main camera and the tele camera consistent and avoiding abrupt white balance changes in the image.
When determining the color temperature deviation coefficient and the color temperature deviation cutoff, the first device may determine them based on a preset color temperature deviation calculation model. Taking a linear color temperature deviation model as the preset color temperature deviation calculation model, the process by which the first device determines the color temperature deviation coefficient and cutoff from the wide-angle camera to the main camera, from the color temperatures of the white color blocks corresponding to the wide-angle camera and those corresponding to the main camera, is described below.
For example, the linear color temperature deviation model from the wide-angle camera to the main camera may be Tim = p·Tiw + q, where p is the linear color temperature deviation coefficient from the wide-angle camera to the main camera, q is the linear color temperature deviation cutoff from the wide-angle camera to the main camera, Tiw is the color temperature of the i-th white color block corresponding to the wide-angle camera, and Tim is the color temperature of the i-th white color block corresponding to the main camera. The i-th white color block corresponding to the wide-angle camera and the i-th white color block corresponding to the main camera indicate the same color block.
Accordingly, p and q can be determined by least squares, i.e.

p = Σi(Tiw − T̄w)(Tim − T̄m) / Σi(Tiw − T̄w)², q = T̄m − p·T̄w,

where T̄w is the average color temperature of all white color blocks corresponding to the wide-angle camera and T̄m is the average color temperature of all white color blocks corresponding to the main camera.
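This is the same least-squares pattern as for the color attribute; a sketch over toy color temperatures:

import numpy as np

def fit_cct_deviation(t_src: np.ndarray, t_dst: np.ndarray) -> tuple[float, float]:
    """Fit t_dst = p * t_src + q over matched white blocks: p is the color
    temperature deviation coefficient, q the color temperature deviation cutoff."""
    sm, dm = t_src.mean(), t_dst.mean()
    p = np.sum((t_src - sm) * (t_dst - dm)) / np.sum((t_src - sm) ** 2)
    q = dm - p * sm
    return p, q

# Toy color temperatures (K) of the same white blocks seen by two cameras:
p_wm, q_wm = fit_cct_deviation(np.array([5400.0, 6500.0, 5000.0]),
                               np.array([5300.0, 6400.0, 4950.0]))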
Similarly, following the process of determining the linear color temperature deviation from the wide-angle camera to the main camera based on the linear color temperature deviation model, the first device may determine the linear color temperature deviation from the main camera to the tele camera.
In some embodiments, if the color values of the white color blocks are in another color format (such as RGB or YUV), they may first be converted to obtain the corresponding CIE-format color values, and the first device may then determine the color temperature of each white color block according to the preset color temperature formula and the CIE-format color values. Alternatively, since it is mainly the differences between cameras that are being determined, if the color values of the white color blocks are in another color format (such as RGB or YUV), the first device may replace p and q with corresponding matrices or column vectors, and the least squares result can be calculated in the same way.
In this embodiment of the present application, when the first device needs to calibrate the differences in imaging attributes between its cameras, it may first photograph the same mosaic image (i.e., the first mosaic image) with each camera to obtain the first calibration image corresponding to each camera. Then, the first device can determine the imaging differences between cameras by comparing the first calibration images corresponding to the respective cameras, that is, determine the first compensation value of each preset imaging attribute between cameras, thereby calibrating the imaging attribute deviations between the cameras of the first device; the calibration procedure is simple. Moreover, since the first device only needs to photograph the corresponding mosaic image to calibrate its cameras, the user can actively trigger the calibration. Even if the camera hardware of the first device has aged and its precision has decreased, the cameras can still be calibrated, so imaging deviations caused by aging hardware can be avoided and the quality of the shot video improved. Meanwhile, the first device can calibrate multiple imaging attributes in one pass, improving calibration efficiency. Correspondingly, when the first device shoots video, abrupt picture changes caused by deviations of multiple imaging attributes can be eliminated, ensuring smooth zoom switching between cameras.
In some embodiments, to ensure accuracy of the first compensation value of the imaging attribute between the cameras, the first device may test whether the first compensation value is accurate by using the test image displayed by the second device, as shown in fig. 7, and this process specifically includes:
S503, the first device photographs a second mosaic image with each camera respectively, obtaining a second calibration image corresponding to each camera.
S504, the first device determines a second compensation value of at least one imaging attribute between the cameras according to the second calibration images corresponding to the cameras.
The implementation process of S503-S504 is similar to the implementation process of S501-S502, and will not be described here again.
S505, the first device calculates the difference between the first compensation value and the second compensation value of each imaging attribute, so as to obtain the compensation value error of each imaging attribute.
S506, the first device judges whether the compensation value error of each imaging attribute is smaller than the preset error value corresponding to that imaging attribute.
S507, if the compensation value error of any imaging attribute is greater than or equal to the preset error value corresponding to that imaging attribute, the first device may return to step S501.
S508, if the compensation value error of each imaging attribute is smaller than the preset error value corresponding to that imaging attribute, the first device stores the first compensation value of each imaging attribute.
In this embodiment of the present application, for each imaging attribute between two cameras, the first device calculates the difference between the first compensation value and the second compensation value of the imaging attribute, obtaining the compensation value error of that imaging attribute. The first device then judges whether the compensation value error is smaller than the preset error value corresponding to the imaging attribute, so as to determine whether the first compensation value is accurate. If the compensation value error of at least one imaging attribute between the two cameras is greater than or equal to the preset error value corresponding to that imaging attribute, the first compensation value of that imaging attribute is inaccurate, i.e., the calibration accuracy of the imaging attribute deviation is low and the imaging attributes between the two cameras need to be recalibrated; the first device may then return to step S501 and recalibrate the imaging attribute deviation between the two cameras using a new mosaic image displayed by the second device.
For example, the imaging attributes calibrated by the first device include the field of view attribute and the exposure attribute. The first device determines the first compensation value of the projection matrix (i.e. the field of view attribute) from the wide-angle camera to the main camera to be H12, and the second compensation value of the field of view attribute to be H12^T; it also determines the first compensation value of the ISO conversion coefficient (i.e. the exposure attribute) from the wide-angle camera to the main camera to be k12, and the second compensation value of the exposure attribute to be k12^T. Then, the first device calculates the difference between the first compensation value H12 and the second compensation value H12^T of the field of view attribute, obtaining the compensation value error of the field of view attribute, and judges whether it is smaller than the preset error value corresponding to the field of view attribute. The first device also calculates the difference between the first compensation value k12 and the second compensation value k12^T of the exposure attribute, obtaining the compensation value error of the exposure attribute, and judges whether it is smaller than the preset error value corresponding to the exposure attribute. If the compensation value error of the field of view attribute is greater than or equal to the preset error value corresponding to the field of view attribute, or the compensation value error of the exposure attribute is greater than or equal to the preset error value corresponding to the exposure attribute, the first device determines that the deviation of the imaging attributes (i.e. the first compensation values) between the wide-angle camera and the main camera needs to be redetermined, and the second device can redisplay a mosaic image so that the first device can redetermine the first compensation values from it.
In some embodiments, if the compensation value error of every imaging attribute between the two cameras is smaller than the preset error value of the corresponding imaging attribute, the first compensation values of the imaging attributes between the two cameras are accurate, i.e., the calibration accuracy is high and no recalibration is needed. The first device may then store the first compensation values of the imaging attributes between the two cameras, so that when switching between them it can compensate the shot images with these values and avoid abrupt picture changes. For example, the imaging attributes calibrated by the first device include the field of view attribute and the exposure attribute. The first device calculates the compensation value error of the field of view attribute between its wide-angle camera and main camera, and the compensation value error of the exposure attribute between them. If the former is smaller than the preset error value corresponding to the field of view attribute and the latter is smaller than the preset error value corresponding to the exposure attribute, the deviation of each imaging attribute between the wide-angle camera and the main camera has been determined accurately, and the first device can store the first compensation values of the imaging attributes between the wide-angle camera and the main camera; when switching between these two cameras, the first compensation values can then be used to eliminate abrupt picture changes caused by the imaging attribute deviations between them.
In other embodiments, if the compensation value errors of the imaging attributes between all cameras, as determined by the first device using the first mosaic image, are smaller than the preset error values of the corresponding imaging attributes, the first device may save the first compensation values of the imaging attributes between all cameras. If the compensation value error of an imaging attribute between two cameras is greater than or equal to the preset error value of the corresponding imaging attribute, the first device may recalibrate the imaging attributes between those two cameras using a mosaic image displayed by the second device, or recalibrate the imaging attributes between all of its cameras. For example, the imaging attributes calibrated by the first device include the field of view attribute and the exposure attribute, and the first device calibrates the imaging attribute deviations between the wide-angle camera and the main camera and between the main camera and the tele camera. The first device calculates the compensation value errors of the field of view attribute between the wide-angle camera and the main camera and between the main camera and the tele camera, and likewise the compensation value errors of the exposure attribute for both pairs. If the compensation value errors of the field of view attribute for both pairs are each smaller than the preset error value corresponding to the field of view attribute, and the compensation value errors of the exposure attribute for both pairs are each smaller than the preset error value corresponding to the exposure attribute, the first device can store the first compensation values of the field of view attribute and the exposure attribute between the wide-angle camera and the main camera and between the main camera and the tele camera. Otherwise, the first device needs to redetermine the deviations of the imaging attributes (i.e. the first compensation values) between the affected pair of cameras, or recalibrate the imaging attribute deviations both between the wide-angle camera and the main camera and between the main camera and the tele camera. The second device may redisplay a mosaic image so that the first device can redetermine the corresponding first compensation values from it.
The preset error value corresponding to an imaging attribute may be selected by relevant personnel according to actual test standards. For example, depending on the test standard, the preset error value corresponding to the exposure attribute may take a value in the range of 0.1-10.
In some embodiments, when saving the first compensation values of the imaging attributes between two cameras, the first device may save them to persistent memory, i.e., non-volatile memory, in the first device, to avoid losing the first compensation values.
In some embodiments, when saving the first compensation values of the imaging attributes between two cameras, the first device may store them in a preset file and save the preset file to persistent memory in the first device. Then, when switching rear cameras, the first device can directly obtain the required first compensation value of an imaging attribute from the preset file; this avoids loss of the first compensation values and facilitates their retrieval.
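A minimal sketch of the accept-or-recalibrate decision in S505-S508 and the persistence step, assuming scalar-valued compensation values and a JSON file as the preset file (matrix-valued attributes such as the projection matrix would need a matrix norm for the error):

import json
import pathlib

def check_and_save(first_vals: dict, second_vals: dict, max_err: dict,
                   path: str = "calibration.json") -> bool:
    """Return True and persist first_vals if every attribute's
    |first - second| error is below its preset error value; otherwise
    return False to signal that calibration should restart at S501."""
    for attr, v1 in first_vals.items():
        if abs(v1 - second_vals[attr]) >= max_err[attr]:
            return False  # recalibrate with a new mosaic image
    # Save to non-volatile storage so the values survive reboots.
    pathlib.Path(path).write_text(json.dumps(first_vals))
    return True

ok = check_and_save({"exposure_k12": 1.10}, {"exposure_k12": 1.12},
                    {"exposure_k12": 0.5})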
It should be appreciated that the focal lengths of the calibrated cameras (e.g., the wide-angle camera, the main camera, and the tele camera) are all fixed, and each camera has a corresponding zoom magnification range. For example, the wide-angle camera corresponds to a zoom magnification range of [0.6x, 1.0x): when the zoom magnification selected by the user falls within this interval, the first device shoots video using the wide-angle camera. Of course, this zoom magnification range is only an example; the zoom magnification ranges corresponding to different cameras can be set according to actual requirements and are pre-stored on the first device, as sketched below.
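For example, the pre-stored mapping from zoom magnification to camera might look like the following sketch; the main and tele ranges are assumed values, and only the wide-angle example comes from the text.

ZOOM_RANGES = {             # each range is [low, high) in zoom magnification
    "wide": (0.6, 1.0),     # example from the text: [0.6x, 1.0x)
    "main": (1.0, 3.5),     # assumed value
    "tele": (3.5, 10.0),    # assumed value
}

def camera_for_zoom(zoom: float) -> str:
    """Select the camera whose zoom magnification range contains zoom."""
    for name, (lo, hi) in ZOOM_RANGES.items():
        if lo <= zoom < hi:
            return name
    raise ValueError(f"zoom {zoom}x outside supported range")

assert camera_for_zoom(0.8) == "wide"
assert camera_for_zoom(9.4) == "tele"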
In the embodiment of the application, by displaying different mosaic images (e.g., randomly generated ones), the second device can improve the accuracy of the compensation values of the imaging attributes between cameras determined by the first device and reduce the calibration error. Correspondingly, since the screen brightness at which the second device displays the mosaic image also affects the compensation values calibrated by the first device, the second device can further improve the accuracy of the determined compensation values by displaying the mosaic image at different screen brightnesses.
In this embodiment of the present application, for two cameras in the first device, the first device may determine, using a second mosaic image (e.g., one displayed by the second device), the second compensation value of each imaging attribute between the two cameras, and use each second compensation value to test whether the first compensation value of the corresponding imaging attribute is accurate, that is, whether the calibration of each imaging attribute is accurate. When the first compensation values of the imaging attributes are determined to be accurate, the first device can store them, ensuring the accuracy of the imaging attribute calibration and guarding against anomalies in the determined first compensation values caused by certain factors (such as the first device shaking during shooting). When a first compensation value is inaccurate, the first device recalibrates the deviation of the imaging attributes between the two cameras using a first mosaic image redisplayed by the second device, realizing iterative calibration of the imaging attributes and gradually reducing the calibration error.
In the embodiment of the application, based on the relevant trigger operations of the user, the first device establishes a connection with the second device, which has a screen. The second device can display a mosaic image (such as a color mosaic image) whose image information is known, and the first device photographs the mosaic image with each of its cameras. By comparing the imaging results of the cameras, the first device determines the differences in imaging attributes between them, that is, the relative deviations of the imaging attributes between the cameras, realizing extrinsic calibration of multiple cameras. Moreover, the user only needs to input a simple trigger operation to actively start the calibration of the imaging attribute deviations between the cameras of the first device, so even if the camera hardware of the first device has aged and its precision has decreased, the cameras can still be calibrated; imaging deviations caused by aging hardware can thus be avoided, improving the quality of video shooting. Meanwhile, the first device can calibrate multiple imaging attributes in one pass, improving calibration efficiency, eliminating abrupt picture changes caused by deviations of multiple imaging attributes when shooting video, and ensuring smooth zoom switching.
It can be understood that, after obtaining the first compensation values of the imaging attributes between the cameras, the first device may again photograph a second mosaic image with each camera, determine the second compensation values of the imaging attributes between cameras from the resulting second calibration images, and use the second compensation values to test whether the first compensation values are accurate. Alternatively, the first device may photograph the second mosaic image with each camera right after photographing the first mosaic image, then determine the first compensation values from the corresponding first calibration images and the second compensation values from the corresponding second calibration images, and test the first compensation values against the second compensation values.
The process of implementing the above-described first device to determine the imaging attribute deviation between cameras of the first device will be described below with reference to specific examples.
For example, the first device may use the second device to calibrate the deviations of the imaging attributes between its rear cameras, which include a wide-angle camera, a main camera, and a tele camera; the first device acts as the calibration end and the second device as the coordination end. As shown in fig. 8, when the imaging attributes of the rear cameras need to be calibrated, the calibration end initiates a calibration request to the coordination end. The coordination end receives the calibration request and establishes a calibration connection (such as a Bluetooth connection) with the calibration end. After the connection is established, the calibration end sends an image display request to the coordination end to request that it display a color mosaic image. The coordination end receives the image display request and, in response, randomly generates a color mosaic image (as shown in fig. 9 (a)) and a random screen brightness.
The coordination end then displays the color mosaic image on its display screen at that screen brightness, and records the image information of the color mosaic image and the screen brightness. After the color mosaic image is displayed, the coordination end sends the image information of the color mosaic image and a control instruction to the calibration end, where the control instruction triggers the calibration end to take photos (as shown in fig. 10). In response to the control instruction, as shown in fig. 11, the calibration end photographs the mosaic image displayed by the coordination end with the wide-angle camera, the main camera, and the tele camera respectively, obtaining three photos (i.e., three first calibration images).
The calibration end then sends the image display request to the coordination end again, requesting it to display the next frame of the color mosaic image (as shown in fig. 10). In response, the coordination end randomly generates another color mosaic image (as shown in fig. 9 (b)) and another screen brightness, displays the color mosaic image on its screen at that brightness, and records the image information of the mosaic image and the screen brightness. Once the color mosaic image is displayed, the coordination end again sends the image information of the color mosaic image and the control instruction to the calibration end. In response to the control instruction, the calibration end photographs the mosaic image displayed by the coordination end with the wide-angle camera, the main camera, and the tele camera respectively, obtaining three more photos (i.e., three second calibration images).
Next, the calibration end uses the three first calibration images to calculate a first compensation value of the imaging attribute between the wide-angle camera and the main camera, and a first compensation value of the imaging attribute between the main camera and the tele camera. Likewise, the calibration end uses the three second calibration images to calculate a second compensation value of the imaging attribute between the wide-angle camera and the main camera, and a second compensation value of the imaging attribute between the main camera and the tele camera.
The calibration end then calculates the compensation value error between the first and second compensation values of the imaging attribute between the wide-angle camera and the main camera, and likewise the compensation value error between the first and second compensation values of the imaging attribute between the main camera and the tele camera. If both compensation value errors are smaller than the preset error value of the corresponding imaging attribute, the calibration end can end the calibration and save the first compensation value of the imaging attribute between the wide-angle camera and the main camera and the first compensation value of the imaging attribute between the main camera and the tele camera. If either compensation value error is greater than or equal to the preset error value of the corresponding imaging attribute, this indicates that the calibration end needs to recalibrate the imaging attribute deviations between the rear cameras: the calibration end resends the image display request to the coordination end to request the next frame of the image. The coordination end again responds to the image display request, randomly generates a color mosaic image (as shown in fig. 9 (c)), and displays it. The calibration end photographs this color mosaic image to obtain three photos (i.e., three new first calibration images), from which it re-determines the first compensation value of the imaging attribute between the wide-angle camera and the main camera and the first compensation value of the imaging attribute between the main camera and the tele camera, thereby recalibrating the imaging attributes between its rear cameras. The mosaic images shown in (a), (b) and (c) of fig. 9 differ in the number, colors, and sizes of their color blocks. A minimal sketch of this calibrate-then-verify loop is given below.
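For illustration, a minimal Python sketch of the calibrate-then-verify loop follows. The camera pairs, attribute names, preset error values, and the capture_round / derive_compensation callables are all hypothetical placeholders, not APIs from this application.

```python
# Hypothetical sketch of the calibrate-then-verify loop described above.
# capture_round() stands for one round of photographing a freshly displayed
# mosaic with the wide-angle, main, and tele cameras; derive_compensation()
# stands for computing per-pair compensation values from those photos.
CAMERA_PAIRS = [("wide", "main"), ("main", "tele")]
PRESET_ERROR = {"exposure": 0.05, "white_balance": 0.05}  # assumed thresholds

def calibrate(capture_round, derive_compensation):
    while True:
        first = derive_compensation(capture_round())   # first calibration images
        second = derive_compensation(capture_round())  # second calibration images
        within_error = all(
            abs(first[pair][attr] - second[pair][attr]) < PRESET_ERROR[attr]
            for pair in CAMERA_PAIRS
            for attr in PRESET_ERROR
        )
        if within_error:
            return first  # save the first compensation values and end calibration
        # otherwise request the next mosaic frame and recalibrate
```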
It should be appreciated that the above describes the determination of the first compensation values of the imaging attributes between the rear cameras of the first device. If the front cameras of the first device also include at least two cameras of different types, the first device may determine the first compensation values of the imaging attributes between the front cameras in the same manner.
Example two
The embodiment of the application provides a video shooting method. After the calibration of the imaging attribute deviations between the cameras of the first device is completed, if, while the first device is shooting video, the camera currently in use is switched to another camera (the two cameras both being rear cameras or both being front cameras), the first device can acquire the compensation values of the imaging attributes between the two cameras and use them to compensate the images captured by the other camera. This eliminates the imaging attribute deviations between the two cameras, avoids abrupt picture changes caused by camera switching, and achieves smooth picture transitions during zooming and camera switching. The procedure of this embodiment is described below taking a mobile phone as the first device. Specifically, as shown in fig. 12, the procedure is as follows:
S1201, the mobile phone displays a shooting interface. The shooting interface is a viewfinder interface displayed while the mobile phone records video, and it includes a first image acquired by a first camera of the plurality of cameras on the mobile phone.
For example, the mobile phone may display the shooting interface 101 shown in fig. 13. The shooting interface 101 may include a stop recording button 102, and the timing option 103 in the shooting interface 101 reads 00:20. While the mobile phone is recording video, the shooting interface includes a stop recording button, and the timing option in the interface keeps counting the recording time.
S1202, the mobile phone receives a first operation performed by the user on the shooting interface. The first operation triggers the mobile phone to switch to a second camera of the plurality of cameras, where the second camera and the first camera are both front cameras or both rear cameras.
In one implementation, the shooting interface may include a first control for adjusting the zoom magnification, i.e., for triggering zoom shooting and, when necessary, camera switching. For example, the shooting interface 101 shown in fig. 13 may include a first control 104. The first operation may be a sliding operation (e.g., sliding left or right) on the first control 104. For example, suppose the camera currently used for recording (i.e., the first camera) is the main camera. If the user needs to shoot a distant object, the user can slide the first control 104 rightward to 9x. Since 9x lies outside the zoom magnification range of the main camera but within the range of the mobile phone's tele camera, the mobile phone needs to switch cameras. In response to the user sliding the first control 104 rightward, the mobile phone switches from the main camera to the tele camera and continues recording the video with the tele camera. A sketch of this range-based camera selection is given below.
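As a hedged illustration of this range-based switching, the sketch below maps the selected zoom magnification to the camera whose range covers it. The magnification ranges and camera names are assumptions for demonstration, not values disclosed in this application.

```python
# Illustrative mapping from the selected zoom magnification to the camera
# whose zoom range covers it; the ranges below are assumptions.
ZOOM_RANGES = [
    ("wide", 0.5, 1.0),
    ("main", 1.0, 3.5),
    ("tele", 3.5, 10.0),
]

def camera_for_zoom(magnification: float) -> str:
    for name, low, high in ZOOM_RANGES:
        if low <= magnification < high:
            return name
    return "tele"  # clamp to the longest lens at or beyond the maximum

# Sliding the control to 9x leaves the main camera's range and selects
# the tele camera, triggering the switch described above.
assert camera_for_zoom(9.0) == "tele"
```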
In another implementation, the first operation may be a first preset gesture input by the user on the shooting interface, such as a two-finger pinch or spread gesture. The first preset gesture may be preconfigured in the mobile phone, and the mobile phone may prompt the user on the video viewfinder interface with the first preset gesture and the function it triggers (i.e., adjusting the magnification during video recording, which switches the camera picture).
The second camera is of a different type from the first camera, but the second camera and the first camera are both front cameras or both rear cameras.
S1203, in response to the first operation, the mobile phone switches to a second camera of the plurality of cameras and obtains a first compensation value of the imaging attribute between the first camera and the second camera.
S1204, the mobile phone obtains a second image acquired by the second camera, and compensates the second image based on the first compensation value of the imaging attribute between the first camera and the second camera, obtaining a compensated second image.
In this embodiment, in response to the first operation, the mobile phone stops recording video with the first camera and starts recording with the second camera. The mobile phone obtains the image acquired by the second camera (i.e., the second image). Because the imaging attributes of the second camera differ from those of the first camera, for each imaging attribute the mobile phone compensates the imaging attribute of the second image (i.e., of the second camera) using the first compensation value of that imaging attribute between the second camera and the first camera, so that the imaging attributes of the second camera are consistent with those of the first camera. That is, the imaging attributes of the compensated second image are consistent with those of the first image, which avoids abrupt picture changes or picture tearing in the second image and achieves smooth zooming.
Illustratively, the imaging attribute includes an exposure attribute, the first camera is the main camera, and the second camera is the tele camera. According to K2 × k23 = K3, where K2 and K3 are the exposure attributes of the main camera and the tele camera respectively and k23 is the first compensation value of the exposure attribute between them, the mobile phone can compensate the exposure attribute K3 of the second image acquired by the second camera by computing K3 / k23, so that the exposure attribute of the compensated second image is consistent with the exposure attribute of the first image captured by the main camera. This eliminates the difference in exposure attribute between the main camera and the tele camera, i.e., the abrupt change in picture exposure when the cameras are switched.
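A minimal sketch of this exposure compensation step is given below, under the assumption that the frame is 8-bit and that scaling pixel brightness by 1/k23 is one plausible per-pixel realization of compensating K3; k23 here is the stored compensation value from calibration, and the function name is illustrative.

```python
# Minimal sketch of the exposure compensation K3 / k23 described above,
# assuming an 8-bit frame and a previously calibrated value k23.
import numpy as np

def compensate_exposure(second_image: np.ndarray, k23: float) -> np.ndarray:
    """Scale the tele-camera frame so its exposure matches the main
    camera's (K2 * k23 = K3, hence divide by k23), then clip to range."""
    compensated = second_image.astype(np.float32) / k23
    return np.clip(compensated, 0, 255).astype(np.uint8)
```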
S1205, the mobile phone displays the compensated second image in the shooting interface.
In this embodiment, after the compensated second image is obtained, the shooting interface continues to display the compensated second image rather than the original second image acquired by the second camera, so the user does not perceive any abrupt picture change, which improves user satisfaction.
S1206, in response to a second operation performed by the user on the shooting interface, the mobile phone stops recording the video and generates a video file, where the video file includes the first image and the compensated second image.
The second operation may be a click operation on the stop recording button (e.g., a single tap). For example, the shooting interface 101 for recording video shown in fig. 13 includes a stop recording button 102, and the mobile phone can receive the user's click on the stop recording button 102 at any time during video recording. In response to the click operation, the mobile phone finishes recording and generates a video file. The video file includes the first image acquired by the first camera during the recording and the compensated second image.
In some embodiments, when video recording is stopped, the camera interface displayed by the mobile phone may include a start recording button. When the user clicks the start recording button, the mobile phone starts recording video with its camera. The start recording button and the stop recording button may be two states of the same button.
It should be understood that when the mobile phone compensates the second images acquired by the second camera, it compensates all of them: the mobile phone compensates the imaging attributes of every second image acquired by the second camera, from the first frame captured after the switch to the last frame captured before recording stops. A brief sketch of this per-frame behavior follows.
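The sketch below illustrates this per-frame behavior; frames, compensate, display, and encode are placeholder callables standing in for the camera pipeline, not functions from this application.

```python
# Hedged sketch: every frame from the second camera, from the first frame
# after the switch until recording stops, is compensated before it is
# displayed and written into the video file. All callables are placeholders.
from typing import Callable, Iterable

def record_after_switch(frames: Iterable, compensate: Callable,
                        display: Callable, encode: Callable) -> None:
    for frame in frames:           # second images from the second camera
        frame = compensate(frame)  # apply the first compensation values
        display(frame)             # shown in the shooting interface (S1205)
        encode(frame)              # included in the generated video file (S1206)
```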
In this embodiment of the application, the user can trigger the mobile phone to zoom and switch cameras according to shooting requirements while shooting video. Based on the user's triggering operation, the mobile phone switches the camera used for shooting from the first camera to the second camera. Because the second camera and the first camera are cameras of different types, their imaging attributes differ. To prevent this difference from causing abrupt changes in the video picture during zooming and camera switching, the mobile phone compensates the imaging attributes of the second image captured by the second camera according to the difference in imaging attributes between the first camera and the second camera, i.e., the first compensation value. The imaging attributes of the compensated second image are thus consistent with those of the first image captured by the first camera, and the imaging deviation is eliminated. As a result, abrupt picture changes or picture tearing caused by camera switching are avoided, the imaging quality of the video is improved, and the user is given a smooth experience when switching zoom levels.
In some embodiments, the present application provides a computer-readable storage medium comprising computer instructions that, when run on an electronic device, cause the electronic device to perform the video shooting method described above.
In some embodiments, the present application provides a computer program product that, when run on an electronic device, causes the electronic device to perform the video shooting method described above.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the functional modules described above is illustrated; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A video capturing method, applied to a first device including a plurality of cameras, the method comprising:
the first device displays a first interface; the first interface is a viewfinder interface of the first device recording video, and the first interface comprises a first image acquired by a first camera of the plurality of cameras;
the first device receives a first operation of a user on the first interface, and responds to the first operation, and switches to use a second camera of the plurality of cameras;
the first device acquires a first compensation value of a preset imaging attribute between the second camera and the first camera; the first camera and the second camera are both front cameras or both rear cameras of the first device; the preset imaging attribute influences the imaging effect of an object in an image acquired by a camera;
the first device acquires a second image collected by the second camera, and compensates the preset imaging attribute of the second image according to the first compensation value of the preset imaging attribute, to obtain a compensated second image;
the first device displays a second interface; wherein the second interface is a viewfinder interface of the first device recording video, and the second interface includes the compensated second image;
the first device receives a second operation performed by the user on the second interface, stops recording the video, and generates a video file; wherein the video file includes the first image and the compensated second image.
2. The method of claim 1, wherein the preset imaging properties include at least one of the following properties: a field of view attribute, an exposure attribute, a white balance attribute, and a color attribute; the field of view attribute influences the position of the object in the image collected by the camera, the exposure attribute influences the exposure degree of the object in the image collected by the camera, the white balance attribute influences the color temperature of the object in the image collected by the camera, and the color attribute influences the color of the object in the image collected by the camera.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the first device shoots a first mosaic image based on each of at least two cameras of the plurality of cameras, respectively, to obtain first calibration images corresponding to the at least two cameras;
and the first device determines first compensation values of each preset imaging attribute between the cameras of the at least two cameras according to the first calibration images corresponding to the at least two cameras.
4. The method of claim 3, wherein the first mosaic image is displayed by a second device, and wherein the preset imaging attribute comprises a field of view attribute;
the determining, by the first device, a first compensation value of each preset imaging attribute between the cameras of the at least two cameras according to the first calibration images corresponding to the at least two cameras comprises:
the first device receives image information of the first mosaic image sent by the second device;
the first device determines a first compensation value of each preset imaging attribute between cameras in the at least two cameras according to the image information of the first mosaic image and the first calibration images corresponding to the at least two cameras.
5. The method according to claim 3 or 4, wherein after said determining a first compensation value for each preset imaging property between cameras of said at least two cameras, the method further comprises:
the first device stores first compensation values for each preset imaging attribute between the cameras.
6. The method of claim 3 or 4, wherein the at least two cameras comprise the first camera and the second camera;
the method further comprises the steps of:
the first device shoots a second mosaic image based on the first camera and the second camera respectively, to obtain a second calibration image corresponding to the first camera and a second calibration image corresponding to the second camera;
the first device determines second compensation values of all preset imaging attributes between the first camera and the second camera according to a second calibration image corresponding to the first camera and a second calibration image corresponding to the second camera;
for each preset imaging attribute, calculating a difference value between a first compensation value of the preset imaging attribute and a second compensation value of the preset imaging attribute between the first camera and the second camera to obtain a compensation value error of the preset imaging attribute;
If the compensation value error of the preset imaging attribute is larger than or equal to the preset error value of the preset imaging attribute, returning to the step of shooting a first mosaic image by the first equipment based on at least two cameras in the plurality of cameras respectively;
and if the compensation value error of each preset imaging attribute is smaller than the preset error value corresponding to that preset imaging attribute, the first device stores the first compensation value of each preset imaging attribute.
7. The method of claim 4, wherein the preset imaging attributes include at least one of the following attributes: a field of view attribute, an exposure attribute, a white balance attribute, and a color attribute; the field of view attribute comprises a projection matrix, and the projection matrix is determined according to the position coordinate values of the color blocks in a mosaic image displayed by the second device and the position coordinate values of the color blocks in a calibration image obtained by a camera of the first device shooting the mosaic image;

the exposure attribute comprises a sensitivity (ISO) conversion coefficient, and the ISO conversion coefficient is determined according to the ISO value used when a camera of the first device shoots the mosaic image and the brightness value of the calibration image obtained by that camera shooting the mosaic image;

the color attribute comprises a color offset coefficient and a color offset intercept, which are determined according to the color values of the color blocks in the calibration image obtained by the camera shooting the mosaic image;

the white balance attribute comprises a color temperature deviation coefficient and a color temperature deviation intercept, which are determined according to the color temperatures of the white color blocks in the calibration image obtained by the camera shooting the mosaic image.
8. The method according to claim 4, wherein the method further comprises:
the first device sends an image display request to the second device, wherein the image display request is used for triggering the second device to display a mosaic image, and the mosaic image comprises the first mosaic image or the second mosaic image.
9. The method of any one of claims 3 to 7, wherein the first mosaic image comprises a color mosaic image.
10. The method of claim 4, wherein the second device displays the first mosaic image based on a first screen brightness value; the first screen brightness value is randomly generated by the second device or is selected from preset screen brightness values by the second device.
11. A calibration system, wherein the calibration system comprises a first device and a second device; the first device includes a plurality of cameras; at least two rear cameras and/or at least two front cameras exist in the plurality of cameras; the second device is provided with a display screen; the display screen is used for displaying mosaic images, and the camera of the first device can shoot the complete mosaic images displayed by the second device.
12. An electronic device comprising a display screen, a plurality of cameras, a memory, and one or more processors; the display screen, the plurality of cameras, the memory, and the processors are coupled; the plurality of cameras are configured to acquire images, the display screen is configured to display images generated by the processors and images acquired by the cameras, and the memory is configured to store computer program code comprising computer instructions; the computer instructions, when executed by the processors, cause the electronic device to perform the video capturing method of any one of claims 1 to 10.
13. A computer-readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the video capturing method of any one of claims 1 to 10.
CN202210911440.7A 2022-07-30 2022-07-30 Video shooting method and electronic equipment Pending CN117528265A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210911440.7A CN117528265A (en) 2022-07-30 2022-07-30 Video shooting method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117528265A true CN117528265A (en) 2024-02-06

Family

ID=89765038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210911440.7A Pending CN117528265A (en) 2022-07-30 2022-07-30 Video shooting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117528265A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination