CN115812312A - Image acquisition method, terminal device and computer-readable storage medium - Google Patents

Image acquisition method, terminal device and computer-readable storage medium

Info

Publication number
CN115812312A
Authority
CN
China
Prior art keywords
image information
camera
picture
target
preview
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180044452.8A
Other languages
Chinese (zh)
Inventor
赵紫辉
代文慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd filed Critical Shenzhen Transsion Holdings Co Ltd
Publication of CN115812312A publication Critical patent/CN115812312A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image acquisition method, wherein a terminal device comprises at least two cameras, and the method comprises the following steps: opening at least two cameras, controlling the at least two cameras to respectively pick up image information, and forming a camera preview interface; and when a photographing trigger operation is detected, generating a picture according to the camera preview interface. The application also discloses a terminal device and a computer-readable storage medium. Because the picture taken by the terminal device of the application is formed by combining different image information acquired simultaneously by at least two cameras, the foreground and background definition of the picture and the stereoscopic effect of the image in the picture are improved, giving a good shooting and imaging effect.

Description

Image acquisition method, terminal device and computer-readable storage medium
The present application claims priority of Chinese patent application No. 202010654953.5, entitled "image acquisition method, terminal device, and computer-readable storage medium", filed with the Chinese Patent Office on 10 July 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image obtaining method, a terminal device, and a computer-readable storage medium.
Background
With the development of science and technology, the shooting functions of terminal devices are becoming more and more advanced. For example, terminal devices can nowadays capture images in ultra-wide-angle, wide-angle and telephoto modes: the ultra-wide-angle and wide-angle modes enable large-viewing-angle shooting, and together with the telephoto mode they suit different shooting scenes.
Terminal devices on the market that support ultra-wide-angle, wide-angle and telephoto shooting are generally equipped with at least three cameras, for example an ultra-wide-angle camera, a wide-angle camera and a telephoto camera: ultra-wide-angle images are taken with the ultra-wide-angle camera, wide-angle images with the wide-angle camera, and telephoto images with the telephoto camera. In a typical shooting process, when a user opens the camera, a camera preview interface is formed from preview data collected by a default camera; the user then sets a zoom value on the preview interface, the device identifies the camera corresponding to the set zoom value, starts that camera to pick up image data and displays the data in the preview interface, and an image is obtained from that camera after the user presses the shutter.
However, such a terminal device still forms each image with a single camera, so the shooting and imaging effect is not good.
The above is only for the purpose of assisting understanding of the technical solutions of the present application, and does not represent an admission that the above is prior art.
Disclosure of Invention
The application mainly aims to provide an image acquisition method, a terminal device and a computer readable storage medium, and aims to solve the technical problem that the shooting and imaging effect of the terminal device is poor.
In order to achieve the above object, the present application provides an image obtaining method, where the terminal device includes at least two cameras, and the image obtaining method includes the following steps:
opening at least two cameras, controlling the at least two cameras to respectively pick up image information, and forming a camera preview interface;
and when the photographing triggering operation is detected, generating a picture according to the camera preview interface.
Optionally, the cameras capture different focal segments, and the step of forming a camera preview interface includes:
determining a target camera according to a preset zoom value;
combining the main preview image information and the supplementary preview image information to form the camera preview interface;
the main preview image information is image information picked up by the target camera, and the supplementary preview image information is image information picked up by other cameras except the target camera.
Optionally, the step of combining the main preview image information and the supplemental preview image information to form the camera preview interface includes:
acquiring an area overlapped with the main preview image information in the supplementary preview image information;
and combining the area overlapped with the main preview image information in the supplementary preview image information into the main preview image information to form the camera preview interface.
Optionally, the step of merging the region of the supplemental preview image information that overlaps with the main preview image information into the main preview image information to form the camera preview interface includes:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of the pixel of the region overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
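The three steps above can be sketched as follows. This is an illustrative model only, not the disclosed implementation: it assumes the relative position parameter reduces to a pure pixel translation `(dx, dy)` between the supplementary and main pixel planes, and blends overlapping pixels with a simple average.

```python
def merge_overlap(main, supp, offset):
    """Sketch of merging the overlapping region of a supplementary preview
    into the main preview's pixel plane. `main` and `supp` are 2-D lists of
    pixel values; `offset` = (dx, dy) is the supplementary camera's position
    relative to the target camera, expressed in main-plane pixels
    (a hypothetical, purely translational model).
    """
    dx, dy = offset
    h, w = len(main), len(main[0])
    sh, sw = len(supp), len(supp[0])
    # Overlap rectangle in main-plane coordinates.
    x0, y0 = max(0, dx), max(0, dy)
    x1, y1 = min(w, dx + sw), min(h, dy + sh)
    out = [row[:] for row in main]
    for y in range(y0, y1):
        for x in range(x0, x1):
            # Convert each supplementary pixel to the main pixel plane
            # and blend it with the main pixel (simple average).
            out[y][x] = (main[y][x] + supp[y - dy][x - dx]) // 2
    return out
```

A real implementation would derive a full homography from the calibrated camera geometry rather than a translation, and would use a more sophisticated fusion than averaging.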
Optionally, the target camera is a camera corresponding to the preset zoom value, wherein the target camera includes at least one of a wide-angle camera, a telephoto camera, and a super wide-angle camera.
Optionally, after the step of generating a picture according to the camera preview interface, the method further includes:
storing the image information picked up by the camera and associating the picture with the image information;
after the step of saving the image information picked up by the camera and associating the picture with the image information, the method further comprises the following steps:
when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
acquiring target image information corresponding to the editing parameters from the image information associated with the picture;
and generating edited target picture preview data according to the target image information.
Optionally, the step of acquiring target image information corresponding to the editing parameter from the image information associated with the picture includes:
determining a zoom value of the adjusted picture according to the editing parameters;
acquiring a focal length where the zoom value is located, and taking a camera matched with the focal length as a target camera;
and taking the image information picked up by the target camera as the target image information.
Optionally, the step of generating edited target picture preview data according to the target image information includes:
and adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
Optionally, the step of generating edited target picture preview data according to the target image information includes:
and combining and generating edited target picture preview data according to the target image information and other image information associated with the picture.
Optionally, when the editing operation is a zoom-in operation, a focal length in which the zoom value of the adjusted picture is located is larger than a focal length in which the current zoom value of the picture is located;
and when the editing operation is a zooming-out operation, the focal length of the zoom value of the adjusted picture is smaller than the focal length of the current zoom value of the picture.
Optionally, when the editing operation is cropping, after the step of generating edited target picture preview data according to the target image information, the method further includes:
after the cutting determining operation is detected, cutting the target picture according to the target picture preview data and the editing parameters;
wherein the editing parameter comprises at least one of magnification, reduction magnification and cutting size.
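As a sketch of the crop-confirmation step above, the following hypothetical helper cuts the target picture according to a crop-size editing parameter. The `(x, y, width, height)` box form is an assumption, since the disclosure does not fix an encoding for the cutting size.

```python
def crop_picture(img, crop_box):
    """Crop the target picture once the crop-confirm operation is detected.
    `img` is a 2-D list of pixel values; `crop_box` = (x, y, width, height)
    in pixels (hypothetical parameter form).
    """
    x, y, w, h = crop_box
    ih, iw = len(img), len(img[0])
    if x < 0 or y < 0 or x + w > iw or y + h > ih:
        raise ValueError("crop box outside picture bounds")
    # Keep rows y..y+h, and columns x..x+w within each kept row.
    return [row[x:x + w] for row in img[y:y + h]]
```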
The application also provides an image acquisition method, which comprises the following steps:
when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
acquiring target image information corresponding to the editing parameters according to at least two pieces of image information related to the pictures;
and generating edited target picture preview data according to the target image information.
Optionally, the picture is associated, when generated, with image information acquired by at least two cameras;
the photographing trigger operation for the picture is associated with the at least two pieces of acquired image information, and the at least two pieces of image information include the image information of the picture.
Optionally, the step of obtaining, according to at least two pieces of image information associated with the picture, target image information corresponding to the editing parameter includes:
determining a zoom value of the adjusted picture according to the editing parameters;
acquiring a focal length where the zoom value is located, and taking a camera matched with the focal length as a target camera;
and taking the image information picked up by the target camera as the target image information, wherein the at least two pieces of image information are acquired by cameras covering different focal segments.
Optionally, each of the at least two focal segments covers at least two zoom values, and the step of generating edited target picture preview data according to the target image information includes:
and after the target image information is determined, adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
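The adjust-then-generate step can be sketched as a digital zoom within one focal segment: centre-crop the saved image information by the ratio of the edited zoom value to the capture zoom, then rescale to the original size. Everything here is a simplified assumption (pure-Python pixel lists, nearest-neighbour resampling, and a hypothetical `base_zoom` parameter for the zoom at which the image was captured).

```python
def digital_zoom(img, base_zoom, target_zoom):
    """Sketch of adjusting saved image information to an edited zoom value
    inside the same focal segment: centre-crop by the zoom ratio, then
    enlarge back to the original size with nearest-neighbour sampling.
    """
    factor = target_zoom / base_zoom
    if factor <= 1.0:
        # Zooming out past this segment would need another camera's data.
        return [row[:] for row in img]
    h, w = len(img), len(img[0])
    ch, cw = max(1, int(h / factor)), max(1, int(w / factor))
    # Top-left corner of the centred crop window.
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    # Nearest-neighbour upscale of the central crop back to h x w.
    return [[img[y0 + y * ch // h][x0 + x * cw // w] for x in range(w)]
            for y in range(h)]
```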
The present application further provides a terminal device, including: a memory, a processor and an image acquisition program stored on the memory and executable on the processor, the image acquisition program, when executed by the processor, implementing the steps of the image acquisition method as described above.
Optionally, the processor includes at least two image processing modules, and each image processing module is connected to one camera.
Furthermore, the present application also provides a computer-readable storage medium having an image acquisition program stored thereon, which when executed by a processor, implements the steps of the image acquisition method as described above.
According to the image acquisition method, the terminal device and the computer-readable storage medium provided by the embodiments of the application, when the terminal device detects a trigger operation on the camera, it starts at least two cameras and controls them to respectively pick up image information, so that a camera preview interface is generated from the image information picked up by the cameras; when a photographing trigger operation is detected, a picture is generated according to the camera preview interface. Because the picture is formed by combining different image information acquired simultaneously by at least two cameras, the foreground and background definition of the picture and the stereoscopic effect of the image in the picture are improved, and the camera achieves a good imaging effect.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a first embodiment of an image obtaining method of the present application;
FIG. 3 is a detailed flowchart of the step S10 in FIG. 2;
fig. 4 is a detailed flowchart of step S12 in the second embodiment of the image obtaining method of the present application;
FIG. 5 is a flowchart illustrating a third embodiment of an image obtaining method according to the present application;
FIG. 6 is a detailed flowchart of step S60 in FIG. 5;
FIG. 7 is a flowchart illustrating a fourth embodiment of an image capturing method according to the present application;
FIG. 8 is a schematic diagram of a hardware system involved in the image acquisition method of the present application;
fig. 9 is a schematic diagram of a mobile terminal operating according to the image acquisition method of the present application;
fig. 10 is a schematic interface diagram of the mobile terminal according to the present application.
The implementation, functional features and advantages of the object of the present application will be further explained with reference to the embodiments, and with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and do not limit it.
The main solution of the embodiment of the application is as follows: opening at least two cameras, controlling the at least two cameras to respectively pick up image information, and forming a camera preview interface; and when the photographing triggering operation is detected, generating a picture according to the camera preview interface.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a terminal device in a hardware operating environment according to an embodiment of the present application.
The terminal device can be a PC, a smart phone, a tablet computer and other terminal devices with shooting functions.
As shown in fig. 1, the terminal device may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. The communication bus 1002 is used to implement connection communication among these components. The user interface 1003 may include a Display (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001 described previously.
Further, the terminal device includes at least two cameras, each covering a different focal segment. For example, referring to fig. 8, when the terminal device includes three cameras, they are a first camera 100 (for example, a telephoto camera), a second camera 200 (for example, a wide-angle camera), and a third camera 300 (for example, an ultra-wide-angle camera). Optionally, the focal segment covered by the telephoto camera is 3X to 30X, that of the wide-angle camera is 1X to 3X, and that of the ultra-wide-angle camera is 0.6X to 1X. All of the cameras are connected with the processor.
Optionally, the processor comprises at least two image processing modules 400, and each image processing module 400 is connected with one camera (100/200/300). After the front end of a camera (100/200/300) collects image information, the image information is transmitted to the image processing module 400 connected with that camera, and the image processing module 400 processes it to form the image information for that focal segment, which is stored in the memory. Because the image information acquired by each camera (100/200/300) is processed by an independent image processing module 400, the original image data collected by each camera (100/200/300) can be stored in the memory; when at least two cameras (100/200/300) acquire images simultaneously, their image data can be processed simultaneously and separately, and the data can then be merged, forming images combined from different focal segments or different viewing angles and thereby improving the shooting effect.
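The one-module-per-camera arrangement can be sketched as independent pipelines running in parallel. The `process_frame`/`capture_all` helpers below are hypothetical stand-ins for the image processing modules 400; the point illustrated is only that each camera's data is processed separately and all of it survives for later merging.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(camera_id, raw_frame):
    # Placeholder "image processing module": in a real device this would
    # run demosaicing, noise reduction, etc. Here it just tags the frame.
    return {"camera": camera_id, "pixels": raw_frame}

def capture_all(frames_by_camera):
    """Process every camera's frame in its own worker, mirroring the
    one-module-per-camera wiring, then return all processed results so
    no camera's data is discarded before merging."""
    with ThreadPoolExecutor(max_workers=len(frames_by_camera)) as pool:
        futures = {cam: pool.submit(process_frame, cam, frame)
                   for cam, frame in frames_by_camera.items()}
        return {cam: f.result() for cam, f in futures.items()}
```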
Optionally, referring to fig. 9, the arrangement manner of the first camera 100, the second camera 200, and the third camera 300 in the mobile terminal includes, but is not limited to, two manners shown in fig. 9, such as that the first camera 100, the second camera 200, and the third camera 300 are sequentially arranged along the length direction of the mobile terminal, or that the first camera 100, the second camera 200, and the third camera 300 are sequentially arranged along the width direction of the mobile terminal.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an image acquisition program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting a background server and communicating data with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke an image acquisition program stored in the memory 1005 and perform the following operations:
when the triggering operation of the cameras is detected, starting the cameras, controlling the cameras to respectively pick up image information, and forming a camera preview interface;
when the photographing triggering operation is detected, generating a picture according to the camera preview interface;
and saving the picture.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
determining a target camera according to a preset zoom value;
and combining main preview image information and supplementary preview image information to form the camera preview interface, wherein the main preview image information is image information picked up by the target camera, and the supplementary preview image information is image information picked up by other cameras except the target camera.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
acquiring an area overlapped with the main preview image information in the supplementary preview image information;
and combining the area overlapped with the main preview image information in the supplementary preview image information into the main preview image information to form the camera preview interface.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of the pixel of the region overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and storing the image information picked up by each camera, and associating the picture with each image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and further perform the following operations:
when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
acquiring target image information corresponding to the editing parameters from each image information associated with the picture;
and generating edited target picture preview data according to the target image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
determining a zoom value of the adjusted picture according to the editing parameters;
acquiring a focal length where the zoom value is located, and taking a camera matched with the focal length as a target camera;
and taking the image information picked up by the target camera as the target image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and combining the target image information and other image information associated with the picture to generate edited target picture preview data.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and after the cutting determining operation is detected, cutting the target picture according to the target picture preview data and the editing parameters.
Referring to fig. 2, the present application provides a first embodiment of an image acquisition method, where the image acquisition method is applied to a terminal device with a camera, where the terminal device includes at least two cameras, and optionally, focal segments taken by the cameras are different, and the image acquisition method includes:
step S10, opening at least two cameras, controlling the at least two cameras to respectively pick up image information, and forming a camera preview interface;
the terminal device in this embodiment may be a mobile phone, a tablet, a camera, or the like. The terminal equipment is provided with a camera application, and a user can trigger the camera application to carry out shooting work.
When a user triggers the camera application of the terminal device, the terminal device starts at least two cameras; the at least two cameras respectively pick up image information, each camera transmits the picked-up image information to the image processing module connected with it, and each image processing module processes the image information separately.
Because the focal segments covered by the cameras differ, the acquired image information differs. Each camera transmits its acquired image information to a different processing module, and after processing the image information is stored in a different storage area. The same trigger operation is associated with all of the acquired image data.
It should be noted that the cameras of the terminal device in this embodiment include, but are not limited to, a telephoto camera, a wide-angle camera, and an ultra-wide-angle camera, where the focal segment covered by the telephoto camera is 3X to 30X, that of the wide-angle camera is 1X to 3X, and that of the ultra-wide-angle camera is 0.6X to 1X.
The terminal equipment is provided with a display interface, and after the camera application is triggered, image data collected by the cameras are displayed on the display interface in a preview mode.
It can be understood that, in this embodiment, because at least two cameras pick up image information, and the picked-up image information is obtained from different focal segments and different coordinates, the camera preview interface may be formed from a combination of multiple sets of the image information.
Optionally, in an embodiment, referring to fig. 3, the camera preview interface is formed in a manner including, but not limited to, one of the following:
s11, determining a target camera according to a preset zoom value;
and step S12, combining main preview image information and supplementary preview image information to form the camera preview interface, wherein the main preview image information is image information picked up by the target camera, and the supplementary preview image information is image information picked up by other cameras except the target camera.
That is, the main preview image information is determined according to the preset zoom value, and is then corrected with the image information of the other cameras so as to improve the display effect of the shot image.
Optionally, the preset zoom value may be the default zoom value of the terminal device, or a zoom value set by the user. When the user triggers the camera application, the terminal device controls each camera to start, and each camera picks up image information and transmits it back to its processing module. At this point, if the terminal device does not detect a user-set zoom value, the default zoom value of the terminal device is adopted as the preset zoom value; if the terminal device detects that the user has set a zoom value for the current camera, the user-set zoom value is adopted as the preset zoom value. It is understood that a zoom control 500 (shown in fig. 10) is provided on the terminal device, and the user can set the zoom value by triggering the zoom control 500.
When the camera application of the terminal device of this embodiment is triggered, the camera preview interface defaults to the default zoom value set by the system, and when it is detected that the user triggers the zoom control 500 to adjust the zoom value of the preview interface, the default zoom value is adjusted to the set zoom value. During focusing, since this embodiment uses a plurality of cameras to pick up image information, when the default zoom value is adjusted to the set zoom value the terminal device directly takes the image information picked up by the camera corresponding to the set zoom value as the main preview image information, takes the image information picked up by the other cameras as the supplementary preview image information, and combines them into an image. It does not need the approach of the related art, in which the camera corresponding to the default zoom value is turned off, the camera corresponding to the set zoom value is turned on, and only then is image information picked up. This embodiment thus saves the image pick-up time otherwise spent after the zoom value is adjusted, and to some extent avoids missing the moment because of a zoom adjustment.
It should be noted that, in this embodiment, the target camera is determined according to the preset zoom value, the target camera being the camera corresponding to the preset zoom value: if the preset zoom value is 1.0X, the corresponding target camera is the wide-angle camera; if the preset zoom value is 0.6X, the corresponding target camera is the ultra-wide-angle camera.
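The zoom-value-to-camera mapping can be sketched with the focal segments given above (ultra-wide 0.6X to 1X, wide 1X to 3X, telephoto 3X to 30X). The half-open intervals and the clamping at the extremes are assumptions, chosen so that 1.0X selects the wide-angle camera and 0.6X the ultra-wide-angle camera, matching the example in the text.

```python
# Hypothetical focal-segment table matching the ranges in the description.
FOCAL_SEGMENTS = [
    ("ultra_wide", 0.6, 1.0),
    ("wide", 1.0, 3.0),
    ("tele", 3.0, 30.0),
]

def select_target_camera(zoom):
    """Return the camera whose focal segment contains the preset zoom value."""
    for name, lo, hi in FOCAL_SEGMENTS:
        if lo <= zoom < hi:
            return name
    # Clamp to the nearest segment at the extremes (assumed behaviour).
    return "ultra_wide" if zoom < 0.6 else "tele"
```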
In this embodiment, after the target camera is determined according to the preset zoom value, the image information picked up by the target camera is used as the main preview image information, and the image information picked up by the other cameras is used as the supplementary preview image information; the supplementary preview image information is then used to correct the main preview image information, finally forming the preview interface.
Because a picture is formed by imaging the photographed object in equal proportion onto the photosensitive element of a camera, and each photosensitive element consists of different planar pixels, and because the cameras occupy different relative positions, a plurality of cameras shooting the same object capture a plurality of different viewing angles. If the data from these different viewing angles are merged into one picture, the stereoscopic effect of the photographed object is stronger and the visual impression is restored more faithfully; at the same time, because the focal segments image differently, the foreground and background of the merged picture are also greatly improved in definition.
The camera preview interface in this embodiment is formed by combining image information picked up by at least two cameras, and because at least two cameras contribute to the image, the camera preview interface in this embodiment has a good imaging effect.
Step S20, when the photographing triggering operation is detected, generating a picture according to the camera preview interface;
and step S30, storing the picture.
When the user triggers the photographing confirmation control, the terminal device determines that a photographing trigger operation has been detected, generates a picture from the image information currently displayed on the camera preview interface, saves the picture, and completes photographing.
Optionally, based on the above image obtaining manner, this embodiment takes a mobile terminal having three cameras as an example to explain the image obtaining principle of this embodiment:
referring to fig. 8, the mobile terminal includes a first camera 100, a second camera 200, and a third camera 300. The first camera 100, the second camera 200 and the third camera 300 are correspondingly connected with an image processing module 400. Optionally, focal lengths of the first camera 100, the second camera 200, and the third camera 300 are different, for example, the focal length of the first camera 100 is 3X to 30X, the focal length of the second camera 200 is 1X to 3X, and the focal length of the third camera 300 is 0.6X to 1X.
When the mobile terminal starts camera preview, the first camera 100, the second camera 200 and the third camera 300 all work to generate previews, and the image information collected by the first camera 100, the second camera 200 and the third camera 300 is respectively transmitted back to the corresponding image processing modules 400 for processing. The mobile terminal judges which of the three cameras the preset zoom value set by the upper layer matches; if the preset zoom value is 1X, the matched camera is the second camera 200. In this case, the image information collected by the second camera 200 is adopted as target image information (the main display picture), the images collected by the first camera 100 and the third camera 300 are adopted as auxiliary image information (auxiliary display), and the display picture of the preview interface is synthesized from these three streams of image data. When the user clicks the photographing control in the camera picture, a picture is generated from the image displayed in the preview interface and saved.
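The camera-matching step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the camera names and the focal segments (0.6X-1X ultra-wide, 1X-3X wide, 3X-30X telephoto, taken from the example values in this embodiment) are assumptions, and the first segment containing the zoom value wins at shared boundaries.

```python
# Illustrative sketch: map a preset zoom value to one of three cameras,
# and split the cameras into the main-preview source and the
# supplementary-preview sources. All names are hypothetical.

CAMERAS = [
    ("third_camera",  0.6, 1.0),   # ultra-wide-angle
    ("second_camera", 1.0, 3.0),   # wide-angle
    ("first_camera",  3.0, 30.0),  # telephoto
]

def match_target_camera(zoom: float) -> str:
    """Return the camera whose focal segment contains the zoom value
    (the first matching segment wins at shared boundaries)."""
    for name, lo, hi in CAMERAS:
        if lo <= zoom <= hi:
            return name
    raise ValueError(f"zoom value {zoom} outside supported range")

def split_preview_roles(zoom: float):
    """The target camera supplies the main preview image information;
    the remaining cameras supply the supplementary preview information."""
    target = match_target_camera(zoom)
    others = [name for name, _, _ in CAMERAS if name != target]
    return target, others
```

With a preset zoom value of 1X this returns the boundary camera of the first matching segment; a real implementation would define which camera owns shared boundary values.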
Optionally, when the picture is saved, the image information acquired by the three cameras is also saved and associated with the picture. In this way, the original image information collected by the three cameras is stored in association during the picture-saving process; based on this association, the original image data can be retrieved with the picture as the index, so that the picture can be edited based on the stored original image information.
In this embodiment, when the terminal device detects a camera trigger operation, the terminal device starts each camera, controls each camera to respectively pick up image information, generates a camera preview interface according to the image information picked up by each camera, and, when a photographing trigger operation is detected, generates and saves a picture according to the camera preview interface. Because the picture is formed by combining image information of different focal segments acquired simultaneously by a plurality of cameras, the foreground and background sharpness of the picture and the three-dimensional effect of the image in the picture are improved, and the imaging effect of the camera is good.
In addition, the picture is synthesized from the image information of different focal segments acquired by the at least two cameras. Compared with the exemplary technique of digitally zooming an image, this simplifies the image zooming processing flow and improves the efficiency of producing a clear picture, by more than 2 times relative to the exemplary efficiency.
Further, referring to fig. 4, the present application provides a second embodiment of an image obtaining method, and based on the first embodiment, the step of combining main preview image information and supplemental preview image information to form the camera preview interface includes:
step S121, acquiring an area overlapping with the main preview image information in the supplementary preview image information;
step S122, merging the region overlapping with the main preview image information in the supplemental preview image information into the main preview image information, and forming the camera preview interface.
At least two cameras pick up image information of the same object from different angles, so an overlapping area and a non-overlapping area necessarily exist between the pieces of image information. In this embodiment, merging the overlapping areas increases the depth information at the edge positions of the formed picture, which can qualitatively improve the clarity of the picture content. Moreover, based on multi-focal-segment fusion, the edge of the picture can be supplemented and corrected by the supplementary preview image information of other focal segments, so that, compared with a picture from a single camera, the edge of the shot picture is not distorted.
In this embodiment, the supplementary preview data is converted by coordinate conversion to the plane where the main preview data is located, so that the supplementary preview data is calibrated on one plane. In this way, without changing a pixel point, the image can be extended outward in four directions centered on that point.
The coordinate conversion takes the coordinates of the main preview image data as the central coordinates, and converts the supplementary preview image information to the central coordinates based on the relative relationship between the central coordinates and the coordinates of the supplementary preview image information, thereby completing the coordinate conversion.
Optionally, the step of merging the region of the supplemental preview image information that overlaps with the main preview image information into the main preview image information to form the camera preview interface includes:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of a pixel of an area overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
It should be noted that the position of the target camera is taken as the central coordinate, and the relative position parameter refers to the position of each other camera relative to the coordinates of the target camera. After the coordinates of the target camera and the relative position parameters are obtained, the coordinate position, in the target camera's coordinate system, of each pixel in the area of the supplementary preview image information that overlaps the main preview image information is calculated. Specifically, the coordinates of a pixel in its own camera's coordinate system are converted to the target camera's coordinate system based on the relative position parameter, so that each pixel point is transferred to the pixel plane of the main preview image information to complete the combination with the main preview image information; the combined image is then displayed in the camera preview interface.
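The coordinate conversion and merging of overlapping pixels described above can be sketched as follows. This is a deliberately simplified assumption: the relative position parameter is reduced to a pure 2D pixel offset and the blend is a naive average, whereas a real pipeline would also handle parallax, lens distortion, and proper image alignment.

```python
# Hypothetical sketch of converting overlapping supplementary-preview
# pixels into the target camera's pixel plane and merging them with the
# main preview. Pixel maps are dicts of {(x, y): value} for clarity.

def to_target_plane(pixel_xy, relative_offset):
    """Convert one pixel coordinate from a supplementary camera's plane
    to the target (main preview) camera's plane using the relative
    position parameter, here modelled as a simple (dx, dy) offset."""
    x, y = pixel_xy
    dx, dy = relative_offset
    return (x + dx, y + dy)

def merge_overlap(main, supplement, relative_offset):
    """Overlay the overlapping region of the supplement onto the main
    preview; pixels that land outside the main preview are ignored."""
    merged = dict(main)
    for xy, value in supplement.items():
        txy = to_target_plane(xy, relative_offset)
        if txy in merged:                          # overlapping region only
            merged[txy] = (merged[txy] + value) // 2  # naive blend
    return merged
```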
In the embodiment of the invention, multi-path fusion imaging increases the edge depth information of a single photo, which can qualitatively improve the clarity of the photo content. It also mitigates the problem of poor edge resolution, which would otherwise generate edge noise in the picture; through multi-pixel splicing and correction, the picture is indirectly turned into large pixels, improving the light sensitivity and color restoration of the camera.
If, during shooting with the terminal device, the user needs to first adjust the zoom value and then take the picture, the adjustment takes time, and the user may miss a momentary scene because of it.
Optionally, a third embodiment of the image obtaining method is provided in the present application. Based on the first and/or second embodiment, referring to fig. 5, after performing the step of saving the picture, the image obtaining method also performs:
and step S40, storing the image information picked up by each camera, and associating the picture with each image information.
That is, after the user triggers the photographing operation, the terminal device generates a picture from the camera preview interface, saves the picture, saves the image information picked up by each camera, and associates the picture with the image information picked up by each camera. In this way, the picture is associated with the multi-path image information data. When the terminal device performs zoom processing on the picture, the multi-path image information can be retrieved based on the association between the picture and the image information data, and the originally picked-up image information is used for the data processing, so that when the picture is zoomed, its sharpness can always remain the same as that of the original picture.
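One way to realize the save-and-associate step above is a small index file mapping the picture to the raw image information from each camera, so the originals can later be retrieved with the picture as the index. The file layout, naming scheme, and function names here are assumptions for illustration only.

```python
# Hypothetical sketch: save a picture, the original image information
# from each camera, and a JSON index that associates them.

import json
import os

def save_with_association(picture_path, picture_bytes, originals, root="."):
    """originals: {camera_name: raw_bytes}. Writes the picture, each
    original, and an index file mapping the picture to its originals."""
    with open(os.path.join(root, picture_path), "wb") as f:
        f.write(picture_bytes)
    index = {"picture": picture_path, "originals": {}}
    for cam, raw in originals.items():
        raw_path = f"{picture_path}.{cam}.raw"
        with open(os.path.join(root, raw_path), "wb") as f:
            f.write(raw)
        index["originals"][cam] = raw_path
    with open(os.path.join(root, picture_path + ".index.json"), "w") as f:
        json.dump(index, f)
    return index
```

Later, an editor can open `pic.jpg.index.json` to locate every original associated with `pic.jpg` instead of re-deriving data from the compressed picture.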
Based on the picture associated with the image information picked up by each camera, when the user edits the image, the image acquisition method of the embodiment of the application may perform the following processing on the picture:
referring to fig. 5 specifically, after the step of saving the image information picked up by each camera and associating the picture with each image information, the method further includes:
s50, when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
step S60, acquiring target image information corresponding to the editing parameters from each image information associated with the picture;
and step S70, generating edited target picture preview data according to the target image information.
The user can click the picture to edit it. When the terminal device detects an editing operation triggered by the user based on the picture, it obtains the editing parameters corresponding to the editing operation, then obtains, from the pieces of image information associated with the picture, the target image information corresponding to the editing parameters, and generates the edited target picture preview data using the target image information.
Optionally, this embodiment illustrates the principle of high image editing sharpness by taking as an example a mobile terminal that acquires images with three cameras to generate a picture:
when a user clicks a picture, the mobile terminal decompresses original image information (image information acquired by a first camera, image information acquired by a second camera and image information acquired by a third camera) acquired by three cameras related to the picture based on a trigger operation. When a user edits the picture (such as zooming the picture), editing parameters (such as zooming values) are determined based on editing operation, then target image information of which the editing parameters are matched with the image information in the three cameras is judged (such as a zooming section where the zooming values are determined based on the zooming values and the zooming sections where the cameras are located, and the image information corresponding to the zooming section where the zooming values are located is the target image information), and then the picture is displayed according to the target image information, wherein the displayed picture is the edited picture. The zoomed picture is also the original image collected by one of the cameras, and the definition of the original image is not damaged, so that the definition of the edited image is high.
Optionally, the editing operation includes at least one of enlarging, reducing, and cropping, and the editing parameters include one or more of an enlargement factor, a reduction factor, and a crop size. When a user enlarges a picture, the target image information corresponding to the enlargement factor is found among the pieces of image information associated with the picture, and that target image information is then used as the target picture preview data, so that the user can preview the display effect of the enlarged picture. When the user triggers a reducing or cropping operation, the terminal device processes the picture in the same manner, which is not repeated here.
Optionally, in this embodiment, after the picture is produced, whether it is edited for the first time or again, the corresponding originally acquired image data can be retrieved for processing, so that the image can be edited many times while sharpness is maintained, improving the sharpness of image editing. Optionally, this embodiment can improve the sharpness after editing by a factor of 2.
It can be understood that, in this embodiment, the focal segment of each camera is different, and the picked-up image information is likewise divided according to zoom value, which facilitates reasonably invoking the image information associated with the picture to generate the target picture preview data, so that the edited picture display effect is optimal. In an embodiment, referring to fig. 6, the step of obtaining the target image information corresponding to the editing parameters from the pieces of image information associated with the picture includes:
s61, determining a zoom value of the adjusted picture according to the editing parameters;
s62, acquiring a focal section where the zoom value is located, and taking a camera matched with the focal section as a target camera;
and step S63, taking the image information picked up by the target camera as the target image information.
That is, in this embodiment, when the editing parameters corresponding to the editing operation are obtained, the editing parameters are converted into a zoom value; a camera matching the focal segment where the zoom value lies is then determined according to the zoom value; the image information picked up by that camera is adopted as the target image information; and the target picture preview data is then generated according to the target image information.
Because the focal segments of the cameras differ, the image information picked up by each camera is different. The terminal device determines a target camera according to the zoom value to which the editing parameters adjust the picture, and uses the target image information corresponding to the target camera to generate the target picture preview data. Since the target image information is original information collected by a camera in the same focal segment as the adjusted zoom value, its pixels do not need to be changed, so the image presented by the adjusted target picture preview data is sharp. This avoids the loss of sharpness in the exemplary technique, which achieves the purpose of focusing by altering image pixels. In contrast, this embodiment improves the sharpness of the edited picture.
In addition, in this embodiment, the image information that best matches the editing parameters is determined by the zoom value, and the target picture preview data is then generated using that image information, so that the display effect of the edited picture is optimal.
It should be noted that each focal segment has an upper limit value and a lower limit value, and the upper limit of one focal segment is the same as the lower limit of the adjacent one. If the zoom value lies between the upper and lower limits of a segment without being exactly at either limit, the target picture preview data is generated in one of the following two ways:
firstly, generating edited target picture preview data based on the target image information;
that is, the edited target picture preview data is generated by directly adopting the target image information, so that the adjusted target picture preview data meets the focal length.
Secondly, the target image information is adjusted according to the zoom value, and edited target picture preview data are generated based on the adjusted target image information.
That is, in this embodiment, after the target image information is determined according to the focal length where the zoom value is located, the target image information is adjusted according to the zoom value, so that the adjusted target image information matches the zoom value, and the edited target picture preview data is generated according to the adjusted target image information, so that the target picture preview data meets the requirement of the zoom value.
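The two strategies above can be sketched as follows. This is a hedged illustration only: the image representation, the field names, and the crop-by-ratio model of "adjusting" the target image toward the exact zoom value are all assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the two strategies for a zoom value that lies
# inside a focal segment without being at its bounds.

def preview_direct(target_image, zoom):
    """Strategy 1: present the target camera's original image as-is."""
    return target_image

def preview_adjusted(target_image, zoom, segment_lower):
    """Strategy 2: digitally adjust the original by the ratio of the
    requested zoom to the segment's lower bound, so the preview matches
    the requested zoom value exactly (modelled here as a crop whose
    output dimensions shrink by the digital factor)."""
    factor = zoom / segment_lower
    w, h = target_image["size"]
    return {"size": (round(w / factor), round(h / factor)),
            "source": target_image, "digital_factor": factor}
```

Strategy 1 preserves every original pixel; strategy 2 trades a small digital adjustment for an exact match to the requested zoom value.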
When the editing operation is enlarging or reducing, the terminal device generates the enlarged or reduced picture preview data and displays the enlarged or reduced picture. At this point, the user may choose to capture the enlarged or reduced picture and save it to the memory, or may choose to quit picture editing, in which case the picture is restored to its state before editing.
When the editing operation is cropping, the terminal device generates the target picture preview data in the crop area. The user can then confirm the crop, and after the terminal device detects the crop confirmation operation, it crops the target picture according to the target picture preview data and the editing parameters. That is, the terminal device determines the crop size according to the editing parameters to form a crop area, and then forms the data in the crop area into the target picture.
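The cropping step can be sketched as follows, assuming (purely for illustration) that the preview data is a sparse pixel map and the crop area is an axis-aligned box; a real implementation would crop a contiguous image buffer.

```python
# Hypothetical sketch: once crop confirmation is detected, keep only the
# preview pixels inside the crop area, re-origined to (0, 0).

def crop_picture(preview_pixels, crop_box):
    """preview_pixels: {(x, y): value}; crop_box: (x0, y0, x1, y1),
    upper bounds exclusive. Returns the cropped target picture."""
    x0, y0, x1, y1 = crop_box
    return {(x - x0, y - y0): v
            for (x, y), v in preview_pixels.items()
            if x0 <= x < x1 and y0 <= y < y1}
```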
Further, the present application provides a fourth embodiment of the image obtaining method based on the third embodiment, and with reference to fig. 7, the step of generating edited target picture preview data according to the target image information includes:
and step S71, combining the target image information and other image information related to the picture to generate edited target picture preview data.
In this embodiment, when a user edits a generated picture and the target image information has been determined, the edited target picture preview data should present a sharp foreground and background and a stereoscopic image, so that its picture effect is consistent with the picture effect obtained during shooting. Therefore, when the edited target picture preview data is generated, the target image information is used as the main preview image information, the other image information associated with the picture is used as the supplementary preview image information, and the two are combined to form the target picture preview data.
It should be noted that, when the terminal device edits the picture, it edits from all the image information associated with the picture, and the target picture preview data can be generated after merging based on all that image information. The specific merging method is the same as the merging method for the camera preview interface during photographing, for which reference may be made to the second embodiment; it is not repeated here.
Optionally, the present application further provides an image obtaining method, where the method includes:
when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
acquiring target image information corresponding to the editing parameters according to at least two pieces of image information related to the pictures;
and generating edited target picture preview data according to the target image information.
This embodiment applies to editing operations after the picture has been produced. When a user performs an editing operation based on a picture stored in the mobile terminal, the corresponding editing parameters are obtained, the target image information is determined based on at least two pieces of image information associated with the picture, and the edited target picture preview data is then generated according to the target image information.
Optionally, the specific editing process and the obtaining manner of the target image information in this embodiment are the same as those in the above embodiments, and are not described herein again.
Optionally, the picture in this embodiment is associated, when generated, with the image information collected by at least two cameras: the photographing trigger operation of the picture is associated with the at least two pieces of correspondingly collected image information, and the at least two pieces of image information include the image information of the picture. If photographing is triggered through the camera interface, the trigger operation is associated with all the collected image information, so the picture formed based on the trigger operation is associated with all that image information. Since at least two cameras of the mobile terminal pick up information during acquisition of the picture, at least two pieces of image information of the picture are stored in the mobile terminal and associated with the picture; when the user edits the picture, it can be edited based on this image information, so that the edited image maintains maximum sharpness.
Furthermore, an embodiment of the present application also provides a computer-readable storage medium, on which an image acquisition program is stored, and the image acquisition program, when executed by a processor, implements the steps of the image acquisition method as described above.
The present application further provides a terminal device, the terminal device includes: memory, a processor and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method as described above.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as described in the above various possible embodiments.
An embodiment of the present application further provides a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device installed with the chip executes the method described in the foregoing various possible embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of a claim "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus in which the element is incorporated. Further, similarly named components, features, or elements in different embodiments of the application may have the same meaning or may have different meanings, the specific meaning of which should be determined by its interpretation in the specific embodiment or by further combination with the context of the specific embodiment.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence, or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition will occur only when a combination of elements, functions, steps or operations are inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not performed in the exact order shown and may be performed in other orders unless explicitly stated herein. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, in different orders, and may be performed alternately or partially with other steps or at least some of the sub-steps or stages of other steps.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system comprising the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description, and do not represent the advantages and disadvantages of the embodiments.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (17)

  1. An image acquisition method is characterized in that a terminal device comprises at least two cameras, and the image acquisition method comprises the following steps:
    opening at least two cameras, controlling the at least two cameras to pick up image information respectively, and forming a camera preview interface;
    and when the photographing triggering operation is detected, generating a picture according to the camera preview interface.
  2. The method for acquiring images according to claim 1, wherein the focal lengths captured by the cameras are different, and the step of forming the camera preview interface comprises:
    determining a target camera according to a preset zoom value;
    combining the main preview image information and the supplementary preview image information to form the camera preview interface;
    the main preview image information is the image information picked up by the target camera, and the supplementary preview image information is the image information picked up by other cameras except the target camera.
  3. The method for acquiring an image according to claim 2, wherein the step of combining the main preview image information and the supplemental preview image information to form the camera preview interface comprises:
    acquiring an area overlapped with the main preview image information in the supplementary preview image information;
    and combining the area overlapped with the main preview image information in the supplementary preview image information into the main preview image information to form the camera preview interface.
  4. The method of claim 3, wherein the step of combining the area of the supplemental preview image information that overlaps the main preview image information into the main preview image information to form the camera preview interface comprises:
    acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
    calculating the coordinate position of the pixel of the region overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
    and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
  5. The image acquisition method according to claim 2, wherein the target camera is a camera corresponding to the preset zoom value, and wherein the target camera includes at least one of a wide camera, a tele camera, and an ultra-wide camera.
  6. The method for acquiring an image according to any one of claims 1 to 5, wherein the step of generating a picture according to the camera preview interface is followed by further comprising:
    storing the image information picked up by the camera and associating the picture with the image information;
    after the step of saving the image information picked up by the camera and associating the picture with the image information, the method further comprises the following steps:
    when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
    acquiring target image information corresponding to the editing parameters from the image information associated with the picture;
    and generating edited target picture preview data according to the target image information.
  7. The image acquisition method according to claim 6, wherein the step of acquiring target image information corresponding to the editing parameters from the image information associated with the picture comprises:
    determining a zoom value of the adjusted picture according to the editing parameters;
    acquiring the focal length section in which the zoom value is located, and taking the camera matched with that focal length section as the target camera;
    and taking the image information captured by the target camera as the target image information.
  8. The image acquisition method according to claim 7, wherein the step of generating edited target picture preview data from the target image information includes:
    and adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
  9. The image acquisition method according to claim 6, wherein the step of generating edited target picture preview data from the target image information comprises:
    and combining the target image information and other image information associated with the picture to generate edited target picture preview data.
  10. The image acquisition method according to claim 9, wherein, when the editing operation is a zoom-in operation, the focal length section in which the zoom value of the adjusted picture is located is larger than the focal length section in which the current zoom value of the picture is located;
    and when the editing operation is a zoom-out operation, the focal length section in which the zoom value of the adjusted picture is located is smaller than the focal length section in which the current zoom value of the picture is located.
  11. The image acquisition method according to claim 9, wherein, when the editing operation is a cropping operation, after the step of generating edited target picture preview data according to the target image information, the method further comprises:
    after a crop confirmation operation is detected, cropping the target picture according to the target picture preview data and the editing parameters;
    wherein the editing parameters include at least one of a zoom-in magnification, a zoom-out magnification, and a crop size.
  12. An image acquisition method, characterized in that the image acquisition method comprises:
    when an editing operation on a picture is detected, acquiring editing parameters corresponding to the editing operation;
    acquiring target image information corresponding to the editing parameters from at least two pieces of image information associated with the picture;
    and generating edited target picture preview data according to the target image information.
  13. The image acquisition method according to claim 12, wherein the picture is generated in association with image information acquired by at least two cameras;
    and wherein the photographing trigger operation for the picture is associated with the at least two pieces of acquired image information, the at least two pieces of image information comprising the image information of the picture.
  14. The image acquisition method according to claim 12, wherein the step of acquiring target image information corresponding to the editing parameters from at least two pieces of image information associated with the picture comprises:
    determining a zoom value of the adjusted picture according to the editing parameters;
    acquiring the focal length section in which the zoom value is located, and taking the camera matched with that focal length section as the target camera;
    and taking the image information captured by the target camera as the target image information, wherein the at least two pieces of image information are acquired by cameras based on different focal length sections.
  15. The image acquisition method according to any one of claims 12 to 14, wherein at least two of the focal length sections each include at least two zoom values, and the step of generating edited target picture preview data according to the target image information comprises:
    after determining the target image information, adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
  16. A terminal device, comprising: a memory, a processor, and an image acquisition program stored in the memory and executable on the processor, wherein the image acquisition program, when executed by the processor, implements the steps of the image acquisition method according to any one of claims 1 to 15.
  17. A computer-readable storage medium, wherein an image acquisition program is stored thereon, and the image acquisition program, when executed by a processor, implements the steps of the image acquisition method according to any one of claims 1 to 15.
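The zoom-based camera selection that claims 7 and 14 describe (map the adjusted zoom value to a focal length section, take the camera matched with that section as the target camera, and use its saved image information for editing) can be sketched as follows. The section boundaries, camera names, and `saved_info` structure are illustrative assumptions, not values taken from the patent.

```python
# Illustrative sketch of the target-camera selection in claims 7 and 14.
# The focal length section boundaries below are assumed for demonstration;
# the patent does not specify concrete zoom ranges.
FOCAL_SECTIONS = {
    "ultra_wide": (0.5, 1.0),   # zoom values in [0.5, 1.0)
    "wide":       (1.0, 3.0),   # zoom values in [1.0, 3.0)
    "tele":       (3.0, 10.0),  # zoom values in [3.0, 10.0]
}

def select_target_camera(zoom_value: float) -> str:
    """Return the camera whose focal length section contains zoom_value."""
    for camera, (low, high) in FOCAL_SECTIONS.items():
        if low <= zoom_value < high or (camera == "tele" and zoom_value == high):
            return camera
    raise ValueError(f"zoom value {zoom_value} lies outside all focal length sections")

def target_image_info(zoom_value: float, saved_info: dict) -> bytes:
    """Pick the saved image information captured by the target camera.

    saved_info maps each camera name to the image information saved and
    associated with the picture at capture time (claim 6)."""
    return saved_info[select_target_camera(zoom_value)]

# A picture associated with the image information of all three cameras:
saved = {"ultra_wide": b"uw", "wide": b"w", "tele": b"t"}
print(select_target_camera(0.6))       # a zoom-out edit selects the ultra-wide camera
print(target_image_info(5.0, saved))   # a zoom-in edit selects the telephoto capture
```

Claim 9 would then merge the selected target image information with the other image information associated with the picture; that fusion step is omitted here.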
CN202180044452.8A 2020-07-10 2021-06-21 Image acquisition method, terminal device and computer-readable storage medium Pending CN115812312A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2020106549535 2020-07-10
CN202010654953.5A CN111787224B (en) 2020-07-10 2020-07-10 Image acquisition method, terminal device and computer-readable storage medium
PCT/CN2021/101320 WO2022007622A1 (en) 2020-07-10 2021-06-21 Image acquisition method, terminal device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115812312A true CN115812312A (en) 2023-03-17

Family ID: 72758941

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010654953.5A Active CN111787224B (en) 2020-07-10 2020-07-10 Image acquisition method, terminal device and computer-readable storage medium
CN202180044452.8A Pending CN115812312A (en) 2020-07-10 2021-06-21 Image acquisition method, terminal device and computer-readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010654953.5A Active CN111787224B (en) 2020-07-10 2020-07-10 Image acquisition method, terminal device and computer-readable storage medium

Country Status (2)

Country Link
CN (2) CN111787224B (en)
WO (1) WO2022007622A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787224B (en) * 2020-07-10 2022-07-12 深圳传音控股股份有限公司 Image acquisition method, terminal device and computer-readable storage medium
CN112887603B (en) * 2021-01-26 2023-01-24 维沃移动通信有限公司 Shooting preview method and device and electronic equipment
CN116051368B (en) * 2022-06-29 2023-10-20 荣耀终端有限公司 Image processing method and related device

Family Cites Families (34)

Publication number Priority date Publication date Assignee Title
JP4956988B2 (en) * 2005-12-19 2012-06-20 カシオ計算機株式会社 Imaging device
US9007508B2 (en) * 2012-03-29 2015-04-14 Sony Corporation Portable device, photographing method, and program for setting a target region and performing an image capturing operation when a target is detected in the target region
CN104168414A (en) * 2013-05-17 2014-11-26 光道视觉科技股份有限公司 Object image shooting and splicing method
KR102145542B1 (en) * 2014-08-14 2020-08-18 삼성전자주식회사 Image photographing apparatus, image photographing system for photographing using a plurality of image photographing apparatuses and methods for photographing image thereof
CN104349063B (en) * 2014-10-27 2018-05-15 东莞宇龙通信科技有限公司 A kind of method, apparatus and terminal for controlling camera shooting
WO2016119150A1 (en) * 2015-01-28 2016-08-04 宇龙计算机通信科技(深圳)有限公司 Photographing method of mobile terminal having multiple cameras and mobile terminal
CN104767937A (en) * 2015-03-27 2015-07-08 深圳市艾优尼科技有限公司 Photographing method
CN204721459U (en) * 2015-06-05 2015-10-21 深圳市星苑科技有限公司 A kind of device of zoom lens imaging
CN104967775B (en) * 2015-06-05 2017-12-01 深圳市星苑科技有限公司 A kind of device and method of zoom lens imaging
US10291842B2 (en) * 2015-06-23 2019-05-14 Samsung Electronics Co., Ltd. Digital photographing apparatus and method of operating the same
KR20170020069A (en) * 2015-08-13 2017-02-22 엘지전자 주식회사 Mobile terminal and image capturing method thereof
CN105847674B (en) * 2016-03-25 2019-06-07 维沃移动通信有限公司 A kind of preview image processing method and mobile terminal based on mobile terminal
CN105676563B (en) * 2016-03-31 2018-09-18 深圳市极酷威视科技有限公司 A kind of focusing method and camera of zoom camera
CN106131408A (en) * 2016-07-11 2016-11-16 深圳市金立通信设备有限公司 A kind of image processing method and terminal
CN106254780A (en) * 2016-08-31 2016-12-21 宇龙计算机通信科技(深圳)有限公司 A kind of dual camera camera control method, photographing control device and terminal
CN106385534A (en) * 2016-09-06 2017-02-08 努比亚技术有限公司 Focusing method and terminal
CN106791376B (en) * 2016-11-29 2019-09-13 Oppo广东移动通信有限公司 Imaging device, control method, control device and electronic device
CN106791377B (en) * 2016-11-29 2019-09-27 Oppo广东移动通信有限公司 Control method, control device and electronic device
CN107360364B (en) * 2017-06-28 2019-10-18 维沃移动通信有限公司 A kind of image capturing method, terminal and computer readable storage medium
CN111885294B (en) * 2018-03-26 2022-04-22 华为技术有限公司 Shooting method, device and equipment
CN108769485A (en) * 2018-06-27 2018-11-06 北京小米移动软件有限公司 Electronic equipment
CN110830756B (en) * 2018-08-07 2022-05-17 华为技术有限公司 Monitoring method and device
CN110248081A (en) * 2018-10-12 2019-09-17 华为技术有限公司 Image capture method and electronic equipment
CN109436344B (en) * 2018-11-16 2022-04-22 航宇救生装备有限公司 Airborne photography pod based on parachute ballistic trajectory
CN109361794B (en) * 2018-11-19 2021-04-20 Oppo广东移动通信有限公司 Zoom control method and device of mobile terminal, storage medium and mobile terminal
CN109194881A (en) * 2018-11-29 2019-01-11 珠海格力电器股份有限公司 Image processing method, system and terminal
CN109639997B (en) * 2018-12-20 2020-08-21 Oppo广东移动通信有限公司 Image processing method, electronic device, and medium
CN110072058B (en) * 2019-05-28 2021-05-25 珠海格力电器股份有限公司 Image shooting device and method and terminal
CN110312075B (en) * 2019-06-28 2021-02-19 Oppo广东移动通信有限公司 Device imaging method and device, storage medium and electronic device
CN110248101B (en) * 2019-07-19 2021-07-09 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and computer readable storage medium
CN111292278B (en) * 2019-07-30 2023-04-07 展讯通信(上海)有限公司 Image fusion method and device, storage medium and terminal
CN110536057B (en) * 2019-08-30 2021-06-08 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN111654629B (en) * 2020-06-11 2022-06-24 展讯通信(上海)有限公司 Camera switching method and device, electronic equipment and readable storage medium
CN111787224B (en) * 2020-07-10 2022-07-12 深圳传音控股股份有限公司 Image acquisition method, terminal device and computer-readable storage medium

Also Published As

Publication number Publication date
CN111787224B (en) 2022-07-12
WO2022007622A1 (en) 2022-01-13
CN111787224A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN111294517B (en) Image processing method and mobile terminal
US10311649B2 (en) Systems and method for performing depth based image editing
CN115812312A (en) Image acquisition method, terminal device and computer-readable storage medium
WO2021073331A1 (en) Zoom blurred image acquiring method and device based on terminal device
EP2981061A1 (en) Method and apparatus for displaying self-taken images
CN113141450B (en) Shooting method, shooting device, electronic equipment and medium
WO2018076460A1 (en) Photographing method for terminal, and terminal
JP2010524279A (en) Distance map generation type multi-lens camera
JP4730478B2 (en) IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN112887617B (en) Shooting method and device and electronic equipment
WO2022111330A1 (en) Image stitching method and apparatus for multi-camera device, storage medium, and terminal
CN113727001B (en) Shooting method and device and electronic equipment
CN112911059B (en) Photographing method and device, electronic equipment and readable storage medium
CN108810326B (en) Photographing method and device and mobile terminal
CN112839166B (en) Shooting method and device and electronic equipment
CN112532875B (en) Terminal device, image processing method and device thereof, and storage medium
CN112887624B (en) Shooting method and device and electronic equipment
CN114071009A (en) Shooting method and equipment
RU2792413C1 (en) Image processing method and mobile terminal
JP2020009099A (en) Image processing device, image processing method, and program
CN112640430A (en) Imaging element, imaging device, image data processing method, and program
CN114071010B (en) Shooting method and equipment
CN109639983B (en) Photographing method, photographing device, terminal and computer-readable storage medium
CN117528250A (en) Multimedia file processing method, multimedia file processing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination