CN111787224B - Image acquisition method, terminal device and computer-readable storage medium - Google Patents


Info

Publication number
CN111787224B
CN111787224B CN202010654953.5A
Authority
CN
China
Prior art keywords
image information
camera
picture
target
preview
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010654953.5A
Other languages
Chinese (zh)
Other versions
CN111787224A (en)
Inventor
赵紫辉
代文慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd filed Critical Shenzhen Transsion Holdings Co Ltd
Priority to CN202010654953.5A priority Critical patent/CN111787224B/en
Publication of CN111787224A publication Critical patent/CN111787224A/en
Priority to PCT/CN2021/101320 priority patent/WO2022007622A1/en
Priority to CN202180044452.8A priority patent/CN115812312A/en
Application granted granted Critical
Publication of CN111787224B publication Critical patent/CN111787224B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image acquisition method applied to a terminal device with cameras, where the terminal device includes at least two cameras and each camera captures a different focal segment. The method includes the following steps: when a trigger operation on the cameras is detected, starting each camera, controlling each camera to pick up image information separately, and forming a camera preview interface; when a photographing trigger operation is detected, generating a picture from the camera preview interface; and saving the picture. The application also discloses a terminal device and a computer-readable storage medium. Because a picture taken by the terminal device of this application is formed by combining image information of different focal segments collected simultaneously by multiple cameras, the foreground and background clarity of the picture and the stereoscopic effect of the image in the picture are improved, so the imaging effect is good.

Description

Image acquisition method, terminal device and computer-readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image acquisition method, a terminal device, and a computer-readable storage medium.
Background
With the development of science and technology, the shooting functions of terminal devices have become more and more advanced. For example, terminal devices can now capture images in modes such as ultra-wide-angle, wide-angle, and telephoto; the ultra-wide-angle and wide-angle modes enable large-field-of-view shooting, and together with the telephoto mode they are suitable for different shooting scenarios.
Terminal devices on the market that support ultra-wide-angle, wide-angle, and telephoto shooting are generally equipped with at least three cameras, for example an ultra-wide-angle camera, a wide-angle camera, and a telephoto camera: ultra-wide-angle images are taken with the ultra-wide-angle camera, wide-angle images with the wide-angle camera, and telephoto images with the telephoto camera. In a typical shooting process, when the user opens the camera application, a camera preview interface is formed from preview data collected by a default camera; the user then sets a zoom value via the preview interface, the application identifies the camera corresponding to the set zoom value and starts that camera to pick up image data, the image data is displayed in the preview interface, and a picture is obtained from that camera after the user presses the shutter.
However, such a terminal device still forms each image with a single camera, so the imaging effect is not good.
The above is only for the purpose of assisting understanding of the technical solutions of the present application, and does not represent an admission that the above is prior art.
Disclosure of Invention
The application mainly aims to provide an image acquisition method, a terminal device and a computer readable storage medium, and aims to solve the technical problem that the shooting and imaging effect of the terminal device is poor.
In order to achieve the above object, the present application provides an image acquisition method applied to a terminal device with cameras, where the terminal device includes at least two cameras and each camera captures a different focal segment, and the image acquisition method includes the following steps:
when the triggering operation of the cameras is detected, starting the cameras, controlling the cameras to respectively pick up image information, and forming a camera preview interface;
when the photographing triggering operation is detected, generating a picture according to the camera preview interface;
and saving the picture.
Optionally, the step of forming a camera preview interface includes:
determining a target camera according to a preset zoom value;
and combining main preview image information and supplementary preview image information to form the camera preview interface, wherein the main preview image information is image information picked up by the target camera, and the supplementary preview image information is image information picked up by other cameras except the target camera.
Optionally, the step of combining the main preview image information and the supplemental preview image information to form the camera preview interface includes:
acquiring an area overlapped with the main preview image information in the supplementary preview image information;
and combining the area overlapped with the main preview image information in the supplementary preview image information into the main preview image information to form the camera preview interface.
Optionally, the step of merging an area overlapping with the main preview image information in the supplemental preview image information into the main preview image information to form the camera preview interface includes:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of the pixel of the region overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
Optionally, the preset zoom value is a default zoom value of the terminal device, or the preset zoom value is a set zoom value of a user.
Optionally, while the step of saving the picture is executed, the method further includes:
and storing the image information picked up by each camera, and associating the picture with each image information.
Optionally, after the step of saving the image information picked up by each camera and associating the picture with each image information, the method further includes:
when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
acquiring target image information corresponding to the editing parameters from each image information associated with the picture;
and generating edited target picture preview data according to the target image information.
Optionally, the step of acquiring target image information corresponding to the editing parameter from each piece of image information associated with the picture includes:
determining a zoom value of the adjusted picture according to the editing parameters;
acquiring a focal section where the zoom value is located, and taking a camera matched with the focal section as a target camera;
and taking the image information picked up by the target camera as the target image information.
Optionally, the step of generating edited target picture preview data according to the target image information includes:
and adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
Optionally, the step of generating edited target picture preview data according to the target image information includes:
and combining the target image information and other image information associated with the picture to generate edited target picture preview data.
Optionally, the editing operation comprises at least one of zooming in, zooming out, and cropping.
Optionally, when the editing operation is a zoom-in operation, the focal segment in which the adjusted picture's zoom value lies is longer than the focal segment in which the picture's current zoom value lies; when the editing operation is a zoom-out operation, the focal segment in which the adjusted picture's zoom value lies is shorter than the focal segment in which the picture's current zoom value lies.
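As a hedged illustration of edit-time camera reselection, the sketch below maps an adjusted zoom value back to a camera; the focal segments follow the 0.6X–1X / 1X–3X / 3X–30X example used elsewhere in this application, while the function names, the zoom-change `factor`, and the boundary handling are assumptions of this sketch:

```python
# Example focal segments: ultra-wide 0.6X-1X, wide 1X-3X, tele 3X-30X.
FOCAL_SEGMENTS = [
    ("ultra_wide", 0.6, 1.0),
    ("wide", 1.0, 3.0),
    ("tele", 3.0, 30.0),
]

def camera_for_zoom(zoom: float) -> str:
    """Return the camera whose focal segment contains the zoom value."""
    for name, lo, hi in FOCAL_SEGMENTS:
        if lo <= zoom < hi:
            return name
    return "tele" if zoom >= 30.0 else "ultra_wide"

def target_camera_after_edit(current_zoom: float, edit: str, factor: float = 2.0) -> str:
    """Zooming in raises the zoom value (possibly into a longer focal segment);
    zooming out lowers it. The image info from the returned camera then becomes
    the target image information for regenerating the edited preview."""
    new_zoom = current_zoom * factor if edit == "zoom_in" else current_zoom / factor
    return camera_for_zoom(new_zoom)
```

For example, zooming in from 2.0X by a factor of 2 moves the zoom value to 4.0X, which lies in the telephoto segment, so the telephoto camera's stored image information would be used for the edited preview.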
Optionally, when the editing operation is cropping, after the step of generating edited target picture preview data according to the target image information, the method further includes:
and after the cutting determining operation is detected, cutting the target picture according to the target picture preview data and the editing parameters.
The present application further provides a terminal device, including: a memory, a processor and an image acquisition program stored on the memory and executable on the processor, the image acquisition program, when executed by the processor, implementing the steps of the image acquisition method as described above.
Optionally, the processor includes at least two image processing modules, and each image processing module is connected to one camera.
Furthermore, the present application also provides a computer-readable storage medium having stored thereon an image acquisition program which, when executed by a processor, implements the steps of the image acquisition method as described above.
According to the image acquisition method, the terminal device, and the computer-readable storage medium provided by the embodiments of the present application, when the terminal device detects a camera trigger operation, it starts each camera, controls each camera to pick up image information separately, and generates a camera preview interface from the image information picked up by each camera; when a photographing trigger operation is detected, it generates and saves a picture from the camera preview interface. Because the picture is formed by combining image information of different focal segments collected simultaneously by multiple cameras, the foreground and background clarity of the picture and the stereoscopic effect of the image in the picture are improved, and the imaging effect of the camera is good.
Drawings
FIG. 1 is a schematic structural diagram of a terminal in the hardware operating environment according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a first embodiment of the image acquisition method of the present application;
FIG. 3 is a detailed flowchart of step S10 in FIG. 2;
FIG. 4 is a detailed flowchart of step S12 in a second embodiment of the image acquisition method of the present application;
FIG. 5 is a schematic flowchart of a third embodiment of the image acquisition method of the present application;
FIG. 6 is a detailed flowchart of step S60 in FIG. 5;
FIG. 7 is a schematic flowchart of a fourth embodiment of the image acquisition method of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The main solution of the embodiment of the application is as follows: when the triggering operation of the cameras is detected, starting the cameras, controlling the cameras to pick up image information respectively, and forming a camera preview interface; when the photographing triggering operation is detected, generating a picture according to the camera preview interface; and saving the picture.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a terminal device in a hardware operating environment according to an embodiment of the present application.
The terminal device may be a PC, or a terminal device with a shooting function such as a smartphone or a tablet computer.
As shown in fig. 1, the terminal device may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, where the communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory); the memory 1005 may alternatively be a storage device separate from the processor 1001.
Further, the terminal device includes at least two cameras, and each camera captures a different focal segment. For example, when the terminal device includes three cameras, they are respectively a telephoto camera, a wide-angle camera, and an ultra-wide-angle camera, where the focal segment captured by the telephoto camera is 3X to 30X, the focal segment captured by the wide-angle camera is 1X to 3X, and the focal segment captured by the ultra-wide-angle camera is 0.6X to 1X. All cameras are connected to the processor.
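A minimal Python sketch of how a preset zoom value could select the target camera among these focal segments follows; the camera names and the boundary handling are assumptions of this sketch, not part of the patent:

```python
# Focal segments from the example above: ultra-wide 0.6X-1X, wide 1X-3X, tele 3X-30X.
CAMERAS = [
    ("ultra_wide", 0.6, 1.0),
    ("wide", 1.0, 3.0),
    ("tele", 3.0, 30.0),
]

def select_target_camera(preset_zoom: float) -> str:
    """Pick the camera whose focal segment contains the preset zoom value.

    Boundary values are assigned to the longer segment (e.g. 1.0X -> wide),
    matching the 1.0X -> wide-angle example given later in the description.
    """
    for name, lo, hi in CAMERAS:
        if lo <= preset_zoom < hi:
            return name
    # Clamp out-of-range zoom values to the nearest camera.
    return "ultra_wide" if preset_zoom < 0.6 else "tele"
```

The selected camera's image information then serves as the main preview, while the other cameras' image information serves as the supplementary preview.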
Optionally, the processor includes at least two image processing modules, each connected to one camera. After a camera collects image information at its front end, it transmits the image information to the image processing module connected to it; that module processes the information into image information for the camera's focal segment and stores it in the memory. Because the image information collected by each camera is processed by a separate image processing module, the original image data collected by that camera can be saved in the memory; when at least two cameras collect images simultaneously, their image data can also be processed in parallel, and the data collected simultaneously by the cameras can then be merged to form an image based on different focal segments or different viewing angles, improving the shooting effect.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an image acquisition program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke an image acquisition program stored in the memory 1005 and perform the following operations:
when the triggering operation of the cameras is detected, starting the cameras, controlling the cameras to pick up image information respectively, and forming a camera preview interface;
when the photographing triggering operation is detected, generating a picture according to the camera preview interface;
and saving the picture.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
determining a target camera according to a preset zoom value;
and combining main preview image information and supplementary preview image information to form the camera preview interface, wherein the main preview image information is image information picked up by the target camera, and the supplementary preview image information is image information picked up by other cameras except the target camera.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
acquiring an area overlapped with the main preview image information in the supplementary preview image information;
and combining the area overlapped with the main preview image information in the supplementary preview image information into the main preview image information to form the camera preview interface.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of a pixel of an area overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and storing the image information picked up by each camera, and associating the picture with each image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
acquiring target image information corresponding to the editing parameters from each image information associated with the picture;
and generating edited target picture preview data according to the target image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
determining a zoom value of the adjusted picture according to the editing parameters;
acquiring a focal section where the zoom value is located, and taking a camera matched with the focal section as a target camera;
and taking the image information picked up by the target camera as the target image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and combining and generating edited target picture preview data according to the target image information and other image information associated with the picture.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and after the cutting determining operation is detected, cutting the target picture according to the target picture preview data and the editing parameters.
Referring to fig. 2, the present application provides a first embodiment of an image acquisition method, where the image acquisition method is applied to a terminal device with a camera, the terminal device includes at least two cameras, and focal segments captured by the cameras are different, and the image acquisition method includes:
step S10, when detecting the trigger operation of the camera, opening each camera, controlling each camera to pick up image information respectively, and forming a camera preview interface;
the terminal device in this embodiment may be a mobile phone, a tablet, a camera, or the like. The terminal equipment is provided with a camera application, and a user can trigger the camera application to carry out shooting work.
When the user triggers the camera application of the terminal device, the terminal device starts each camera and controls each camera to pick up image information separately. Because the focal segments captured by the cameras are different, the acquired image information differs; the cameras transmit the acquired image information to different processing modules, and the image information is stored in different storage areas after being processed by those modules. The image data acquired under the same trigger operation are associated with one another.
It should be noted that the cameras of the terminal device in this embodiment include, but are not limited to, a telephoto camera, a wide-angle camera, and an ultra-wide-angle camera, where the focal segment captured by the telephoto camera is 3X to 30X, the focal segment captured by the wide-angle camera is 1X to 3X, and the focal segment captured by the ultra-wide-angle camera is 0.6X to 1X.
The terminal equipment is provided with a display interface, and after the camera application is triggered, image data collected by the cameras are displayed on the display interface in a preview mode.
It can be understood that, based on the embodiment that at least two cameras simultaneously pick up image information, and the picked-up image information is obtained based on different focal lengths and different coordinates, the camera preview interface may be formed based on a combination of multiple sets of the image information.
Specifically, in an embodiment, referring to fig. 3, the camera preview interface is formed in a manner including, but not limited to, one of the following:
step S11, determining a target camera according to a preset zoom value;
step S12, combining main preview image information and supplemental preview image information to form the camera preview interface, where the main preview image information is image information picked up by the target camera, and the supplemental preview image information is image information picked up by other cameras except the target camera.
That is, the main preview image information is determined according to the preset zoom value, and is then corrected using the image information from the other cameras, improving the display effect of the captured image.
Specifically, the preset zoom value may be the terminal device's default zoom value, or a zoom value set by the user. When the user triggers the camera application, the terminal device controls each camera to start, and each camera picks up image information and transmits it back to its processing module. At this time, if the terminal device does not detect a user-set zoom value, the terminal device's default zoom value is adopted as the preset zoom value; if the terminal device detects that the user has set a zoom value for the current camera, the user's set zoom value is adopted as the preset zoom value. It can be understood that a zoom control is provided on the terminal device, and the user can set a zoom value by triggering the zoom control.
When the camera application of the terminal device is triggered, the camera preview interface defaults to the system's default zoom value; when it is detected that the user triggers the zoom control to adjust the zoom value of the preview interface, the default zoom value is adjusted to the set zoom value. During focusing, because this embodiment uses multiple cameras to pick up image information simultaneously, when the default zoom value is adjusted to the set zoom value the terminal device directly takes the image information picked up by the camera corresponding to the set zoom value as the main preview image information and the image information picked up by the other cameras as the supplementary preview image information, and combines them for imaging — without the prior-art approach of closing the camera corresponding to the default zoom value, opening the camera corresponding to the set zoom value, and only then picking up image information. This embodiment thus eliminates the image pick-up delay after the zoom value is adjusted, and to some extent avoids missing a fleeting moment while adjusting the zoom value.
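The role re-assignment described above — switching the main preview source without closing and reopening cameras — could be sketched as follows. The `Camera` class, its always-streaming model, and the method names are illustrative assumptions, not APIs from the patent:

```python
class Camera:
    """Illustrative stand-in for an always-streaming camera."""
    def __init__(self, name):
        self.name = name
        self.streaming = False

    def start(self):
        self.streaming = True

    def latest_frame(self):
        return f"frame-from-{self.name}"  # placeholder for real image data

class Preview:
    def __init__(self, cameras):
        self.cameras = cameras
        for cam in cameras.values():
            cam.start()  # all cameras stream from the moment the app opens
        self.main = None

    def set_zoom(self, camera_name):
        # Only the main/supplementary roles change: no camera is stopped or
        # restarted, so there is no pick-up delay after adjusting the zoom value.
        self.main = self.cameras[camera_name]

    def frame(self):
        main = self.main.latest_frame()
        supplementary = [c.latest_frame() for c in self.cameras.values() if c is not self.main]
        return main, supplementary
```

Because every camera keeps streaming, a zoom change is just a pointer swap, which is how the embodiment avoids the switching latency of the single-camera approach.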
It should be noted that, in this embodiment, a target camera is determined according to the preset zoom value, where the target camera is a camera corresponding to the preset zoom value, and if the preset zoom value is 1.0X, the corresponding target camera is a wide-angle camera; if the preset zoom value is 0.6X, the corresponding target camera is an ultra-wide-angle camera.
In this embodiment, after a target camera is determined according to a preset zoom value, image information picked up by the target camera is used as main preview image information, image information picked up by other cameras except the target camera is used as supplementary preview image information, and then the main preview image information is corrected by using the supplementary preview image information, so as to finally form a preview interface.
Because a picture is formed by an object being imaged in equal proportion onto a camera's photosensitive element, and each photosensitive element consists of planar pixels, the position coordinates of the cameras are offset relative to one another, so when multiple cameras shoot the same object simultaneously there are multiple different viewing angles. If the data from these different viewing angles are merged into one picture, the stereoscopic effect of the photographed object is stronger and the viewing-angle impression is reproduced more faithfully; at the same time, because the different focal segments image differently, the clarity of the foreground and background of the merged picture is also greatly improved.
Because the camera preview interface in this embodiment is formed by combining the image information picked up by multiple cameras, multiple cameras contribute to imaging, so the preview interface has a good imaging effect.
Step S20, when the photographing triggering operation is detected, generating a picture according to the camera preview interface;
and step S30, saving the picture.
A photographing-confirmation control is provided on the display interface of the terminal device. When the user triggers this control, the terminal device determines that a photographing trigger operation has been detected, generates a picture from the image information currently displayed in the camera preview interface, saves the picture, and completes photographing.
In this embodiment, when the terminal device detects a camera trigger operation, it starts each camera, controls each camera to pick up image information separately, and generates a camera preview interface from the image information picked up by each camera; when a photographing trigger operation is detected, it generates and saves a picture from the camera preview interface. Because the picture is formed by combining image information of different focal segments collected simultaneously by multiple cameras, the foreground and background clarity of the picture and the stereoscopic effect of the image in the picture are improved, so the imaging effect of the camera is good.
Further, referring to fig. 4, the present application provides a second embodiment of an image obtaining method, and based on the first embodiment, the step of combining the main preview image information and the supplemental preview image information to form the camera preview interface includes:
step S121, acquiring an area overlapping with the main preview image information in the supplemental preview image information;
step S122, merging the region overlapping with the main preview image information in the supplemental preview image information into the main preview image information, and forming the camera preview interface.
Because at least two cameras pick up image information of the same object from different angles, there are necessarily overlapping and non-overlapping areas between the pieces of image information. In this embodiment, merging the overlapping areas increases the depth information at the edge positions of the formed picture, which qualitatively improves the clarity of the picture content. Moreover, based on multi-focal-segment fusion, the edges of the picture can be supplemented and corrected by the supplementary preview image information from other focal segments, so that, compared with a picture from a single camera, the edges of the captured picture are not distorted.
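As a concrete, hypothetical illustration of step S121, once both views are expressed in a common coordinate frame the overlapping area can be found as the intersection of their image rectangles; the rectangle representation and function name below are assumptions of this sketch:

```python
def overlap_region(main_rect, supp_rect):
    """Intersect two axis-aligned rectangles given as (x, y, width, height).

    Returns the overlapping rectangle, or None if the views do not overlap.
    Both rectangles are assumed to already be in the same coordinate frame.
    """
    mx, my, mw, mh = main_rect
    sx, sy, sw, sh = supp_rect
    left, top = max(mx, sx), max(my, sy)
    right, bottom = min(mx + mw, sx + sw), min(my + mh, sy + sh)
    if right <= left or bottom <= top:
        return None  # non-overlapping area only
    return (left, top, right - left, bottom - top)
```

Only the pixels inside this intersection are merged into the main preview image information; the non-overlapping remainder of the supplementary view is left out.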
In this embodiment, the supplemental preview data is converted, by coordinate conversion, onto the plane where the main preview data is located, so that the supplemental preview data is calibrated on a single plane. In this way, without changing a pixel point, the image can be extended outward along four axes around that point.
The coordinate conversion takes the coordinates of the main preview image data as the central coordinates and converts the supplemental preview information to those central coordinates, based on the relative relationship between the central coordinates and the coordinates of the supplemental preview image information, thereby completing the coordinate conversion.
Specifically, the step of combining the region overlapping with the main preview image information in the supplemental preview image information into the main preview image information to form the camera preview interface includes:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of the pixel of the region overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
It should be noted that the position of the target camera serves as the central coordinate, and the relative position parameter refers to the position of each of the other cameras relative to the coordinates of the target camera. After the coordinates of the target camera and the relative position parameters are acquired, the coordinate position, in the target camera's coordinate system, of each pixel of the area of the supplemental preview image information that overlaps the main preview image information is calculated. Specifically, the coordinates of each pixel in the corresponding camera are converted to the coordinates of the target camera based on the relative position parameters, so that each pixel point is converted onto the pixel plane of the main preview image information to complete the combination with the main preview image information, and the combined image is then displayed in the camera preview interface.
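Steps S31–S33 can be sketched as below. This is a deliberately simplified assumption: the relative position parameter is reduced to a pure pixel translation `(dx, dy)` (a real device would use a calibrated projective transform), and overlapping pixels are blended by averaging. All names are illustrative.

```python
def to_main_plane(pixel, offset):
    """Map a pixel coordinate from a supplemental camera onto the target camera's
    pixel plane using the cameras' relative position (a translation here)."""
    (x, y), (dx, dy) = pixel, offset
    return (x + dx, y + dy)

def merge_overlap(main_pixels, supp_pixels, offset):
    """Convert each supplemental pixel and merge those landing inside the main plane.

    Pixels are modeled as a {(x, y): value} dict; overlapping positions are blended.
    """
    merged = dict(main_pixels)
    for pos, value in supp_pixels.items():
        mx, my = to_main_plane(pos, offset)
        if (mx, my) in merged:  # overlapping region only
            merged[(mx, my)] = (merged[(mx, my)] + value) // 2  # simple blend
    return merged

merged = merge_overlap({(0, 0): 10, (1, 0): 20}, {(0, 0): 30}, offset=(1, 0))
```

Here the supplemental pixel at (0, 0) maps to (1, 0) on the main plane and is blended with the main pixel there, while non-overlapping main pixels pass through unchanged.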
In the embodiment of the invention, multi-path fusion imaging increases the edge depth information of a single photo, so the clarity of the photo content can be qualitatively improved, and the problem of poor edge resolution, which otherwise produces edge noise in the picture, is mitigated. Through multi-pixel stitching and correction, the picture is in effect composed of larger pixels, which improves the camera's light sensitivity and color reproduction.
If, while shooting with the terminal device, the user must first adjust the zoom value and then take the picture, the adjustment takes time, and the user may miss a fleeting scene because of it.
Specifically, in a third embodiment of the image acquisition method provided by the present application (referring to fig. 5, and based on the first and/or second embodiment), while performing the step of saving the picture the method also performs:
step S40, saving the image information picked up by each camera and associating the picture with each image information.
That is, after the user triggers the photographing operation, the terminal device generates a picture according to the camera preview interface, saves the picture, saves the image information picked up by each camera, and associates the picture with that image information. The picture is thus associated with the multiple paths of image information data. When the terminal device performs zoom processing on the picture, it can call up the multiple paths of image information based on this association and process the originally picked-up image information, so that the sharpness of the zoomed picture always remains the same as that of the original picture.
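The association between a saved picture and the per-camera raw frames can be modeled as a simple keyed store. This is an illustrative sketch under assumed names (`save_capture`, the `store` layout); the patent does not specify a storage format.

```python
def save_capture(store, picture, frames):
    """Save the composed picture and keep the raw per-camera frames linked to it.

    `frames` maps each camera name to the image information it picked up; the
    association is recorded under the same picture id so later edits can
    retrieve the original data instead of re-sampling the composed picture.
    """
    pic_id = len(store["pictures"])
    store["pictures"][pic_id] = picture
    store["raw_frames"][pic_id] = frames  # association: picture id -> raw frames
    return pic_id

store = {"pictures": {}, "raw_frames": {}}
pid = save_capture(store, picture="composed.jpg",
                   frames={"wide": "wide_raw", "tele": "tele_raw"})
```

When the picture is later zoomed or cropped, the editor looks up `store["raw_frames"][pid]` and works from the original frames, which is what preserves sharpness.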
Based on the picture associated with the image information picked up by each camera, when the user edits the image, the image obtaining method of the embodiment of the application may perform the following processing on the picture:
referring to fig. 5 specifically, after the step of saving the image information picked up by each camera and associating the picture with each image information, the method further includes:
step S50, when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
step S60 of acquiring target image information corresponding to the editing parameter from each piece of image information associated with the picture;
and step S70, generating edited target picture preview data according to the target image information.
The user may click the picture to edit it. When the terminal device detects an editing operation triggered on the picture, it acquires the editing parameters corresponding to the editing operation, then acquires the target image information corresponding to those editing parameters from the pieces of image information associated with the picture, and generates the edited target picture preview data from the target image information.
The editing operation comprises at least one of enlargement, reduction and cropping, and the editing parameters comprise one or more of an enlargement factor, a reduction factor and a cropping size. When the user performs an enlargement edit on a picture, the target image information corresponding to the enlargement factor is found among the pieces of image information associated with the picture, and that target image information is used as the target picture preview data so that the user can preview the display effect of the enlarged picture. When the user triggers a reduction operation or a cropping operation, the terminal device processes the picture in the same manner as described above, which is not repeated here.
It will be understood that, in this embodiment, each camera has a different focal segment, and the picked-up image information is likewise divided according to zoom value. This facilitates calling up the appropriate image information associated with the picture to generate the target picture preview data, so that the display effect of the edited picture is optimal. In one embodiment, referring to fig. 6, the step of acquiring the target image information corresponding to the editing parameters from the pieces of image information associated with the picture comprises:
step S61, determining the zoom value of the adjusted picture according to the editing parameters;
step S62, acquiring a focal length corresponding to the zoom value, and taking a camera matched with the focal length as a target camera;
step S63, regarding the image information picked up by the target camera as the target image information.
That is, in this embodiment, when the editing parameters corresponding to the editing operation are acquired, they are converted into a zoom value; the camera matching the focal segment in which that zoom value falls is then determined, the image information picked up by that camera is adopted as the target image information, and the target picture preview data is generated from the target image information.
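Steps S61–S63 amount to a lookup from zoom value to focal segment to raw frame. A minimal sketch, assuming each focal segment is an inclusive `(low, high)` zoom range and that frames were stored per camera name (both assumptions for illustration):

```python
def select_target_frame(raw_frames, focal_segments, zoom):
    """Steps S61-S63: map an edit's zoom value to the focal segment containing it,
    then return the raw frame captured by the camera covering that segment."""
    for name, (low, high) in focal_segments.items():
        if low <= zoom <= high:
            return raw_frames[name]
    raise ValueError("zoom value falls outside every camera's focal segment")

segments = {"wide": (1.0, 2.0), "tele": (2.0, 5.0)}
frames = {"wide": "wide_raw", "tele": "tele_raw"}
target = select_target_frame(frames, segments, zoom=3.5)
```

A zoom value of 3.5 falls in the telephoto segment, so the telephoto camera's original frame becomes the target image information; no pixel resampling is needed to reach that focal segment.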
Because the cameras have different focal segments, the image information picked up by each camera differs. The terminal device determines the target camera according to the zoom value to which the editing parameters adjust the picture, and generates the target picture preview data from the target image information corresponding to that target camera. Since the target image information is the original information collected by the camera, belongs to the same focal segment as the adjusted zoom value, and needs no change to its pixels, the image displayed by the adjusted target picture preview data is sharp. This avoids the loss of sharpness that occurs in the exemplary technique, which achieves focusing by changing the image pixels. By contrast, this embodiment improves the sharpness of the edited picture.
In addition, in this embodiment, the image information best matching the editing parameters is determined through the zoom value, and the target picture preview data is then generated from that image information, thereby optimizing the display effect of the edited picture.
It should be noted that each focal segment has an upper limit value and a lower limit value, and the upper limit of one focal segment equals the lower limit of the adjacent segment. If the zoom value lies between these limits without coinciding with either of them, the target picture preview data is generated in one of the following two ways:
firstly, generating edited target picture preview data based on the target image information;
that is, the edited target picture preview data is generated by directly adopting the target image information, so that the adjusted target picture preview data meets the focal length.
Secondly, the target image information is adjusted according to the zoom value, and edited target picture preview data is generated based on the adjusted target image information.
That is, in this embodiment, after the target image information is determined according to the focal segment in which the zoom value falls, the target image information is adjusted according to the zoom value so that it matches the zoom value, and the edited target picture preview data is generated from the adjusted target image information, so that the target picture preview data meets the requirement of the zoom value.
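One plausible way to "adjust the target image information according to the zoom value" is a centered digital crop of the native frame, which a real pipeline would then upscale to the display size. The function below is a hedged sketch under that assumption; the patent does not prescribe this exact computation.

```python
def adjust_to_zoom(frame_size, native_zoom, requested_zoom):
    """Return the centered crop box (x0, y0, x1, y1) of the native frame that
    corresponds to the requested zoom, given the lens's native zoom value.

    requested_zoom > native_zoom keeps a proportionally smaller central region.
    """
    w, h = frame_size
    scale = native_zoom / requested_zoom  # fraction of the native frame to keep
    crop_w, crop_h = int(w * scale), int(h * scale)
    x0 = (w - crop_w) // 2  # center the crop horizontally
    y0 = (h - crop_h) // 2  # center the crop vertically
    return (x0, y0, x0 + crop_w, y0 + crop_h)

# e.g. a 4000x3000 frame from the 2.0x lens, edited to 4.0x zoom
box = adjust_to_zoom((4000, 3000), native_zoom=2.0, requested_zoom=4.0)
```

Because the crop starts from the original frame of the matching focal segment, only the residual zoom within the segment is done digitally, which limits the sharpness loss.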
When the editing operation is enlargement or reduction, the terminal device generates the enlarged or reduced picture preview data and displays the enlarged or reduced picture. The user may then choose to save the enlarged or reduced picture to the memory, or may choose to quit picture editing, in which case the picture is restored to its state before editing.
When the editing operation is cropping, the terminal device generates target picture preview data for the cropping area. The user may then confirm the crop; after the terminal device detects the crop-confirmation operation, it crops the target picture according to the target picture preview data and the editing parameters. That is, the terminal device determines the cropping size according to the editing parameters to form a cropping area, and then forms the target picture from the data within that area.
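Forming the target picture from the data inside the cropping area can be sketched as a simple filter over pixel coordinates. Illustrative only: pixels are modeled as a `{(x, y): value}` dict and the crop box as `(x0, y0, x1, y1)` with an exclusive upper bound.

```python
def crop_picture(pixels, crop_box):
    """Keep only the pixels inside the cropping area determined by the edit parameters."""
    x0, y0, x1, y1 = crop_box
    return {(x, y): v for (x, y), v in pixels.items()
            if x0 <= x < x1 and y0 <= y < y1}

cropped = crop_picture({(0, 0): 1, (1, 1): 2, (5, 5): 3}, (0, 0, 2, 2))
```

Pixels outside the cropping area are discarded; what remains becomes the target picture.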
Further, the present application provides a fourth embodiment of the image obtaining method based on the third embodiment, and with reference to fig. 7, the step of generating edited target picture preview data according to the target image information includes:
and step S71, combining the target image information and other image information related to the picture to generate edited target picture preview data.
In this embodiment, when a user edits a generated picture, after the target image information is determined, the edited target picture preview data should present a foreground and background with high sharpness and an image with a stereoscopic effect, so that the effect it presents is consistent with the effect of the picture obtained during shooting. Therefore, when generating the edited target picture preview data, the target image information is used as the main preview image information, the other image information associated with the picture is used as the supplemental preview image information, and the two are combined to form the target picture preview data.
It should be noted that, when the terminal device edits the picture, it does so from all the image information associated with the picture, and the target picture preview data may be generated after merging all of that image information. The specific merging method is the same as the merging method used for the camera preview interface during photographing; see the second embodiment for details, which are not repeated here.
Furthermore, an embodiment of the present application also provides a computer-readable storage medium, on which an image acquisition program is stored, and the image acquisition program, when executed by a processor, implements the steps of the image acquisition method as described above.
The present application further provides a terminal device, the terminal device includes: a memory, a processor and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method as described above.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as described in the above various possible embodiments.
An embodiment of the present application further provides a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method described in the above various possible embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, where similarly named elements, features, or items appear in different embodiments of the disclosure, they may have the same meaning or different meanings; the particular meaning is to be determined by its explanation in that embodiment or from the context of that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "upon …" or "when …" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times and in different orders, and which may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The above description is only a preferred embodiment of the present application and is not intended to limit its scope. All equivalent structural and process modifications made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present application.

Claims (13)

1. An image acquisition method is applied to a terminal device with a camera, and is characterized in that the terminal device comprises at least two cameras, the focal length of each camera is different, and the image acquisition method comprises the following steps:
when the triggering operation of the camera is detected, starting each camera, controlling each camera to pick up image information respectively, and taking the camera corresponding to a focal section where a preset zoom value is located as a target camera, wherein the preset zoom value is a set zoom value of a user, and the preset zoom value is different based on different zoom value settings of the camera by the user;
combining main preview image information and supplementary preview image information to form a camera preview interface, wherein the main preview image information is image information picked up by the target camera, the supplementary preview image information is image information picked up by other cameras except the target camera, and the target camera is a camera of which the zoom value corresponds to the preset zoom value;
when the photographing triggering operation is detected, generating a picture according to the camera preview interface;
and saving the picture.
2. The method for acquiring an image according to claim 1, wherein the step of combining the main preview image information and the supplemental preview image information to form a camera preview interface comprises:
acquiring an area overlapped with the main preview image information in the supplementary preview image information;
and combining the area overlapped with the main preview image information in the supplementary preview image information into the main preview image information to form the camera preview interface.
3. The method of acquiring an image according to claim 2, wherein the step of incorporating the area overlapping with the main preview image information in the supplemental preview image information into the main preview image information to form the camera preview interface comprises:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of the pixel of the region overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
4. The method for acquiring an image according to any one of claims 1 to 3, wherein the step of saving the picture is performed while:
and storing the image information picked up by each camera, and associating the picture with each image information.
5. The method for acquiring images according to claim 4, wherein the step of saving the image information picked up by each camera and associating the picture with each image information is further followed by:
when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
acquiring target image information corresponding to the editing parameters from each image information associated with the picture;
and generating edited target picture preview data according to the target image information.
6. The image acquisition method according to claim 5, wherein the step of acquiring target image information corresponding to the editing parameter from each image information associated with the picture comprises:
determining a zoom value of the adjusted picture according to the editing parameters;
acquiring a focal length where the zoom value is located, and taking a camera matched with the focal length as a target camera;
and taking the image information picked up by the target camera as the target image information.
7. The image acquisition method according to claim 6, wherein the step of generating edited target picture preview data from the target image information comprises:
and adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
8. The method for acquiring an image according to claim 5, wherein the step of generating edited target picture preview data based on the target image information includes:
and combining and generating edited target picture preview data according to the target image information and other image information associated with the picture.
9. The image acquisition method according to claim 5, wherein the editing operation includes at least one of enlargement, reduction, and cutting.
10. The image acquisition method according to claim 9, wherein when the editing operation is a zoom-in operation, a focal length in which a zoom value of the adjusted picture is located is larger than a focal length in which a current zoom value of the picture is located; and when the editing operation is a zooming-out operation, the focal length of the zoom value of the adjusted picture is smaller than the focal length of the current zoom value of the picture.
11. The method for acquiring an image according to claim 9, wherein when the editing operation is cropping, after the step of generating edited target picture preview data from the target image information, the method further comprises:
and after the cutting determining operation is detected, cutting the target picture according to the target picture preview data and the editing parameters.
12. A terminal device, comprising: memory, processor and an image acquisition program stored on the memory and executable on the processor, the image acquisition program, when executed by the processor, implementing the steps of the method of acquiring an image according to any one of claims 1 to 11.
13. A computer-readable storage medium, characterized in that an image acquisition program is stored thereon, which when executed by a processor implements the steps of the method of acquiring an image according to any one of claims 1 to 11.
CN202010654953.5A 2020-07-10 2020-07-10 Image acquisition method, terminal device and computer-readable storage medium Active CN111787224B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010654953.5A CN111787224B (en) 2020-07-10 2020-07-10 Image acquisition method, terminal device and computer-readable storage medium
PCT/CN2021/101320 WO2022007622A1 (en) 2020-07-10 2021-06-21 Image acquisition method, terminal device and computer-readable storage medium
CN202180044452.8A CN115812312A (en) 2020-07-10 2021-06-21 Image acquisition method, terminal device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010654953.5A CN111787224B (en) 2020-07-10 2020-07-10 Image acquisition method, terminal device and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN111787224A CN111787224A (en) 2020-10-16
CN111787224B true CN111787224B (en) 2022-07-12

Family

ID=72758941

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010654953.5A Active CN111787224B (en) 2020-07-10 2020-07-10 Image acquisition method, terminal device and computer-readable storage medium
CN202180044452.8A Pending CN115812312A (en) 2020-07-10 2021-06-21 Image acquisition method, terminal device and computer-readable storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202180044452.8A Pending CN115812312A (en) 2020-07-10 2021-06-21 Image acquisition method, terminal device and computer-readable storage medium

Country Status (2)

Country Link
CN (2) CN111787224B (en)
WO (1) WO2022007622A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787224B (en) * 2020-07-10 2022-07-12 深圳传音控股股份有限公司 Image acquisition method, terminal device and computer-readable storage medium
CN112887603B (en) * 2021-01-26 2023-01-24 维沃移动通信有限公司 Shooting preview method and device and electronic equipment
CN116051368B (en) * 2022-06-29 2023-10-20 荣耀终端有限公司 Image processing method and related device

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104767937A (en) * 2015-03-27 2015-07-08 深圳市艾优尼科技有限公司 Photographing method
CN104967775A (en) * 2015-06-05 2015-10-07 深圳市星苑科技有限公司 Zoom lens imaging apparatus and method
CN204721459U (en) * 2015-06-05 2015-10-21 深圳市星苑科技有限公司 A kind of device of zoom lens imaging
CN105847674A (en) * 2016-03-25 2016-08-10 维沃移动通信有限公司 Preview image processing method based on mobile terminal, and mobile terminal therein
CN106664356A (en) * 2014-08-14 2017-05-10 三星电子株式会社 Image photographing apparatus, image photographing system for performing photographing by using multiple image photographing apparatuses, and image photographing methods thereof
CN106791376A (en) * 2016-11-29 2017-05-31 广东欧珀移动通信有限公司 Imaging device, control method, control device and electronic installation
CN107690649A (en) * 2015-06-23 2018-02-13 三星电子株式会社 Digital filming device and its operating method
CN109639997A (en) * 2018-12-20 2019-04-16 Oppo广东移动通信有限公司 Image processing method, electronic device and medium
CN110072058A (en) * 2019-05-28 2019-07-30 珠海格力电器股份有限公司 Image capturing device, method and terminal
CN110312075A (en) * 2019-06-28 2019-10-08 Oppo广东移动通信有限公司 Equipment imaging method, device, storage medium and electronic equipment
CN111183632A (en) * 2018-10-12 2020-05-19 华为技术有限公司 Image capturing method and electronic device
CN111292278A (en) * 2019-07-30 2020-06-16 展讯通信(上海)有限公司 Image fusion method and device, storage medium and terminal

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4956988B2 (en) * 2005-12-19 2012-06-20 カシオ計算機株式会社 Imaging device
US9007508B2 (en) * 2012-03-29 2015-04-14 Sony Corporation Portable device, photographing method, and program for setting a target region and performing an image capturing operation when a target is detected in the target region
CN104168414A (en) * 2013-05-17 2014-11-26 光道视觉科技股份有限公司 Object image shooting and splicing method
CN104349063B (en) * 2014-10-27 2018-05-15 东莞宇龙通信科技有限公司 A kind of method, apparatus and terminal for controlling camera shooting
WO2016119150A1 (en) * 2015-01-28 2016-08-04 宇龙计算机通信科技(深圳)有限公司 Photographing method of mobile terminal having multiple cameras and mobile terminal
KR20170020069A (en) * 2015-08-13 2017-02-22 엘지전자 주식회사 Mobile terminal and image capturing method thereof
CN105676563B (en) * 2016-03-31 2018-09-18 深圳市极酷威视科技有限公司 A kind of focusing method and camera of zoom camera
CN106131408A (en) * 2016-07-11 2016-11-16 深圳市金立通信设备有限公司 A kind of image processing method and terminal
CN106254780A (en) * 2016-08-31 2016-12-21 宇龙计算机通信科技(深圳)有限公司 A kind of dual camera camera control method, photographing control device and terminal
CN106385534A (en) * 2016-09-06 2017-02-08 努比亚技术有限公司 Focusing method and terminal
CN106791377B (en) * 2016-11-29 2019-09-27 Oppo广东移动通信有限公司 Control method, control device and electronic device
CN107360364B (en) * 2017-06-28 2019-10-18 维沃移动通信有限公司 A kind of image capturing method, terminal and computer readable storage medium
CN111885295A (en) * 2018-03-26 2020-11-03 华为技术有限公司 Shooting method, device and equipment
CN108769485A (en) * 2018-06-27 2018-11-06 北京小米移动软件有限公司 Electronic equipment
CN110830756B (en) * 2018-08-07 2022-05-17 华为技术有限公司 Monitoring method and device
CN109436344B (en) * 2018-11-16 2022-04-22 航宇救生装备有限公司 Airborne photography pod based on parachute ballistic trajectory
CN109361794B (en) * 2018-11-19 2021-04-20 Oppo广东移动通信有限公司 Zoom control method and device of mobile terminal, storage medium and mobile terminal
CN109194881A (en) * 2018-11-29 2019-01-11 珠海格力电器股份有限公司 Image processing method, system and terminal
CN110248101B (en) * 2019-07-19 2021-07-09 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and computer readable storage medium
CN110536057B (en) * 2019-08-30 2021-06-08 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN111654629B (en) * 2020-06-11 2022-06-24 展讯通信(上海)有限公司 Camera switching method and device, electronic equipment and readable storage medium
CN111787224B (en) * 2020-07-10 2022-07-12 深圳传音控股股份有限公司 Image acquisition method, terminal device and computer-readable storage medium


Also Published As

Publication number Publication date
WO2022007622A1 (en) 2022-01-13
CN111787224A (en) 2020-10-16
CN115812312A (en) 2023-03-17

Similar Documents

Publication Publication Date Title
US10311649B2 (en) Systems and method for performing depth based image editing
CN111294517B (en) Image processing method and mobile terminal
CN111787224B (en) Image acquisition method, terminal device and computer-readable storage medium
US20200267308A1 (en) Imaging capturing device and imaging capturing method
US10009543B2 (en) Method and apparatus for displaying self-taken images
KR20130112574A (en) Apparatus and method for improving quality of enlarged image
CN113141450B (en) Shooting method, shooting device, electronic equipment and medium
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN112887617B (en) Shooting method and device and electronic equipment
CN101742048A (en) Image generating method for portable electronic device
WO2022111330A1 (en) Image stitching method and apparatus for multi-camera device, storage medium, and terminal
CN112911059B (en) Photographing method and device, electronic equipment and readable storage medium
CN108810326B (en) Photographing method and device and mobile terminal
JP2011040896A (en) Image capturing apparatus and method of controlling the same
CN112532875B (en) Terminal device, image processing method and device thereof, and storage medium
CN112887624B (en) Shooting method and device and electronic equipment
RU2792413C1 (en) Image processing method and mobile terminal
EP3041219B1 (en) Image processing device and method, and program
TWI234682B (en) Method for capturing a digital image
CN117528250A (en) Multimedia file processing method, multimedia file processing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant