CN110809115B - Shooting method and electronic equipment - Google Patents


Info

Publication number: CN110809115B
Authority: CN (China)
Prior art keywords: screen, target, under, camera, user
Legal status: Active
Application number: CN201911054794.9A
Other languages: Chinese (zh)
Other versions: CN110809115A
Inventor: 杨诗琳
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911054794.9A
Publication of CN110809115A
Application granted
Publication of CN110809115B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 Details of the structure or mounting of specific components
    • H04M1/0264 Details of the structure or mounting of specific components for a camera module assembly

Abstract

The embodiment of the invention provides a shooting method and an electronic device, and relates to the field of communications technologies. The method comprises: determining a target position of a visual focus of a user on a screen; determining a target under-screen camera from the under-screen cameras according to the target position; and shooting through the target under-screen camera to obtain a first image. Because the target under-screen camera is determined according to the target position of the user's visual focus on the screen, its shooting angle of view matches the user's visual focus, and an image of the user with a front-view effect can be obtained by shooting at that angle of view. In addition, because the target under-screen camera is arranged in the display area of the electronic device, the user can look straight at the target under-screen camera while viewing the effect in real time through the preview picture displayed in the display area and adjust the posture accordingly; therefore, the user's requirement of adjusting the posture in real time through the preview picture and the requirement of shooting an image at the expected angle of view can be met at the same time.

Description

Shooting method and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a shooting method and an electronic device.
Background
The shooting function is a basic function of an electronic device, and its shooting angle of view largely depends on where the camera is mounted on the device. At present, a camera is usually mounted above the screen of the electronic device; when shooting, the user can adjust his or her posture in real time by observing the preview picture and press the shooting key once the posture has been adjusted.
However, in practice, because the camera is located above the screen of the electronic device and the preview picture is displayed on the screen below the camera, if the user shoots while looking at the preview picture, the resulting picture has a top-down effect rather than a front-view effect; and if the user shoots while looking at the camera, the user cannot see the effect shown in the preview picture. The user's requirement of adjusting the posture in real time through the preview picture therefore cannot be reconciled with the requirement of shooting an image at the expected angle of view.
Disclosure of Invention
The embodiment of the invention provides a shooting method and an electronic device, to solve the problem that the user's requirement of adjusting the posture in real time through a preview picture cannot be reconciled with the requirement of shooting an image at the expected angle of view.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a shooting method applied to an electronic device, where the electronic device includes at least two under-screen cameras, and the method includes:
determining a target position of a visual focus of a user on a screen;
determining a target under-screen camera from the under-screen cameras according to the target position;
and shooting through the target under-screen camera to obtain a first image.
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes at least two under-screen cameras, and the electronic device further includes:
the first determination module is used for determining the target position of the visual focus of the user on the screen;
the second determining module is used for determining a target under-screen camera from all the under-screen cameras according to the target position;
and the shooting module is used for shooting through the target under-screen camera to obtain a first image.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above-mentioned shooting method.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the shooting method described above are implemented.
In the embodiment of the invention, the electronic device can first determine the target position of the user's visual focus on the screen, then determine a target under-screen camera from the under-screen cameras according to the target position, and further shoot through the target under-screen camera to obtain a first image. Because the target under-screen camera is determined according to the target position of the user's visual focus on the screen, its shooting angle of view matches the user's visual focus, and an image of the user with a front-view effect can be obtained by shooting at that angle of view. In addition, because the target under-screen camera is arranged in the display area of the electronic device, the user can look straight at the target under-screen camera while viewing the effect in real time through the preview picture displayed in the display area and adjust the posture accordingly; therefore, the user's requirement of adjusting the posture in real time through the preview picture and the requirement of shooting an image at the expected angle of view can be met at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a flowchart illustrating a photographing method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating another photographing method according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a layout of under-screen cameras according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating selection of a target position according to an embodiment of the present invention;
fig. 5 is another schematic diagram illustrating selection of a target position according to an embodiment of the present invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a block diagram of another electronic device according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of an electronic device according to various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a shooting method applied to an electronic device, where the electronic device includes at least two off-screen cameras, and the method includes:
in step 101, the electronic device determines a target position of a visual focus of a user on a screen.
In the embodiment of the present invention, at least two under-screen cameras may be disposed on the side of the screen that faces the inside of the electronic device. Specifically, the screen may be divided into at least two screen regions, and under-screen cameras may then be disposed under some or all of the screen regions according to prior knowledge, where one or more under-screen cameras may be disposed under one screen region; this is not specifically limited in the embodiment of the present invention. For example, an under-screen camera may be disposed at 2/3 of the length of the electronic device and at 1/2 of its width, which corresponds to the screen position at the user's eye level when the user takes a frontal photograph.
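Purely for illustration, such a layout can be recorded as a list of cameras with normalized screen positions, as in the following Python sketch; the camera names and all coordinates except the example position at 2/3 of the length and 1/2 of the width are assumptions of this sketch, not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UnderScreenCamera:
    camera_id: str
    # Normalized position on the screen: x along the width (0..1), y along the length (0..1).
    x: float
    y: float

# Hypothetical layout with one camera per screen region; only the (1/2, 2/3)
# position comes from the example in the description, the rest are illustrative.
CAMERA_LAYOUT = [
    UnderScreenCamera("cam_left",   x=0.20, y=0.50),
    UnderScreenCamera("cam_center", x=0.50, y=2 / 3),  # eye-level position for a frontal photograph
    UnderScreenCamera("cam_right",  x=0.80, y=0.50),
]
```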
The electronic device may determine the target position of the user's visual focus on the screen. Specifically, the target position may be identified by the electronic device from the line of sight of the user's eyes, or may be manually selected by the user, who then fixes his or her visual focus on the selected target position.
In step 102, the electronic device determines a target under-screen camera from the under-screen cameras according to the target position.
In the embodiment of the present invention, when the target position is the position on the screen corresponding to a certain under-screen camera, the electronic device may determine that under-screen camera as the target under-screen camera to be used for this shot. In practice, however, the position of each under-screen camera is fixed when the electronic device leaves the factory, while the user's line of sight is flexible and changeable, so the target position of the user's visual focus on the screen may not correspond exactly to the position of any under-screen camera, but instead to an area of the screen where no under-screen camera is disposed. In this case, the electronic device may determine an under-screen camera within a preset distance range centered on the target position as the target under-screen camera. There may be one or more target under-screen cameras.
Because the target under-screen camera is determined according to the target position of the user's visual focus on the screen, its shooting angle of view matches the user's visual focus, and an image of the user with a front-view effect can be obtained by shooting at that angle of view.
In step 103, the electronic device shoots through the target under-screen camera to obtain a first image.
In the embodiment of the invention, after the electronic device determines the target under-screen camera whose shooting angle of view matches the user's visual focus, the user can be photographed by the target under-screen camera. During shooting, on the one hand, the user's visual focus is fixed on, or close to, the position of the target under-screen camera, so the first image presents a front-view effect. On the other hand, because the target under-screen camera is arranged in the display area of the electronic device, the user can look at the target under-screen camera and at the same time check the effect in real time through the preview picture displayed in the display area, and then adjust the posture with reference to that effect; therefore, the user's requirement of adjusting the posture in real time through the preview picture and the requirement of shooting an image at the expected angle of view can be met at the same time. There may be one or more target under-screen cameras, and correspondingly there may be one or more first images.
In addition, in the related art, some electronic devices place the camera not on the center line of the device but at a side, so that even when the user faces the screen while taking a picture, the user's visual focus does not correspond to the shooting angle of the camera and the side camera additionally introduces lens distortion at the user's position. Under the combined effect of lens distortion and camera position, the two sides of the user's face appear to differ in size in the captured image, and the difference is more severe than with lens distortion alone. The shooting method described here can therefore also reduce the degree of distortion when a side camera captures an image.
In the embodiment of the invention, the electronic device can first determine the target position of the user's visual focus on the screen, then determine a target under-screen camera from the under-screen cameras according to the target position, and further shoot through the target under-screen camera to obtain a first image. Because the target under-screen camera is determined according to the target position of the user's visual focus on the screen, its shooting angle of view matches the user's visual focus, and an image of the user with a front-view effect can be obtained by shooting at that angle of view. In addition, because the target under-screen camera is arranged in the display area of the electronic device, the user can look straight at the target under-screen camera while viewing the effect in real time through the preview picture displayed in the display area and adjust the posture accordingly; therefore, the user's requirement of adjusting the posture in real time through the preview picture and the requirement of shooting an image at the expected angle of view can be met at the same time.
Referring to fig. 2, another embodiment of the present invention provides a shooting method applied to an electronic device including at least two off-screen cameras, the method including:
in step 201, the electronic device determines a target position of a visual focus of a user on a screen.
In the embodiment of the invention, the camera application of the electronic device may provide an angle-of-view selection function, and when the user selects this function, the application switches to an under-screen camera selection mode. Optionally, the electronic device may then implement this step in either of two manners:
the first mode is as follows: receiving an input to a first location on a screen; in response to the input, the first position is determined as a target position of the user's visual focus on the screen.
Fig. 3 shows one layout of the under-screen cameras. Referring to fig. 3, the screen regions in which the under-screen cameras Cam are located may be distinguished on the screen by lines or colors; for example, the screen regions may be displayed as a grid, and the position of each under-screen camera Cam may be indicated by a circular selection frame displayed at that position, so that the position is shown more precisely. In the embodiment of the invention, the under-screen cameras can be distributed under the whole screen, which changes the conventional arrangement in which all cameras are concentrated at the same position and makes it convenient to select the main angle of view for shooting.
It should be noted that, in a specific application, the under-screen cameras may be evenly distributed across the screen regions, or may be distributed only under the screen regions involved in most shooting requirements; this is not specifically limited in this embodiment of the present invention.
After entering the under-screen camera selection mode, the user can select a first position on the screen according to the screen position or screen region corresponding to each under-screen camera, so that the electronic device receives the user's input on the first position. In response to the input, the electronic device may determine the first position as the target position of the user's visual focus on the screen. When shooting subsequently, the user focuses the visual focus on the selected target position; that is, the user selects the target position in advance in this manner and then fixes the visual focus according to it.
By receiving the user's input on a first position on the screen and, in response to the input, determining the manually selected first position as the target position of the user's visual focus on the screen, the target position can be determined based on the user's input, which is a simple and easy operation for the user.
In the embodiment of the invention, the user can perform different operations and adjust different postures according to the requirements in different shooting scenes.
For example, when the user needs to take a frontal photograph, the user may tap and select the first position where the under-screen camera in the middle screen region in fig. 4 is located, and the electronic device may determine that first position as the target position of the user's visual focus on the screen. The user can then fix the visual focus on the target position, view the preview picture on the screen head-on, and adjust the posture needed for the frontal photograph against the effect shown in the preview picture.
For another example, when the user wants to shoot at an angle of view close to that of binocular viewing, the user may tap and select the two first positions of the under-screen cameras in the two screen regions on the left and right sides in fig. 5, and the electronic device may determine these two first positions as target positions of the user's visual focus on the screen, so that the electronic device can simulate the angle of view of binocular viewing by shooting with two cameras. It should be noted that the multiple images obtained by the under-screen cameras selected in this way need to be subsequently combined into one image.
For another example, the user may wish to press the shooting button once and obtain multiple images of the same posture at different shooting angles. In that case, the user may select a multi-angle shooting option in the selection mode, to distinguish this from the multiple selection of the previous scene, and then select the several under-screen cameras corresponding to the required angles of view. It should be noted that the multiple images obtained by the under-screen cameras selected in this way do not need to be combined into one image; they are displayed to the user separately, and the user chooses among them as required.
In the second manner, in the case that the user gazes at the screen, the electronic device identifies a second position on the screen at which the user gazes and determines the second position as the target position of the user's visual focus on the screen.
After entering the under-screen camera selection mode, the user can gaze at a second position on the screen, which serves as the user's visual focus. While the user gazes at the screen, the electronic device can recognize the second position through a line-of-sight positioning technique and then determine it as the target position of the user's visual focus on the screen. For the line-of-sight positioning technique, reference may be made to the related art, and it is not described in detail in the embodiments of the present invention.
By identifying, while the user gazes at the screen, the second position gazed at by the user and determining it as the target position of the user's visual focus on the screen, the user can determine the target position with the visual focus of both eyes without performing any other operation on the electronic device. During subsequent shooting, the user therefore does not need to adjust the posture of the hands and arms again: the posture is adjusted in advance, and the target position is determined and controlled by the binocular visual focus, which improves the convenience of operation for the user.
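Purely for illustration, the second manner can be sketched as follows in Python; the gaze-estimation function is a stand-in for whatever line-of-sight positioning technique the device provides, and the sampling scheme is an assumption of this sketch rather than part of the embodiment.

```python
from statistics import fmean
from typing import Callable, List, Optional, Tuple

Point = Tuple[float, float]  # normalized screen coordinates (x, y)

def determine_target_position(
    estimate_gaze_point: Callable[[], Optional[Point]],
    samples: int = 10,
) -> Optional[Point]:
    """Average several gaze samples into one target position.

    `estimate_gaze_point` stands in for the line-of-sight positioning
    technique mentioned in the description; it is assumed to return the
    gazed-at screen point, or None when the user is not looking at the screen.
    """
    points: List[Point] = []
    for _ in range(samples):
        p = estimate_gaze_point()
        if p is not None:
            points.append(p)
    if not points:
        return None  # no stable gaze detected; the user can still pick a position by touch
    return (fmean(x for x, _ in points), fmean(y for _, y in points))
```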
In step 202, the electronic device determines a target under-screen camera from the under-screen cameras according to the target position.
In an optional implementation, this step may specifically include: in the case that the target position does not match the screen position corresponding to any of the under-screen cameras, determining an under-screen camera whose distance from the target position does not exceed a preset distance as the target under-screen camera; and in the case that the target position matches the position of a certain under-screen camera, determining that under-screen camera as the target under-screen camera.
Specifically, if the target position is located at the position of a certain under-screen camera, the electronic device can determine that the target position matches the position of that camera and directly determine it as the target under-screen camera. If the target position is located in an area where no under-screen camera is disposed, that is, the target position is not at the position of any under-screen camera, the electronic device can determine that the target position does not match the screen position corresponding to any under-screen camera; it can then determine the under-screen camera(s) whose distance from the target position does not exceed the preset distance as the target under-screen camera, that is, at least one under-screen camera closest to the target position is determined as the target under-screen camera.
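Purely for illustration, this matching rule can be sketched as follows in Python, with normalized screen coordinates; the match tolerance and preset distance are arbitrary values assumed for the sketch.

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # normalized screen coordinates (x, y)

def select_target_cameras(
    target: Point,
    cameras: Dict[str, Point],      # camera_id -> on-screen position
    match_tolerance: float = 0.02,  # "target position matches the camera position"
    preset_distance: float = 0.15,  # "distance does not exceed the preset distance"
) -> List[str]:
    """Pick the target under-screen camera(s) for a given visual-focus position.

    A sketch only: if the target position coincides with a camera position
    (within a small tolerance) that camera is used; otherwise every camera
    within the preset distance is used, falling back to the single nearest
    camera. All thresholds are illustrative, not taken from the patent.
    """
    distances = {cid: math.hypot(pos[0] - target[0], pos[1] - target[1])
                 for cid, pos in cameras.items()}
    matched = [cid for cid, d in distances.items() if d <= match_tolerance]
    if matched:
        return matched
    nearby = [cid for cid, d in distances.items() if d <= preset_distance]
    if nearby:
        return nearby
    return [min(distances, key=distances.get)]  # at least the closest camera
```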
In another optional implementation, the electronic device may intelligently recognize the current scene and determine the target under-screen camera according to it. In that case, this step may specifically include: identifying scene features in a preview picture acquired by any one of the under-screen cameras; and in the case that the target position meets a first preset condition and the scene features meet a second preset condition, determining, from the under-screen cameras, the target under-screen camera corresponding to both the first preset condition and the second preset condition.
Specifically, the electronic device may first identify scene features in a preview picture acquired by any one of the under-screen cameras; since the shooting scene is much larger than a screen region, the scene features in the preview pictures of different under-screen cameras do not differ greatly. The scene features may include, for example, the area ratio between the face and the screen, whether the background color other than the user is a pure color, and whether the scene is a multi-person group scene; this is not specifically limited in this embodiment of the present invention. When the target position meets the first preset condition and the scene features meet the second preset condition, the current picture corresponds to a certain specific shooting scene, and the electronic device may accordingly determine, from the under-screen cameras, the target under-screen camera corresponding to both the first preset condition and the second preset condition, that is, the under-screen camera required for shooting the current scene.
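Purely for illustration, the mapping from a (first preset condition, second preset condition) pair to the corresponding under-screen cameras can be sketched as follows in Python; all mode names, predicates, thresholds and camera identifiers are assumptions of this sketch, not part of the embodiment.

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]        # target position in normalized screen coordinates
SceneFeatures = Dict[str, object]  # e.g. {"face_to_screen_ratio": 0.7, "solid_background": True}

# Illustrative angle-of-view modes: each pairs a first preset condition (on the
# target position) with a second preset condition (on the scene features) and
# names the under-screen cameras required for that shooting scene.
VIEW_MODES: List[Tuple[str, Callable[[Point], bool], Callable[[SceneFeatures], bool], List[str]]] = [
    ("id_photo",
     lambda p: abs(p[0] - 0.5) < 0.1 and abs(p[1] - 0.5) < 0.1,  # visual focus near the screen centre
     lambda s: bool(s.get("solid_background")) and float(s.get("face_to_screen_ratio", 0.0)) >= 0.7,
     ["cam_center"]),
    ("multi_view_synthesis",
     lambda p: p[0] < 0.3 or p[0] > 0.7,                         # visual focus near a side of the screen
     lambda s: True,
     ["cam_left", "cam_center", "cam_right"]),
]

def select_cameras_by_scene(target: Point, scene: SceneFeatures) -> List[str]:
    """Return the cameras of the first mode whose two preset conditions both hold."""
    for _name, position_ok, scene_ok, cameras in VIEW_MODES:
        if position_ok(target) and scene_ok(scene):
            return cameras
    return []  # no specific scene recognised; a default (e.g. binocular) mode can be used
```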
Optionally, in the embodiment of the present invention, different first preset conditions and different second preset conditions may be combined into different shooting scenes, and the camera application of the electronic device may preset the under-screen cameras corresponding to the different shooting scenes as different angle-of-view modes; that is, each angle-of-view mode may correspond to one shooting scene and to at least one under-screen camera required for shooting that scene. When the user enters the selection mode, the electronic device can intelligently identify the shooting scene and recommend an angle-of-view mode.
For example, angle-of-view mode 1 may be used to take a picture at the angle of view of binocular viewing. Since it is generally preferable to approximate the user's binocular angle of view when photographing, angle-of-view mode 1, which simulates a binocular angle of view by shooting with two cameras, may be set as the default mode, and the under-screen cameras corresponding to it may be the under-screen cameras of the two screen regions on the left and right sides shown in fig. 5. Upon entering the selection mode, angle-of-view mode 1 may be selected by default if no scene corresponding to another angle-of-view mode is identified. Of course, the user can freely change from the default mode to another angle-of-view mode.
For another example, angle-of-view mode 2 may be used to shoot a frontal identification photo, and the under-screen camera corresponding to it may be the under-screen camera in the middle screen region shown in fig. 4, which achieves the effect of looking at the user from straight ahead. Shooting may be performed through the under-screen camera corresponding to angle-of-view mode 2 when the electronic device intelligently identifies that the following conditions are met (a sketch of this check is given after the list below):
1) The user's face is directed toward the screen and both eyes gaze at the center of the screen, that is, the target position meets the first preset condition.
2) The area ratio between the face and the screen is 70%.
3) The identification-photo standard is met: the background color other than the user is a pure color, the user is bareheaded, both ears and the neck are recognized, and the head occupies more than 2/3 of the whole image.
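Purely for illustration, the three conditions above can be folded into a single check as in the following Python sketch; the feature names and the tolerance are assumptions, while the 70% area ratio and the 2/3 head proportion come from the conditions above.

```python
from dataclasses import dataclass

@dataclass
class IdPhotoFeatures:
    facing_screen: bool          # the user's face is directed toward the screen
    gazing_screen_centre: bool   # both eyes gaze at the centre of the screen
    face_to_screen_ratio: float  # area ratio between the face and the screen
    solid_background: bool       # background other than the user is a pure colour
    bareheaded: bool             # no head covering
    ears_and_neck_visible: bool  # both ears and the neck are recognised
    head_to_image_ratio: float   # proportion of the whole image occupied by the head

def meets_id_photo_conditions(f: IdPhotoFeatures, ratio_tolerance: float = 0.05) -> bool:
    """Check the three example conditions for the frontal identification-photo mode."""
    first_condition = f.facing_screen and f.gazing_screen_centre            # condition 1)
    area_ratio_ok = abs(f.face_to_screen_ratio - 0.70) <= ratio_tolerance   # condition 2)
    id_standard_ok = (f.solid_background and f.bareheaded
                      and f.ears_and_neck_visible
                      and f.head_to_image_ratio > 2 / 3)                    # condition 3)
    return first_condition and area_ratio_ok and id_standard_ok
```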
For another example, angle-of-view mode 3 may be a multi-angle synthesis mode for taking a group photo or a single-person photo in which the subject is at the side of the picture. In practice, a portrait at the side of a picture suffers more severe lens distortion and often appears stretched. Angle-of-view mode 3 may correspond to an under-screen camera near that side together with the under-screen camera in the middle screen region, and the portrait at the side of the image from the middle under-screen camera can be repaired and synthesized using the portrait captured by the side under-screen camera, thereby reducing the sense of distortion.
When the target under-screen camera is determined through the first preset condition met by the target position and the second preset condition met by the scene features, it is not necessary to pay attention to whether the target position matches the position of any under-screen camera, that is, whether the target position is exactly where an under-screen camera is located or lies in a region where no under-screen camera is disposed; it is only necessary to pay attention to which shooting scene is currently involved and then determine the under-screen camera required by that scene.
In the embodiment of the invention, by identifying the scene features, determining the first preset condition met by the target position and the second preset condition met by the scene features, and determining the target under-screen camera corresponding to both preset conditions as the under-screen camera required for shooting the current scene, the user does not need to select the target under-screen camera manually; the selection can be made intelligently according to the scene, which improves the convenience of selecting the target under-screen camera.
In step 203, where the number of target under-screen cameras is N, the number of first images is N, and N is an integer greater than 1, the electronic device shoots through the N target under-screen cameras respectively to obtain N first images.
In the embodiment of the invention, the camera application records the target position, and after the N target under-screen cameras to be used for this shot are determined, the application can return to the normal shooting interface, so that the electronic device can shoot through the N target under-screen cameras respectively to obtain N first images. If the user needs N first images of the same posture at different shooting angles, the electronic device can display the N first images to the user separately and the user can choose among them as required, without synthesizing them into one image.
Of course, if the number of target under-screen cameras is 1, the number of first images is also 1; that first image is the final image and can be displayed for the user to view.
In step 204, in the case that the target position does not match the screen position corresponding to any of the under-screen cameras, the electronic device determines the relative mapping information between the target position and each of the N target under-screen cameras.
In the embodiment of the present invention, if the user needs an image at a single shooting angle of view and the target position of the user's visual focus on the screen does not match the screen position corresponding to any under-screen camera, the electronic device needs to synthesize the N first images respectively shot by the N target under-screen cameras into one second image to be displayed to the user.
Specifically, if the user has selected an angle-of-view mode based on multiple angles of view, for example angle-of-view mode 1 based on shooting with two cameras or angle-of-view mode 3 based on multiple angles of view, the electronic device may, when the target position does not match the screen position corresponding to any under-screen camera, determine the relative mapping information between the target position and each of the N target under-screen cameras. Optionally, the relative mapping information may include the distances between the target position and the N target under-screen cameras, the direction offset angles of the target position relative to the N target under-screen cameras, and the like; this is not specifically limited in this embodiment of the present invention.
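Taking the relative mapping information to be the distance and direction offset angle just mentioned, it can be computed per target under-screen camera as in the following Python sketch; the identifiers and the use of normalized screen coordinates are assumptions of the sketch.

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # normalized screen coordinates (x, y)

def relative_mapping_info(target: Point, camera_positions: Dict[str, Point]) -> List[dict]:
    """Distance and direction offset angle from the target position to each target camera."""
    info = []
    for camera_id, (cx, cy) in camera_positions.items():
        dx, dy = cx - target[0], cy - target[1]
        info.append({
            "camera_id": camera_id,
            "distance": math.hypot(dx, dy),
            "offset_angle_deg": math.degrees(math.atan2(dy, dx)),  # direction of the camera as seen from the target position
        })
    return info
```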
In step 205, the electronic device synthesizes the N first images into a second image according to the N pieces of relative mapping information.
In the embodiment of the invention, the electronic device may take the first image shot by the target under-screen camera closest to the target position as the main frame and the other N-1 first images as supplementary frames, which are used to supplement missing pixel information when the main frame is processed.
In an optional implementation, this step may be realized based on a three-dimensional object reconstruction method. First, every two target under-screen cameras are taken as a group of binocular cameras, so that the two-dimensional pixel coordinates in each first image can be converted into three-dimensional world coordinates through binocular ranging and calibration, giving the mapping relation between the two-dimensional pixel coordinates of each first image and the three-dimensional space, that is, a three-dimensional reconstruction model of the object. Next, the model is rotated: the rotation angle may be the included angle between the target under-screen camera closest to the target position and the target position as seen from the origin of the three-dimensional coordinate system, that is, the direction offset angle of the target position relative to that camera, and the rotation direction may be the direction from the target position toward that closest camera. An intermediate image rotated by this angle and direction is then output according to the mapping relation. The intermediate image may lack some pixel information; the missing pixels may be supplemented by an interpolation algorithm, or, where an image obtained by rotating a supplementary frame through the above steps has pixels whose values differ from those of the intermediate image, the weights of those pixels may be determined according to the distances between the target position and the target under-screen cameras corresponding to the supplementary frames, and the missing pixel information in the intermediate image may then be interpolated according to these weights.
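The full pipeline above (binocular calibration, three-dimensional reconstruction and rotation) is beyond a short example, but the final weighting-and-interpolation step can be sketched as follows in Python, assuming the supplementary frames have already been warped into the intermediate view and that missing pixels are marked with NaN; the inverse-distance weighting is one simple way, assumed here, to realize the distance-based weights described above.

```python
from typing import List

import numpy as np

def fill_missing_pixels(intermediate: np.ndarray,
                        supplementary_frames: List[np.ndarray],
                        camera_distances: List[float],
                        eps: float = 1e-6) -> np.ndarray:
    """Interpolate the missing pixels of the rotated intermediate image.

    `intermediate` is an H x W x 3 float array with NaN where pixel
    information is missing; each supplementary frame is assumed to have
    already been warped into the same view. Frames whose target camera is
    closer to the target position receive a larger weight.
    """
    weights = 1.0 / (np.asarray(camera_distances, dtype=np.float64) + eps)
    weights /= weights.sum()

    # Inverse-distance-weighted average of the supplementary frames.
    stacked = np.stack(supplementary_frames).astype(np.float64)  # N x H x W x 3
    blended = np.tensordot(weights, stacked, axes=(0, 0))        # H x W x 3

    result = intermediate.astype(np.float64)
    missing = np.isnan(result)
    result[missing] = blended[missing]
    return result
```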
Therefore, in the case that the target position does not match the screen position corresponding to any under-screen camera of the electronic device, the first images shot by the N target under-screen cameras are synthesized according to the relative mapping information between the target position and the N target under-screen cameras, so that an image taken at the target position can be simulated from images shot by under-screen cameras that are not at the target position, which compensates for the fact that the under-screen cameras cannot move with the user's line of sight.
After synthesizing the N first images into the second image, the electronic device may display the second image for the user to view.
Further optionally, in the case that the electronic device determines that the user needs to take a photo for a specific purpose, after obtaining the final first image or second image, the electronic device may further process the image, for example beautify it, before displaying it, according to the requirements of that purpose. For example, when the electronic device recognizes that the scene features satisfy the three conditions corresponding to angle-of-view mode 2, it may determine that the user is taking a frontal identification photo, and may then beautify the first image or second image so that the definition and three-dimensional effect of the face are more prominent, perform skin-texture beautification on the real skin texture, and so on.
In the embodiment of the invention, the electronic device can first determine the target position of the user's visual focus on the screen and then determine the target under-screen cameras from the under-screen cameras according to the target position. In the case that the numbers of target under-screen cameras and first images are both N, where N is an integer greater than 1, the electronic device shoots through the N target under-screen cameras respectively to obtain N first images. Then, in the case that the target position does not match the screen position corresponding to any under-screen camera, the electronic device can synthesize the N first images into a second image according to the relative mapping information between the target position and the N target under-screen cameras. Because the target under-screen camera is determined according to the target position of the user's visual focus on the screen, its shooting angle of view matches the user's visual focus, and an image of the user with a front-view effect can be obtained by shooting at that angle of view. In addition, because the target under-screen camera is arranged in the display area of the electronic device, the user can look straight at the target under-screen camera while viewing the effect in real time through the preview picture displayed in the display area and adjust the posture accordingly; therefore, the user's requirement of adjusting the posture in real time through the preview picture and the requirement of shooting an image at the expected angle of view can be met at the same time. Moreover, synthesizing the N first images according to the relative mapping information makes it possible to simulate an image at the angle of view of the target position, which compensates for the fact that the positions of the under-screen cameras cannot move with the user's line of sight.
Having described the shooting method provided by the embodiment of the present invention, the electronic device provided by the embodiment of the present invention will now be described with reference to the accompanying drawings.
Referring to fig. 6, an embodiment of the present invention further provides an electronic device 600, where the electronic device 600 includes at least two off-screen cameras, and the electronic device 600 further includes:
a first determining module 601, configured to determine a target position of a visual focus of a user on a screen;
a second determining module 602, configured to determine a target off-screen camera from the off-screen cameras according to the target position;
and a shooting module 603, configured to shoot through the target under-screen camera to obtain a first image.
Optionally, referring to fig. 7, the second determining module 602 includes:
the identification submodule 6021 is used for identifying scene characteristics in the preview picture acquired by any one of the off-screen cameras;
the first determining submodule 6022 is configured to determine, from each of the off-screen cameras, a target off-screen camera corresponding to both the first preset condition and the second preset condition when the target position meets a first preset condition and the scene characteristic meets a second preset condition.
Optionally, referring to fig. 7, the second determining module 602 includes:
and a second determining submodule 6023, configured to determine, when the target position is not matched with the screen position corresponding to each of the off-screen cameras, an off-screen camera whose distance from the target position does not exceed a preset distance as a target off-screen camera.
Optionally, the number of the target under-screen cameras is N, the number of the first images is N, and N is an integer greater than 1;
referring to fig. 7, the photographing module 603 includes:
and a shooting submodule 6031, configured to respectively shoot through the N target underscreen cameras to obtain N first images.
Optionally, referring to fig. 7, the electronic device 600 further includes:
a third determining module 604, configured to determine, when the target position does not match the screen position corresponding to each of the off-screen cameras, relative mapping information between the target position and each of the N target off-screen cameras;
and a synthesizing module 605, configured to synthesize the N first images into a second image according to the N pieces of relative mapping information.
Optionally, referring to fig. 7, the first determining module 601 includes:
a receiving submodule 6011 configured to receive an input for a first position on a screen;
a third determining sub-module 6012, configured to determine, in response to the input, the first position as a target position of a visual focus of the user on the screen.
The electronic device 600 provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 2, and is not described here again to avoid repetition.
In the embodiment of the invention, the electronic device can first determine, through the first determining module, the target position of the user's visual focus on the screen, then determine, through the second determining module, a target under-screen camera from the under-screen cameras according to the target position, and the shooting module then shoots through the target under-screen camera to obtain a first image. Because the target under-screen camera is determined according to the target position of the user's visual focus on the screen, its shooting angle of view matches the user's visual focus, and an image of the user with a front-view effect can be obtained by shooting at that angle of view. In addition, because the target under-screen camera is arranged in the display area of the electronic device, the user can look straight at the target under-screen camera while viewing the effect in real time through the preview picture displayed in the display area and adjust the posture accordingly; therefore, the user's requirement of adjusting the posture in real time through the preview picture and the requirement of shooting an image at the expected angle of view can be met at the same time.
FIG. 8 is a diagram illustrating a hardware configuration of an electronic device implementing various embodiments of the invention;
the electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 8 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 810 is configured to determine a target position of the user's visual focus on the screen; determine a target under-screen camera from the under-screen cameras according to the target position; and shoot through the target under-screen camera to obtain a first image.
In the embodiment of the invention, the electronic device can first determine the target position of the user's visual focus on the screen, then determine a target under-screen camera from the under-screen cameras according to the target position, and further shoot through the target under-screen camera to obtain a first image. Because the target under-screen camera is determined according to the target position of the user's visual focus on the screen, its shooting angle of view matches the user's visual focus, and an image of the user with a front-view effect can be obtained by shooting at that angle of view. In addition, because the target under-screen camera is arranged in the display area of the electronic device, the user can look straight at the target under-screen camera while viewing the effect in real time through the preview picture displayed in the display area and adjust the posture accordingly; therefore, the user's requirement of adjusting the posture in real time through the preview picture and the requirement of shooting an image at the expected angle of view can be met at the same time.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used for receiving and sending signals during a message sending and receiving process or a call process, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 810; in addition, the uplink data is transmitted to the base station. In general, radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 802, such as to assist the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the electronic apparatus 800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used for receiving an audio or video signal. The input Unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042, and the Graphics processor 8041 processes image data of a still picture or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 806. The image frames processed by the graphics processor 8041 may be stored in the memory 809 (or other storage medium) or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 can receive sound, and can process such sound into audio data. The processed audio data may be converted into a format output transmittable to a mobile communication base station via the radio frequency unit 801 in case of a phone call mode.
The electronic device 800 also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the electronic device 800 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 807 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 8071 (e.g., operations by a user on or near the touch panel 8071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two portions of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 810, receives a command from the processor 810, and executes the command. In addition, the touch panel 8071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 8071, the user input unit 807 can include other input devices 8072. In particular, other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 8071 can be overlaid on the display panel 8061; when the touch panel 8071 detects a touch operation on or near it, the touch operation is transmitted to the processor 810 to determine the type of the touch event, and the processor 810 then provides a corresponding visual output on the display panel 8061 according to the type of the touch event. Although in fig. 8 the touch panel 8071 and the display panel 8061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 808 is an interface for connecting an external device to the electronic apparatus 800. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic device 800 or may be used to transmit data between the electronic device 800 and external devices.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 809 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 810 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 809 and calling data stored in the memory 809, thereby monitoring the whole electronic device. Processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The electronic device 800 may also include a power supply 811 (such as a battery) for supplying power to the various components. Preferably, the power supply 811 may be logically connected to the processor 810 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
In addition, the electronic device 800 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 810, a memory 809, and a computer program stored in the memory 809 and executable on the processor 810, wherein the computer program, when executed by the processor 810, implements each process of the above shooting method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
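As a non-authoritative illustration of how such a program might organize the core steps of the shooting method (determining the target position of the visual focus, selecting a target under-screen camera according to that position, and shooting), the following Java sketch (Java 16+) uses hypothetical types; the camera layout, the preset distance, and the selection rule shown here are assumptions, not the claimed implementation.

```java
// Illustrative sketch only; all names, positions, and thresholds are hypothetical.
import java.util.List;

public class ShootingMethodSketch {

    /** Screen position (in pixels) associated with one under-screen camera. */
    record UnderScreenCamera(String id, int screenX, int screenY) {
        double distanceTo(int x, int y) {
            return Math.hypot(screenX - x, screenY - y);
        }
    }

    /**
     * Selects the target under-screen camera: if the visual-focus target position does not
     * coincide with any camera's screen position, pick a camera within a preset distance.
     */
    static UnderScreenCamera selectTargetCamera(List<UnderScreenCamera> cameras,
                                                int targetX, int targetY,
                                                double presetDistance) {
        UnderScreenCamera best = null;
        double bestDistance = Double.MAX_VALUE;
        for (UnderScreenCamera camera : cameras) {
            double d = camera.distanceTo(targetX, targetY);
            if (d == 0) {
                return camera;                     // target position matches this camera exactly
            }
            if (d <= presetDistance && d < bestDistance) {
                best = camera;
                bestDistance = d;
            }
        }
        return best;                               // may be null if no camera is close enough
    }

    public static void main(String[] args) {
        List<UnderScreenCamera> cameras = List.of(
                new UnderScreenCamera("top-left", 200, 300),
                new UnderScreenCamera("top-right", 880, 300));

        // Target position of the user's visual focus on the screen (for example, from gaze
        // tracking or a touch input); the values here are placeholders.
        int targetX = 250, targetY = 340;

        UnderScreenCamera target = selectTargetCamera(cameras, targetX, targetY, 150.0);
        if (target != null) {
            System.out.println("Shooting first image with camera: " + target.id());
        } else {
            System.out.println("No under-screen camera within the preset distance.");
        }
    }
}
```

In this sketch the nearest qualifying camera is preferred, which is merely one way to satisfy a distance-based preset condition; the embodiment may apply different conditions, including scene characteristics of the preview picture.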
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements each process of the above shooting method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to these embodiments, which are illustrative rather than restrictive; it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A shooting method applied to an electronic device, wherein the electronic device comprises at least two under-screen cameras, and the method is characterized by comprising the following steps:
determining a target position of a visual focus of a user on a screen;
determining a target under-screen camera from among the under-screen cameras according to the target position;
shooting through the target under-screen camera to obtain a first image;
the step of determining a target under-screen camera from each under-screen camera according to the target position includes:
identifying scene characteristics in a preview picture acquired by any one of the off-screen cameras;
and under the condition that the target position meets a first preset condition and the scene characteristics meet a second preset condition, determining a target under-screen camera corresponding to the first preset condition and the second preset condition from each under-screen camera.
2. The method of claim 1, wherein the step of determining a target under-screen camera from among the under-screen cameras according to the target position comprises:
and under the condition that the target position does not match the screen positions corresponding to the respective under-screen cameras, determining an under-screen camera whose distance from the target position does not exceed a preset distance as the target under-screen camera.
3. The method of claim 1, wherein the number of target under-screen cameras is N, the number of first images is N, and N is an integer greater than 1;
the step of shooting through the target under-screen camera to obtain a first image comprises:
and shooting through the N target under-screen cameras respectively to obtain the N first images.
4. The method of claim 3, wherein after the step of shooting through the N target under-screen cameras respectively to obtain the N first images, the method further comprises:
under the condition that the target position does not match the screen positions corresponding to the respective under-screen cameras, determining relative mapping information between the target position and each of the N target under-screen cameras;
and synthesizing the N first images into a second image according to the N pieces of relative mapping information.
5. The method of claim 1, wherein the step of determining a target position of a visual focus of a user on a screen comprises:
receiving an input to a first position on the screen;
in response to the input, determining the first position as the target position of the visual focus of the user on the screen.
6. An electronic device, wherein the electronic device comprises at least two under-screen cameras, the electronic device further comprising:
the first determining module is used for determining a target position of a visual focus of a user on a screen;
the second determining module is used for determining a target under-screen camera from among the under-screen cameras according to the target position;
the shooting module is used for shooting through the target under-screen camera to obtain a first image;
the second determining module includes:
the recognition submodule is used for recognizing scene characteristics in a preview picture acquired by any one of the under-screen cameras;
and the first determining submodule is used for determining, from among the under-screen cameras, a target under-screen camera corresponding to the first preset condition and the second preset condition under the condition that the target position meets the first preset condition and the scene characteristics meet the second preset condition.
7. The electronic device of claim 6, wherein the second determining module comprises:
and the second determining submodule is used for determining, under the condition that the target position does not match the screen positions corresponding to the respective under-screen cameras, an under-screen camera for which the distance between the target position and the screen position corresponding to that under-screen camera does not exceed a preset distance as the target under-screen camera.
8. The electronic device of claim 6, wherein the number of target under-screen cameras is N, the number of first images is N, and N is an integer greater than 1;
the photographing module includes:
and the shooting submodule is used for shooting through the N target under-screen cameras respectively to obtain the N first images.
9. The electronic device of claim 8, further comprising:
the third determining module is used for determining relative mapping information between the target position and each of the N target under-screen cameras under the condition that the target position does not match the screen positions corresponding to the respective under-screen cameras;
and the synthesis module is used for synthesizing the N first images into a second image according to the N pieces of relative mapping information.
10. The electronic device of claim 6, wherein the first determining module comprises:
a receiving submodule, used for receiving an input to a first position on the screen;
a third determining submodule, used for determining, in response to the input, the first position as the target position of the visual focus of the user on the screen.
11. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the shooting method according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the shooting method according to any one of claims 1 to 5.
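Purely as an illustration of the multi-camera case recited in claims 3-4 and 8-9 above, the following Java sketch combines N first images into a second image using per-camera relative mapping information. The weighting scheme (inverse distance from each camera to the target position) and all type names are assumptions made for this sketch, not the claimed synthesis method.

```java
// Illustrative sketch only; the synthesis strategy and type names are assumptions.
public class ImageSynthesisSketch {

    /** A grayscale "first image" captured by one target under-screen camera. */
    static class GrayImage {
        final double[][] pixels;
        GrayImage(double[][] pixels) { this.pixels = pixels; }
    }

    /**
     * Synthesizes N first images into a second image, weighting each image by how close its
     * camera lies to the target position (one simple reading of "relative mapping information").
     */
    static GrayImage synthesize(GrayImage[] firstImages, double[] cameraDistancesToTarget) {
        int h = firstImages[0].pixels.length;
        int w = firstImages[0].pixels[0].length;
        double[] weights = new double[firstImages.length];
        double weightSum = 0;
        for (int i = 0; i < firstImages.length; i++) {
            weights[i] = 1.0 / (1.0 + cameraDistancesToTarget[i]);  // nearer camera -> larger weight
            weightSum += weights[i];
        }
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double v = 0;
                for (int i = 0; i < firstImages.length; i++) {
                    v += weights[i] * firstImages[i].pixels[y][x];
                }
                out[y][x] = v / weightSum;                          // normalized weighted blend
            }
        }
        return new GrayImage(out);
    }

    public static void main(String[] args) {
        GrayImage a = new GrayImage(new double[][] { { 10, 20 }, { 30, 40 } });
        GrayImage b = new GrayImage(new double[][] { { 50, 60 }, { 70, 80 } });
        GrayImage second = synthesize(new GrayImage[] { a, b }, new double[] { 100.0, 300.0 });
        System.out.printf("second[0][0] = %.1f%n", second.pixels[0][0]);
    }
}
```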
CN201911054794.9A 2019-10-31 2019-10-31 Shooting method and electronic equipment Active CN110809115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911054794.9A CN110809115B (en) 2019-10-31 2019-10-31 Shooting method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110809115A (en) 2020-02-18
CN110809115B (en) 2021-04-13

Family

ID=69489884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911054794.9A Active CN110809115B (en) 2019-10-31 2019-10-31 Shooting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110809115B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111314610B (en) * 2020-02-26 2022-03-11 维沃移动通信有限公司 Control method and electronic equipment
CN111385415B (en) * 2020-03-10 2022-03-04 维沃移动通信有限公司 Shooting method and electronic equipment
CN111432155B (en) * 2020-03-30 2021-06-04 维沃移动通信有限公司 Video call method, electronic device and computer-readable storage medium
CN111917977A (en) * 2020-07-21 2020-11-10 珠海格力电器股份有限公司 Camera switching method and device applied to intelligent terminal, electronic equipment and storage medium
CN114071002B (en) * 2020-08-04 2023-01-31 珠海格力电器股份有限公司 Photographing method and device, storage medium and terminal equipment
CN114079766B (en) * 2020-08-10 2023-08-11 珠海格力电器股份有限公司 Under-screen camera shielding prompting method, storage medium and terminal equipment
CN114374815B (en) * 2020-10-15 2023-04-11 北京字节跳动网络技术有限公司 Image acquisition method, device, terminal and storage medium
CN113329172B (en) * 2021-05-11 2023-04-07 维沃移动通信(杭州)有限公司 Shooting method and device and electronic equipment
CN114007120A (en) * 2021-10-29 2022-02-01 海信视像科技股份有限公司 Method for shooting image by camera and display equipment
CN114827465A (en) * 2022-04-19 2022-07-29 京东方科技集团股份有限公司 Image acquisition method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018137627A (en) * 2017-02-22 2018-08-30 キヤノン株式会社 Display device, control method thereof, and program
CN208046672U (en) * 2018-02-12 2018-11-02 中兴通讯股份有限公司 A kind of electric terminal
CN108881530A (en) * 2018-06-04 2018-11-23 Oppo广东移动通信有限公司 Electronic device
CN208386618U (en) * 2018-05-16 2019-01-15 Oppo广东移动通信有限公司 Electronic device
CN109618027A (en) * 2018-12-03 2019-04-12 魏贞民 A kind of multi-screen splicing combined method of mobile phone
CN110493523A (en) * 2019-08-27 2019-11-22 Oppo广东移动通信有限公司 Image display method, device, terminal and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747183B (en) * 2014-01-15 2017-02-15 北京百纳威尔科技有限公司 Mobile phone shooting focusing method
CN105120178A (en) * 2015-09-21 2015-12-02 宇龙计算机通信科技(深圳)有限公司 Focusing shooting method and system for a terminal with multiple cameras, and mobile terminal

Also Published As

Publication number Publication date
CN110809115A (en) 2020-02-18

Similar Documents

Publication Publication Date Title
CN110809115B (en) Shooting method and electronic equipment
CN109639970B (en) Shooting method and terminal equipment
CN109361865B (en) Shooting method and terminal
CN111083380B (en) Video processing method, electronic equipment and storage medium
CN111541845B (en) Image processing method and device and electronic equipment
CN108495029B (en) Photographing method and mobile terminal
CN108712603B (en) Image processing method and mobile terminal
CN109361867B (en) Filter processing method and mobile terminal
CN108924412B (en) Shooting method and terminal equipment
US11778304B2 (en) Shooting method and terminal
CN111031253B (en) Shooting method and electronic equipment
CN109819166B (en) Image processing method and electronic equipment
CN109474786A (en) A kind of preview image generation method and terminal
JP2023512966A (en) Image processing method, electronic device and computer readable storage medium
CN108924422B (en) Panoramic photographing method and mobile terminal
CN111464746B (en) Photographing method and electronic equipment
CN108174110B (en) Photographing method and flexible screen terminal
CN107807488B (en) Camera assembly, aperture adjusting method and mobile terminal
CN111246111B (en) Photographing method, electronic device, and medium
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN111010508B (en) Shooting method and electronic equipment
CN110908517B (en) Image editing method, image editing device, electronic equipment and medium
CN110086998B (en) Shooting method and terminal
CN110913133B (en) Shooting method and electronic equipment
CN111416948A (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant