CN113747050B - Shooting method and equipment - Google Patents

Shooting method and equipment

Info

Publication number
CN113747050B
CN113747050B (application CN202011044018.3A)
Authority
CN
China
Prior art keywords
image
focal length
target subject
size
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011044018.3A
Other languages
Chinese (zh)
Other versions
CN113747050A (en)
Inventor
肖斌
朱聪超
胡斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2021/078543 (published as WO2022062318A1)
Publication of CN113747050A
Application granted
Publication of CN113747050B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/88 Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the application provide a shooting method and a device, relating to the field of electronic technology. When the electronic device moves away from or toward a target subject, the focal length of the camera can be adjusted automatically so that the imaging size of the target subject on the video image remains substantially unchanged, thereby realizing the Hitchcock zoom without auxiliary instruments such as a slide rail or manual zooming. This reduces the operation difficulty for the user and improves the shooting experience. The scheme is as follows: the electronic device displays a first recorded image on a shooting interface, where the first recorded image is obtained from a first original image and the first original image is captured by a zoom camera using a first focal length value; a second focal length value corresponding to a second original image to be acquired is determined according to a first size of the target subject image on the first original image; and a second recorded image is displayed on the shooting interface, where the second recorded image is obtained from the second original image and the second original image is captured by the zoom camera using the second focal length value. The embodiments of the application are used for video recording.

Description

Shooting method and equipment
The present application claims priority to Chinese patent application No. 202010480536.3, entitled "A zoom method and apparatus for distinguishing a subject person from the background", filed with the National Intellectual Property Administration on May 30, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiment of the application relates to the technical field of electronics, in particular to a shooting method and equipment.
Background
With the rapid development of shooting technology, more and more users are no longer satisfied with simple video recording and want to shoot more personalized, richer, and more sophisticated videos.
The Hitchcock zoom (also called dolly zoom or sliding zoom) is a special video shooting technique in which the target subject appears to keep the same size on the image throughout the recording, while the background appears to move away from or toward the target subject. This gives the viewer a visual impact of the background space being compressed or expanded, producing a distinctive video effect.
At present, a Hitchcock zoom video can be shot by moving a professional camera away from or toward the target subject on a slide rail while the user adjusts the focal length of the camera to zoom. This approach requires additional auxiliary instruments such as a slide rail, which are not easy to carry and are costly; it also requires the user to operate the professional camera and zoom manually, which makes the operation cumbersome and difficult.
Disclosure of Invention
The embodiments of the application provide a shooting method and a device. During the recording of a Hitchcock zoom video, when the user holds the electronic device and moves away from or toward the target subject, the electronic device can automatically adjust the focal length of the camera according to the imaging size of the target subject, so that the imaging size of the target subject on the video image remains substantially unchanged. No additional auxiliary instruments such as a slide rail or manual zooming by the user are needed, which facilitates the user's operation, reduces the operation difficulty, and improves the shooting experience.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in one aspect, an embodiment of the present application provides a shooting method, which is applicable to an electronic device including a zoom camera. The method comprises the following steps: the electronic device displays a first recorded image on a shooting interface, where the first recorded image comprises a target subject image, the first recorded image is obtained from a first original image, and the first original image is captured by the zoom camera using a first focal length value. The electronic device determines, according to a first size of the target subject image on the first original image, a second focal length value corresponding to a second original image to be acquired. The electronic device displays a second recorded image on the shooting interface, where the second recorded image comprises the target subject image, the second recorded image is obtained from the second original image, and the second original image is captured by the zoom camera using the second focal length value.
In this scheme, the electronic device can determine the focal length value to be used by the zoom camera to acquire the subsequent original image according to the size of the target subject image on the current original image, so that the size of the target subject image remains substantially unchanged across different original images and, therefore, across the different video images obtained from those original images, thereby realizing the Hitchcock zoom.
That is to say, in a video recording scene, when the user holds the electronic device and moves away from or toward the target subject, the electronic device can automatically adjust the focal length value of the zoom camera according to the imaging size of the target subject, so that the size of the target subject on the video image remains substantially unchanged. No additional auxiliary instruments such as a slide rail or manual zooming by the user are needed, so the user can conveniently shoot a Hitchcock zoom video, the operation difficulty is reduced, and the shooting experience is improved.
Moreover, the electronic device realizes the Hitchcock zoom through optical zoom rather than digital zoom based on cropping and scaling, so the recorded images and video images obtained during shooting have high resolution and definition, giving the user a better shooting experience.
In one possible design, after the electronic device determines, according to a first size of an image of a target subject on the first original image, a second focal length value corresponding to a second original image to be acquired, the method further includes: and the electronic equipment updates the second focal length value into a target focal length value, the target focal length value is obtained according to the focal length value corresponding to the historical original image, and the second original image is obtained by shooting through the zoom camera by using the updated second focal length value.
That is, the electronic device may update the second focal length value in combination with the focal length values corresponding to historical original images, thereby performing focal length smoothing. The electronic device then acquires the second original image using the updated second focal length value.
In another possible design, a difference between the target focal length value and a first focal length value corresponding to the first original image is less than or equal to a first preset value, and a difference between the target focal length value and a second focal length value is less than or equal to a second preset value.
In this way, the smoothed focal length values corresponding to different original images change more gently during shooting, and the focal length does not swing back and forth between increasing and decreasing. The size of the target subject image on the original images, the recorded images, and the video images obtained with the smoothed focal length values changes smoothly and remains substantially unchanged, so the recorded picture transitions smoothly throughout shooting without jumps or stutters, improving the user experience and the shooting effect. This also prevents the imaging size of background objects from frequently shrinking and growing, and the background space from frequently alternating between compression and expansion, thereby providing the user with a better shooting experience.
In another possible design, the target focal length value is an average value of focal length values corresponding to multiple frames of historical original images.
In this way, the focal length values updated according to the target focal length values corresponding to different original images change smoothly during shooting, and the size of the target subject image on the original images, the recorded images, and the video images obtained with the smoothed focal length values also changes smoothly and remains substantially unchanged.
In another possible design, the target focal length value is obtained by performing curve fitting on the focal length values corresponding to the multiple frames of historical original images.
In this way, the focal length values updated according to the target focal length values corresponding to different original images change smoothly during shooting, and the size of the target subject image on the original images, the recorded images, and the video images obtained with the smoothed focal length values also changes smoothly and remains substantially unchanged.
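For illustration only, the focal length smoothing described in the designs above can be sketched in a few lines of Python. This is a minimal sketch under assumptions not stated in the patent (a simple moving average over the last few focal length values and example threshold values); all function and variable names are hypothetical.

```python
from collections import deque

def smooth_focal_length(f2, history, max_step_from_f1=0.2, max_step_from_f2=0.5):
    """Smooth the newly computed focal length value f2 using historical values.

    A minimal sketch: the target focal length value is the average of the focal
    length values of recent historical original images, clamped so that it stays
    within a first preset value of the previous focal length f1 and within a
    second preset value of f2. All names and thresholds are illustrative
    assumptions, not values from the patent.
    """
    f1 = history[-1]                      # focal length of the previous original image
    target = sum(history) / len(history)  # average over historical focal length values
    target = max(f1 - max_step_from_f1, min(f1 + max_step_from_f1, target))
    target = max(f2 - max_step_from_f2, min(f2 + max_step_from_f2, target))
    return target

# Usage sketch: keep the last few focal length values and smooth each new one.
history = deque([2.0, 2.1, 2.2], maxlen=5)
f2_smoothed = smooth_focal_length(2.6, history)
history.append(f2_smoothed)
print(f2_smoothed)
```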
In another possible design, the second position coordinates of the target subject image on the second captured image are obtained from the initial position coordinates of the target subject image on the second original image and the position coordinates of the target subject image on the history captured image.
That is, the electronic device may update the position of the target subject image on the second original image in conjunction with the position of the target subject image on the history-captured image, thereby performing position smoothing. And the electronic equipment generates a second recorded image according to the second original image after the position of the target main body image is adjusted.
In another possible design, the difference between the second position coordinates and the first position coordinates of the target subject image on the first captured image is less than or equal to a third preset value, and the difference between the second position coordinates and the initial position coordinates is less than or equal to a fourth preset value.
In this way, the adjusted positions of the target subject image corresponding to different recorded images change smoothly during shooting, so the position of the target subject image on the video images also changes smoothly, avoiding sudden changes in the position of the target subject image that would affect the user's visual experience and the shooting effect.
In another possible design, the second position coordinate is an average value of position coordinates of the target subject image on the plurality of frames of history captured images.
In this way, the adjusted positions of the target subject image corresponding to different recorded images change smoothly during shooting, so the position of the target subject image on the video images also changes smoothly, avoiding sudden changes in the position of the target subject image that would affect the user's visual experience and the shooting effect.
In another possible design, the second position coordinates are obtained by curve-fitting the position coordinates of the target subject image on the plurality of frames of history-captured images.
In this way, the adjusted positions of the target subject image corresponding to different recorded images change smoothly during shooting, so the position of the target subject image on the video images also changes smoothly, avoiding sudden changes in the position of the target subject image that would affect the user's visual experience and the shooting effect.
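Similarly, the position smoothing described above can be sketched as a curve fit over the position coordinates on historical recorded images. The snippet below is a minimal illustration assuming a low-order polynomial fit (the patent mentions curve fitting, e.g. the Bézier fits of FIG. 15C, but does not prescribe this exact form); the names and the fit order are assumptions.

```python
import numpy as np

def smooth_position(history_xy, init_xy, degree=2):
    """Smooth the position coordinates of the target subject image.

    A minimal sketch: fit a low-order curve through the coordinates of the
    target subject image on recent recorded images plus the initial coordinates
    on the new original image, and take the fitted value at the newest frame as
    the second position coordinates. The polynomial form and names are
    illustrative assumptions.
    """
    xs = [p[0] for p in history_xy] + [init_xy[0]]
    ys = [p[1] for p in history_xy] + [init_xy[1]]
    t = np.arange(len(xs), dtype=float)            # frame index as curve parameter
    x_fit = np.polyval(np.polyfit(t, xs, degree), t[-1])
    y_fit = np.polyval(np.polyfit(t, ys, degree), t[-1])
    return float(x_fit), float(y_fit)

# Usage sketch: subject-centre coordinates on the last few recorded images.
history_xy = [(640, 360), (642, 358), (645, 361), (647, 359)]
print(smooth_position(history_xy, init_xy=(660, 352)))
```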
In another possible design, the method further includes: in a preview state of a target shooting mode, the electronic device detects a first preset operation by which the user indicates the target subject based on the preview image; and the target subject is determined in response to the first preset operation.
That is, the user may specify the target subject.
In another possible design, the target subject is a person.
In another possible design, the electronic device determines, according to a first size of an image of a target subject on a first original image, a second focal length value corresponding to a second original image to be acquired, including: and the electronic equipment determines a second focal length value corresponding to a second original image to be acquired according to the first size and the size reference value of the target main body image on the first original image.
In this way, the electronic device can determine the focal length value to be used by the zoom camera to acquire the subsequent original image according to the size of the target subject image on the current original image and the size reference value, so that the size of the target subject image remains substantially unchanged across different original images and across the different video images, thereby realizing the Hitchcock zoom.
In another possible design, the electronic device determines, according to a first size of an image of a target subject on a first original image, a second focal length value corresponding to a second original image to be acquired, including: the electronic equipment determines a second focal length value corresponding to a second original image to be acquired by adopting a first formula according to the first size, the first focal length value and the size reference value of the target subject image on the first original image, wherein the second size of the target subject image on the second original image corresponding to the second focal length value is equal to the size reference value;
f2 = S3 × f1 / S2 (formula one)
Wherein S3 denotes a size reference value, S2 denotes a first size, f1 denotes a first focal length value, and f2 denotes a second focal length value.
In this way, the electronic device can adjust the focal length value used to acquire the subsequent original image according to the size of the target subject image on the current original image, the size reference value, and the current focal length value. Because the size of the target subject image on a subsequent original image acquired by the zoom camera with the adjusted focal length value equals the size reference value, the size of the target subject image on subsequent original images remains substantially consistent with the size reference value, so the size of the target subject image on adjacent original images and adjacent video images remains substantially unchanged, thereby realizing the Hitchcock zoom.
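As a concrete illustration of formula one, the following sketch computes the second focal length value from the first size, the first focal length value, and the size reference value. The numeric example and the units are assumptions for illustration only.

```python
def next_focal_length(first_size, first_focal_length, size_reference):
    """Formula one: f2 = S3 * f1 / S2.

    first_size (S2): size of the target subject image on the first original image.
    first_focal_length (f1): focal length value used for the first original image.
    size_reference (S3): size reference value the subject image should keep.
    The units here (pixels for sizes, millimetres for the focal length) are an
    assumption for illustration; the patent only requires consistent measures.
    """
    return size_reference * first_focal_length / first_size

# If the subject image shrank from 400 px to 320 px because the user stepped back,
# the focal length is increased proportionally to restore the reference size.
print(next_focal_length(first_size=320, first_focal_length=27.0, size_reference=400))  # 33.75
```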
In another possible design, the method further includes: the electronic device prompts the user on a preview interface to set the size reference value, so that the user can set the size reference value of the target subject according to the prompt.
In another possible design, on the preview interface, the size of the target subject image on the preview image changes as the electronic device moves forward or backward, and the size reference value is the size of the target subject image on the preview image.
In this way, the user can move the handheld electronic device forward or backward to change the size of the target subject image on the preview image, thereby setting or adjusting the size of the size reference value.
In another possible design, on the preview interface, the size of the target subject image on the preview image is changed in response to a user operation to adjust the zoom magnification, and the size reference value is the size of the target subject image on the preview image.
In this way, the user can adjust the zoom magnification of the preview image to change the size of the target subject image on the preview image, thereby setting or adjusting the size of the size reference value.
In another possible design, when the first focal length value is a focal length value corresponding to a first frame of original image acquired by the electronic device after starting video recording, the first focal length value is a preset focal length value or a focal length value set by a user.
In another aspect, an embodiment of the present application provides a shooting device, which is included in an electronic device. The device has the function of realizing the behavior of the electronic equipment in any one of the above aspects and possible designs, so that the electronic equipment executes the shooting method executed by the electronic equipment in any one of the possible designs of the above aspects. The function can be realized by hardware, and can also be realized by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the above-described functions. For example, the apparatus may comprise a display unit, a determination unit, a processing unit, and the like.
In another aspect, an embodiment of the present application provides an electronic device, including: a zoom camera for acquiring images; a screen for displaying an interface; one or more processors; and a memory in which code is stored. When the code is executed by the electronic device, the electronic device is caused to perform the shooting method performed by the electronic device in any of the possible designs of the above aspects.
In another aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a memory in which code is stored. When the code is executed by the electronic device, the electronic device is caused to perform the shooting method performed by the electronic device in any of the possible designs of the above aspects.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, which includes computer instructions that, when run on an electronic device, cause the electronic device to perform the shooting method in any one of the possible designs of the foregoing aspects.
In still another aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the shooting method performed by the electronic device in any one of the possible designs of the above aspect.
In another aspect, an embodiment of the present application provides a chip system, where the chip system is applied to an electronic device. The chip system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is used for receiving signals from a memory of the electronic equipment and sending the signals to the processor, and the signals comprise computer instructions stored in the memory; the computer instructions, when executed by the processor, cause the electronic device to perform the method of capturing in any of the possible designs of the above aspects.
For the advantageous effects of the other aspects, reference may be made to the description of the advantageous effects of the method aspects, which is not repeated herein.
Drawings
FIG. 1A is a schematic illustration of an imaging system provided by an embodiment of the present application;
FIG. 1B is a schematic illustration of another imaging provided by an embodiment of the present application;
fig. 1C is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a shooting method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a set of interfaces provided by an embodiment of the present application;
FIG. 4 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 5 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 6 is another set of schematic interfaces provided by embodiments of the present application;
FIG. 7 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 8 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 9 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 10 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 11 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 12 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 13 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 14 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 15A is a schematic diagram illustrating a comparison between a set of original images and a recorded image according to an embodiment of the present disclosure;
FIG. 15B is a schematic view of another imaging provided by embodiments of the present application;
FIG. 15C is a schematic diagram of a set of Bezier curve fits provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of a set of video images provided by an embodiment of the present application;
fig. 17 is a flowchart of another shooting method provided in the embodiment of the present application;
FIG. 18 is another set of schematic interfaces provided in accordance with embodiments of the present application;
FIG. 19 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 20 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 21 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 22A is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 22B is a schematic view of another set of interfaces provided by an embodiment of the present application;
FIG. 23 is a schematic view of another set of interfaces provided by embodiments of the present application;
fig. 24 is a flowchart of another shooting method according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The embodiment of the application provides a shooting method that can be applied to an electronic device. During the recording of a Hitchcock zoom video, when the user holds the electronic device and moves away from or toward the target subject, the electronic device can automatically adjust the focal length of the camera (i.e., perform optical zoom) according to the imaging size of the target subject, so that the size of the target subject in the video image remains substantially unchanged, realizing the Hitchcock zoom and giving the user the visual experience of the background objects moving away from or toward the target subject and the background space being compressed or expanded. The target subject is the object to be shot that the user is focusing on or interested in, and the background objects are the other objects within the shooting range besides the target subject.
With this shooting method, no additional auxiliary instruments such as a slide rail or manual zooming by the user are needed, so the user can conveniently shoot a Hitchcock zoom video, the operation difficulty and complexity are reduced, and the shooting experience is improved.
It can be understood that, without digital zooming, the size of an object on the image captured by the electronic device can be changed by changing the focal length of the camera or the distance between the user (i.e., the shooting user) and the object to be shot. In a Hitchcock zoom scene, when the distance between the user and the target subject changes, the focal length of the camera must change at the same time if the size of the target subject on the captured image is to remain unchanged.
Referring to the imaging schematic shown in fig. 1A, S1 denotes the actual size of the target subject and is a fixed value; S2 denotes the imaging size of the target subject, namely the size of the target subject image on the captured image; d1 denotes the distance between the user and the target subject, which is approximately equal to the distance between the camera and the target subject, so d1 can also represent the object distance between the target subject and the camera; f denotes the focal length of the camera. As shown in formulas 1 and 2, S1 is a fixed value, so if S2 (i.e., S2/S1) is to be kept constant, then when the user moves away from the target subject and d1 increases, the focal length f should also increase; when the user approaches the target subject and d1 decreases, the focal length f should also decrease.
S2/S1 = f/(f + d1) (formula 1)
S2/S1 = f/d1 (formula 2)
Wherein, since d1 is usually much larger than f, the above formula 1 can be simplified to formula 2. For example, in a rear shooting scene, when the user moves backward, the distance d1 between the user and the target subject is increased, and if the imaging size of the target subject is not changed, the electronic device may increase the focal length f of the camera; when the user moves forward, the distance d1 between the user and the target subject is reduced, and if the imaging size of the target subject is not changed, the electronic device can reduce the focal length f of the camera.
As the focal length changes, the field of view (FOV) of the camera changes, and so does the range of the scene on the captured image. As shown in formula 3, L represents the size of the photosensitive sensor of the camera and is a fixed value; if the focal length f of the camera is larger, the FOV is smaller and the scene range of the captured image is smaller; if the focal length f is smaller, the FOV is larger and the scene range of the captured image is larger.
FOV = 2 × arctan((L/2)/f) (formula 3)
For example, in a rear shooting scene, when the user moves backward, the distance d1 between the user and the target subject increases, and if the imaging size of the target subject is not changed, the electronic device may increase the focal length f of the camera, decrease the field angle of the camera, decrease the scene range of the shot image, and decrease the background objects on the shot image. When the user moves forward, the distance d1 between the user and the target subject is reduced, and if the imaging size of the target subject is not changed, the focal length f of the camera can be reduced by the electronic device, the field angle of the camera is increased, the scene range of the shot image is increased, and background objects on the shot image are increased.
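For readers who prefer numbers, the following sketch evaluates formulas 2 and 3 for an assumed sensor size and imaging ratio to show how the required focal length and the resulting FOV change with the subject distance d1. All numeric values are illustrative assumptions, not values from the patent.

```python
import math

def focal_length_for_constant_size(d1, s2_over_s1):
    """Formula 2: S2/S1 = f/d1, so f = d1 * (S2/S1)."""
    return d1 * s2_over_s1

def field_of_view(f, sensor_size):
    """Formula 3: FOV = 2 * arctan((L/2)/f), returned here in degrees."""
    return math.degrees(2 * math.atan((sensor_size / 2) / f))

# Illustrative numbers only: a 6.4 mm sensor and an imaging ratio S2/S1 of 0.01.
# Doubling the subject distance doubles the focal length needed to keep S2
# constant and roughly halves the field of view.
for d1 in (2000, 4000):                            # object distance in mm
    f = focal_length_for_constant_size(d1, 0.01)
    print(d1, round(f, 1), round(field_of_view(f, 6.4), 1))
```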
If the distance d1 between the user and the target subject is different, the imaging size of a background object (that is, the size of the background object image on the captured image) is also different. Referring to the imaging diagram shown in fig. 1B, S3 represents the actual size of the background object 1 and is a fixed value; d2 represents the depth difference between the target subject and the background object and is a fixed value; S4 represents the imaged size of the background object 1. As shown in formulas 4 and 5, if S2/S1 is to remain unchanged, then as the distance d1 between the user and the target subject increases, the focal length f of the camera increases and the background object image on the captured image becomes larger, giving the user the visual experience of the background space expanding outward; as d1 decreases, the focal length f decreases and the background object image becomes smaller, giving the user the visual experience of the background space being compressed inward.
S4/S3 = f/(f + d1 + d2) = d1 × (S2/S1)/(d1 + d2) = (1 - d2/(d1 + d2)) × (S2/S1) (formula 4)
S4 = S3 × (1 - d2/(d1 + d2)) × (S2/S1) (formula 5)
Further, if the distance d1 between the user and the target subject is different, the imaging position of the background object is also different. Referring to the imaging diagram shown in fig. 1B, S5 represents a vertical distance between the central point of the background object 1 and the optical axis, which is a fixed value; d2 is also a fixed value. As shown in formulas 6 and 7, if S2/S1 is to be kept constant, when the distance d1 between the user and the target subject is larger, the distance between the imaging center point of the background object 1 and the optical center is also larger, that is, the imaging center point of the background object 1 is farther and farther from the image center, so as to provide the user with a visual experience of expanding the background space outward; when the distance d1 between the user and the target subject is smaller, the distance between the imaging center point of the background object 1 and the optical center is also smaller, that is, the imaging center point of the background object 1 is closer to the image center, so that the user is provided with a visual experience of inward compression of a background space.
S6/S5 = f/(f + d1 + d2) (formula 6)
S6 = S5 × (1 - d2/(d1 + d2)) × (S2/S1) (formula 7)
In addition, when the change in the distance d1 between the user and the target subject is the same, the changes (i.e., the magnitude or degree of change) in the imaging size and imaging position of background objects at different depths are different. As shown in formulas 8 and 9, for the same change in d1, a larger d2 gives smaller S4 and S6; that is, the variation in the imaging size and imaging position of background object 2, which has a larger depth value, is smaller than that of background object 1, which has a smaller depth value. In other words, the closer the background object is to the photographer, the faster its imaging size and imaging position change.
S4 = S3 × d1 × (S2/S1)/(d1 + d2) (formula 8)
S6 = S5 × d1 × (S2/S1)/(d1 + d2) (formula 9)
For example, in a rear-camera shooting scene, when the user moves backward, the distance d1 between the user and the target subject increases. If the imaging size of the target subject is to remain unchanged, the electronic device can increase the focal length f of the camera, which decreases the field angle of the camera and the scene range of the captured image, so fewer background objects appear on the captured image while the image of each background object becomes larger and its imaging center point moves farther from the image center, providing the user with the visual experience of the background space expanding outward. Moreover, compared with a background object at a smaller depth, the imaging size of a background object at a larger depth increases more and faster, and its imaging position moves away from the image center by a larger amount and at a faster rate.
When the user moves forward, the distance d1 between the user and the target subject decreases. If the imaging size of the target subject is to remain unchanged, the electronic device can decrease the focal length f of the camera, which increases the field angle of the camera and the scene range of the captured image, so more background objects appear on the captured image while the image of each background object becomes smaller and its imaging center point moves closer to the image center, providing the user with the visual experience of the background space being compressed inward. Moreover, compared with a background object at a smaller depth, the imaging size of a background object at a larger depth decreases more and faster, and its imaging position moves toward the image center by a larger amount and at a faster rate.
It should be noted that d2 may be a positive value, that is, the background object may be located behind the target subject, and a distance between the background object and the user is greater than a distance between the target subject and the user; the above d2 may also be a negative value, that is, the background object may be located in front of the target subject, and the distance between the background object and the user is smaller than the distance between the target subject and the user. The image of the background object in front of the target subject has a changing tendency which is identical to the changing tendency of the image of the background object behind the target subject, but the changing speed is different, and the closer the background object is to the user, the faster the changing speed of the image is.
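The effect of formulas 8 and 9 on the background can likewise be illustrated numerically. The sketch below assumes arbitrary values for S3, S5, d2, and S2/S1, and shows that, with S2/S1 held constant, the imaged size and off-center distance of a background object grow as d1 grows.

```python
def background_size(s3, d1, d2, s2_over_s1):
    """Formula 8: S4 = S3 * d1 * (S2/S1) / (d1 + d2)."""
    return s3 * d1 * s2_over_s1 / (d1 + d2)

def background_offset(s5, d1, d2, s2_over_s1):
    """Formula 9: S6 = S5 * d1 * (S2/S1) / (d1 + d2)."""
    return s5 * d1 * s2_over_s1 / (d1 + d2)

# Illustrative numbers only: as the user backs away (d1 grows) while S2/S1 is
# held constant, the imaged size and off-centre distance of a background object
# both grow, i.e. the background appears to expand outwards.
for d1 in (2000, 3000, 4000):                      # subject distance in mm
    s4 = background_size(s3=1000, d1=d1, d2=3000, s2_over_s1=0.01)
    s6 = background_offset(s5=500, d1=d1, d2=3000, s2_over_s1=0.01)
    print(d1, round(s4, 2), round(s6, 2))
```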
For example, the electronic device according to the embodiment of the present application may be a mobile terminal such as a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or a professional camera, and the specific type of the electronic device is not limited in any way in the embodiment of the present application.
For example, fig. 1C shows a schematic structural diagram of the electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein, the different processing units may be independent devices or may be integrated in one or more processors.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
In embodiments of the present application, camera 193 may include cameras of multiple focal segments, with cameras of different focal segments having different equivalent focal lengths. For example, the equivalent focal length of a common main camera may be about 20mm, the equivalent focal length of a wide camera may be about 15mm, and the equivalent focal length of a tele camera may be about 100 mm. The camera with the smaller equivalent focal length is suitable for shooting a wider picture, and the camera with the larger equivalent focal length is suitable for shooting the details of a remote object.
In an embodiment of the present application, camera 193 comprises a zoom camera. The zoom camera can adjust the focal length within a certain range, so that an object has different imaging sizes on an image, and a field angle and a scene range with different widths are obtained. For example, the equivalent focal length of a zoom camera may vary within 10-100 mm. In the embodiment of the present application, the zoom camera may be configured to automatically adjust the focal length according to the size of the target subject on the image while the user holds the electronic device 100 away from/close to the target subject.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: video instance segmentation, image recognition, face recognition, speech recognition, text understanding, and the like.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
In the embodiment of the present application, during Hitchcock zoom video shooting, while the user holds the electronic device 100 and moves away from or toward the target subject, the processor 110 may, by executing the instructions stored in the internal memory 121, control the zoom camera to automatically adjust the focal length according to the imaging size of the target subject, so that the size of the target subject in the video image remains substantially unchanged.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music play, voice recording, voice commands, etc.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for identifying the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and the like.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, the electronic device 100 may utilize the distance sensor 180F to range to achieve fast focus.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting thereon or nearby. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In the embodiment of the application, the touch screen can be used for interface display during video recording; a processor 110 such as the NPU may perform video instance segmentation on the target subject while the device moves away from or toward it, so as to determine the size of the target subject on the image in real time. As the user moves away from or toward the target subject, the size of the target subject on the image changes; by running the instructions stored in the internal memory 121, the processor 110 can control the zoom camera to automatically adjust the focal length according to the imaging size of the target subject, so that the size of the target subject in the video image remains substantially unchanged, providing the user with the visual experience of background objects moving away from or toward the target subject and the background space expanding or compressing, thereby realizing the Hitchcock zoom.
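Putting the pieces together, one per-frame iteration of the control loop described above might look like the following sketch. It is illustrative only: segment_subject stands in for the NPU-based video instance segmentation, set_zoom stands in for the device's zoom-control interface (neither is an API defined by the patent), and the smoothing window is an assumed value.

```python
def hitchcock_zoom_step(raw_frame, size_reference, f_current, f_history,
                        segment_subject, set_zoom):
    """One per-frame iteration of the control loop described above.

    `segment_subject` stands in for NPU-based instance segmentation returning
    the subject's size on the raw frame, and `set_zoom` stands in for the
    device's zoom-control interface; neither is an API defined by the patent.
    """
    s2 = segment_subject(raw_frame)            # current size of the subject image
    f_next = size_reference * f_current / s2   # formula one: f2 = S3 * f1 / S2
    f_history.append(f_next)
    recent = f_history[-5:]                    # assumed 5-frame smoothing window
    f_smoothed = sum(recent) / len(recent)
    set_zoom(f_smoothed)                       # acquire the next original image with f2
    return f_smoothed
```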
The shooting method provided by the embodiment of the application will be explained below by taking an electronic device with a zoom camera as a mobile phone as an example. It can be understood that the electronic device may also be other devices such as a tablet computer or a watch with a zoom camera, and the embodiment of the present application does not limit the specific type of the electronic device.
Referring to fig. 2, a shooting method provided in an embodiment of the present application may include:
201. The mobile phone starts the Hitchcock shooting function and displays a preview image.
When a user wants to record a Hitchcock zoom video with the mobile phone, the user can start the shooting function of the mobile phone and enter the Hitchcock mode to record in this shooting mode; that is, the mobile phone starts the Hitchcock shooting function. It is understood that the shooting mode for recording a Hitchcock zoom video may be named differently, and the embodiment of the present application is not limited thereto.
For example, the mobile phone may start a camera application, or start another application with a photographing or video recording function (such as Douyin, or an AR application such as Cyberverse), so as to start the shooting function of the mobile phone, and then enter the Hitchcock mode.
For example, after detecting that the user clicks the camera icon 301 shown in (a) of fig. 3, the mobile phone starts the shooting function and enters the shooting mode shown in (b) of fig. 3. Illustratively, after detecting that the user clicks the control 302 shown in (b) of fig. 3, the mobile phone enters the Hitchcock mode and displays the preview interface shown in (c) of fig. 3 in the preview state. As another example, after detecting that the user clicks the control 303 shown in (b) of fig. 3, the mobile phone displays the interface shown in (d) of fig. 3; after detecting that the user clicks the control 304, the mobile phone enters the Hitchcock mode and displays the preview interface shown in (c) of fig. 3 in the preview state.
For another example, after detecting that the user draws an "xq" track on the touch screen in the screen-off mode, the mobile phone starts the shooting function, enters the Hitchcock mode, and displays a preview interface as shown in (c) of fig. 3 in the preview state.
For another example, after detecting the user's voice command to enter the Hitchcock mode, the mobile phone starts the shooting function, enters the Hitchcock mode, and displays a preview interface as shown in (c) of fig. 3 in the preview state.
It should be noted that the mobile phone may also enter the Hitchcock mode in response to other operations such as a user touch operation, a voice instruction, or a shortcut gesture, and the operation that triggers the mobile phone to enter the Hitchcock mode is not limited in the embodiment of the present application.
After the mobile phone starts the Hitchcock shooting function, an image can be acquired by the zoom camera with a focal length value 1, where the focal length value 1 may be a default initial focal length value (e.g., a focal length value corresponding to a zoom magnification of 1X) or an initial focal length value specified by the user.
For example, after detecting that the user clicks the control 401 shown in (a) of fig. 4, the mobile phone may display a setting interface as shown in (b) of fig. 4, and the user may set the initial focal length value of the zoom camera, that is, the focal length value 1, based on this setting page. As another example, in the Hitchcock mode, as shown in (c) of fig. 4, the preview interface of the mobile phone includes a plurality of selectable focal length values, and the mobile phone determines the initial focal length value, i.e., the focal length value 1, according to the user's selection.
202. The mobile phone determines the target subject in the preview state of the Hitchcock mode.
The target subject can be determined after the mobile phone enters the Hitchcock mode, so that the imaging size of the target subject remains substantially unchanged during video recording.
The target body may be an object, and the position of the target body may not move during the shooting process, or may move laterally at the same depth.
Alternatively, the target body may include a plurality of objects having the same depth, and the entirety of the plurality of objects may be the target body. In some embodiments, when the target subject includes multiple objects, the images of the multiple objects are connected or partially overlap. As can be seen from the foregoing description, in the video shooting process, the magnitude of the change in the size of the object image at different depths is different as the distance between the user and the target subject changes. Thus, it is difficult to simultaneously achieve substantially constant image sizes for objects of different depths as the distance between the user and the target subject varies. Therefore, to keep the size of the target subject image substantially unchanged, multiple objects in the target subject should have the same depth.
The target subject may be automatically determined by the mobile phone or specified by the user, and these two cases are described below separately.
(1) The mobile phone automatically determines the target subject, which may include one or more objects.
In some embodiments, the target subject is an object of a preset type. For example, the preset type of object is a person, an animal, a famous building, or a landmark. The mobile phone determines the object of the preset type as the target subject based on the preview image.
In other embodiments, the target subject is an object whose image on the preview image is located in the central region. The target subject that the user cares about typically faces the zoom camera, so its image on the preview image is usually located in the central region.
In other embodiments, the target subject is an object whose image on the preview image is close to the central region and whose area is larger than a preset threshold 1. The target subject that the user cares about typically faces the zoom camera and is closer to it, so its image on the preview image is close to the central region and its area is greater than the preset threshold 1.
In other embodiments, the target subject is a preset type of object whose image on the preview image is near the center region.
In other embodiments, the target subject is a preset type of object whose image on the preview image is near the center region and whose area is larger than a preset threshold.
In other embodiments, the target subject is the object of the preset type, among those whose images on the preview image are near the central region, that has the smallest depth. That is, when the objects of the preset type whose images are near the central region of the preview image include a plurality of objects at different depths, the target subject is the object with the smallest depth.
In some embodiments, the target subject of the mobile phone includes only one object by default.
It can be understood that there are various ways for the mobile phone to automatically determine the target subject, and this way is not particularly limited in the embodiments of the present application.
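To make the selection criteria above concrete, the following is a minimal Python sketch, not part of the patent: it assumes an upstream detector that reports, for each object, a type, mask area, centroid, and estimated depth. The names DetectedObject, pick_target_subject, and the thresholds are illustrative assumptions, not the phone's actual logic.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DetectedObject:
    obj_type: str              # e.g. "person", "animal", "building"
    area: int                  # pixels covered by the object's mask
    center: Tuple[float, float]  # (x, y) centroid on the preview image
    depth: float               # estimated distance to the camera, in meters

def pick_target_subject(objects: List[DetectedObject],
                        image_size: Tuple[int, int],
                        preset_types=("person", "animal"),
                        area_threshold: int = 5000,
                        center_radius_ratio: float = 0.25) -> Optional[DetectedObject]:
    """One possible combination of the criteria above: preset type,
    image near the central region, area above a threshold, and smallest
    depth among the remaining candidates."""
    w, h = image_size
    cx, cy = w / 2, h / 2
    max_dist = center_radius_ratio * min(w, h)
    candidates = [
        o for o in objects
        if o.obj_type in preset_types
        and o.area > area_threshold
        and ((o.center[0] - cx) ** 2 + (o.center[1] - cy) ** 2) ** 0.5 <= max_dist
    ]
    if not candidates:
        return None
    # Among qualifying objects, prefer the one closest to the camera.
    return min(candidates, key=lambda o: o.depth)
```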
In some embodiments, after the mobile phone determines the target subject, the target subject may be prompted to the user by displaying a prompt message or by voice broadcasting.
For example, the preset type is a person, and the mobile phone determines that the target subject is person 1, an object of the preset type whose image on the preview image is close to the central region. For example, referring to (a) of fig. 5, the mobile phone may frame person 1 with a box 501 to prompt the user that person 1 is the target subject.
For another example, the preset type is a person, and the mobile phone determines that the target subject is person 2 and person 3, objects of the preset type whose images on the preview image are close to the central region and have the same depth. For example, referring to (b) of fig. 5, the mobile phone may circle person 2 and person 3 with a circle 502 to prompt the user that person 2 and person 3 are the target subject.
As another example, the preset types include a person and an animal, and the mobile phone determines that the target subject is person 4 and animal 1, objects of the preset types whose images on the preview image are near the central region and have the same depth. For example, referring to (c) of fig. 5, the mobile phone may prompt the user that person 4 and animal 1 are the target subject by displaying a prompt message.
It can be understood that there are various ways for the mobile phone to prompt the target subject to the user, and this embodiment of the present application does not specifically limit this way.
In some embodiments, after the mobile phone automatically determines the target subject, the target subject may be modified in response to a user operation, such as switching, adding, or deleting the target subject.
For example, in the case shown in fig. 6 (a), the target subject automatically specified by the mobile phone is the person 1, and when the mobile phone detects an operation of clicking the person 5 on the preview image by the user, the target subject is modified from the person 1 to the person 5 as shown in fig. 6 (b).
For another example, in the case shown in fig. 7 (a), the target subject automatically determined by the mobile phone is person 1, and after the mobile phone detects an operation of the user dragging the frame to simultaneously frame person 1 and person 5, the target subject is modified from person 1 to person 1 and person 5 as shown in fig. 7 (b).
For another example, in the case shown in (a) of fig. 8, the target subjects automatically determined by the mobile phone are person 1 and person 5; after the mobile phone detects that the user clicks person 5, the target subject is changed from person 1 and person 5 to person 1, as shown in (b) of fig. 8.
For another example, the mobile phone first enters the target subject modification mode according to the instruction of the user, and then modifies the target subject in response to the operation of the user.
It is understood that there may be various ways for the user to modify the target subject, and the embodiment of the present application is not particularly limited to this way.
(2) The user specifies a target subject, the target subject including one or more objects.
After the mobile phone enters the Hitchcock mode, the target subject can be determined in response to a preset operation of the user on the preview interface. The preset operation is used to designate one or more objects as the target subject. The preset operation may be a touch operation, a voice instruction operation, or a gesture operation, which is not limited in the embodiments of the present application. For example, the touch operation may be a single click, a double click, a long press, a pressure press, an operation of circling an object, or the like.
Illustratively, on the preview interface shown in fig. 9 (a), after the mobile phone detects the operation of double-clicking the person 1 on the preview image by the user, the person 1 is determined as the target subject as shown in fig. 9 (b).
In other embodiments, the user may be prompted to specify the target subject after the mobile phone enters the Hitchcock mode. For example, referring to (a) of fig. 10, the mobile phone may display a prompt message: please specify the target subject so that the image size of the target subject remains substantially unchanged during shooting. Then, the mobile phone determines the target subject in response to the preset operation of the user on the preview interface. For example, when the mobile phone detects the operation of the user circling person 1 shown in (a) of fig. 10, person 1 is determined as the target subject, as shown in (b) of fig. 10. For another example, after the mobile phone detects a voice operation in which the user indicates that the person is the target subject, person 1 is determined as the target subject.
As another example, in the case where the target subject is an object of a preset type and the preset type is a person, referring to (a) of fig. 11, the mobile phone may display a prompt message: a person is detected; designate the person as the target subject so that the size of the target subject image remains substantially unchanged during shooting? Then, after the mobile phone detects the user clicking the "yes" control, the person is determined as the target subject, as shown in (b) of fig. 11.
In some embodiments, if the default target subject of the mobile phone includes only one object, then when the user designates a plurality of objects as the target subject, the mobile phone may prompt the user: please select only one object as the target subject.
Similar to the method for automatically determining the target subject by the mobile phone, after the target subject is determined by the mobile phone in response to the preset operation of the user, the target subject can be prompted to the user by displaying prompt information or voice broadcasting and the like. Also, the mobile phone may modify the target subject in response to an operation by the user, such as switching, adding, or deleting the target subject. And will not be described in detail herein.
203. The mobile phone determines the size reference value of the target subject image in the preview state of the Hitchcock mode.
In the preview state of the Hitchcock mode, the mobile phone can determine the size reference value of the target subject image. In this way, during video shooting, the mobile phone can automatically adjust the focal length based on the size reference value and the actual size of the target subject image, so that the image size of the target subject does not differ much from the size reference value, that is, the image size of the target subject remains substantially unchanged during shooting.
The size reference value describes a size value used as the reference size of the target subject image. It may be represented, for example, by the number of pixels included in the target subject image (or the area of the target subject image), or by the ratio of the number of pixels included in the target subject image to the number of pixels in the entire image.
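As a small illustration of the two representations just mentioned, the sketch below computes both from a binary subject mask; the function name subject_size_metric is an assumption introduced here, not terminology from the patent.

```python
import numpy as np

def subject_size_metric(subject_mask: np.ndarray, as_ratio: bool = False) -> float:
    """Size of the target subject image: either the number of pixels in its
    mask, or that count as a fraction of all pixels in the frame."""
    pixel_count = int(np.count_nonzero(subject_mask))
    if as_ratio:
        return pixel_count / subject_mask.size
    return float(pixel_count)
```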
In some embodiments, the size reference value of the target subject image may be user-set.
For example, in one possible implementation, in the preview state of the Hitchcock mode, the mobile phone may prompt the user to set the size reference value of the target subject image. For example, the user/target subject may change the size of the target subject image by moving forward/backward to change the distance between the zoom camera and the target subject, thereby setting the size reference value desired by the user. It should be noted that the mobile phone does not need to implement the Hitchcock zoom at this time, and does not adjust the focal length to keep the size of the target subject image substantially unchanged. When the user/target subject moves forward or backward, the focal length is constant and the field angle of the zoom camera is also constant, but the scene range of the preview image and the size of the object images on the preview image change. For example, with the focal length unchanged, when the user/target subject moves backward, the field angle of the zoom camera does not change, but more objects appear in the preview image, the scene range of the preview image increases, and the target subject image and the background object images in the preview image become smaller. For another example, with the focal length unchanged, when the user/target subject moves forward, the field angle of the zoom camera does not change, but fewer objects appear in the preview image, the scene range of the preview image decreases, and the target subject image and the background object images in the preview image become larger. After the user/target subject stops moving forward/backward, the size value of the target subject image on the preview image is taken as the size reference value.
For example, referring to (a) of fig. 12, the mobile phone may prompt the user on the preview interface: please move forward/backward to set the target subject image to an appropriate size; the size of the target subject image will be substantially maintained at that size during shooting. As shown in (b) of fig. 12, as the user moves forward, the target subject image on the preview interface becomes larger; after the user stops moving forward, the mobile phone determines the size of the current target subject image as the size reference value.
It can be understood that, in the preview state of the Hitchcock mode, if the mobile phone subsequently detects that the user/target subject moves forward/backward again, the size reference value is updated to the size value of the target subject image on the preview image after the user/target subject stops moving this time.
In another possible implementation, in the preview state of the Hitchcock mode, after detecting that the user adjusts the zoom magnification of the image, the mobile phone adjusts the size of the target subject image on the preview interface, thereby setting the size reference value of the target subject image that the user wants. Illustratively, referring to (a) of fig. 13, the mobile phone may prompt the user: please set the size reference value of the target subject image; the size of the target subject image will be substantially maintained at that size during shooting. For example, the operation of adjusting the zoom magnification may be a zoom-in/zoom-out (pinch) operation on the preview image as shown in (b) of fig. 13, a drag operation on the zoom magnification adjustment bar, or a voice instruction to adjust the zoom magnification. In response to the user's operation of adjusting the zoom magnification, the mobile phone can adjust the size of the target subject image on the preview interface through optical zoom and/or digital zoom. After the user stops adjusting the zoom magnification, the mobile phone determines the size of the current target subject image as the size reference value. It should be noted that, after the mobile phone adjusts the size of the target subject image on the preview interface through optical zoom, the focal length value 1 may be updated to the focal length value after the optical zoom. After the mobile phone adjusts the size of the target subject image on the preview interface through digital zoom, it can determine the corresponding image cropping ratio K used for the digital zoom, and keep the cropping ratio K unchanged in the subsequent shooting process.
It can be understood that, in the preview state of the Hitchcock mode, if the mobile phone subsequently detects again that the user adjusts the zoom magnification on the preview interface, the size reference value is updated to the size value of the target subject image on the preview image after the user stops adjusting the zoom magnification this time.
In other embodiments, the size of the target subject image at the time the mobile phone determines the target subject is used as the size reference value. In other embodiments, the size reference value of the target subject image is a default value. In some technical solutions, if parameters such as the object type and the depth of the target subject are different, the default size reference value of the target subject image may also be different. The mobile phone can also modify the size reference value of the target subject image in response to a preset operation of the user.
For example, a size reference value modification control is provided on the preview interface, and the mobile phone enters a size reference value modification mode after detecting the user clicking the control. In this mode, the user/target subject may move forward/backward to change the distance between the user and the target subject, or the user may adjust the zoom magnification based on the preview image, thereby setting the target subject image on the preview image to an appropriate size. After detecting the user clicking the exit control, the mobile phone determines the size of the current target subject image as the size reference value and exits the modification mode. It should be noted that, during modification of the size reference value, the size of the target subject image changes accordingly, and the mobile phone does not adjust the focal length to keep the size of the target subject image substantially unchanged.
The following embodiments are described taking the size reference value as the size of the target subject image shown in (b) of fig. 12 as an example.
Further, it can be understood that, in the Hitchcock mode, the visual impact of the background space being compressed or expanded around the target subject is greater when the target subject image is near the central region of the image; moreover, when the target subject image is close to the edge of the image, distortion such as stretching or deformation is likely to occur. Therefore, to achieve a better shooting effect, in some embodiments of this application, in the preview state of the Hitchcock mode, when the target subject image is far from the middle region of the preview image, the mobile phone may prompt the user or the target subject to center the target subject by displaying a prompt message or by voice broadcast. For example, referring to (a) of fig. 14, the mobile phone may prompt the user by displaying a prompt message: please move the photographer or the target subject left/right so that the target subject is as centered as possible. Referring to (b) of fig. 14, the mobile phone may stop the prompt when the target subject is close to the middle region of the image.
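One way to decide when to show that centering prompt is sketched below under stated assumptions: the subject is given as a binary mask, and the tolerance_ratio threshold is an illustrative value, not one specified by the patent.

```python
import numpy as np

def needs_centering_hint(subject_mask: np.ndarray, tolerance_ratio: float = 0.15) -> bool:
    """Return True when the subject centroid is far from the image center,
    in which case the phone could prompt the user to re-center the subject."""
    ys, xs = np.nonzero(subject_mask)
    if xs.size == 0:
        return False                      # no subject found; nothing to prompt
    h, w = subject_mask.shape
    cx, cy = xs.mean(), ys.mean()
    return (abs(cx - w / 2) > tolerance_ratio * w or
            abs(cy - h / 2) > tolerance_ratio * h)
```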
In the Hitchcock mode of the mobile phone, the user may instruct the mobile phone to start shooting when he or she wants to record a video, and move away from/toward the target subject after shooting starts.
To record a Hitchcock zoom video, the user may move away from/toward the target subject during shooting in the Hitchcock mode, so that the distance between the user and the target subject increases/decreases accordingly, thereby achieving the video effect of the background space expanding/compressing. For example, in a rear-camera shooting scene, the user may move backward/forward to move away from/toward the target subject during shooting.
The following description mainly takes a rear-camera shooting scene in which the user moves backward, i.e., away from the target subject, during shooting as an example.
204. After the mobile phone detects the shooting operation of the user, the zoom camera captures an original image 1 using the focal length value 1.
After detecting the shooting operation of the user, the mobile phone starts to shoot the Hitchcock zoom video, and the zoom camera therefore captures the original image 1 using the focal length value 1. Illustratively, the focal length value 1 is 15 mm. It should be noted that the focal length values listed in the embodiments of the present application are all equivalent focal lengths of the zoom camera.
For example, after detecting the user clicking the shooting control 1201 shown in (b) of fig. 12, the mobile phone determines that the shooting operation of the user is detected and thus enters the shooting process of the Hitchcock zoom video. For another example, after detecting that the user indicates by voice to start shooting, the mobile phone determines that the shooting operation of the user is detected and enters the shooting process of the Hitchcock zoom video. It can be understood that there may be many other ways of triggering the mobile phone to enter the Hitchcock zoom video shooting process, and the embodiments of the present application are not limited thereto.
In the Hitchcock mode, the user can obtain the desired video effect by moving forward/backward at any time during shooting. The original image 1 obtained by the mobile phone corresponds to the current distance between the user and the target subject during the user's backward movement and to the focal length value 1. The imaging sizes of the target subject and the background objects in the original image 1 can be calculated from the distance between the user and the target subject and the focal length value 1.
It should be noted that, after the mobile phone detects the shooting operation of the user, if the user has not moved forward or backward, the size of the target subject image in the original image 1 captured by the zoom camera with the focal length value 1 is the same as the size reference value. If the user moves forward/backward after the mobile phone detects the shooting operation, the size of the target subject image in the original image 1 captured with the focal length value 1 differs from the size reference value. Because the user's forward/backward speed is not too high and the period at which the mobile phone captures original images is short, the change in the distance between the user and the target subject is small, so the difference between the size of the target subject image in the original image 1 and the size reference value is also small.
For example, a schematic diagram of the original image 1 can be seen in (a) of fig. 15A. In the case where the preview image shown in (b) of fig. 12 and the original image 1 both use the focal length value 1, if the user moves backward during shooting, the target subject image and the background object images in the original image 1 become slightly smaller, and the scene range of the original image 1 becomes slightly larger, compared with the preview image shown in (b) of fig. 12.
205. The mobile phone generates a recorded image 1 according to the original image 1, and displays the recorded image 1 on a shooting interface.
It can be understood that, after the mobile phone enters the shooting process of the Hitchcock mode, it can continuously capture original images at a preset capture frame rate. An original image is an image captured by the zoom camera and processed by the image signal processor (ISP) and the like. The mobile phone performs electronic image stabilization (EIS) processing and deformation processing such as affine transformation on the original image to obtain a recorded image.
The mobile phone generates the recorded image 1 from the first frame of original image captured during shooting, namely the original image 1, and displays the recorded image 1 on the shooting interface. For example, a schematic diagram of the recorded image 1 displayed on the shooting interface can be seen in (b) of fig. 15A.
206. The mobile phone performs video instance segmentation on the target subject on the original image 1 to obtain the size 1 and the position 1 of the target subject image.
The mobile phone can perform video instance segmentation of the target subject on the original image 1 captured by the zoom camera with the focal length value 1 to obtain the region mask of the target subject image, that is, the position region where the target subject image is located, and thereby obtain the size 1 and the position 1 of the target subject image. For example, the size of the target subject image on the original image is the number of pixels included in the target subject image, and the position of the target subject image on the original image can be represented by coordinates.
The size 1 of the target subject image on the original image 1 can be used to determine the focal length value 2 corresponding to the next frame of original image 2 to be captured. That is, the mobile phone may determine the focal length value 2 corresponding to the next original image 2 to be captured according to the size 1 of the target subject image on the original image 1. The position 1 of the target subject image on the original image 1 can be used for position smoothing of the target subject image on subsequent original images.
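The sketch below shows how the size and position could be read out of an instance-segmentation result, assuming the segmentation step yields a per-pixel label map; the function name and the returned dictionary layout are illustrative, not part of the patent.

```python
import numpy as np

def subject_size_and_position(instance_map: np.ndarray, subject_id: int):
    """From a per-pixel instance-segmentation label map, derive the region mask
    of the target subject, its size (pixel count), and its position
    (centroid and bounding box, in pixel coordinates)."""
    mask = instance_map == subject_id
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    size = int(xs.size)                               # number of subject pixels
    centroid = (float(xs.mean()), float(ys.mean()))   # (x, y)
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return {"size": size, "centroid": centroid, "bbox": bbox}
```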
207. The mobile phone determines a focal length value 2 corresponding to a next frame of original image 2 to be acquired according to the size 1 and the size reference value of the target subject image in the original image 1, wherein the size 2 of the target subject image in the original image 2 corresponding to the focal length value 2 is equal to the size reference value.
During shooting, the mobile phone can determine the focal length value 2 corresponding to the next frame of original image 2 to be captured according to the size 1 of the target subject image on the historical original image 1. For example, the mobile phone may calculate the focal length value 2 from the difference or ratio between the size 1 of the target subject image in the original image 1 and the size reference value. When the zoom camera captures the original image 2 using the focal length value 2, the size 2 of the target subject image on the original image 2 can be made consistent with the size reference value. Illustratively, in the scenario where the user moves backward, the focal length value 2 is 15.1 mm, which is greater than the focal length value 1. That is, in the Hitchcock zoom mode, when the user moves backward away from the target subject, the focal length value increases so that the size of the target subject image remains substantially unchanged.
For example, suppose that the photographer moves backward by a certain distance at time t; let d be the distance between the user and the target subject at time t, S1 the actual size of the target subject, S2 the imaging size of the target subject at time t, S3 the size reference value, and f1 the focal length value used at time t. S3, f1, and S2 are known quantities. f2 is the focal length value that keeps the imaging size of the target subject on the original image captured at time t+1 substantially unchanged. From the imaging relationship shown in fig. 15B, the following formulas 10 to 12 can be obtained:
f1/S2 = (f1 + d)/S1 ≈ d/S1      (formula 10)
f2/S3 = (f2 + d)/S1 ≈ d/S1      (formula 11)
From formula 10 and formula 11, the calculation formula of f2 can be obtained:
f2 = S3 × f1/S2      (formula 12)
When S2 is the size of the target subject image on the original image 1 and f1 is the focal length value 1 corresponding to the original image 1, f2 is the focal length value 2 corresponding to the original image 2 to be captured. That is, the mobile phone may calculate the focal length value 2 corresponding to the original image 2 to be captured according to the size 1 of the target subject image on the original image 1, the focal length value 1, and the size reference value.
When S2 is the size of the target subject image on the original image n (n is an integer greater than 1) and f1 is the focal length value n corresponding to the original image n, f2 is the focal length value n+1 corresponding to the original image n+1 to be captured. The focal length value n here may be the value updated to the smoothed focal length value n'.
In this way, on the original image 2 captured by the mobile phone with the focal length value 2, the size of the target subject image is equal to the size reference value, so the size of the target subject image can be kept substantially unchanged during shooting.
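Formula 12 translates directly into code. The sketch below implements it; the clamping range and the helper name next_focal_length are assumptions standing in for the physical limits of the zoom camera, which the text does not specify.

```python
def next_focal_length(f_current_mm: float,
                      subject_size: float,
                      size_reference: float,
                      f_min_mm: float = 15.0,
                      f_max_mm: float = 125.0) -> float:
    """Formula 12: f2 = S3 * f1 / S2, clamped to an assumed zoom range."""
    f_next = size_reference * f_current_mm / subject_size
    return max(f_min_mm, min(f_max_mm, f_next))

# Consistent with the numbers in the text: if the subject image shrinks to
# about 99.3% of the reference after the user steps back, 15 mm grows to
# roughly 15.1 mm:
# next_focal_length(15.0, subject_size=0.993, size_reference=1.0)  # ~15.106
```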
In some embodiments, after step 207, the method may further include step 208:
208. The mobile phone determines a focal length value 2' according to the historical focal length values; if the focal length value 2' is different from the focal length value 2, the value of the focal length value 2 is updated to the value of the focal length value 2'.
The focal length value 2' is determined by the mobile phone according to the historical focal length values and is the focal length value after focal length smoothing. If the focal length value 2 is different from the smoothed focal length value 2', the mobile phone can update the focal length value 2 to the smoothed focal length value 2'. That is, the mobile phone may adjust the value of the focal length value 2 in combination with the historical focal length values. In the embodiments of the present application, a focal length value n that has not been updated to the smoothed focal length value n' may be referred to as an original focal length value.
The difference between the focal length value 2' determined from the historical focal length values and the focal length value 2 is small, so on the original image 2 captured by the mobile phone with the focal length value 2', the difference between the size of the target subject image and the size reference value is also small, and the size of the target subject image remains substantially unchanged during shooting.
Moreover, the focal length value 2' determined from the historical focal length values is consistent with the increasing/decreasing trend of the historical focal length values, so the focal length value changes smoothly and steadily during shooting and does not switch back and forth between increasing and decreasing.
As can be seen from the foregoing description, in the Hitchcock mode, when the focal length value increases or decreases, the imaging size of background objects on the images captured by the zoom camera decreases or increases accordingly, and the background space is compressed or expanded accordingly. When the focal length value changes abruptly by a large amount, the imaging size of background objects also changes abruptly by a large amount, the background space is compressed or expanded noticeably, and the user's visual experience and shooting experience are poor. Likewise, when the focal length value frequently switches between increasing and decreasing, the user may see the imaging size of background objects frequently alternate between shrinking and growing and the background space frequently alternate between compression and expansion, and the visual experience and shooting effect are poor.
By updating the focal length value 2 to the focal length value 2', the mobile phone makes the focal length value change smoothly, so that the imaging size of background objects and the background space also change smoothly. This avoids the focal length value switching back and forth between increasing and decreasing during shooting, avoids the imaging size of background objects frequently shrinking or growing, and avoids the background space frequently alternating between compression and expansion, thereby providing a better shooting experience. Further, focal length smoothing makes the picture of the recorded images transition smoothly throughout the shooting process without jumps or stutters, improving the user experience and the shooting effect.
There are various methods for determining the focal length value 2' according to the historical focal length value by the mobile phone.
For example, the difference Δf between the focal length value 2' determined from the historical focal length values and the focal length value 1 may be smaller than or equal to a preset value 1, so that the focal length changes more smoothly during shooting. Illustratively, the preset value 1 may be in the range of -0.1 mm to 0.1 mm. In addition, the difference between the focal length value 2' and the focal length value 2 may be smaller than or equal to a preset value 2, so that the difference between the size 2 of the target subject image on the original image 2 corresponding to the focal length value 2' and the size reference value is small, and the size of the target subject remains substantially unchanged. Illustratively, the preset value 2 may be in the range of -0.02 mm to 0.02 mm. For example, the focal length value 2 is 15.1 mm, and the focal length value 2' is 15.12 mm.
For another example, the focal length value 2' is the average of the historical focal length values during shooting, or the average of the M most recent historical focal length values, where M is a positive integer, for example, 5 or 10. The difference between the focal length value 2' obtained in this way and the focal length value 1 is small, so the focal length changes smoothly during shooting; the difference between the focal length value 2' and the focal length value 2 is also small, so the imaging size of the target subject remains substantially unchanged.
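The sketch below combines the two options just described: a moving average over the M most recent values plus a per-frame clamp consistent with preset value 1. Combining them in one smoother, and the concrete numbers (window of 5, ±0.1 mm per frame), are assumptions made for illustration.

```python
from collections import deque

class FocalLengthSmoother:
    """Average of the most recent focal length values, with the change per
    frame additionally clamped so the zoom ramps gently."""

    def __init__(self, window: int = 5, max_step_mm: float = 0.1):
        self.history = deque(maxlen=window)
        self.max_step_mm = max_step_mm

    def smooth(self, raw_focal_mm: float) -> float:
        if not self.history:
            self.history.append(raw_focal_mm)
            return raw_focal_mm
        candidate = (sum(self.history) + raw_focal_mm) / (len(self.history) + 1)
        # Keep the per-frame change within the assumed limit (preset value 1).
        prev = self.history[-1]
        step = max(-self.max_step_mm, min(self.max_step_mm, candidate - prev))
        smoothed = prev + step
        self.history.append(smoothed)     # smoothed values feed later smoothing
        return smoothed
```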
For another example, the mobile phone may perform curve fitting on the focal length values corresponding to multiple frames of historical original images; for example, second-order or third-order Bezier curve fitting may be used for focal length smoothing.
After step 207 or step 208, the method may further comprise:
209. The mobile phone captures an original image 2 with the zoom camera using the focal length value 2.
It should be noted that, if step 209 is after step 208, the focal length value 2 may be the focal length value 2 updated to the focal length value 2' in step 208; if the method does not include step 208, the focus value may be focus value 2 in step 207, i.e. the original focus value.
The imaging size of the target subject on the original image 2 captured by the mobile phone with the focal length value 2 differs little from the size reference value and from the imaging size of the target subject on the original image 1. That is, the imaging size of the target subject is substantially unchanged during shooting.
In other words, the mobile phone can automatically adjust the focal length value of the zoom camera according to the actual size of the target subject image on the original image and the size reference value, so that the imaging size of the target subject on the original images captured by the zoom camera with the adjusted focal length value remains substantially unchanged, thereby realizing the Hitchcock zoom.
For example, in the scenario where the user moves backward, a schematic diagram of the original image 2 can be seen in (c) of fig. 15A. Compared with the original image 1 shown in (a) of fig. 15A, as the user moves backward, the scene range of the original image 2 decreases, fewer background objects appear in the original image 2, and the background object images in the original image 2 become larger. Compared with the original image 1, the imaging center points of the background object images in the original image 2 are farther from the image center, giving the user the visual experience of the background space expanding outward; compared with background objects at smaller depths, the imaging size of background objects at larger depths increases more and faster, and their imaging positions move away from the image center with larger amplitude and faster speed.
210. The mobile phone performs video instance segmentation on the target subject on the original image 2 to obtain the size 2 and the position 2 of the target subject image.
The processing procedure of the mobile phone obtaining the size 2 and the position 2 of the target subject image in step 210 is similar to the processing procedure of the mobile phone obtaining the size 1 and the position 1 of the target subject image in step 206, and reference may be made to the related description in step 206, which is not described herein again.
In some embodiments, after step 210, the method may further include step 211:
211. the mobile phone adjusts the position of the target subject image on the original image 2 according to the position 2 and the historical position of the target subject image on the original image 2.
Here, the historical position refers to the position of the target subject image on a historical original image. During shooting, the position of the target subject image may shift between different original images due to hand shake or jitter of the user. The mobile phone can determine the position offset of the target subject image on the current original image relative to that on a historical original image in a plurality of ways.
For example, the mobile phone may perform keypoint matching (for example, Oriented FAST and Rotated BRIEF (ORB) feature matching) between the region where the target subject image is located on the current original image and the region where the target subject image is located on the previous frame of original image, so as to determine the position offset of the target subject image on the current original image relative to the historical original image. In some embodiments, to improve the robustness of the position offset detection algorithm, the mobile phone may perform keypoint matching between the region where the target subject image is located on the current original image and the regions where the target subject image is located on the most recent P (an integer greater than 1, for example, 2 or 3) frames of historical original images, so as to determine the position offset of the target subject image on the current original image relative to the historical original images.
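A minimal sketch of such ORB-based offset estimation is shown below. It is not the patent's specified implementation; it uses OpenCV's ORB detector and brute-force Hamming matcher as stand-ins, assumes grayscale crops of the subject region as input, and skips the outlier rejection (e.g., RANSAC) a production pipeline would likely add.

```python
import cv2
import numpy as np

def subject_offset(prev_region: np.ndarray, curr_region: np.ndarray):
    """Estimate the (dx, dy) shift of the subject between two frames by ORB
    keypoint matching on grayscale crops of the subject region."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(prev_region, None)
    kp2, des2 = orb.detectAndCompute(curr_region, None)
    if des1 is None or des2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if not matches:
        return None
    shifts = np.array([[kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0],
                        kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1]]
                       for m in matches])
    dx, dy = np.median(shifts, axis=0)   # median is robust to a few bad matches
    return float(dx), float(dy)
```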
If the position offset of the target subject image between adjacent original images is large, the position offset of the background object images is also large, the content of the original images changes abruptly, and the user's visual experience and the shooting effect are poor. Therefore, the mobile phone can adjust the position of the target subject image on the current original image according to its position on the current original image and the historical position, thereby performing position smoothing on the target subject image (also called target subject trajectory smoothing). This keeps the offset between the position of the target subject image on the current original image and the historical position small, makes the displacement of the target subject image smooth, and avoids abrupt position changes of the target subject image that would degrade the visual experience and the shooting effect.
It should be noted that, for the current original image S (S is an integer greater than 2), if the position of the target subject image on the historical original image is the position after position smoothing, the mobile phone adjusts the position of the target subject image on the current original image S according to the position of the target subject image on the current original image S and the adjusted position of the target subject image on the historical original image. A historical original image after position smoothing is a historical recorded image. That is, for the current original image S (S is an integer greater than 2), the historical position refers to the position of the target subject image on a historical recorded image. In other words, adjusting the position of the target subject image on the original image S according to its position S on the original image S and its position on the historical original image may be replaced by adjusting it according to the position S and its position on the historical recorded image.
In step 211, after the mobile phone performs video instance segmentation on the target subject, the position 2 of the target subject image on the original image 2 can be obtained.
In some embodiments, the mobile phone may adjust the position of the target subject image on the original image 2 by adjusting the crop region of the original image 2 in combination with EIS, so that the offset between the position of the target subject image on the original image 2 and the historical position is small. This makes the displacement of the target subject image smooth and avoids abrupt position changes that would degrade the visual experience and the shooting effect. Furthermore, position smoothing of the target subject image makes the picture of the recorded images transition smoothly throughout the shooting process without jumps, improving the user experience and the shooting effect.
For example, if the mobile phone determines that the target subject image needs to be moved to the left by 1/10 of the original image width, the mobile phone may crop more on the left side of the original image 2 and correspondingly less on the right side (the difference amounting to 1/10 of the image width), so as to obtain a new original image 2 and adjust the position of the target subject image on it. Specifically, the mobile phone may obtain the new original image 2 by modifying the crop region information in the electronic image stabilization warp information (warp info). The new original image 2 is the original image 2 after position smoothing of the target subject image.
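The crop-shift idea in that example can be sketched as follows; the helper name, the (left, top, right, bottom) rectangle convention, and the pixel values in the usage comment are assumptions made for illustration.

```python
def shift_crop_region(crop, shift_x_px: int, image_width: int):
    """Shift a crop rectangle (left, top, right, bottom in pixels) horizontally.
    Moving the crop window to the right (cropping more on the left) moves the
    subject to the left in the output frame. The rectangle size is preserved
    and the shift is limited by the image borders."""
    left, top, right, bottom = crop
    shift = max(-left, min(shift_x_px, image_width - right))
    return (left + shift, top, right + shift, bottom)

# To move the subject left by 1/10 of a 1920-pixel-wide frame, shift the
# crop window right by 192 pixels:
# new_crop = shift_crop_region((192, 0, 1728, 1080), shift_x_px=192, image_width=1920)
# -> (384, 0, 1920, 1080)
```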
For another example, the difference between the adjusted position of the target subject image on the original image 2, determined by the mobile phone according to the historical position, and the position of the target subject image on the recorded image 1 may be less than or equal to a preset value 3; in addition, the difference between the adjusted position of the target subject image on the original image 2 and its original position 2 on the original image 2 may be less than or equal to a preset value 4, so that the position of the target subject image changes smoothly during shooting.
For another example, the mobile phone may also perform position smoothing of the target subject image by using bezier curve fitting according to the position of the target subject image on the historical original image.
When S is greater than 2, for example, the difference between the adjusted position of the target subject image on the original image S and the position of the target subject image on the recorded image S-1 may be less than or equal to the preset value 3; in addition, the difference between the adjusted position of the target subject image on the original image S, determined by the mobile phone according to the historical position, and its position before adjustment on the original image S may be less than or equal to the preset value 4, so that the position of the target subject image changes smoothly during shooting.
Alternatively, the adjusted position of the target subject image (for example, the position of its center point) on the original image S may be the average of the positions of the target subject image on the historical recorded images, or the average of its positions on the R most recent historical recorded images, where R is a positive integer, for example, 5 or 10. With this method, the difference between the adjusted position of the target subject image and the historical positions is small, the difference between the positions of the target subject image before and after adjustment is small, and the position of the target subject image changes smoothly during shooting.
Alternatively, the mobile phone may also perform position smoothing of the target subject image by Bezier curve fitting according to the positions of the target subject image on the historical recorded images.
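Mirroring the averaging option above, a minimal position smoother might look like the sketch below; the class name and the window size R = 5 are illustrative assumptions.

```python
from collections import deque

class SubjectPositionSmoother:
    """Smooth the subject center point with the R most recent (already
    smoothed) positions, as in the averaging option described above."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)

    def smooth(self, position):
        x, y = position
        if self.history:
            xs, ys = zip(*self.history)
            n = len(self.history) + 1
            x, y = (sum(xs) + x) / n, (sum(ys) + y) / n
        self.history.append((x, y))       # smoothed positions feed later frames
        return x, y
```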
After step 210 or step 211, the method may further comprise:
212. the mobile phone generates a recorded image 2 according to the original image 2, and displays the recorded image 2 on a shooting interface.
If step 212 follows step 211, the mobile phone generates the recorded image 2 from the original image 2 after position smoothing. If the method does not include step 211, the mobile phone may generate the recorded image 2 from the original image 2 without position smoothing and display it on the shooting interface. In some embodiments, the mobile phone may further perform deformation processing such as affine transformation on the original image 2 to correct it, so as to generate a better recorded image 2.
For example, a schematic diagram of the recorded image 2 can be seen in (d) of fig. 15A; the recorded image 2 is the image obtained after position smoothing of the original image 2 shown in (c) of fig. 15A.
Compared with the recorded image 1 shown in (b) of fig. 15A, as the user moves backward, the scene range of the recorded image 2 decreases, fewer background objects appear in the recorded image 2, and the background object images in the recorded image 2 become larger. Moreover, compared with the recorded image 1, the imaging center points of the background object images in the recorded image 2 are farther from the image center, giving the user the visual experience of the background space expanding outward; compared with background objects at smaller depths, the imaging size of background objects at larger depths increases more and faster, and their imaging positions move away from the image center with larger amplitude and faster speed.
After step 212, the method may further comprise:
213. the mobile phone determines a focal length value 3 corresponding to the next frame of original image 3 to be acquired according to the size 2 and the size reference value of the target subject image on the original image 2, wherein the size 3 of the target subject image on the original image 3 corresponding to the focal length value 3 is equal to the size reference value.
For example, in the scenario where the user moves backward, the focal length value 3 may be 15.2 mm, which is greater than the focal length value 2. That is, as the user moves backward, the focal length value should be gradually increased to keep the size of the target subject image substantially constant.
The processing procedure of determining the focal length value 3 in step 213 of the mobile phone is similar to the processing procedure of determining the focal length value 2 in step 207 of the mobile phone, and reference may be made to the related description in step 207, which is not described herein again.
In some embodiments, after step 213, the method may further comprise step 214:
214. the mobile phone determines a focal length value 3' according to the historical focal length value, and if the focal length value 3' is different from the focal length value 3, the value of the focal length value 3 is updated to the value of the focal length value 3 '.
The processing procedure of the mobile phone updating the focal length value 3 in step 214 is similar to the processing procedure of the mobile phone updating the focal length value 2 in step 208, and reference may be made to the related description in step 208, which is not described herein again.
After step 213 or step 214, the method may further comprise:
215. The mobile phone captures an original image 3 with the zoom camera using the focal length value 3.
It should be noted that if step 215 follows step 214, the focal length value 3 may be the focal length value 3 updated to the focal length value 3' in step 214; if the method does not include step 214, the focus value may be focus value 3 in step 213.
The imaging size of the target subject on the original image 3 captured by the mobile phone with the focal length value 3 differs little from the size reference value and from the imaging size of the target subject on the original image 1. That is, the imaging size of the target subject is substantially unchanged during shooting.
For example, in the scenario where the user moves backward, a schematic diagram of the original image 3 can be seen in (e) of fig. 15A. Compared with the original image 1 and the original image 2 shown in (a) and (c) of fig. 15A, as the user moves backward, the scene range of the original image 3 decreases, fewer background objects appear in the original image 3, and the background object images in the original image 3 become larger. Compared with the original image 1 and the original image 2, the imaging center points of the background object images in the original image 3 are farther from the image center, giving the user the visual experience of the background space expanding outward; compared with background objects at smaller depths, the imaging size of background objects at larger depths increases more and faster, and their imaging positions move away from the image center with larger amplitude and faster speed.
Then, for the original image 3 and the subsequently acquired original images, the mobile phone performs processing by using a flow similar to the flow shown in steps 210-215. That is, the method may further include:
216. the mobile phone performs video instance segmentation on a target subject on the original image Q to obtain the size Q and the position Q of the target subject image, wherein Q is an integer greater than or equal to 3.
217. And the mobile phone adjusts the position of the target main body image on the original image Q according to the position Q and the historical position of the target main body image on the original image Q.
218. And the mobile phone generates a recorded image Q according to the original image Q and displays the recorded image Q on a shooting interface.
For example, a schematic diagram of the recorded image 3 can be seen in (f) of fig. 15A. Compared with the recorded images 1 and 2 shown in (b) and (d) of fig. 15A, as the user moves backward, the scene range of the recorded image 3 decreases, fewer background objects appear in the recorded image 3, and the background object images in the recorded image 3 become larger. Moreover, compared with the recorded images 1 and 2, the imaging center points of the background object images in the recorded image 3 are farther from the image center, giving the user the visual experience of the background space expanding outward; compared with background objects at smaller depths, the imaging size of background objects at larger depths increases more and faster, and their imaging positions move away from the image center with larger amplitude and faster speed.
219. The mobile phone determines a focal length value Q +1 corresponding to the next frame of original image Q +1 to be acquired according to the size Q and the size reference value of the target subject image on the original image Q, wherein the size Q +1 of the target subject image on the original image Q +1 corresponding to the focal length value Q +1 is equal to the size reference value.
220. The mobile phone determines a focal length value Q +1' according to the historical focal length value, and if the focal length value Q +1' is different from the focal length value Q +1, the value of the focal length value Q +1 is updated to the value of the focal length value Q +1'.
As described in step 208, the mobile phone can perform focal length smoothing by various methods to determine the focal length value Q+1'. Illustratively, when Q is greater than 3 and focal length smoothing is performed using Bezier curve fitting, see (a)-(c) of fig. 15C; the second-order Bezier curve fitting principle is:
1. Select three non-collinear points A, B, and C in the plane and connect them in sequence with line segments;
2. Find point D on segment AB and point E on segment BC such that AD/AB = BE/BC;
3. Connect DE and find point F on DE such that DF/DE = AD/AB = BE/BC;
4. Move the selected point D along the first segment from the starting point A to the end point B, find all the corresponding points F, and connect them to obtain a smooth Bezier curve.
When Bezier curve fitting is used for focal length smoothing, points A, B, and C are the focal length values n used to capture the original images; when a focal length value n has been updated to the smoothed focal length value n', points A, B, and C are specifically the focal length values n'.
For an exemplary schematic diagram of focal length smoothing based on Bezier curve fitting, see (d) of fig. 15C, where the horizontal axis represents the original image frame number and the vertical axis represents the focal length value (in mm) used to capture the original image. The points represented by triangles on curve 1 are the calculated original focal length values n, that is, the focal length values not yet updated to the smoothed focal length values n'; the points represented by circles on curve 2 are the smoothed focal length values n'. Compared with curve 1, curve 2 is smoother, that is, the variation between the focal length values n' is gentler, and the focal length values used for different original image frames change more smoothly. Moreover, the difference between the focal length values n' on curve 2 and the focal length values n on curve 1 is small, so the size of the target subject image can be kept substantially unchanged during shooting.
The first point represented by a circle on curve 2 represents the smoothed focal length value corresponding to the 4th frame of original image, obtained by Bezier curve fitting from the original focal length values corresponding to the 1st to 3rd frames. The 2nd point represented by a circle on curve 2 represents the smoothed focal length value corresponding to the 5th frame, obtained by Bezier curve fitting from the original focal length values corresponding to the 2nd and 3rd frames and the smoothed focal length value corresponding to the 4th frame. The 3rd point represented by a circle on curve 2 represents the smoothed focal length value corresponding to the 6th frame, obtained by Bezier curve fitting from the original focal length value corresponding to the 3rd frame and the smoothed focal length values corresponding to the 4th and 5th frames. The 4th point represented by a circle on curve 2 represents the smoothed focal length value corresponding to the 7th frame, obtained by Bezier curve fitting from the smoothed focal length values corresponding to the 4th to 6th frames. Similarly, each subsequent point represented by a circle on curve 2 is obtained by Bezier curve fitting from the smoothed focal length values corresponding to the 3 most recent historical frames of original images.
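One plausible reading of that sliding-window scheme is sketched below: the quadratic Bezier is evaluated over the three most recent focal length values as control points. The evaluation parameter t = 0.75 and the function name are assumptions; the text only states that the curve is fitted through a window of three values.

```python
def bezier_smoothed_focal(recent_focals, t: float = 0.75) -> float:
    """Second-order Bezier smoothing over the 3 most recent focal length
    values (control points A, B, C): B(t) = (1-t)^2*A + 2(1-t)t*B + t^2*C."""
    values = list(recent_focals)
    if len(values) < 3:
        return values[-1]                 # not enough history; keep the raw value
    a, b, c = values[-3:]
    return (1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c

# Sliding-window use: once a smoothed value is produced for frame n, it
# replaces the raw value in the window used for frame n+1, matching the
# description of curve 2 above.
```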
221. The mobile phone captures an original image Q+1 with the zoom camera using the focal length value Q+1.
It should be noted that, if in the preview state the mobile phone cropped the original image through digital zoom to obtain the preview image and set the size reference value of the target subject image, with a cropping ratio K, then during shooting the mobile phone also needs to crop each original image according to the cropping ratio K before performing the processing operations of the shooting process on it.
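A small sketch of that per-frame cropping step is given below. Interpreting K as the digital-zoom factor, i.e., keeping the central 1/K of each dimension, is an assumption; the text does not pin down the exact convention, and the function name is illustrative.

```python
import numpy as np

def center_crop_by_ratio(frame: np.ndarray, k: float) -> np.ndarray:
    """Apply the cropping ratio K fixed in the preview state: keep the central
    fraction 1/K of the frame in each dimension (K >= 1), so every original
    image is cropped consistently before further processing."""
    h, w = frame.shape[:2]
    new_h, new_w = int(round(h / k)), int(round(w / k))
    top, left = (h - new_h) // 2, (w - new_w) // 2
    return frame[top:top + new_h, left:left + new_w]
```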
222. After detecting the operation of the user stopping shooting, the mobile phone generates a video, and the video images in the video are generated from the recorded images.
After detecting the operation of the user stopping shooting, the mobile phone generates the Hitchcock zoom video, in which the imaging size of the target subject on the video images remains substantially unchanged. For example, schematic diagrams of video images in a Hitchcock zoom video generated by the mobile phone can be seen in (a)-(c) of fig. 16.
In the shooting method described in steps 201 to 222, during video recording in the Hitchcock mode, when the user holds the mobile phone and moves away from/toward the target subject, the mobile phone can automatically adjust the focal length value of the zoom camera according to the imaging size of the target subject, so that the size of the target subject on the video images remains substantially unchanged, without additional auxiliary equipment such as a slide rail and without manual zooming by the user. This makes it convenient for the user to shoot a Hitchcock zoom video, reduces the difficulty of operation, and improves the user's shooting experience. In addition, by smoothing the focal length values and positions, the mobile phone makes the picture of the recorded images transition smoothly, so that the picture of the video images generated from the recorded images also transitions smoothly.
Moreover, in the shooting method described in steps 201 to 222, the mobile phone automatically adjusts the focal length value of the zoom camera according to the imaging size of the target subject to achieve the Hitchcock zoom. That is, the mobile phone achieves the Hitchcock zoom through optical zoom rather than through the cropping and enlarging of digital zoom, so the resolution and definition of both the recorded images and the video images obtained during shooting are high, and the user experience is good.
From another perspective, the flow of the above shooting method may also be seen as the process shown in fig. 17. As shown in fig. 17, the process includes: the mobile phone performs video instance segmentation on the image acquired by the zoom camera to obtain a mask of the target subject image, and thereby the size and position of the target subject image, and performs key point matching on the target subject; it then smooths the subject trajectory (that is, performs position smoothing) according to the position of the target subject image, determines a crop region, and performs crop and warp processing to obtain the recorded image and output the video image. Here, the crop region is the crop region in the warp information (warp info) of the electronic image stabilization (EIS). The process also includes: the mobile phone calculates the focal length value for the next frame according to the size of the target subject image, performs focal length smoothing, and then acquires the next frame. A high-level sketch of this per-frame flow is given below.
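The orchestration of fig. 17 can be summarized as follows. This is a sketch, not the patented implementation: every helper callable (segmentation, key point matching, trajectory smoothing, crop/warp, focal length smoothing) is a placeholder to be supplied by the caller, and only the order of the steps follows the description.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class FrameResult:
    recorded_image: Any       # cropped and warped image shown on the shooting interface
    next_focal_length: float  # focal length to use when acquiring the next original image


def process_frame(raw_image, size_reference, current_focal_length, history,
                  segment_subject: Callable, match_keypoints: Callable,
                  smooth_trajectory: Callable, crop_and_warp: Callable,
                  smooth_focal_length: Callable) -> FrameResult:
    """One iteration of the per-frame flow sketched in fig. 17 (order of steps only)."""
    # 1. Video instance segmentation: mask of the target subject, plus its size and position.
    mask, subject_size, subject_position = segment_subject(raw_image)

    # 2. Refine / track the subject position with key point matching against history frames.
    subject_position = match_keypoints(raw_image, mask, history)

    # 3. Smooth the subject trajectory (position smoothing) and derive the crop region
    #    carried in the EIS warp information.
    crop_region = smooth_trajectory(subject_position, history)

    # 4. Crop and warp the raw image to obtain the recorded image for display and video output.
    recorded_image = crop_and_warp(raw_image, crop_region)

    # 5. Compute the focal length for the next frame so the subject stays at the size
    #    reference value (formula one), then smooth it against the focal length history.
    next_f = size_reference * current_focal_length / subject_size
    next_f = smooth_focal_length(next_f, history)

    return FrameResult(recorded_image, next_f)
```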
In some embodiments of the present application, a Hitchcock zoom video saved by the mobile phone may carry a specific mark that distinguishes it from other videos, so that the user can recognize the Hitchcock zoom video intuitively. For example, referring to fig. 18 (a), a text mark 1801 is displayed on the Hitchcock zoom video; for another example, referring to fig. 18 (b), a specific symbol mark 1802 is displayed on the Hitchcock zoom video.
The above embodiments describe the recorded images by taking as an example a rear-camera shooting scene in which the user moves backward, that is, away from the target subject. As the user moves backward, the distance d1 between the user and the target subject increases. To keep the imaging size of the target subject unchanged, the mobile phone increases the focal length f of the camera, which narrows the field of view of the camera and reduces the scene range covered by the recorded image: fewer background objects appear on the recorded image, the images of the remaining background objects become larger, and their imaging centers move farther from the image center, giving the user the visual experience of the background space expanding outward. Compared with a background object at a smaller depth, the imaging size of a background object at a larger depth increases more and faster, and its imaging position moves away from the image center by a larger amount and at a higher speed.
In the rear-camera shooting scene, if the user moves forward, that is, approaches the target subject, the recorded images during shooting may be as shown in (a)-(c) of fig. 19 or (a)-(c) of fig. 20. As the user moves forward, the distance d1 between the user and the target subject decreases. To keep the imaging size of the target subject unchanged, the mobile phone decreases the focal length f of the camera, which widens the field of view of the camera and enlarges the scene range covered by the recorded image: more background objects appear on the recorded image, the images of the background objects become smaller, and their imaging centers move closer to the image center, giving the user the visual experience of the background space compressing inward. Compared with a background object at a smaller depth, the imaging size of a background object at a larger depth decreases more and faster, and its imaging position moves toward the image center by a larger amount and at a higher speed.
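The behaviour in both directions follows from a simple pinhole-camera relation. The model below is an illustration only and is not stated in the description itself; it merely shows why the focal length has to scale with the subject distance when the imaging size is held fixed.

```latex
% Under a pinhole model, a subject of physical height H at distance d_1,
% imaged with focal length f, has image height
\[
  s \approx \frac{f \, H}{d_1} .
\]
% Keeping s constant while d_1 changes therefore requires the focal length
% to change in proportion to the distance:
\[
  \frac{f_{\mathrm{new}}}{f_{\mathrm{old}}} = \frac{d_{1,\mathrm{new}}}{d_{1,\mathrm{old}}} ,
\]
% so moving away from the subject (d_1 increases) calls for a longer focal length
% and a narrower field of view, and moving toward it calls for the opposite.
```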
In the schemes described in the above embodiments, the target subject is generally stationary during shooting in the Hitchcock mode. In other embodiments of the present application, the target subject may also move left or right at the same depth during shooting in the Hitchcock mode, and the mobile phone can still keep the imaging size of the target subject image on the recorded images and the video images substantially unchanged by using the shooting method described in the above embodiments. For example, in a scene in which the user moves backward while the target subject moves from left to right, schematic diagrams of the recorded images during shooting may be seen in fig. 21 (a)-(c).
In the schemes described in the above embodiments, the mobile phone does not perform the Hitchcock zoom in the preview state of the Hitchcock mode. In some other embodiments of the present application, after determining the size reference value of the target subject image in the preview state of the Hitchcock mode, the mobile phone may also perform the Hitchcock zoom in the preview state.
For example, referring to (a) in fig. 22A, the mobile phone may prompt the user on the preview interface to move forward or backward (or to ask the target subject to do so) until the target subject image reaches an appropriate size, which will be substantially maintained during shooting. As shown in fig. 22A (b), as the user moves forward, the target subject image on the preview interface becomes larger; after the mobile phone detects the operation of the user tapping the "size determination" control, it determines the current size of the target subject image as the size reference value.
For another example, in the preview state of the Hitchcock mode, after detecting an operation by the user to adjust the zoom magnification of the image, the mobile phone adjusts the size of the target subject image on the preview interface, so that the user can set the desired size reference value of the target subject image. Illustratively, referring to fig. 22B (a), the mobile phone may prompt the user to set a size reference value for the target subject image, which will be substantially maintained during shooting. The operation of adjusting the zoom magnification may be, for example, a pinch or spread gesture on the preview image as shown in (b) in fig. 22B, a drag on the zoom magnification adjustment bar, or a voice instruction to adjust the zoom magnification. In response to this operation, the mobile phone may adjust the size of the target subject image on the preview interface through optical zoom and/or digital zoom. After the mobile phone detects the operation of the user tapping the 'confirm' control, it determines the current size of the target subject image as the size reference value. It should be noted that, if the mobile phone adjusts the size of the target subject image on the preview interface through optical zoom, the focal length value 1 may be updated to the focal length value after the optical zoom; if the adjustment is made through digital zoom, the mobile phone determines the corresponding image crop ratio K and keeps the crop ratio K unchanged during the subsequent shooting process.
After the mobile phone determines the size reference value, it can perform the Hitchcock zoom in the preview state, so that the Hitchcock zoom effect is presented to the user on the preview interface in real time. Then, in response to the shooting operation of the user, the above steps 204 to 222 are executed. The process of performing the Hitchcock zoom in the preview state is similar to the process of performing the Hitchcock zoom during shooting described in steps 204 to 222, except that no shooting operation needs to be detected during preview, the recorded image in the shooting process is replaced by the preview image, and the shooting interface in the shooting process is replaced by the preview interface. For example, schematic diagrams of the preview interface when the user moves forward in a rear-camera scene while the Hitchcock zoom is performed in the preview state may be seen in (a)-(c) of fig. 23.
The above description takes a mobile phone as the electronic device. When the electronic device is a tablet computer with a zoom camera or another device, the shooting method provided in the embodiments of the present application can still be used to record a Hitchcock zoom video, and details are not repeated here.
With reference to the foregoing embodiments and accompanying drawings, another embodiment of the present application provides a shooting method, which can be implemented in an electronic device having a hardware structure shown in fig. 1. As shown in fig. 24, the method may include:
2401. The electronic device displays a first recorded image on a shooting interface, where the first recorded image comprises a target subject image, the first recorded image is obtained from a first original image, and the first original image is captured by the zoom camera using a first focal length value.
For example, the first captured image may be the captured image 2, the first original image may be the original image 2, and the first focal length value may be the focal length value 2. The value of the focal length value 2 may be a value of a focal length value 2' after the focal length smoothing processing, and the target subject may be a person.
When the first captured image is the captured image 1 and the first original image is the original image 1, the first focal length value corresponding to the original image 1 may be a preset focal length value or a user-set focal length value.
2402. The electronic device determines, according to a first size of the target subject image on the first original image, a second focal length value corresponding to a second original image to be acquired.
For example, the first original image may be the original image 2, the second original image may be the original image 3, and the second focal distance value may be the focal distance value 3.
2403. The electronic device displays a second recorded image on the shooting interface, where the second recorded image comprises the target subject image, the second recorded image is obtained from the second original image, and the second original image is captured by the zoom camera using the second focal length value.
For example, the second original image may be the original image 3, and the second captured image may be the captured image 3.
In this scheme, the electronic device can determine the focal length value used by the zoom camera to acquire the subsequent original image according to the size of the target subject image on the current original image, so that the size of the target subject image remains substantially unchanged across different original images and, accordingly, across different video images, thereby realizing the Hitchcock zoom.
That is to say, in the target shooting mode, when the user holds the electronic device and moves away from or toward the target subject, the electronic device can automatically adjust the focal length value of the zoom camera according to the imaging size of the target subject, so that the size of the target subject on the video images remains substantially unchanged. No extra auxiliary equipment such as a slide rail is needed and the user does not need to zoom manually, which makes it convenient for the user to shoot a Hitchcock zoom video, reduces the operation difficulty, and improves the shooting experience.
Moreover, the electronic device realizes the Hitchcock zoom through optical zoom rather than through the digital zoom of cropping and scaling, so the resolution and definition of the recorded images and the video images obtained during shooting are high, and the shooting experience of the user is good.
In a possible implementation, after the electronic device determines, according to the first size of the target subject image on the first original image, the second focal length value corresponding to the second original image to be acquired, the method further includes: the electronic device updates the second focal length value to a target focal length value, where the target focal length value is obtained according to the focal length values corresponding to historical original images, and the second original image is captured by the zoom camera using the updated second focal length value. That is, the electronic device may update the second focal length value in combination with the focal length values corresponding to the historical original images, thereby performing focal length smoothing, and then acquires the second original image according to the updated second focal length value.
For example, a difference between the target focal length value and a first focal length value corresponding to the first original image is less than or equal to a first preset value, and a difference between the target focal length value and a second focal length value is less than or equal to a second preset value. For another example, the target focal length value is an average value of focal length values corresponding to the multi-frame historical original images. For another example, the target focal length value is obtained by performing curve fitting on the focal length values corresponding to the multiple frames of historical original images.
That is, the electronic device may perform focal length smoothing to enable smooth transitions between the pictures of the video images generated from the captured images.
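As an illustration of combining the options just listed (average of the history focal lengths, bounded by the two preset differences), one possible sketch is given below; the preset values are placeholders chosen for illustration, and the exact combination of options is an assumption rather than something the description fixes.

```python
def update_second_focal_length(second_f: float, history_f: list[float],
                               first_preset: float = 0.2, second_preset: float = 0.2) -> float:
    """Update the computed second focal length value to a smoothed target value.

    Starts from the average of the recent history focal lengths, then keeps the
    result within `first_preset` of the previous frame's focal length and within
    `second_preset` of the originally computed second focal length value.
    """
    first_f = history_f[-1]                      # focal length of the most recent original image
    target = sum(history_f) / len(history_f)     # 'average of history' option
    # Intersection of the two allowed intervals around first_f and second_f.
    lo = max(first_f - first_preset, second_f - second_preset)
    hi = min(first_f + first_preset, second_f + second_preset)
    if lo > hi:
        return second_f                          # intervals disjoint: fall back to the computed value
    return min(max(target, lo), hi)
```

For example, with history_f = [27.0, 27.9, 28.6] and second_f = 28.7, the history average 27.83 is raised to 28.5, the smallest value compatible with both bounds, so the focal length neither jumps abruptly from frame to frame nor drifts away from the value demanded by the subject size.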
In another possible implementation, the second position coordinates of the target subject image on the second captured image are obtained according to the initial position coordinates of the target subject image on the second original image and the position coordinates of the target subject image on historical captured images. That is, the electronic device may adjust the coordinate position of the target subject image on the second original image in combination with the position coordinates of the target subject image on the historical captured images, thereby obtaining the second captured image with a smoothed position.
For example, the difference between the second position coordinates and the first position coordinates of the target subject image on the first captured image is less than or equal to a third preset value, and the difference between the second position coordinates and the initial position coordinates is less than or equal to a fourth preset value. For another example, the second position coordinates are an average of the position coordinates of the target subject image on multiple frames of historical captured images. For another example, the second position coordinates are obtained by curve fitting the position coordinates of the target subject image on multiple frames of historical captured images.
That is, the electronic device may smooth the position of the target subject image to enable smooth transitions between the pictures of the video images generated from the captured images.
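A possible form of this position smoothing, using the moving-average option from the list above, is sketched below; the window size and the use of a plain average rather than curve fitting are assumptions.

```python
def smooth_subject_position(initial_xy: tuple[float, float],
                            history_xy: list[tuple[float, float]],
                            window: int = 5) -> tuple[float, float]:
    """Average the subject's initial position on the new original image with its
    positions on the most recent recorded images to obtain the second position
    coordinate used when cropping the second recorded image."""
    recent = list(history_xy[-window:]) + [initial_xy]
    avg_x = sum(x for x, _ in recent) / len(recent)
    avg_y = sum(y for _, y in recent) / len(recent)
    return (avg_x, avg_y)
```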
In another possible implementation, the method further includes: in a preview state of a target shooting mode, the electronic device detects a first preset operation by which the user indicates the target subject based on a preview image, and determines the target subject in response to the first preset operation. That is, the user may specify the target subject. For example, the first preset operation may be an operation in which the user frames a person on the preview image as shown in fig. 10.
In another possible implementation, the target subject is automatically identified by the electronic device.
In another possible implementation, the determining, by the electronic device according to the first size of the target subject image on the first original image, of the second focal length value corresponding to the second original image to be acquired includes: the electronic device determines the second focal length value corresponding to the second original image to be acquired according to the first size of the target subject image on the first original image and a size reference value.
In this way, the electronic device can determine the focal length value used by the zoom camera to acquire the subsequent original image according to the size of the target subject image on the current original image and the size reference value, so that the size of the target subject image remains substantially unchanged across different original images and across different video images, thereby realizing the Hitchcock zoom.
In another possible implementation, the determining, by the electronic device according to the first size of the target subject image on the first original image, of the second focal length value corresponding to the second original image to be acquired includes: the electronic device determines, according to the first size of the target subject image on the first original image, the first focal length value, and the size reference value, the second focal length value corresponding to the second original image to be acquired by using formula one, where the second size of the target subject image on the second original image corresponding to the second focal length value is equal to the size reference value;
f2 = S3 × f1 / S2    (formula one)
where S3 represents the size reference value, S2 represents the first size, f1 represents the first focal length value, and f2 represents the second focal length value.
In this way, the electronic device can adjust the focal length value used to acquire the subsequent original image according to the size of the target subject image on the current original image, the size reference value, and the current focal length value, so that when the zoom camera acquires the subsequent original image with the adjusted focal length value, the size of the target subject image on that image equals the size reference value. The size of the target subject image on subsequent original images thus stays substantially consistent with the size reference value, the size of the target subject image on adjacent original images and on adjacent video images remains substantially unchanged, and the Hitchcock zoom is realized.
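Formula one is straightforward to apply; the following snippet and the example numbers are purely illustrative.

```python
def second_focal_length(size_reference: float, first_size: float, first_focal_length: float) -> float:
    # Formula one: f2 = S3 * f1 / S2.
    return size_reference * first_focal_length / first_size

# Illustrative numbers only: if the target subject occupies 200 px on the first original
# image captured at a 27 mm equivalent focal length, and the size reference value is
# 240 px, the next original image should be captured at about 32.4 mm.
f2 = second_focal_length(size_reference=240, first_size=200, first_focal_length=27)
print(round(f2, 1))  # 32.4
```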
In another possible implementation, the method further includes: the electronic device prompts the user on a preview interface to set the size reference value, so that the user can set the size reference value of the target subject according to the prompt. For example, the prompt interface may be seen in (a) of fig. 12 or (a) of fig. 13.
In another possible implementation, on the preview interface, the size of the target subject image on the preview image changes as the electronic device moves forward or backward, and the size reference value is the size of the target subject image on the preview image. The user may move the handheld electronic device forward or backward to change the size of the target subject image on the preview image, thereby setting or adjusting the size reference value.
In another possible implementation, on the preview interface, the size of the target subject image on the preview image changes in response to the user's operation of adjusting the zoom magnification, and the size reference value is the size of the target subject image on the preview image. The user can adjust the zoom magnification of the preview image to change the size of the target subject image on the preview image, thereby setting or adjusting the size reference value.
That is, the user may set or modify the size of the size reference value.
In another possible implementation, the size reference value is a preset value.
It will be appreciated that, in order to implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing the respective functions. In combination with the example algorithm steps described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Embodiments of the present application also provide an electronic device including one or more processors and one or more memories. The one or more memories are coupled to the one or more processors for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the associated method steps described above to implement the shooting method in the above embodiments.
Embodiments of the present application further provide a computer-readable storage medium, where computer instructions are stored, and when the computer instructions are executed on an electronic device, the electronic device is caused to execute the related method steps to implement the shooting method in the foregoing embodiments.
Embodiments of the present application further provide a computer program product, which when running on a computer, causes the computer to execute the above related steps to implement the shooting method performed by the electronic device in the above embodiments.
In addition, an apparatus, which may be specifically a chip, a component or a module, may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the shooting method executed by the electronic equipment in the above-mentioned method embodiments.
The electronic device, the computer-readable storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer-readable storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A shooting method is applied to an electronic device, and is characterized in that the electronic device comprises a zoom camera, and the method comprises the following steps:
displaying a first recorded image on a shooting interface, wherein the first recorded image comprises a target main body image, the first recorded image is obtained according to a first original image, and the first original image is obtained by shooting through a zoom camera by adopting a first focal length value;
determining a second focal length value corresponding to a second original image to be acquired according to the first size of the target subject image on the first original image, the first focal length value and a size reference value, wherein the second size of the target subject image on the second original image corresponding to the second focal length value is equal to the size reference value;
updating the second focal length value to a target focal length value, wherein the target focal length value is obtained according to a focal length value corresponding to a historical original image; the difference value between the target focal length value and a first focal length value corresponding to the first original image is smaller than or equal to a first preset value, and the difference value between the target focal length value and the second focal length value is smaller than or equal to a second preset value; or the target focal length value is the average value of focal length values corresponding to multiple frames of historical original images; or the target focal length value is obtained by performing curve fitting on the focal length value corresponding to the multi-frame historical original image;
and displaying a second recorded image on the shooting interface, wherein the second recorded image comprises the target subject image, the second recorded image is obtained according to the second original image, and the second original image is obtained by shooting through the zoom camera by using the updated second focal length value.
2. The method according to claim 1, wherein the second position coordinates of the target subject image on the second captured image are obtained from the initial position coordinates of the target subject image on the second original image and the position coordinates of the target subject image on the history captured image.
3. The method of claim 2, wherein a difference between the second position coordinates and the first position coordinates of the target subject image on the first captured image is less than or equal to a third preset value, and wherein a difference between the second position coordinates and the initial position coordinates is less than or equal to a fourth preset value.
4. The method according to claim 2, wherein the second position coordinate is an average value of position coordinates of the target subject image on a plurality of frames of history-captured images.
5. The method according to claim 2, wherein the second position coordinates are obtained by curve-fitting the position coordinates of the target subject image on a plurality of frames of history captured images.
6. The method according to any one of claims 1-5, further comprising:
in a preview state of a target shooting mode, detecting a first preset operation of a user indicating a target subject based on a preview image;
determining the target subject in response to the first preset operation.
7. The method of claim 6, wherein the target subject is a human figure.
8. The method according to any one of claims 1-5 or 7, wherein the determining a second focal length value corresponding to a second original image to be acquired according to the first size of the target subject image on the first original image, the first focal length value and a size reference value comprises:
determining the second focal length value corresponding to the second original image to be acquired by adopting formula one;
f2 = S3 × f1 / S2    (formula one)
wherein S3 represents the size reference value, S2 represents the first size, f1 represents the first focal length value, and f2 represents the second focal length value.
9. The method of claim 8, wherein on the preview interface, the size of the target subject image on the preview image changes as the electronic device moves forward or backward, and the size reference is the size of the target subject image on the preview image.
10. The method according to claim 8, wherein, on a preview interface, the size of the target subject image on a preview image is changed in response to a user's operation of adjusting zoom magnification, and the size reference value is the size of the target subject image on the preview image.
11. The method according to claim 9 or 10, characterized in that the method further comprises:
and prompting a user to set the size reference value on a preview interface.
12. The method of any of claims 1-5 or 7 or 9 or 10, wherein the first focal length value is a preset focal length value or a focal length value set by the user.
13. An electronic device, comprising:
the zoom camera is used for acquiring images;
a screen for displaying an interface;
one or more processors;
a memory;
and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the photographing method of any of claims 1-12.
14. A computer-readable storage medium, comprising computer instructions which, when run on a computer, cause the computer to perform the photographing method according to any one of claims 1 to 12.
CN202011044018.3A 2020-05-30 2020-09-28 Shooting method and equipment Active CN113747050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/078543 WO2022062318A1 (en) 2020-05-30 2021-03-01 Photographing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010480536 2020-05-30
CN2020104805363 2020-05-30

Publications (2)

Publication Number Publication Date
CN113747050A CN113747050A (en) 2021-12-03
CN113747050B true CN113747050B (en) 2023-04-18

Family

ID=78728055

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011043999.XA Active CN113747085B (en) 2020-05-30 2020-09-28 Method and device for shooting video
CN202011044018.3A Active CN113747050B (en) 2020-05-30 2020-09-28 Shooting method and equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011043999.XA Active CN113747085B (en) 2020-05-30 2020-09-28 Method and device for shooting video

Country Status (2)

Country Link
CN (2) CN113747085B (en)
WO (2) WO2022062318A1 (en)

Also Published As

Publication number Publication date
CN113747085B (en) 2023-01-06
WO2022062318A1 (en) 2022-03-31
WO2021244295A1 (en) 2021-12-09
CN113747085A (en) 2021-12-03
CN113747050A (en) 2021-12-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant