CN117459827A - Shooting method and electronic equipment - Google Patents

Shooting method and electronic equipment

Info

Publication number
CN117459827A
CN117459827A (application CN202311399721.XA)
Authority
CN
China
Prior art keywords
input
display
identifier
preview interface
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311399721.XA
Other languages
Chinese (zh)
Inventor
罗子扬
张晓怡
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202311399721.XA
Publication of CN117459827A
Legal status: Pending


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 - Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45 - Cameras or camera modules for generating image signals from two or more image sensors of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/58 - Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a shooting method and an electronic device, belonging to the technical field of photography. The method includes: receiving a first input from a user while a shooting preview interface of a rear camera of the electronic device is displayed; in response to the first input, displaying a control identifier in the shooting preview interface, where the control identifier is associated with the images captured by the front camera; receiving a second input from the user; and, in response to the second input, updating display parameters of the control identifier and updating the display content of the shooting preview interface, where the display parameters of the control identifier are associated with the display content of the shooting preview interface.

Description

Shooting method and electronic equipment
Technical Field
The application belongs to the technical field of photography, and particularly relates to a shooting method and an electronic device.
Background
With the popularization of electronic devices, they are widely used and increasingly powerful. The photographing function, as an important function of an electronic device, is used constantly in users' daily lives.
Currently, during video shooting, footage with a specific effect is usually obtained by moving the camera, changing the optical axis of the lens, or changing the focal length of the lens; this is referred to as camera movement (literally, "moving the mirror"). To perform camera movement, the user must hold the electronic device by hand or with a rig and move it according to the desired picture effect. However, this manual approach requires the user to master certain camera-movement techniques to shoot good footage. In addition, in some shooting scenes, external factors can greatly disturb the position and shooting angle of the electronic device. For example, when shooting video underwater, water currents and buoyancy prevent the user from manually controlling the device to perform camera movement as steadily as on land, so the camera-movement effect is poor.
Disclosure of Invention
The embodiments of the application aim to provide a shooting method and an electronic device that can achieve a stable camera-movement effect.
In a first aspect, an embodiment of the present application provides a shooting method, including: receiving a first input from a user while a shooting preview interface of a rear camera of the electronic device is displayed; in response to the first input, displaying a control identifier in the shooting preview interface, where the control identifier is associated with the images captured by the front camera; receiving a second input from the user; and, in response to the second input, updating display parameters of the control identifier and updating the display content of the shooting preview interface, where the display parameters of the control identifier are associated with the display content of the shooting preview interface.
In a second aspect, an embodiment of the present application provides an electronic device that includes a front camera, a rear camera, a receiving module, a display module, and an execution module. The receiving module is configured to receive a first input from a user while a shooting preview interface of the rear camera is displayed. The display module is configured to display, in response to the first input received by the receiving module, a control identifier in the shooting preview interface, where the control identifier is associated with the images captured by the front camera. The receiving module is further configured to receive a second input from the user. The execution module is configured to, in response to the second input received by the receiving module, update display parameters of the control identifier and update the display content of the shooting preview interface, where the display parameters of the control identifier are associated with the display content of the shooting preview interface.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
According to the shooting method provided by the embodiments of the application, while the shooting preview interface of the rear camera is displayed, the electronic device receives a first input from the user, enables the camera-movement function, and displays a control identifier in the shooting preview interface, where the control identifier is associated with the images captured by the front camera. A second input from the user then updates the display parameters of the control identifier and the display content of the shooting preview interface; a change in the identifier's display parameters indicates a change in the shot content. In this way, the user can control the images captured by the rear camera, and thus update the display content of the shooting preview interface, without manually moving the camera. A stable camera-movement effect can therefore be achieved, lowering the threshold and difficulty of camera movement when shooting video. Applied to underwater shooting scenes, the method achieves a stable camera-movement effect even under the influence of currents and buoyancy; and because the user does not need to touch the display screen directly, a video with the desired camera-movement effect can be shot even if the touch function of the screen fails underwater.
Drawings
Fig. 1 is a flow chart of a shooting method provided in an embodiment of the present application;
Fig. 2 (A) is a first schematic diagram of an interface to which the shooting method provided in an embodiment of the present application is applied;
Fig. 2 (B) is a second schematic diagram of an interface to which the shooting method is applied;
Fig. 3 (A) is a third schematic diagram of an interface to which the shooting method is applied;
Fig. 3 (B) is a fourth schematic diagram of an interface to which the shooting method is applied;
Fig. 4 (A) is a fifth schematic diagram of an interface to which the shooting method is applied;
Fig. 4 (B) is a sixth schematic diagram of an interface to which the shooting method is applied;
Fig. 5 is a schematic diagram of the control identifier's movement direction for a translational camera movement, according to an embodiment of the present application;
Fig. 6 (A) is a schematic diagram of the control identifier's movement direction for a forward-backward translational camera movement, according to an embodiment of the present application;
Fig. 6 (B) is a seventh schematic diagram of an interface to which the shooting method is applied;
Fig. 7 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 9 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second", and the like in the description of the present application are used to distinguish between similar objects and do not describe a particular sequence or chronological order. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more objects. In addition, "and/or" in the specification denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The term "at least one" in the description of the present application refers to any one of, any two of, or any combination of two or more of the objects it covers. For example, "at least one of a, b, and c" may represent: "a", "b", "c", "a and b", "a and c", "b and c", or "a, b, and c", where a, b, and c may each be singular or plural. Similarly, "at least two" means two or more, with a meaning analogous to that of "at least one".
The shooting method provided by the application can be applied to video shooting scenes or time-lapse shooting scenes.
For a video shooting scene, suppose a user shoots video underwater with an electronic device. After the user chooses a shooting angle and presses the shooting control, water currents and buoyancy may prevent the user from operating the device as steadily as on land, resulting in a poor shooting effect. In some embodiments, after the user chooses a shooting angle and presses the key that enables the camera-movement function, the electronic device displays a control identifier in the shooting preview interface and, according to a second input from the user (for example, a gesture input), updates the display parameters of the control identifier and the display content of the shooting preview interface, where the display parameters of the control identifier are associated with the display content of the shooting preview interface. The user can thus control the images captured by the rear camera, and update the display content of the shooting preview interface, without manually moving the camera, achieving a stable camera-movement effect and lowering the threshold and difficulty of camera movement when shooting video. When shooting underwater, a stable camera-movement effect can be achieved even under the influence of currents and buoyancy; and because the user does not touch the display screen directly, a video with the desired camera-movement effect can be shot even if the screen's touch function fails underwater.
The shooting method provided by the embodiments of the application may be executed by an electronic device, or by at least one of a functional module and a physical module within the electronic device that is capable of implementing the method; this can be determined according to actual usage requirements and is not limited by the embodiments of the application.
The shooting method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
The shooting method provided by the embodiment of the application can be executed by electronic equipment, and the electronic equipment comprises a front camera and a rear camera. Fig. 1 is a flowchart of a photographing method according to an embodiment of the present application, as shown in fig. 1, the photographing method may include the following steps S201 to S204:
step S201: and the electronic equipment receives a first input of a user under the condition that a shooting preview interface of the front camera is displayed.
In some embodiments, the shooting preview interface is a shooting preview interface displayed by the electronic device in response to an input for indicating to display the shooting preview interface. The input for displaying the shooting preview interface may be at least one of a voice input, a space gesture input, a touch input, etc., or may be an input to a fixed key of the electronic device.
In some embodiments, the shooting preview interface may be a video recording preview interface.
In some embodiments, the shooting preview interface may be an interface displayed before video recording or an interface displayed during video recording. For example, the recording preview interface may display image frames captured by a camera during recording or image frames captured by a camera prior to recording.
In some possible implementations, the electronic device displays the shooting preview interface in response to a user's tap on the icon of the camera application, or in response to a user's tap on a video-recording control.
In some embodiments, the first input may trigger the electronic device to enter a mirror mode and display a control identifier in the shooting preview interface.
In some embodiments, the first input may be any of the following: a touch input, a voice input, a gesture input, or another feasible input from the user, which is not limited in the embodiments of the present application. For example, it may be a click input, a slide input, or a press input by the user. A click may consist of any number of clicks, and a slide may be in any direction, such as up, down, left, or right; this is not limited in some embodiments.
The first input may be, for example, a user pressing input of a fixed key.
Step S202: the electronic device displays a control identifier in the shooting preview interface in response to the first input.
The control identifier is associated with the images captured by the front camera.
In some embodiments, the control identifier may be of any shape, such as a circle, a square, or another polygon, which is not limited in the embodiments of the present application.
In some embodiments, the electronic device may display the control identifier at the center of the shooting preview interface.
For example, after the user presses a fixed key of the electronic device, the electronic device displays a circular control identifier in the photographing preview interface.
Step S203: the electronic device receives a second input from the user.
In some embodiments, the second input may trigger the electronic device to update the display parameter of the control identifier, and trigger the electronic device to update the display content of the shooting preview interface.
In some embodiments, the second input may be any of the following: a touch input, a voice input, a gesture input, or another feasible input from the user, which is not limited in the embodiments of the present application.
The second input may be, for example, a space gesture input of the user.
Step S204: and the electronic equipment responds to the second input, updates the display parameters of the control identifier and updates the display content of the shooting preview interface.
And the display parameters of the control mark and the display content of the shooting preview interface have an association relation.
In some embodiments, the display parameters may include display location parameters.
In some embodiments, in response to the second input from the user, the electronic device updates the display position of the control identifier in the shooting preview interface.
Take the second input being a space gesture input as an example, with the camera-movement function controlled by a fixed key of the electronic device. The user presses and holds the fixed key to enable the camera-movement function, at which point the front camera is turned on. As shown in fig. 2 (A), the user performs a space gesture with a finger in front of the front camera; when the front camera captures this gesture input, as shown in fig. 2 (B), the electronic device displays a control identifier 21 at the center of the shooting preview interface, and the display position of the identifier is updated as the user's finger moves.
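The finger-tracking behaviour described above, where the control identifier follows a fingertip detected by the front camera, amounts to a coordinate mapping from the front-camera frame to the preview interface. The sketch below is illustrative only; all names are hypothetical, and the horizontal flip assumes a typical mirrored front-camera view rather than anything stated in the patent.

```python
def update_marker_position(fingertip_xy, frame_size, preview_size):
    """Map a fingertip position in the front-camera frame to a control
    identifier position in the shooting preview interface."""
    fx, fy = fingertip_xy
    fw, fh = frame_size
    pw, ph = preview_size
    # Normalize to [0, 1], then scale to preview coordinates. The
    # horizontal axis is flipped because the front camera is assumed
    # to produce a mirrored view of the user.
    nx, ny = 1.0 - fx / fw, fy / fh
    return (nx * pw, ny * ph)
```

For example, a fingertip at the center of a 640x480 front-camera frame maps to the center of a 1080x1920 preview, which matches the behaviour of the identifier initially appearing at the preview center.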
It should be noted that, in order to facilitate the front camera to capture the space gesture input of the user, a red dot may be set on the finger of the user performing the space gesture input.
In some embodiments, the electronic device may update the display content in the shooting preview interface according to the display position of the control identifier.
For example, the electronic device may control the rear camera to rotate according to the display position of the control identifier, or translate the display content of the shooting preview interface according to the display position of the control identifier, so as to update the display content of the shooting preview interface.
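One possible realization of the "translate the display content according to the identifier position" option is to shift the preview content proportionally to the identifier's offset from the preview center. A minimal sketch; the `gain` parameter is a hypothetical scaling factor, not something specified by the patent.

```python
def preview_shift_from_marker(marker_xy, center_xy, gain=1.0):
    """Compute a (dx, dy) translation of the preview content from the
    control identifier's offset relative to the preview center.
    'gain' (assumed) scales how strongly the content follows."""
    dx = (marker_xy[0] - center_xy[0]) * gain
    dy = (marker_xy[1] - center_xy[1]) * gain
    return (dx, dy)
```

When the identifier sits exactly at the center, the shift is zero and the preview content is unchanged.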
According to the shooting method provided by the embodiments of the application, while the shooting preview interface of the rear camera is displayed, the electronic device receives a first input from the user and displays a control identifier in the shooting preview interface, where the control identifier is associated with the images captured by the front camera; the electronic device then receives a second input from the user, updates the display parameters of the control identifier, and updates the display content of the shooting preview interface. In this way, the user can control the images captured by the rear camera, and thus update the display content of the shooting preview interface, without manually moving the camera, achieving a stable camera-movement effect and lowering the threshold and difficulty of camera movement when shooting video. Applied to underwater shooting scenes, the method achieves a stable camera-movement effect even under the influence of currents and buoyancy; and because the user does not touch the display screen directly, a video with the desired camera-movement effect can be shot even if the screen's touch function fails underwater.
In some embodiments, the step S204 may be implemented by the following step S204 a.
Step S204a: and the electronic equipment responds to the second input, updates the display parameters of the control identifier according to the input parameters of the second input, and updates the display content of the shooting preview interface while updating the display parameters of the control identifier.
In some embodiments, the second input may be a space gesture input, and the input parameters of the second input may include a start position, a movement direction, an end position, a movement track, and the like of the space gesture input.
In some embodiments, the electronic device may update the display parameter of the control identifier and the display content of the photographing preview interface simultaneously according to the input parameter of the second input.
Taking the second input as the space gesture input as an example, the electronic device may control the control identifier to move in the same movement direction in the shooting preview interface according to the movement direction of the space gesture input of the user, control the rear camera to rotate based on the movement direction, and display the image acquired by the rear camera in the shooting preview interface.
In some embodiments, the electronic device may determine camera-movement information for the rear camera according to the user's space gesture input, and adjust the shooting pose of the rear camera, or the display content of the shooting preview interface, according to that information, so as to obtain a video picture with the corresponding camera-movement effect.
In some embodiments, the electronic device may adjust the shooting pose of the rear camera through a pan-tilt (gimbal) or a rotating assembly connected to the rear camera. For example, the gimbal drives the rear camera to rotate.
In some embodiments, the rear camera may be a retractable camera.
In some embodiments, the camera-movement information described above may include angle-change information or direction information.
In some embodiments, where the camera-movement information includes angle-change information, it includes, but is not limited to, any of the following: X-axis angle-change information, Y-axis angle-change information, and Z-axis angle-change information. Where it includes direction information, it includes, but is not limited to, any of the following: left-right translation information and forward-backward translation (zoom) information. Further, the Z-axis angle-change information may include clockwise or counterclockwise Z-axis angle-change information.
The X-axis angle-change information indicates that the camera rotates about the X axis, the Y-axis angle-change information indicates that the camera rotates about the Y axis, and the Z-axis angle-change information indicates that the camera rotates about the Z axis.
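The taxonomy above can be summarized as a small enumeration; this sketch is illustrative only, and none of the names come from the patent.

```python
from enum import Enum

class CameraMove(Enum):
    """Camera-movement categories described above."""
    ANGLE_X = "rotate about the X axis (tilt up/down)"
    ANGLE_Y = "rotate about the Y axis (pan left/right)"
    ANGLE_Z_CW = "rotate clockwise about the Z axis"
    ANGLE_Z_CCW = "rotate counterclockwise about the Z axis"
    PAN_LEFT_RIGHT = "translate left/right"
    ZOOM_FORWARD_BACK = "translate forward/back (zoom)"

# One plausible mapping from a recognized gesture to a movement category:
GESTURE_TO_MOVE = {
    "move_up": CameraMove.ANGLE_X,
    "move_down": CameraMove.ANGLE_X,
    "move_left": CameraMove.ANGLE_Y,
    "move_right": CameraMove.ANGLE_Y,
    "rotate_cw": CameraMove.ANGLE_Z_CW,
    "rotate_ccw": CameraMove.ANGLE_Z_CCW,
}
```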
In some embodiments, the movement direction of the above-described space gesture input may include an upward movement, a downward movement, a leftward movement, or a rightward movement.
For example, in a case where the movement direction of the space gesture input is upward movement, the mirror information corresponding to the rear camera may be X-axis angle change information, for example, the photographing angle is raised upward.
For example, in the case that the movement direction of the space gesture input is clockwise rotation, the mirror information corresponding to the rear camera may be Z-axis clockwise change information; under the condition that the movement direction of the space gesture input is anticlockwise rotation, the mirror conveying information corresponding to the rear camera can be Z-axis anticlockwise change information.
In some embodiments, the second input comprises a first spaced gesture input; the input parameters of the second input comprise the moving direction of the first space gesture input; illustratively, the process of updating the display parameters of the control flag and updating the display content of the photographing preview interface in the above step S204 may include the following steps S204b1 and S204b2:
step S204b1: the electronic equipment updates the display state of the control mark based on the moving direction input by the first space gesture, and controls the movement of the control mark along the moving direction.
Step S204b2: and the electronic equipment controls the rotation of the rear camera based on the moving direction, and displays the image acquired by the rear camera in a shooting preview interface.
In some embodiments, the electronic device may perform the step S204b1, then perform the step S204b2, or perform the step S204b1 and the step S204b2 simultaneously.
That is, the electronic device may first update the display state of the control identifier based on the movement direction of the first space gesture input, controlling the identifier to move along that direction, and then control the rotation of the rear camera based on the same movement direction and display the images captured by the rear camera in the shooting preview interface; alternatively, it may perform these operations simultaneously.
In some embodiments, the moving direction of the first space gesture input may be an upward movement or a downward movement.
In some embodiments, the electronic device may control the rear camera to rotate around the X-axis or the Y-axis according to the movement direction of the first space gesture input, so as to implement the X-axis or the Y-axis angle change mirror.
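Turning a recognized gesture direction into a rotation command for a gimbal-mounted rear camera is one possible way to realize the X-axis or Y-axis angle change described above. The sketch below is hypothetical: the per-step angle and the sign conventions are assumptions, not values from the patent.

```python
STEP_DEG = 2.0  # assumed rotation step per recognized gesture update

def gimbal_command(direction):
    """Map a first-gesture movement direction to (axis, signed degrees)
    for a gimbal or rotating assembly driving the rear camera."""
    table = {
        "up":    ("X", +STEP_DEG),   # raise the elevation angle
        "down":  ("X", -STEP_DEG),   # lower the elevation angle
        "left":  ("Y", +STEP_DEG),   # pan toward the left
        "right": ("Y", -STEP_DEG),   # pan toward the right
    }
    return table[direction]
```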
It should be noted that, for a mobile device, such as a phone or tablet, the device coordinate system is defined in a standard direction of the screen. In the equipment coordinate system, the X axis is on the plane of the screen of the electronic equipment and is parallel to the short side direction of the screen, the Y axis is also on the plane of the screen of the electronic equipment and is parallel to the long side direction of the screen, and the Z axis is perpendicular to the plane of the screen of the electronic equipment.
For example, in combination with the above embodiments, when the user wants to move the shot upward, i.e., raise the shooting angle, the user moves a finger upward as shown in fig. 3 (A). As shown in fig. 3 (B), after the front camera captures this gesture, the control identifier 31 in the shooting preview interface is moved upward, and the rear camera is rotated upward about the X axis of the electronic device, increasing the rear camera's elevation angle.
The purpose of rotating the camera upward about the X axis is to increase its elevation angle, so that the lens tilts upward and can capture subjects located higher in the scene.
In some embodiments, the control identifier includes a first identifier and a second identifier, and the second input includes a second space gesture input. Illustratively, updating the display parameters of the control identifier and the display content of the shooting preview interface in step S204 may include the following steps S204c1 and S204c2:
Step S204c1: the electronic device updates the display states of the first identifier and the second identifier based on the input parameters of the second space gesture input.
Step S204c2: the electronic device controls the rotation of the rear camera based on the rotation direction indicated by the first identifier and the second identifier, and displays the images captured by the rear camera in the shooting preview interface.
In some embodiments, the input parameters of the second space gesture input may include an input start position and a movement direction.
In some embodiments, the first identifier and the second identifier may be of any shape, such as a circle or a rectangle, which is not limited in the embodiments of the present application.
In some embodiments, the electronic device may determine the rotation direction indicated by the first identifier and the second identifier based on the change of their relative positions. For example, assuming that the first identifier moves toward the upper right and the second identifier moves toward the lower left, the rotation direction indicated by the first identifier and the second identifier is counterclockwise.
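The rotation direction can be inferred from the sign of the 2-D cross product between each marker's radius vector (from the markers' midpoint) and its displacement. This is a minimal illustrative sketch, not part of the application; the function name and the image-coordinate convention (y grows downward) are assumptions:

```python
def rotation_direction(p1_old, p1_new, p2_old, p2_new):
    """Infer clockwise/counterclockwise rotation of two markers
    about their common midpoint from their displacements."""
    cx = (p1_old[0] + p2_old[0]) / 2.0
    cy = (p1_old[1] + p2_old[1]) / 2.0
    total = 0.0
    for old, new in ((p1_old, p1_new), (p2_old, p2_new)):
        rx, ry = old[0] - cx, old[1] - cy          # radius vector from center
        dx, dy = new[0] - old[0], new[1] - old[1]  # displacement of the marker
        total += rx * dy - ry * dx                 # 2-D cross product
    # In image coordinates (y grows downward) a positive cross product
    # corresponds to a clockwise rotation as seen on screen.
    if total > 0:
        return "clockwise"
    if total < 0:
        return "counterclockwise"
    return "none"
```

For instance, a marker to the right of the center moving up on screen, paired with a marker to the left moving down, yields a counterclockwise result.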
In some embodiments, the electronic device may control the rear camera to rotate around the Z axis based on the rotation directions indicated by the first and second markers, so as to implement a Z axis direction angle change mirror.
Illustratively, in combination with the above embodiment, when the user wants to move the mirror in the Z-axis direction, the user presses the fixed key and rotates two fingers counterclockwise, as shown in fig. 4 (A). When the front camera recognizes the two feature points, the auxiliary control rings change as follows: when the user's gesture is recognized as rotating counterclockwise, as shown in fig. 4 (B), the control identifier 41 and the control identifier 42 show a counterclockwise rotation, indicating that a counterclockwise mirror movement is required.
It should be noted that, since moving the mirror about the Z axis requires distinguishing clockwise from counterclockwise rotation, the rotation direction can be determined from two feature points. Because the user's fingers cannot keep rotating continuously while shooting, the device instead, while only the fixed key is pressed (which triggers the mirror operation), compares the relative displacement of the feature points against their positions at time T, the moment the key was pressed. For example, if the user wants to move the mirror clockwise, the positions of the feature points are recorded when the fixed key is pressed; at time T the user makes a 10-degree clockwise gesture, and at time T+1, affected by water currents or other factors, the fingers drift 2 degrees counterclockwise. The relative displacement recognized by the device (relative to the key-press moment) is still 8 degrees clockwise, so the mirror operation remains clockwise. When the user wants to stop the mirror operation, the fixed key is released, and the control identifier is restored to its initial state.
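The "compare against the key-press moment" behaviour above can be sketched as keeping only the angle of the line through the two feature points, relative to the baseline recorded when the fixed key went down. The class and method names are illustrative assumptions:

```python
import math

def marker_angle(p1, p2):
    """Angle (degrees) of the line through the two feature points."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

class ZAxisMirror:
    def __init__(self, p1, p2):
        # Baseline recorded at the moment the fixed key is pressed.
        self.base = marker_angle(p1, p2)

    def net_rotation(self, p1, p2):
        """Signed rotation relative to the key-press moment,
        normalized to (-180, 180]."""
        d = marker_angle(p1, p2) - self.base
        return (d + 180.0) % 360.0 - 180.0
```

With this scheme, a 10-degree gesture followed by a 2-degree drift back still reads as a net 8-degree rotation, matching the example in the text.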
In some embodiments, the control identifier includes a third identifier and a fourth identifier, and the second input includes a third spaced gesture input; illustratively, the process of updating the display parameters of the control flag and updating the display content of the photographing preview interface in the above step S204 may include the following steps S204d1 and S204d2:
step S204d1: and the electronic equipment updates the display positions of the third mark and the fourth mark based on the input parameters input by the third space gesture.
Step S204d2: and the electronic equipment translates the display content of the preview interface based on the display positions of the third mark and the fourth mark.
In some embodiments, the input parameters of the third space gesture input may include an input start position and a movement direction.
In some embodiments, the third identifier and the fourth identifier may be of any shape, such as a circle or a rectangle, which is not limited in the embodiments of the present application.
In some embodiments, the electronic device may determine the panning direction of the display content of the shooting preview interface based on the relative position change of the third identifier and the fourth identifier.
In one example, assuming that the third identifier and the fourth identifier move leftward at the same time, the display content of the shooting preview interface is translated leftward, realizing a rightward panning of the mirror; if the two identifiers move rightward at the same time, the display content is translated rightward, realizing a leftward panning. Likewise, if the two identifiers move toward the upper left at the same time, the display content is translated toward the upper left, realizing a panning toward the lower right.
In another example, assuming that the third identifier and the fourth identifier move leftward at the same time, the display content of the shooting preview interface is translated rightward, realizing a leftward panning of the mirror; if the two identifiers move rightward at the same time, the display content is translated leftward, realizing a rightward panning. Likewise, if the two identifiers move toward the upper left at the same time, the display content is translated toward the lower right, realizing a panning toward the upper left.
Further, when the display content of the shooting preview interface is translated, pixel compensation may be performed on the display content, and the compensated display content is displayed.
When the display content is panned and can no longer fill the entire shooting preview interface, pixel compensation may be performed to fill in edge pixels so that the display content fully covers the shooting preview interface.
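One simple form of the edge-pixel compensation described above is to replicate the nearest edge row or column into the strip vacated by the pan. A single-channel NumPy sketch under that assumption (the real compensation method is not specified in the application):

```python
import numpy as np

def pan_with_edge_compensation(frame, dx, dy):
    """Translate a 2-D image by (dx, dy) pixels; the strip that would
    be left empty is filled by replicating the nearest edge pixels."""
    h, w = frame.shape
    # Pad every side with replicated edge pixels, then crop the
    # shifted window out of the padded canvas.
    pad = max(abs(dx), abs(dy))
    padded = np.pad(frame, pad, mode="edge")
    y0 = pad - dy          # shifting content down by dy means
    x0 = pad - dx          # sampling the padded image further up
    return padded[y0:y0 + h, x0:x0 + w]
```

Shifting a frame one pixel to the right, for example, leaves the leftmost column filled with a copy of the original left edge instead of a blank strip.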
Illustratively, when the user needs to pan the mirror, the change of the panning direction is controlled by two control identifiers. After the user presses the fixed key, the device recognizes the user's space gesture input, displays the two control identifiers according to that input, and then recognizes the user's panning behavior according to the change in the relative positions of the two control identifiers, as shown in the schematic diagram of one movement direction of the control identifiers in fig. 5.
When the mirror is panned, the auxiliary ring may not be displayed in the shooting preview interface; the panning can be performed by directly controlling the translation of the picture according to the recognized change of relative position. After the user releases the fixed key, the translation of the picture stops.
When the user pans the mirror, the change of the picture is small, unlike a change of angle; it can be understood as moving a cropping frame over a canvas, involving no change of angle or perspective.
In some embodiments, the control identifier includes a fifth identifier and a sixth identifier, and the second input includes a fourth spaced gesture input; illustratively, the process of updating the display parameters of the control flag and updating the display content of the photographing preview interface in the above step S204 may include the following steps S204e1 and S204e2:
step S204e1: and the electronic equipment updates the relative position relation between the fifth mark and the sixth mark based on the input parameters input by the fourth space gesture.
Step S204e2: and the electronic equipment zooms the display content of the shooting preview interface based on the relative position relation.
In some embodiments, the input parameters of the fourth space gesture input may include an input start position and a movement direction.
In some embodiments, the fifth identifier and the sixth identifier may be of any shape, such as a circle or a rectangle, which is not limited in the embodiments of the present application.
In some embodiments, the electronic device may zoom the display content of the shooting preview interface in or out according to the relative position change of the fifth identifier and the sixth identifier.
For example, assuming that the distance between the fifth identifier and the sixth identifier decreases, the display content of the shooting preview interface is zoomed out to realize a backward mirror movement. Assuming that the distance between the fifth identifier and the sixth identifier increases, the display content of the shooting preview interface is zoomed in to realize a forward mirror movement.
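The zoom behaviour can be sketched as mapping the ratio of the current marker distance to the distance at key-press onto a digital zoom factor. The clamping range and function name are assumptions for illustration:

```python
import math

def zoom_factor(p1_start, p2_start, p1_now, p2_now,
                min_zoom=0.5, max_zoom=8.0):
    """Digital zoom factor from the change in distance between the two
    control identifiers: markers moving apart -> zoom in (forward
    mirror); markers moving together -> zoom out (backward mirror)."""
    d0 = math.dist(p1_start, p2_start)  # distance at key-press
    d1 = math.dist(p1_now, p2_now)      # current distance
    if d0 == 0:
        return 1.0
    return max(min_zoom, min(max_zoom, d1 / d0))
```

Doubling the marker separation doubles the zoom factor; halving it zooms out by half, subject to the clamp.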
For example, in combination with the above embodiment, when the user needs to move the mirror in the front-rear direction, this front-rear translation can be implemented through a digital zoom function (Zoom), which can also be controlled by the two control identifiers. Fig. 6 (A) shows the relationship between the movement direction of the two control identifiers and the direction of the front-rear mirror movement: when the distance between the two control identifiers decreases, the mirror moves backward and the field angle of the picture increases; when the distance increases, the mirror moves forward and the field angle decreases. After the user presses the fixed key, as shown in fig. 6 (B), the user pinches the two fingers together, the distance between the two control identifiers decreases, and the shooting device determines that the mirror movement is a backward translation.
In some embodiments, a first fixed key is disposed on the electronic device; illustratively, the above step S203 may be implemented by the following step S203a.
Step S203a: and the electronic equipment receives a second input of the user under the condition that the first fixed key is in an activated state.
In some embodiments, the first fixed key may be used to trigger entering the mirror mode.
In some embodiments, the first fixed key may be a physical key or a virtual key of the electronic device.
In some embodiments, in the case where the first fixed key is a physical key, the first fixed key may be a volume key, a power key, or other keys of the electronic device, which is not limited in this embodiment of the present application.
For example, the second input may be a long-press input on the volume "+" key of the electronic device by the user; the volume "+" key is in the activated state while it is pressed, and the electronic device starts the mirror mode.
In some embodiments, when shooting, the electronic device may recognize a space gesture input of the user through the front camera, display a corresponding control identifier in the shooting preview interface according to the input parameters of the space gesture input, and then, according to the position information of the control identifier, control the camera to rotate or translate the display content in the shooting preview interface, so as to achieve the corresponding mirror effect. Therefore, when the user needs to move the mirror, the camera performs the mirror movement under the control of the space gesture input, achieving a stable mirror-movement effect.
In some embodiments, the electronic device exits the mirror mode after receiving a fourth input from the user to the first fixed key.
In some embodiments, the fourth input may be any feasible input such as a touch input, a voice input, or a gesture input of the user, which is not limited in the embodiments of the present application.
For example, when the user no longer needs the mirror movement, the fixed key only needs to be released, and the picture remains at the position it was at when the key was released. Changes of the shooting angle in the other directions are similar to the upward mirror movement described above.
It should be noted that during the mirror movement the fixed key can be kept pressed the whole time. If the fixed key is released, the front camera stops gesture recognition, the mirror movement stops, the picture stays at the position it was at the moment of release, and the control identifier is restored to the center position.
In some embodiments, the electronic device is provided with a second fixed key, and after the display content of the shooting preview interface is updated in the step S204, the shooting method provided in the embodiment of the present application further includes the following step S205:
step S205: and the electronic equipment receives a third input of the user to the second fixed key and generates a video file.
In some embodiments, the second fixed key is used to trigger video capturing or to trigger ending video capturing.
In some embodiments, the second fixed key may be the same as or different from the first fixed key.
In some embodiments, the second fixed key may be a shooting control or another physical key of the electronic device. The physical key may be, for example, a volume key, a power key, or another key of the electronic device, which is not limited in the embodiments of the present application.
For example, in a case where the user needs to end shooting, the user may click on a shooting control, and the electronic device ends shooting and generates a video file from the shot image.
In some shooting scenes, the position, shooting angle, etc. of the electronic device may be greatly changed due to the influence of external factors, for example, in a scene of shooting video underwater, due to the influence of water flow and buoyancy, a user cannot control the electronic device to perform stable video shooting like on the ground, so that the shooting effect is poor.
In some embodiments, the electronic device may acquire an initial shooting pose of the rear camera, acquire a second shooting pose of the rear camera when a first image of the shooting object is obtained by shooting, and then perform image correction processing on the first image based on a pose offset between the second shooting pose and the initial shooting pose to obtain a second image after the view correction.
In some embodiments, the electronic device may obtain a shooting pose of the electronic device through a gyroscope and an acceleration sensor.
For example, when the user is shooting, the gyroscope (Gyro) and gravity sensor data built into the electronic device can provide the current pose information of the device. The gyroscope provides the angular velocity of the device about its X/Y/Z axes; integrating this data yields the angle the device has rotated about each of the three axes over a period of time. From this data, the angle of the device in its initial state and the angular changes caused by the device's movement during fixed shooting can be calculated. The gravity data provides the angle between the device and the direction of gravity.
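The gyroscope integration described above amounts to accumulating angular-velocity samples over time. A minimal discrete sketch (the fixed sampling interval and names are assumptions; real implementations also handle bias and drift):

```python
def integrate_gyro(samples, dt):
    """Accumulate (wx, wy, wz) angular-velocity samples (deg/s),
    taken at a fixed interval dt (s), into the total rotation of
    the device about each of its three axes."""
    theta = [0.0, 0.0, 0.0]
    for wx, wy, wz in samples:
        theta[0] += wx * dt
        theta[1] += wy * dt
        theta[2] += wz * dt
    return tuple(theta)
```

Half a second of a steady 10 deg/s rotation about X, sampled at 10 Hz, integrates to a 5-degree offset about X.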
In some embodiments, the pose offset may include an angular offset and a positional offset. During shooting, the electronic device can obtain the shooting angle offset by integrating the inertial measurement unit (Inertial Measurement Unit, IMU) data in real time; the integrated IMU data is $(\theta_x, \theta_y, \theta_z)$, that is, the angles by which the three axes are offset from the target pose, which are the shooting angles to be compensated. The shooting position offset, that is, the translation to be compensated, can also be obtained by integrating the acceleration (Acceleration, ACC) in real time; the integrated ACC data is $(S_x, S_y)$.
In some embodiments, in a case where the pose offset includes a position offset, the electronic device may perform a translation process on the first image according to the position offset, to obtain the second image.
In some embodiments, in a case where the pose offset includes an angle offset, the electronic device may perform perspective transformation processing on the first image according to the angle offset, to obtain the second image.
Further alternatively, in the case that the pose offset includes an angle offset, the electronic device may calculate a perspective transformation matrix of the first image according to the angle offset, and then perform perspective transformation on the first image based on the perspective transformation matrix to obtain the second image.
Optionally, the perspective transformation matrix is a matrix generated according to the angular offset, and the matrix is used for representing the rotation angle variation in the X-axis, the Y-axis and the Z-axis.
Illustratively, in the case of including the angle and the position, the first image may be subjected to perspective transformation and translation processing according to the rotation matrix (i.e., perspective transformation matrix) and the translation amount, resulting in the second image.
The following explains the principle of the related algorithm of the shooting function of the fixed angle.
First, achieving fixed-angle shooting requires converting the position of the camera from the world coordinate system to the camera coordinate system. A point P in the world coordinate system can be converted into coordinates in the camera coordinate system by the following formula, where R is the rotation matrix formed from the angles $(\theta_x, \theta_y, \theta_z)$ calculated from the IMU data, T is a translation vector, and f is the focal length of the camera. The three-dimensional camera coordinates and the two-dimensional pixel coordinates are related as follows:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T, \qquad u = f\,\frac{x}{z}, \quad v = f\,\frac{y}{z}$$
where x, y and z are the coordinates of point P in the camera coordinate system, and $X_w$, $Y_w$ and $Z_w$ are the coordinates of point P in the world coordinate system.
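The world-to-camera-to-pixel conversion described above can be sketched with NumPy under the pinhole model; the function name and argument conventions are illustrative:

```python
import numpy as np

def world_to_pixel(P_w, R, T, f):
    """Convert a world-coordinate point to camera coordinates,
    (x, y, z) = R @ P_w + T, then project to pixel coordinates
    u = f*x/z, v = f*y/z."""
    P_c = np.asarray(R, dtype=float) @ np.asarray(P_w, dtype=float) \
          + np.asarray(T, dtype=float)
    x, y, z = P_c
    return f * x / z, f * y / z
```

With an identity rotation, zero translation, and f = 2, the world point (1, 2, 4) projects to the pixel (0.5, 1.0).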
Next, the angular offset and the translational offset of the camera are calculated. The rotation matrix R can be obtained by substituting the angles integrated from the gyroscope of the electronic device, namely:

$$R = R_x(\theta_x)\,R_y(\theta_y)\,R_z(\theta_z)$$
In addition, since translational shake occurs in addition to changes of shooting angle during underwater shooting, the translational shake also needs to be compensated. This part is handled by the translation vector T: the X/Y offsets are obtained by integrating the acceleration acquired by the acceleration sensor, and the translation in unit time can be expressed as $S = \iint a \,\mathrm{d}t\,\mathrm{d}t$. In this scheme only movement of the phone in its screen plane, in the up-down and left-right directions, is considered, so only $S_x$ and $S_y$ are substituted, and the translation vector in the above coordinate conversion relation is:

$$T = \begin{bmatrix} S_x \\ S_y \\ 0 \end{bmatrix}$$
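The in-plane offsets can be sketched as a discrete double integration of accelerometer samples, first into velocity and then into position; bias and drift handling, which a real stabilizer needs, is omitted here:

```python
def integrate_acc(samples, dt):
    """Double-integrate (ax, ay) acceleration samples, taken at a
    fixed interval dt, into the in-plane translation (Sx, Sy)
    to be compensated."""
    vx = vy = sx = sy = 0.0
    for ax, ay in samples:
        vx += ax * dt          # velocity from acceleration
        vy += ay * dt
        sx += vx * dt          # position from velocity
        sy += vy * dt
    return sx, sy
```

Because the velocity term accumulates, a constant acceleration produces a quadratically growing displacement, as expected for this rectangle-rule integration.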
In summary, the compensation coordinate conversion formula at compensation time t is:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R_x(\theta_x)\,R_y(\theta_y)\,R_z(\theta_z) \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + \begin{bmatrix} S_x \\ S_y \\ 0 \end{bmatrix}$$
in some embodiments, the electronic device may determine a pose offset according to the initial shooting pose of the rear camera and the pose information after the movement, and perform angle processing on an image obtained by shooting the camera after the pose of the camera is offset according to the pose offset, so that the angle of the image is consistent with the initial shooting angle, thereby obtaining an image of a shooting visual angle wanted by a user, and further improving the shooting effect of the image.
In some embodiments, when it is desired to implement fixed angle shooting, the electronic device may receive a fifth input from the user, entering a fixed angle shooting mode.
In some embodiments, the fifth input is used to trigger entry into a fixed angle capture mode. The fifth input may be any input with feasibility, such as a touch input, a gesture input, or a voice input, which is not limited in the embodiment of the present application.
In some embodiments, the electronic device exits the fixed angle shooting mode after receiving a sixth input from the user.
In some embodiments, the sixth input is used to trigger exiting the fixed angle shooting mode. The sixth input may be any feasible input, such as a touch input, a gesture input, or a voice input, which is not limited in the embodiments of the present application.
For example, when the user wishes to exit the fixed shooting mode during a shot, the user may quickly double-click the fixed key, triggering entry into the autonomously controlled camera mirror mode. After the user double-clicks the fixed key, the algorithm changes its processing state and gradually reduces the rotation matrix R to the identity matrix and the translation vector T to zero, namely:

Fixed mode: $R = R_x(\theta_x)\,R_y(\theta_y)\,R_z(\theta_z), \quad T = (S_x, S_y, 0)^{\mathsf T}$

Free mode (non-fixed): $R \to I, \quad T \to 0$

That is, after the fixed mode is cancelled, the conversion formula becomes:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}$$
for example, to avoid a jump in the shooting experience, the values of the rotation matrix and translation vector need to be linearly reduced after double clicking by the user so that the picture is slowly returned to the imaging center.
The foregoing method embodiments, or various possible implementation manners in the method embodiments, may be executed separately, or may be executed in combination with each other on the premise that no contradiction exists, and may be specifically determined according to actual use requirements, which is not limited by the embodiments of the present application.
According to the shooting method provided by the embodiment of the application, the execution subject can be a shooting device. In the embodiment of the present application, taking an example of a photographing method performed by a photographing device, the photographing device provided in the embodiment of the present application is described.
Fig. 7 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application. As shown in fig. 7, the photographing apparatus 700 includes: a receiving module 701, a display module 702 and an executing module 703, wherein: the receiving module 701 is configured to receive a first input of a user when a shooting preview interface of a rear camera of the electronic device is displayed; the display module 702 is configured to display a control identifier in the shooting preview interface in response to the first input received by the receiving module 701, wherein the control identifier has an association relation with the image acquired by the front camera; the receiving module 701 is further configured to receive a second input from the user; the execution module 703 is configured to update a display parameter of the control identifier and update the display content of the shooting preview interface in response to the second input received by the receiving module 701, wherein the display parameter of the control identifier has an association relation with the display content of the shooting preview interface.
In some embodiments, the executing module is specifically configured to respond to the second input, update the display parameter of the control identifier according to the input parameter of the second input, and update the display content of the shooting preview interface while updating the display parameter of the control identifier.
In some embodiments, the second input comprises a first spaced gesture input; the input parameters of the second input comprise the moving direction of the first space gesture input; the execution module is specifically configured to update a display state of the control identifier based on a movement direction input by the first space gesture, and control movement of the control identifier along the movement direction; the execution module is specifically configured to control rotation of the rear camera based on the movement direction, and display an image acquired by the rear camera in the shooting preview interface.
In some embodiments, the control identifier includes a first identifier and a second identifier; the second input comprises a second spaced gesture input; the execution module is specifically configured to update display states of the first identifier and the second identifier based on input parameters input by the second space gesture; the execution module is specifically configured to control rotation of the rear camera based on the rotation directions indicated by the first identifier and the second identifier, and display an image acquired by the rear camera in the shooting preview interface.
In some embodiments, the control identifier includes a third identifier and a fourth identifier, and the second input includes a third spaced gesture input; the execution module is specifically configured to update display positions of the third identifier and the fourth identifier based on input parameters input by the third space gesture; the execution module is specifically configured to translate display content of the shooting preview interface based on display positions of the third identifier and the fourth identifier.
In some embodiments, the control identifier includes a fifth identifier and a sixth identifier, and the second input includes a fourth spaced gesture input; the execution module is specifically configured to update a relative positional relationship between the fifth identifier and the sixth identifier based on an input parameter input by the fourth space gesture; the execution module is specifically configured to scale the display content of the shooting preview interface based on the relative positional relationship.
In some embodiments, a first fixed key is disposed on the electronic device; the receiving module is specifically configured to receive a second input from a user when the first fixed key is in an activated state.
In some embodiments, a second fixed key is provided on the electronic device; the receiving module is further configured to receive a third input from the user to the second fixed key after updating the display content of the shooting preview interface; and the generating module is used for responding to the third input received by the receiving module and generating a video file.
According to the shooting device provided by the embodiment of the application, when the shooting preview interface of the rear camera is displayed, the electronic device receives the first input of the user, starts the mirror-movement function, and displays the control identifier in the shooting preview interface, where the control identifier is associated with the image acquired by the front camera. The display parameters of the control identifier are then updated through the second input of the user, and the display content of the shooting preview interface is updated, the change of the display parameters of the control identifier indicating the change of the shooting content. In this way, the user can conveniently control the image acquired by the rear camera, without manually moving the camera, to update the display content in the shooting preview interface, so a stable mirror-movement effect can be achieved, and the threshold and difficulty of moving the mirror when shooting video are reduced. When this shooting method is applied to underwater shooting scenes, a stable mirror-movement effect can be achieved even under the influence of water flow and buoyancy; and since the user does not need to touch the display screen directly, even if the touchscreen fails underwater, the user can still shoot video with the desired mirror effect.
The photographing device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., but may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc.; the embodiments of the present application are not specifically limited in this respect.
The photographing device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The photographing device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 6 (B), and achieve the same technical effects, so that repetition is avoided, and no further description is given here.
Optionally, as shown in fig. 8, the embodiment of the present application further provides an electronic device 800, including a processor 801 and a memory 802, where a program or an instruction that can be executed on the processor 801 is stored in the memory 802, and the program or the instruction when executed by the processor 801 implements each step of the above-mentioned embodiment of the photographing method, and can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110. The electronic device comprises a front camera and a rear camera.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 110 via a power management system to perform functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The user input unit 107 is configured to receive a first input of a user when a shooting preview interface of a rear camera of the electronic device is displayed; the display unit 106 is configured to display a control identifier in the shooting preview interface in response to the first input received by the user input unit 107; wherein, the control mark has association relation with the image collected by the front camera; the user input unit 107 is further configured to receive a second input of a user; the processor 110 is configured to update a display parameter of the control identifier and update display content of the shooting preview interface in response to the second input received by the user input unit 107; and the display parameters of the control mark and the display content of the shooting preview interface have an association relation.
In some embodiments, the processor 110 is specifically configured to update the display parameter of the control identifier according to the input parameter of the second input in response to the second input, and update the display content of the shooting preview interface while updating the display parameter of the control identifier.
In some embodiments, the second input comprises a first spaced gesture input; the input parameters of the second input comprise the moving direction of the first space gesture input; the processor 110 is specifically configured to update a display state of the control identifier based on a movement direction input by the first space gesture, and control movement of the control identifier along the movement direction; the processor 110 is specifically configured to control the rotation of the rear camera based on the moving direction, and control the display unit 106 to display the image acquired by the rear camera in the shooting preview interface.
In some embodiments, the control identifier includes a first identifier and a second identifier, and the second input comprises a second space gesture input. The processor 110 is specifically configured to update the display states of the first identifier and the second identifier based on the input parameters of the second space gesture input, to control the rear camera to rotate based on the rotation direction indicated by the first identifier and the second identifier, and to control the display unit 106 to display the image acquired by the rear camera in the shooting preview interface.
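As a hedged sketch of how a rotation direction might be read off the two identifiers, the convention below (second identifier to the right of the first means rotate right) is an assumption, as are the function name and the dead-zone threshold:

```python
def indicated_rotation(first: tuple, second: tuple, deadzone: float = 5.0) -> str:
    """Map the horizontal offset between the first and second identifiers to a
    rotation command for the rear camera (assumed convention)."""
    dx = second[0] - first[0]
    if abs(dx) <= deadzone:
        return "hold"          # identifiers roughly aligned: no rotation
    return "rotate_right" if dx > 0 else "rotate_left"

cmd = indicated_rotation((0, 0), (20, 0))
# the second identifier sits 20 px right of the first -> "rotate_right"
```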
In some embodiments, the control identifier includes a third identifier and a fourth identifier, and the second input includes a third space gesture input. The processor 110 is specifically configured to update the display positions of the third identifier and the fourth identifier based on the input parameters of the third space gesture input, and to translate the display content of the shooting preview interface based on the display positions of the third identifier and the fourth identifier.
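A pan of the preview content driven by the displayed positions of two identifiers could, under assumed conventions, track the displacement of their midpoint. The helper below is illustrative only; `pan_preview` and the midpoint convention are not taken from the disclosure:

```python
def pan_preview(crop_origin, old_pair, new_pair):
    """Translate the preview crop window by the displacement of the midpoint
    between the third and fourth identifiers (assumed convention)."""
    old_mid = ((old_pair[0][0] + old_pair[1][0]) / 2,
               (old_pair[0][1] + old_pair[1][1]) / 2)
    new_mid = ((new_pair[0][0] + new_pair[1][0]) / 2,
               (new_pair[0][1] + new_pair[1][1]) / 2)
    return (crop_origin[0] + new_mid[0] - old_mid[0],
            crop_origin[1] + new_mid[1] - old_mid[1])

# both identifiers drift 40 px right and 20 px down -> the crop window pans with them
origin = pan_preview((0, 0), [(10, 10), (30, 10)], [(50, 30), (70, 30)])
# origin == (40.0, 20.0)
```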
In some embodiments, the control identifier includes a fifth identifier and a sixth identifier, and the second input includes a fourth space gesture input. The processor 110 is specifically configured to update the relative positional relationship between the fifth identifier and the sixth identifier based on the input parameters of the fourth space gesture input, and to scale the display content of the shooting preview interface based on that relative positional relationship.
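Scaling driven by the relative positional relationship of two identifiers is naturally modeled as a pinch: the zoom factor follows the ratio of the new to the old distance between them. The sketch below assumes this convention and clamps to an arbitrary zoom range; all names and limits are hypothetical:

```python
import math

def zoom_from_identifiers(old_pair, new_pair, min_zoom=1.0, max_zoom=10.0):
    """Scale the preview by the ratio of the new to the old distance between
    the fifth and sixth identifiers (pinch-style, assumed convention)."""
    d_old = math.dist(*old_pair)
    d_new = math.dist(*new_pair)
    if d_old == 0:
        return min_zoom        # degenerate start: fall back to minimum zoom
    return max(min_zoom, min(max_zoom, d_new / d_old))

factor = zoom_from_identifiers(((0, 0), (100, 0)), ((0, 0), (200, 0)))
# the identifiers move twice as far apart -> factor == 2.0
```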
In some embodiments, a first fixed key is disposed on the electronic device, and the user input unit 107 is specifically configured to receive the second input of the user while the first fixed key is in an activated state.
In some embodiments, a second fixed key is disposed on the electronic device. The user input unit 107 is further configured to receive a third input of the user to the second fixed key after the display content of the shooting preview interface is updated, and the processor 110 is configured to generate a video file in response to the third input received by the user input unit 107.
According to the electronic device provided by the embodiments of the application, while the shooting preview interface of the rear camera is displayed, the electronic device receives a first input of the user, enables the camera-movement function, and displays a control identifier in the shooting preview interface; the control identifier is associated with the image acquired by the front camera. A second input of the user then updates the display parameters of the control identifier and the display content of the shooting preview interface, so that a change in the display parameters of the control identifier indicates a change in the shot content. In this way, the user can conveniently control the image acquired by the rear camera, and thereby update the display content of the shooting preview interface, without manually moving the camera; a stable camera-movement effect can thus be achieved, lowering the threshold and difficulty of moving the camera while shooting video. When this shooting method is applied to underwater scenes, a stable camera-movement effect can be achieved even under the influence of water flow and buoyancy. Moreover, because the user does not need to touch the display screen directly, video with the desired camera-movement effect can still be shot even if the touchscreen fails underwater.
It should be appreciated that in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 109 may include volatile memory or nonvolatile memory, or the memory 109 may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), or Direct Rambus RAM (DRRAM). Memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiments of the present application further provide a readable storage medium storing a program or instructions which, when executed by a processor, implement the processes of the shooting method embodiments above and achieve the same technical effects; to avoid repetition, details are not described again here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present application further provide a chip, including a processor and a communication interface coupled to the processor, the processor being configured to run a program or instructions to implement the processes of the shooting method embodiments above and achieve the same technical effects; to avoid repetition, details are not described again here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-level chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application further provide a computer program product stored in a storage medium, the program product being executed by at least one processor to implement the processes of the shooting method embodiments above and achieve the same technical effects; to avoid repetition, details are not described again here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not only include those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in a reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware alone, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Enlightened by the present application, those of ordinary skill in the art may devise many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (15)

1. A photographing method performed by an electronic device including a front camera and a rear camera, the method comprising:
receiving a first input of a user while a shooting preview interface of the rear camera is displayed;
responsive to the first input, displaying a control identifier in the shooting preview interface; wherein the control identifier is associated with the image collected by the front camera;
receiving a second input from the user;
in response to the second input, updating display parameters of the control identifier and updating display content of the shooting preview interface; wherein the display parameters of the control identifier are associated with the display content of the shooting preview interface.
2. The method of claim 1, wherein the updating the display parameters of the control identifier and updating the display content of the shooting preview interface in response to the second input comprises:
in response to the second input, updating the display parameters of the control identifier according to the input parameters of the second input, and updating the display content of the shooting preview interface while updating the display parameters of the control identifier.
3. The method of claim 1, wherein the second input comprises a first space gesture input, and the input parameters of the second input comprise a movement direction of the first space gesture input;
the updating the display parameters of the control identifier and updating the display content of the shooting preview interface comprises:
updating the display state of the control identifier based on the movement direction of the first space gesture input, and controlling the control identifier to move along the movement direction; and
controlling the rear camera to rotate based on the movement direction, and displaying the image acquired by the rear camera in the shooting preview interface.
4. The method of claim 1, wherein the control identifier comprises a first identifier and a second identifier, and the second input comprises a second space gesture input;
the updating the display parameters of the control identifier and updating the display content of the shooting preview interface comprises:
updating the display states of the first identifier and the second identifier based on the input parameters of the second space gesture input; and
controlling the rear camera to rotate based on the rotation direction indicated by the first identifier and the second identifier, and displaying the image acquired by the rear camera in the shooting preview interface.
5. The method of claim 1, wherein the control identifier comprises a third identifier and a fourth identifier, and the second input comprises a third space gesture input;
the updating the display parameters of the control identifier and updating the display content of the shooting preview interface comprises:
updating the display positions of the third identifier and the fourth identifier based on the input parameters of the third space gesture input; and
translating the display content of the shooting preview interface based on the display positions of the third identifier and the fourth identifier.
6. The method of claim 1, wherein the control identifier comprises a fifth identifier and a sixth identifier, and the second input comprises a fourth space gesture input;
the updating the display parameters of the control identifier and updating the display content of the shooting preview interface comprises:
updating the relative positional relationship between the fifth identifier and the sixth identifier based on the input parameters of the fourth space gesture input; and
scaling the display content of the shooting preview interface based on the relative positional relationship.
7. The method of claim 1, wherein a first fixed key is provided on the electronic device;
the receiving a second input of the user comprises:
receiving the second input of the user while the first fixed key is in an activated state.
8. The method of claim 1, wherein a second fixed key is provided on the electronic device, and after the updating the display content of the shooting preview interface, the method further comprises:
receiving a third input of the user to the second fixed key, and generating a video file in response to the third input.
9. An electronic device comprising a front camera and a rear camera, the electronic device further comprising: the device comprises a receiving module, a display module and an execution module, wherein:
the receiving module is configured to receive a first input of a user while a shooting preview interface of the rear camera is displayed;
the display module is configured to display a control identifier in the shooting preview interface in response to the first input received by the receiving module; wherein the control identifier is associated with the image collected by the front camera;
the receiving module is further configured to receive a second input of the user; and
the execution module is configured to, in response to the second input received by the receiving module, update display parameters of the control identifier and update display content of the shooting preview interface; wherein the display parameters of the control identifier are associated with the display content of the shooting preview interface.
10. The electronic device according to claim 9, wherein the execution module is specifically configured to respond to the second input, update the display parameter of the control identifier according to the input parameter of the second input, and update the display content of the shooting preview interface while updating the display parameter of the control identifier.
11. The electronic device of claim 9, wherein the second input comprises a first space gesture input, and the input parameters of the second input comprise a movement direction of the first space gesture input;
the execution module is specifically configured to update the display state of the control identifier based on the movement direction of the first space gesture input, and to control the control identifier to move along the movement direction; and
the execution module is specifically configured to control the rear camera to rotate based on the movement direction, and to display the image acquired by the rear camera in the shooting preview interface.
12. The electronic device of claim 9, wherein the control identifier comprises a first identifier and a second identifier, and the second input comprises a second space gesture input;
the execution module is specifically configured to update the display states of the first identifier and the second identifier based on the input parameters of the second space gesture input; and
the execution module is specifically configured to control the rear camera to rotate based on the rotation direction indicated by the first identifier and the second identifier, and to display the image acquired by the rear camera in the shooting preview interface.
13. The electronic device of claim 9, wherein the control identifier comprises a third identifier and a fourth identifier, and the second input comprises a third space gesture input;
the execution module is specifically configured to update the display positions of the third identifier and the fourth identifier based on the input parameters of the third space gesture input; and
the execution module is specifically configured to translate the display content of the shooting preview interface based on the display positions of the third identifier and the fourth identifier.
14. The electronic device of claim 9, wherein the control identifier comprises a fifth identifier and a sixth identifier, and the second input comprises a fourth space gesture input;
the execution module is specifically configured to update the relative positional relationship between the fifth identifier and the sixth identifier based on the input parameters of the fourth space gesture input; and
the execution module is specifically configured to scale the display content of the shooting preview interface based on the relative positional relationship.
15. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the shooting method as claimed in any one of claims 1 to 8.
CN202311399721.XA 2023-10-25 2023-10-25 Shooting method and electronic equipment Pending CN117459827A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311399721.XA CN117459827A (en) 2023-10-25 2023-10-25 Shooting method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117459827A true CN117459827A (en) 2024-01-26

Family

ID=89581121


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination