CN114827441A - Shooting method and device, terminal equipment and storage medium - Google Patents

Shooting method and device, terminal equipment and storage medium

Info

Publication number
CN114827441A
Authority
CN
China
Prior art keywords
target
subject
camera
determining
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110128464.0A
Other languages
Chinese (zh)
Inventor
霍文甲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110128464.0A priority Critical patent/CN114827441A/en
Publication of CN114827441A publication Critical patent/CN114827441A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 Details of the structure or mounting of specific components
    • H04M1/0264 Details of the structure or mounting of specific components for a camera module assembly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure relates to a shooting method and apparatus, a terminal device, and a storage medium. The method includes: determining a target subject from among at least one subject in a viewfinder frame in response to a first operation instruction; in response to the target subject being a dynamic subject, determining a target position to which the target subject will move from its current position, where the target position characterizes the end position the target subject will have reached at the moment the target image frame is captured; and adjusting a camera structure according to the target position to obtain the target image frame, in which the target subject is located at the center. With the disclosed method, when a dynamic subject is shot, the position to which it will move can be determined in advance, so that the camera structure can be adjusted in time and a target image frame with a better imaging effect can be obtained.

Description

Shooting method and device, terminal equipment and storage medium
Technical Field
The present disclosure relates to the field of image capture, and in particular, to a method and an apparatus for image capture, a terminal device, and a storage medium.
Background
Terminal devices such as mobile phones are communication tools that people carry with them everywhere, and as technology develops, these devices offer more and more functions, greatly improving the convenience of daily life. The photographing/video function is one of the most important functions of terminal devices such as mobile phones, and users can conveniently and quickly record life with it.
In the related art, there is at least the problem that moving subjects are not photographed well. Although the related art includes tracking-shooting methods, they are complicated to use and provide a poor user experience.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a shooting method and apparatus, a terminal device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, a shooting method is provided, including:
determining a target subject from among at least one subject in a viewfinder frame in response to a first operation instruction;
in response to the target subject being a dynamic subject, determining a target position to which the target subject will move from its current position; wherein the target position characterizes the end position the target subject will have reached at the moment the target image frame is captured;
adjusting a camera structure according to the target position to obtain the target image frame; wherein the target subject is located at the center of the target image frame.
Optionally, the method further comprises:
in response to a second operation instruction, identifying at least one subject in the viewfinder frame and displaying an identification frame on the at least one subject; wherein the second operation instruction instructs entry into a preset shooting mode, and the at least one subject is at least one static subject or at least one dynamic subject.
Optionally, determining the target position to which the target subject will move from its current position includes:
determining motion information of the target subject;
and predicting the target position of the target subject according to the motion information and a preset algorithm.
Optionally, determining the motion information of the target subject includes:
acquiring a plurality of pieces of position information of the target subject during its motion;
and determining a motion trajectory of the target subject from the plurality of pieces of position information.
Optionally, adjusting the camera structure according to the target position to obtain the target image frame includes:
determining, according to the target position, a preset position to which the camera structure is to be adjusted; wherein the preset position corresponds to the target position and satisfies: when the camera structure captures an image at the preset position, the target subject is located at the center of the viewfinder frame;
and controlling the camera structure to move to the preset position, focusing on the target subject at the target position, and capturing the target image frame.
Optionally, controlling the camera structure to move to the preset position includes:
determining a moving distance according to the current position of the camera structure and the preset position;
and controlling the camera structure to move by the moving distance.
Optionally, the method further comprises:
pausing the movement of the camera structure in response to a zoom instruction based on the target subject;
and in response to zooming of the target subject in the viewfinder frame being completed, continuing to move the camera structure to the preset position.
According to a second aspect of the embodiments of the present disclosure, there is provided a photographing apparatus including:
a first determination module configured to determine, in response to a first operation instruction, a target subject from among at least one subject in a viewfinder frame;
a second determination module configured to determine, in response to the target subject being a dynamic subject, a target position to which the target subject will move from its current position; wherein the target position characterizes the end position the target subject will have reached at the moment the target image frame is captured;
a control module configured to adjust a camera structure according to the target position to obtain the target image frame; wherein the target subject is located at the center of the target image frame.
Optionally, the apparatus further comprises:
an identification module configured to identify, in response to a second operation instruction, at least one subject in the viewfinder frame and display an identification frame on the at least one subject; wherein the second operation instruction instructs entry into a preset shooting mode, and the at least one subject is at least one static subject or at least one dynamic subject.
Optionally, the second determination module is specifically configured to:
determine motion information of the target subject;
and predict the target position of the target subject according to the motion information and a preset algorithm.
Optionally, the second determination module is specifically configured to:
acquire a plurality of pieces of position information of the target subject during its motion;
and determine a motion trajectory of the target subject from the plurality of pieces of position information.
Optionally, the control module is specifically configured to:
determine, according to the target position, a preset position to which the camera structure is to be adjusted; wherein the preset position corresponds to the target position and satisfies: when the camera structure captures an image at the preset position, the target subject is located at the center of the viewfinder frame;
and control the camera structure to move to the preset position, focus on the target subject at the target position, and capture the target image frame.
Optionally, the control module is specifically configured to:
determine a moving distance according to the current position of the camera structure and the preset position;
and control the camera structure to move by the moving distance.
Optionally, the control module is further configured to:
pause the movement of the camera structure in response to a zoom instruction based on the target subject;
and in response to zooming of the target subject in the viewfinder frame being completed, continue to move the camera structure to the preset position.
According to a third aspect of the embodiments of the present disclosure, a terminal device is provided, including:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the shooting method described in any one of the above.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, having stored therein instructions that, when executed by a processor of a terminal device, enable the terminal device to perform the shooting method described in any one of the above.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: when a dynamic subject is shot, the position to which it will move can be determined in advance, so that the camera structure can be adjusted in time and a target image frame with a better imaging effect can be obtained.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method according to an example embodiment.
FIG. 2 is a flowchart illustrating a method according to an example embodiment.
FIG. 3 is a flow chart illustrating a method according to an example embodiment.
FIG. 4 is a flow chart illustrating a method according to an example embodiment.
FIG. 5 is a flow chart illustrating a method according to an example embodiment.
FIG. 6 is a flow chart illustrating a method according to an example embodiment.
FIG. 7 is a flow chart illustrating a method according to an example embodiment.
Fig. 8 is a block diagram illustrating an apparatus according to an example embodiment.
Fig. 9 is a block diagram of a terminal device shown according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as recited in the appended claims.
Terminal devices such as mobile phones are communication tools that people carry with them everywhere, and as technology develops, these devices offer more and more functions, greatly improving the convenience of daily life. The photographing/video function is one of the most important functions of terminal devices such as mobile phones, and users can conveniently and quickly record life with it.
When using a terminal device, users expect to achieve higher-quality shooting. To improve the shooting quality of a moving subject, terminal devices have been technically improved in either the camera structure or the shooting method.
For example, in the related art, after the camera program is opened, a target recommendation algorithm may be run to recommend subjects as a reference for the user. Such algorithms include cascade-classifier frameworks, HoG + SVM (target object detection), convolutional neural networks, and the like. As another example, to prevent shake during shooting, anti-shake techniques can be used to improve shooting quality. Tracking-shooting methods also exist in the related art.
However, the above approaches have at least the following problem: the shooting effect for a moving subject is poor. Although corresponding tracking-shooting methods exist, they are complicated to use and provide a poor user experience.
To solve these technical problems in the related art, the present disclosure provides a shooting method including: determining a target subject from among at least one subject in a viewfinder frame in response to a first operation instruction; determining, in response to the target subject being a dynamic subject, a target position to which the target subject will move from its current position; and adjusting a camera structure according to the target position to capture a target image frame. With this method, when a dynamic subject is shot, the position to which it will move can be determined in advance, so that the camera structure can be adjusted in time and a target image frame with a better imaging effect can be obtained.
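The three steps above can be sketched, purely for illustration, as a toy pipeline. Every class, function, and parameter below is a hypothetical placeholder rather than the patent's implementation; positions are simple (x, y) pairs and the predictor is a plain constant-velocity extrapolation.

```python
class Camera:
    """Toy camera whose 'structure position' stands in for the adjusted lens."""
    def __init__(self):
        self.position = (0.0, 0.0)

    def preset_for(self, target_position):
        # Preset position that would centre the target in the viewfinder.
        return target_position

    def move_to(self, preset):
        self.position = preset

def predict(trajectory, dt):
    """S120: constant-velocity guess of the position at capture time.

    trajectory: list of (timestamp, (x, y)) samples, oldest first.
    """
    (t0, p0), (t1, p1) = trajectory[-2], trajectory[-1]
    return tuple(b + (b - a) / (t1 - t0) * dt for a, b in zip(p0, p1))

def shoot(selected_trajectory, is_dynamic, camera, dt):
    """S110 has already chosen the subject; this runs S120 and S130."""
    if is_dynamic:
        target_position = predict(selected_trajectory, dt)
        camera.move_to(camera.preset_for(target_position))
    return camera.position  # stands in for the captured, centred frame
```

For a subject moving at constant speed, the camera is pre-positioned exactly where the subject will be when the frame is captured.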
In an exemplary embodiment, the shooting method of this embodiment is applied to a terminal device. The terminal device can be, for example, an electronic device with a camera structure, such as a mobile phone, a tablet computer, a notebook computer, or a smart wearable device.
The terminal device generally includes a processor, a memory, and a display screen to run the terminal device's system and application programs. The processor performs the various functions of the terminal device and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory and by invoking data stored in the memory. For example, the processor may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU renders and draws the content to be displayed on the display screen; the modem handles wireless communication. The memory may be used to store instructions, programs, code, code sets, or instruction sets. For example, the program storage area of the memory may store instructions for implementing an operating system, instructions for performing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing a control method, and the like.
The terminal device can realize the shooting function through a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a camera assembly, a display screen, a Central Processing Unit (CPU), and the like. The ISP processes data fed back by the camera assembly (the ISP may also be disposed inside the camera assembly): a photosensitive element (CCD or CMOS) of the camera assembly converts an optical signal into an electrical signal and transmits it to the ISP, which converts it into a digital image signal; the ISP outputs the digital image signal to a Digital Signal Processor (DSP) for processing; and the DSP converts it into an image signal in a standard format such as RGB, finally enabling the captured image to be displayed on the terminal device.
Taking a smart device running the Android operating system as an example, the memory stores a Linux kernel layer, a system runtime library layer, an application framework layer, and an application layer. The Linux kernel layer provides underlying drivers for the device's hardware, such as the display driver and the audio driver. The system runtime library layer provides the main feature support for the Android system through C/C++ libraries; for example, the OpenGL/ES library provides support for 3D drawing. The application framework layer provides the APIs used when building applications, such as window management and view management. At least one application program runs in the application layer; such programs may ship with the operating system, for example a camera program.
As shown in fig. 1, the method of the present embodiment includes the following steps:
and S110, responding to the first operation instruction, and determining a target subject in at least one subject in the view frame.
And S120, determining the target position of the target body moving from the current position in response to the target body being a dynamic body.
And S130, adjusting the camera shooting structure according to the target position to acquire a target image frame.
The processor of the terminal device starts the camera program according to a user's operation instruction.
In step S110, at least one subject, such as a person, a plant, or an animal, may be displayed in the viewfinder frame of the camera program. A subject may be either static or dynamic. Subjects may be identified by a target recommendation algorithm and displayed in the viewfinder frame.
The first operation instruction may be, for example, a touch instruction tapping any subject in the camera program interface; the processor determines the target subject selected by the user according to the first operation instruction.
In step S120, the target subject being a dynamic subject means that the subject the user selected is moving in the field of view. The target position characterizes the end position the target subject will have reached at the moment the target image frame is captured. The target position may be, for example, the position at which the target subject, moving from its current position, will be located at a preset moment; the preset moment may be, for example, the moment at which each target image frame is captured.
In this step, the motion trajectory of the target subject may be determined from its motion in the viewfinder frame, and the target position may be predicted algorithmically.
To determine the motion trajectory, a plurality of position coordinates (such as three-dimensional coordinates) of the target subject can be acquired in real time, and the trajectory determined from them. Once the trajectory is known, the target position of the target subject at the preset moment is predicted with an algorithm.
In step S130, after determining the target position, the processor adjusts the camera structure. The camera structure may include, for example, a lens assembly, and the processor may move or zoom at least one lens in it to adjust the camera structure's capture field of view.
The camera structure may include, for example, an optical image stabilization (OIS) structure. In an OIS structure, an adjusting lens in the lens assembly is magnetically suspended and, in coordination with gyroscope data, can be repositioned. Once the position to which the camera structure should be adjusted is determined, the OIS structure's adjusting lens can be floated into place to obtain the target image frame. It will be understood that the OIS structure can also be used conventionally: when the device body shakes, the adjusting lens is floated to compensate for the displacement, keeping the optical path steady and achieving optical image stabilization.
In this step, the processor controls the adjusting lens in the camera structure to move to a preset position. The preset position corresponds to the target position and satisfies: when the camera structure captures an image at the preset position, the target subject at the target position is located at the center of the viewfinder frame.
Once the camera structure is at the preset position, an image can be captured. The processor takes the image captured by the camera structure as the target image frame; alternatively, the captured image is first processed by other algorithms (for example, cropped) to obtain the target image frame, which the processor then acquires. In the target image frame, the target subject is located at the center.
The target image frame may be the final image when taking a picture, or one image frame when recording video. When recording video, the video file may include many target image frames, each corresponding to its own target position; the processor can determine the target position of the target subject for each target image frame in real time.
In an exemplary embodiment, as shown in fig. 2, before step S110 the method of this embodiment further includes:
S100: in response to a second operation instruction, identify at least one subject in the viewfinder frame and display an identification frame on the at least one subject.
The second operation instruction instructs entry into a preset shooting mode. The preset shooting mode may be, for example, a target recommendation mode. The second operation instruction may be, for example, a touch instruction in which the user, prompted by the camera program interface to enter the target recommendation mode, taps to enter it; or it may be an operation directly tapping the identifier corresponding to the target recommendation mode in the camera program interface. The processor switches the camera program into the target recommendation mode according to the second operation instruction.
In this embodiment, the target recommendation mode may take two forms: static target recommendation and dynamic target recommendation.
In one example, the second operation instruction includes a touch instruction to enter a static target recommendation mode, and the processor switches the camera program into that mode accordingly.
In the static target recommendation mode, the camera program runs an AI recognition algorithm to recognize at least one static subject in the viewfinder frame. An identification frame is displayed on each recognized static subject to recommend subjects for the user's reference. The user can then issue the first operation instruction based on a displayed identification frame to select a static target subject.
In another example, the second operation instruction includes a touch instruction to enter a dynamic target recommendation mode, and the processor switches the camera program into that mode accordingly.
In the dynamic target recommendation mode, the camera program runs a moving-object recognition algorithm to recognize at least one moving subject in the viewfinder frame. The algorithm may, for example, use optical flow. It will be understood that in this embodiment the moving speed of the moving object in the viewfinder frame is assumed to be within a suitable range.
An identification frame is displayed on each recognized dynamic subject to recommend subjects for the user's reference. In this example, the moving-object recognition algorithm recommends at most 6 moving subjects. The user can then issue the first operation instruction based on a displayed identification frame to select a dynamic target subject.
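As a rough illustration of the moving-subject recognition step, the sketch below substitutes simple frame differencing for a full optical-flow algorithm and caps the output at six recommended subjects, matching the limit described above. The function name, the grid-cell representation of frames, and the threshold value are all illustrative assumptions.

```python
def find_moving_subjects(prev_frame, curr_frame, threshold=10, max_subjects=6):
    """Return up to max_subjects (row, col) cells with large frame-to-frame change.

    prev_frame / curr_frame: 2D lists of pixel intensities (a toy stand-in
    for two consecutive viewfinder frames).
    """
    moving = []
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            # A cell whose intensity changed a lot is treated as "moving".
            if abs(q - p) > threshold:
                moving.append((r, c))
    # Recommend at most six moving subjects, per the dynamic recommendation mode.
    return moving[:max_subjects]
```

A production system would instead run dense or sparse optical flow and group flow vectors into subject regions; this sketch only shows the change-detection idea.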
In another example, the second operation instruction includes a touch instruction to enter the target recommendation mode, and the processor may by default perform static and dynamic target recommendation simultaneously, recognizing and recommending both static and dynamic subjects in the viewfinder frame for the user to choose from.
In an exemplary embodiment, as shown in fig. 3, step S120 of this embodiment specifically includes the following steps:
S1201: determine motion information of the target subject.
S1202: predict the target position of the target subject according to the motion information and a preset algorithm.
In step S1201, the motion information may include, for example, the motion trajectory of the target subject.
In this embodiment, the camera structure further includes a depth camera (3D camera), which can acquire three-dimensional coordinates of the target subject at different moments in real time. The processor can obtain these coordinates and plot the target subject's motion trajectory from them.
In step S1202, the preset algorithm may, for example, combine Kalman filtering with deep learning; the processor predicts the position of the target subject at the set moment from its motion trajectory.
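A minimal sketch of what a Kalman-filter-style predictor of this kind might look like, reduced to one dimension and omitting the deep-learning refinement mentioned above; the function name and the noise parameters q and r are illustrative assumptions, and the covariance handling is deliberately simplified to a scalar.

```python
def kalman_predict_update(state, cov, measurement, dt, q=1e-3, r=1e-2):
    """One predict+update cycle of a simplified constant-velocity Kalman filter.

    state: (position, velocity) estimate; cov: scalar position uncertainty;
    measurement: newly observed position; dt: time since the last sample.
    """
    x, v = state
    # Predict: constant-velocity motion model carries the state forward.
    x_pred, v_pred = x + v * dt, v
    cov_pred = cov + q  # process noise grows the uncertainty
    # Update: blend the prediction with the new position measurement.
    gain = cov_pred / (cov_pred + r)
    innovation = measurement - x_pred
    x_new = x_pred + gain * innovation
    # Crude velocity correction from the same innovation.
    v_new = v_pred + gain * (innovation / dt)
    cov_new = (1.0 - gain) * cov_pred
    return (x_new, v_new), cov_new
```

Extrapolating the filtered (position, velocity) state forward by the time remaining until capture then gives the predicted target position.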
In an exemplary embodiment, as shown in fig. 4, step S1201 may include the following steps:
S1201-1: acquire a plurality of pieces of position information of the target subject during its motion.
S1201-2: determine the motion trajectory of the target subject from the plurality of pieces of position information.
In step S1201-1, the position information may be, for example, the three-dimensional coordinates of the target subject. The target subject occupies different positions at different moments as it moves; in this embodiment, the position information of at least one key point of the target subject may be used as the position information of the target subject.
The depth camera acquires the three-dimensional coordinates of the target subject at different moments in real time, and the processor can obtain them.
The depth camera may be, for example, a structured-light depth camera or a TOF (time-of-flight) depth camera. In the dynamic target recommendation mode, the depth camera may be activated automatically.
In one example, when the depth camera is a structured-light depth camera, its emitter (e.g., a near-infrared laser) emits light with structured features, which is reflected after being projected onto the target subject. The infrared camera of the structured-light depth camera receives the reflected light, and the three-dimensional coordinates of the target subject are computed from it.
In another example, when the depth camera is a TOF depth camera, its emitter continuously emits laser pulses, which are reflected after being projected onto the target subject. The TOF camera's sensor receives the reflected light and, using the round-trip time of each pulse, determines the three-dimensional coordinates of the target subject.
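The time-of-flight principle just described reduces to a one-line formula: depth is half the distance light travels during the pulse's round trip. A minimal worked example (function name is illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(round_trip_seconds):
    """Depth in metres from a laser pulse's round-trip time."""
    # The pulse travels to the subject and back, so halve the total distance.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For instance, a round trip of 20 nanoseconds corresponds to a subject roughly 3 metres away.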
In this embodiment, the TOF camera may be used to acquire the three-dimensional coordinates of the target subject, and these may be converted into world-coordinate-system coordinates.
In step S1201-2, the processor may plot the motion trajectory from the three-dimensional coordinates, or obtain a trajectory plotted by the TOF camera. For example, the TOF camera may acquire three-dimensional coordinates in real time, plot the trajectory in real time, and send it to the processor; alternatively, the processor obtains the three-dimensional coordinates detected by the TOF camera and plots the target subject's trajectory from its coordinates at different moments.
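One way the processor might accumulate these timestamped coordinates into a trajectory is sketched below; the data structure and its names are illustrative placeholders, not taken from the patent.

```python
class MotionTrack:
    """Ordered record of a subject's timestamped 3D positions."""

    def __init__(self):
        self.samples = []  # list of (timestamp, (x, y, z))

    def add(self, timestamp, xyz):
        self.samples.append((timestamp, xyz))
        # Keep chronological order even if samples arrive out of order.
        self.samples.sort(key=lambda s: s[0])

    def displacement(self):
        """Straight-line vector from the first to the last recorded position."""
        (_, first), (_, last) = self.samples[0], self.samples[-1]
        return tuple(b - a for a, b in zip(first, last))
```

The prediction step would then consume `samples` (or a fitted curve through them) to extrapolate the target position.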
In an exemplary embodiment, as shown in fig. 5, step S130 specifically includes the following steps:
and S1301, determining a preset position adjusted by the camera shooting structure according to the target position.
And S1302, controlling the camera shooting structure to move to a preset position, focusing the target main body at the target position, and acquiring a target image frame.
In step S1301, the preset position corresponds to the target position and satisfies the following condition: when the image capture structure collects an image at the preset position, the target subject at the target position is located at the center of the view frame. The image capture structure may be, for example, the adjusting lens of an OIS (optical image stabilization) structure.
This step may specifically include the following sub-steps. S1301-1: determining a moving distance according to the current position of the image capture structure and the preset position. The current position may refer to, for example, the position of the adjusting lens in the current preview state, before any position adjustment; the distance the adjusting lens needs to move is determined from the distance between the two positions. S1301-2: controlling the image capture structure to move by the moving distance. After the moving distance is determined, the processor can control the adjusting lens to move so as to match the motion state of the target subject and ensure that the target subject is located at the center of the view frame.
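A hedged sketch of sub-steps S1301-1 and S1301-2: compute the shift the OIS adjusting lens must travel from its current position to the preset position, clamped to an assumed mechanical stroke limit. The function name, units, and limit are illustrative, not from the patent.

```python
def lens_shift(current_xy, preset_xy, max_stroke=1.0):
    """Per-axis signed shift from the current lens position to the preset
    position, clamped to the lens's travel range in each axis."""
    shift = []
    for cur, pre in zip(current_xy, preset_xy):
        d = pre - cur
        shift.append(max(-max_stroke, min(max_stroke, d)))
    return tuple(shift)

# A lens at (0.2, -0.1) mm must move to the preset position (0.5, 0.3) mm.
shift = lens_shift((0.2, -0.1), (0.5, 0.3))
```

Clamping reflects the fact that a real OIS lens has a small, finite travel range; a preset position beyond that range could only be partially compensated.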
In step S1302, the processor issues a control command to move the adjusting lens of the OIS structure to the preset position. Because the processor pre-adjusts the image capture structure according to the predicted target position, the OIS structure is adjusted more quickly and in a more timely manner, so a moving subject can be captured effectively.
With the image capture structure adjusted in advance, when the target subject moves to the target position, the processor can control focusing, and the underlying driver layer drives the image capture structure to collect an image. The collected image may be used directly as the target image frame, or may be used as the target image frame after algorithmic processing such as beautification, filtering, or cropping. The processor acquires the target image frame, in which the target subject is located at the center position.
In an exemplary embodiment, as shown in fig. 6, the method of the present embodiment further includes:
S140: in response to a zoom instruction based on the target subject, pausing the movement of the image capture structure.
S150: in response to completion of zooming of the target subject in the view frame, continuing to move the image capture structure to the preset position.
The embodiment is suitable for a scene in which a user sends a zoom instruction in the process of shooting the dynamic target subject.
In step S140, the user may issue a zoom instruction after the adjustment of the image capture structure has been determined but before the target image frame is acquired. In that case, the processor suspends the movement of the image capture structure (the adjusting lens of the OIS) and controls zooming within the camera program.
In step S150, after zooming is completed, the processor issues a continue instruction to control the camera structure to continue moving until reaching the preset position.
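Steps S140/S150 amount to a small pause/resume state machine around the lens movement. The state names and methods below are assumptions; the patent describes the behavior, not an implementation.

```python
class LensMoveController:
    """Pauses the OIS lens movement while a user zoom is in progress
    and resumes it once zooming completes (sketch of S140/S150)."""

    MOVING, PAUSED, ARRIVED = "moving", "paused", "arrived"

    def __init__(self):
        self.state = self.MOVING  # lens starts moving toward the preset position

    def on_zoom_start(self):
        """S140: a zoom instruction pauses the lens movement."""
        if self.state == self.MOVING:
            self.state = self.PAUSED

    def on_zoom_done(self):
        """S150: zooming finished, so the lens resumes moving."""
        if self.state == self.PAUSED:
            self.state = self.MOVING

    def on_reach_preset(self):
        """The lens has arrived at the preset position."""
        self.state = self.ARRIVED
```

The guard in `on_zoom_start` means a zoom issued after the lens has already arrived leaves the arrived state untouched, matching the intent that only an in-flight movement is paused.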
In other embodiments, the user may issue the zoom instruction before determining the adjustment mode of the image capture structure.
Referring to fig. 7, after the camera program is opened, the program interface displays a prompt asking whether to enter the target recommendation mode, and the user issues a second operation instruction. The processor can then enter a preset shooting mode (the target recommendation mode) according to the user's second operation instruction.
In the preset shooting mode, the processor can recommend and display static subjects according to a static recommendation mode and/or recommend dynamic subjects according to a dynamic recommendation mode. The processor determines the target subject according to a first operation instruction from the user.
If the target subject is a dynamic subject, the processor controls and starts the TOF structure so as to acquire the position information of the target subject in real time.
If the user issues a zoom instruction while previewing the view frame, the processor can zoom the picture in the view frame according to the instruction. After zooming is completed, execution resumes at the step that follows the one in progress before zooming.
The TOF acquires the three-dimensional coordinates of the target subject in real time, and the processor either acquires the motion trajectory drawn by the TOF or draws the motion trajectory itself from the coordinates at different moments.
The processor predicts the target position of the target subject from the motion trajectory in combination with a preset algorithm. The target position characterizes where the target subject will be at the moment the target image frame is acquired. The processor adjusts the image capture structure (the adjusting lens of the OIS structure) in advance according to the target position, then focuses and shoots when the target subject moves to the target position.
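The patent leaves the "preset algorithm" unspecified; as an assumed stand-in, the sketch below uses constant-velocity extrapolation over the trajectory's last two samples to predict where the subject will be at the capture instant.

```python
def predict_position(times, points, t_capture):
    """Linearly extrapolate the subject's 3D position to time t_capture,
    using the velocity implied by the last two trajectory samples."""
    t0, t1 = times[-2], times[-1]
    p0, p1 = points[-2], points[-1]
    v = [(b - a) / (t1 - t0) for a, b in zip(p0, p1)]  # per-axis velocity
    dt = t_capture - t1
    return tuple(b + vi * dt for b, vi in zip(p1, v))

# A subject moving at roughly +3 m/s along x; its predicted x at t = 0.3 s
# is about 0.9 m, with depth unchanged.
pred = predict_position([0.1, 0.2], [(0.3, 0.0, 2.0), (0.6, 0.0, 2.0)], 0.3)
```

A production predictor would likely fit more samples or use a Kalman-style filter to smooth TOF noise; the two-sample version above only illustrates the prediction step's role in the pipeline.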
The underlying driver layer drives the image capture structure to collect an image, and the collected image may be used directly as the target image frame; alternatively, the collected image may be cropped and zoomed (crop) to serve as the target image frame, further ensuring that the target subject is located at the center of the target image frame.
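The crop-zoom step can be illustrated as choosing a crop window centered on the detected subject, shifted as needed to stay inside the full frame. The function name and frame sizes are assumptions for illustration.

```python
def center_crop_box(subject_xy, crop_w, crop_h, frame_w, frame_h):
    """Crop rectangle (left, top, right, bottom) centered on the subject
    and clamped so the window never leaves the frame boundaries."""
    cx, cy = subject_xy
    left = min(max(cx - crop_w // 2, 0), frame_w - crop_w)
    top = min(max(cy - crop_h // 2, 0), frame_h - crop_h)
    return (left, top, left + crop_w, top + crop_h)

# A 1280x720 window cut from a 1920x1080 frame, centered on a subject at (1000, 500).
box = center_crop_box((1000, 500), 1280, 720, 1920, 1080)
```

When the subject sits near a frame edge, the clamping shifts the window inward, so the subject is centered only as far as the frame allows, which is consistent with the patent's "further ensure" phrasing rather than a hard guarantee.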
The target image frame may be saved as a captured picture. Or, the target image frame is used as a video frame in the captured video, and the next target image frame is continuously obtained according to the cycle shown in fig. 7, so as to finally obtain the captured video.
In an exemplary embodiment, the present disclosure proposes a photographing apparatus. As shown in fig. 8, the apparatus of this embodiment includes a first determining module 110, a second determining module 120, and a control module 130, and is used to implement the method shown in fig. 1. The first determining module 110 is configured to determine a target subject among at least one subject in the view frame in response to a first operation instruction. The second determining module 120 is configured to determine, in response to the target subject being a dynamic subject, a target position to which the target subject moves from its current position, wherein the target position characterizes the end position to which the target subject will have moved at the moment the target image frame is acquired. The control module 130 is configured to adjust the image capture structure according to the target position to obtain a target image frame, wherein the target subject is located at the center of the target image frame.
In an exemplary embodiment, the apparatus of this embodiment further includes an identification module. The identification module is configured to identify, in response to a second operation instruction, at least one subject in the view frame and display an identification frame on the at least one subject; the second operation instruction is used to indicate entering the preset shooting mode. The at least one subject is at least one static subject or at least one dynamic subject.
In an exemplary embodiment, still referring to fig. 8, the apparatus in this embodiment is used to implement the method shown in fig. 3 or fig. 4. The second determining module 120 is specifically configured to: determine motion information of the target subject; and predict the target position of the target subject according to the motion information and a preset algorithm. In this embodiment, the second determining module 120 is further specifically configured to: acquire a plurality of pieces of position information of the target subject during its motion; and determine the motion trajectory of the target subject according to the plurality of pieces of position information.
In an exemplary embodiment, still referring to fig. 8, the apparatus in this embodiment is used to implement the method shown in fig. 5. The control module 130 is specifically configured to: determine, according to the target position, a preset position to which the image capture structure is to be adjusted, wherein the preset position corresponds to the target position and satisfies the following condition: when the image capture structure collects an image at the preset position, the target subject is located at the center of the view frame; and control the image capture structure to move to the preset position, focus on the target subject at the target position, and acquire a target image frame. In this embodiment, the control module 130 is further specifically configured to: determine a moving distance according to the current position of the image capture structure and the preset position; and control the image capture structure to move by the moving distance.
In an exemplary embodiment, still referring to fig. 8, the apparatus in this embodiment is used to implement the method shown in fig. 6. The control module 130 is further configured to: pause the movement of the image capture structure in response to a zoom instruction based on the target subject; and, in response to completion of zooming of the target subject in the view frame, continue moving the image capture structure to the preset position.
Fig. 9 is a block diagram of a terminal device. The present disclosure also provides a terminal device; for example, the device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 506 provides power to the various components of device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 514 may detect the open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500; it may also detect a change in the position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in the temperature of the device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communications between the device 500 and other devices in a wired or wireless manner. The device 500 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In another exemplary embodiment of the present disclosure, a non-transitory computer-readable storage medium is provided, such as the memory 504 including instructions executable by the processor 520 of the device 500 to perform the above method. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. The instructions in the storage medium, when executed by a processor of the terminal device, enable the terminal device to perform the above-described method.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A photographing method, characterized by comprising:
determining a target subject in at least one subject in the view frame in response to the first operation instruction;
in response to the target subject being a dynamic subject, determining a target position to which the target subject moves from a current position; wherein the target position is used to characterize: the end position to which the target subject will have moved at the moment the target image frame is acquired;
adjusting a camera structure according to the target position to obtain a target image frame; wherein the target subject is located at the center position of the target image frame.
2. The photographing method according to claim 1, wherein the method further comprises:
in response to a second operation instruction, identifying at least one subject in the view frame, and displaying an identification frame on the at least one subject; wherein the second operation instruction is used for indicating entering a preset shooting mode, and the at least one subject is at least one static subject or at least one dynamic subject.
3. The photographing method according to claim 2, wherein the determining of the target position where the target subject moves from the current position includes:
determining motion information of the target subject;
and predicting the target position of the target main body according to the motion information and a preset algorithm.
4. The photographing method according to claim 3, wherein the determining motion information of the target subject includes:
acquiring a plurality of pieces of position information of the target subject during its motion;
and determining the motion trajectory of the target subject according to the plurality of pieces of position information.
5. The shooting method according to claim 1, wherein the adjusting the camera structure according to the target position to obtain the target image frame comprises:
determining, according to the target position, a preset position to which the camera structure is to be adjusted; wherein the preset position corresponds to the target position, and the preset position satisfies: when the camera structure collects an image at the preset position, the target subject is located at the center of the view frame;
and controlling the camera shooting structure to move to a preset position, focusing the target main body at the target position, and acquiring a target image frame.
6. The shooting method according to claim 5, wherein the controlling the camera structure to move to the preset position comprises:
determining a moving distance according to the current position and the preset position of the camera structure;
and controlling the camera shooting structure to move according to the moving distance.
7. The photographing method according to claim 5, wherein the method further comprises:
pausing the movement of the camera structure in response to a zoom instruction based on the target subject;
and, in response to completion of zooming of the target subject in the view frame, continuing to move the camera structure to the preset position.
8. A camera, comprising:
the first determination module is used for responding to the first operation instruction and determining a target subject in at least one subject in the view frame;
the second determination module is used for determining, in response to the target subject being a dynamic subject, a target position to which the target subject moves from a current position; wherein the target position is used to characterize: the end position to which the target subject will have moved at the moment the target image frame is acquired;
the control module is used for adjusting the camera shooting structure according to the target position to obtain a target image frame; wherein the target subject is located at the center position of the target image frame.
9. A terminal device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the photographing method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a terminal device, enable the terminal device to perform the photographing method according to any one of claims 1 to 7.
CN202110128464.0A 2021-01-29 2021-01-29 Shooting method and device, terminal equipment and storage medium Pending CN114827441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110128464.0A CN114827441A (en) 2021-01-29 2021-01-29 Shooting method and device, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114827441A true CN114827441A (en) 2022-07-29

Family

ID=82525328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110128464.0A Pending CN114827441A (en) 2021-01-29 2021-01-29 Shooting method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114827441A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103676405A (en) * 2013-12-02 2014-03-26 宇龙计算机通信科技(深圳)有限公司 Optical imaging device, optical system and mobile terminal
CN108259703A (en) * 2017-12-31 2018-07-06 深圳市秦墨科技有限公司 A kind of holder with clapping control method, device and holder
CN110086988A (en) * 2019-04-24 2019-08-02 薄涛 Shooting angle method of adjustment, device, equipment and its storage medium
CN110602400A (en) * 2019-09-17 2019-12-20 Oppo(重庆)智能科技有限公司 Video shooting method and device and computer readable storage medium
CN112017210A (en) * 2020-07-14 2020-12-01 创泽智能机器人集团股份有限公司 Target object tracking method and device
CN113141518A (en) * 2021-04-20 2021-07-20 北京安博盛赢教育科技有限责任公司 Control method and control device for video frame images in live classroom


Similar Documents

Publication Publication Date Title
EP3125530B1 (en) Video recording method and device
RU2679199C1 (en) Method and device for controlling photoshoot of unmanned aircraft
US10375296B2 (en) Methods apparatuses, and storage mediums for adjusting camera shooting angle
WO2022062896A1 (en) Livestreaming interaction method and apparatus
US10217487B2 (en) Method and device for controlling playback
CN107515669B (en) Display method and device
JP6091669B2 (en) IMAGING DEVICE, IMAGING ASSIST METHOD, AND RECORDING MEDIUM CONTAINING IMAGING ASSIST PROGRAM
KR102457864B1 (en) Method and device for processing video, terminal communication apparatus and storage medium
CN112905074B (en) Interactive interface display method, interactive interface generation method and device and electronic equipment
EP3945494A1 (en) Video processing method, apparatus and storage medium
EP3226119A1 (en) Method and apparatus for displaying image data from a terminal on a wearable display
RU2635873C2 (en) Method and device for displaying framing information
CN112738420A (en) Special effect implementation method and device, electronic equipment and storage medium
EP3211879A1 (en) Method and device for automatically capturing photograph, electronic device
CN107809588B (en) Monitoring method and device
CN108257091B (en) Imaging processing method for intelligent mirror and intelligent mirror
CN108986803B (en) Scene control method and device, electronic equipment and readable storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium
CN113315903B (en) Image acquisition method and device, electronic equipment and storage medium
CN114827441A (en) Shooting method and device, terminal equipment and storage medium
KR20210157289A (en) Method and apparatus for displaying captured preview image, and medium
CN113315904A (en) Imaging method, imaging device, and storage medium
EP3945717A1 (en) Take-off capture method and electronic device, and storage medium
CN106713748B (en) Method and device for sending pictures
CN117041728A (en) Focusing method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination