WO2021047070A1 - Terminal shooting method, device, mobile terminal and readable storage medium - Google Patents

Terminal shooting method, device, mobile terminal and readable storage medium

Info

Publication number
WO2021047070A1
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
predicted
terminal
focus area
shooting target
Prior art date
Application number
PCT/CN2019/122776
Other languages
English (en)
French (fr)
Inventor
彭叶斌
周凡贻
Original Assignee
深圳传音控股股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳传音控股股份有限公司
Publication of WO2021047070A1

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/60: Control of cameras or camera modules
                        • H04N 23/61: Control of cameras or camera modules based on recognised objects
                        • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
                            • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
                                • H04N 23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
                        • H04N 23/67: Focus control based on electronic image sensor signals

Definitions

  • This application relates to the field of terminal intelligent shooting technology, and in particular to a terminal shooting method, device, mobile terminal, and readable storage medium.
  • the main purpose of this application is to provide a terminal shooting method, device, mobile terminal, and computer storage medium, aiming to solve the technical problem in the prior art that focus is inaccurate when a mobile terminal camera shoots a non-stationary shooting target, which causes the captured image to be unclear.
  • an embodiment of the present application provides a terminal photographing method, and the terminal photographing method includes the following steps:
  • if the shooting target enters the predicted focus area, the shooting of the shooting target is completed.
  • the displacement information includes historical motion trajectories and object attributes within a preset unit time before the current moment
  • the step of determining the predicted focus area according to the displacement information includes: obtaining a predicted movement trajectory of the shooting target in the next preset unit duration according to the historical movement trajectory; and
  • determining the predicted focus area according to a position point on the predicted movement trajectory.
  • the step of obtaining the predicted movement trajectory of the shooting target in the next preset unit duration according to the historical movement trajectory includes: determining the object attribute of the shooting target, and determining a motion rule description table and a predicted motion speed of the shooting target according to the object attribute, wherein the motion rule description table includes a motion reference point and the predicted motion direction mapped by the motion reference point; and
  • determining, according to the motion rule description table, the predicted motion speed and the historical motion trajectory, the predicted motion trajectory of the shooting target in the next preset unit duration.
  • the step of determining the predicted motion trajectory according to the motion rule description table, the predicted motion speed, and the historical motion trajectory includes: determining the current motion reference point of the shooting target according to the historical motion trajectory;
  • searching in the motion rule description table for the predicted motion direction mapped by the current motion reference point; and
  • determining the predicted movement trajectory according to the predicted motion speed and the predicted motion direction.
  • the step of determining the predicted focus area according to the position point on the predicted motion track includes: taking a preset number of position points on the predicted motion trajectory, and determining the reference position center of the figure formed by those position points; and
  • taking the center of the reference position as the geometric center, determining the predicted focus area.
  • the step of determining the predicted focus area by taking the center of the reference position as the geometric center includes: taking the center of the reference position as the geometric center of the overall area formed by the shooting targets; and
  • determining the display size of the overall area according to the screen-to-body ratio of each shooting target relative to the terminal screen, so as to generate the predicted focus area.
  • the step of determining whether the shooting target enters the predicted focus area includes: determining whether the geometric center of the overall area formed by the shooting targets coincides with the geometric center of the predicted focus area; if they coincide, it is determined that each shooting target has entered the predicted focus area.
  • before the step of acquiring and tracking the displacement information of the shooting target in the preview frame of the terminal camera, the method includes: detecting whether the shooting target in the preview frame of the terminal camera is a non-stationary living body or a non-stationary object; and if so, executing the step of acquiring and tracking the displacement information.
  • the step of completing the shooting of the shooting target includes:
  • the user manually confirming the shooting, or starting the shooting module to shoot automatically.
  • the present application also provides a terminal photographing device, the terminal photographing device includes:
  • the information acquisition module is used to acquire and track the displacement information of the shooting target in the preview frame of the terminal camera
  • An area determination module configured to determine a predicted focus area according to the displacement information, and perform focusing in the predicted focus area
  • a judging module configured to judge whether the shooting target enters the predicted focus area
  • the shooting module is configured to complete shooting of the shooting target if the shooting target enters the predicted focus area.
  • the area determination module of the terminal photographing device includes a prediction unit and a determination unit:
  • the prediction unit is configured to obtain a predicted movement trajectory of the shooting target in the next preset unit duration according to the historical movement trajectory;
  • the determining unit is configured to determine a predicted focus area according to a position point on the predicted motion track.
  • the present application also provides a mobile terminal.
  • the mobile terminal includes a memory, a processor, and a terminal photographing program stored on the memory and running on the processor, and the terminal photographing program is used by the processor. When executed, the steps of the terminal shooting method described above are implemented.
  • the present application also provides a computer storage medium having a terminal shooting program stored on the computer storage medium, and when the terminal shooting program is executed by a processor, the steps of the terminal shooting method described above are implemented.
  • In this application, the displacement information of the shooting target in the preview frame of the terminal camera is acquired and tracked; then the predicted focus area is determined according to the displacement information, and focusing is performed in the predicted focus area; finally, it is determined whether the shooting target enters the predicted focus area, and if so, the shooting of the shooting target is completed.
  • This application focuses in the predicted focus area in advance, and shooting can be performed as soon as the subject enters the predicted focus area, without the user needing to focus manually. This avoids the inaccurate focus and unclear photos caused by changes in the subject's location, and improves the shooting effect;
  • at the same time, the shooting process is simplified, the user's shooting operation steps are reduced, and the user experience is improved.
  • FIG. 1 is a schematic diagram of the hardware structure of an optional mobile terminal according to an embodiment of the application
  • FIG. 2 is a schematic flowchart of an embodiment of a terminal shooting method according to this application.
  • FIG. 3 is a detailed flowchart of step S20 of an embodiment of a terminal shooting method according to this application;
  • FIG. 4 is a detailed flowchart of step S21 of an embodiment of a terminal shooting method according to this application;
  • FIG. 5 is a detailed flowchart of step S212 of an embodiment of a terminal shooting method according to this application;
  • FIG. 6 is a detailed flowchart of step S22 of an embodiment of a terminal shooting method according to this application.
  • FIG. 7 is a detailed flowchart of step S222 of an embodiment of a terminal shooting method according to this application.
  • FIG. 8 is a schematic diagram of the functional modules of the terminal photographing device of this application;
  • FIG. 9 is a schematic diagram of detailed functional modules of the region determining module of the terminal photographing device of the application.
  • FIG. 10 is a schematic diagram of an application scenario of an embodiment of the terminal shooting method of this application.
  • FIG. 11 is a schematic diagram of another application scenario of the embodiment of the terminal shooting method of this application.
  • FIG. 12 is a schematic diagram of another application scenario of the embodiment of the terminal shooting method of this application.
  • FIG. 13 is a schematic diagram of another application scenario of the embodiment of the terminal shooting method of this application.
  • The suffix “module”, “component” or “unit” used to indicate an element is only for ease of description of the present application and has no specific meaning in itself; therefore, “module”, “component” and “unit” can be used interchangeably.
  • the mobile terminal can be implemented in various forms.
  • the mobile terminals described in this application may include mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), and other mobile terminals.
  • FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal that implements the various embodiments of the present application.
  • the mobile terminal 100 may include: a radio frequency (RF) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
  • the mobile terminal may include more or fewer components than those shown in the figure, or combine certain components, or have a different arrangement of components.
  • the radio frequency unit 101 may be used for receiving and sending signals during information transmission or a call. Specifically, downlink information from the base station is received and handed to the processor 110 for processing; in addition, uplink data is sent to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through wireless communication.
  • WiFi is a short-distance wireless transmission technology.
  • the mobile terminal can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 102. It provides users with wireless broadband Internet access.
  • Although FIG. 1 shows the WiFi module 102, it is understandable that it is not a necessary component of the mobile terminal and can be omitted as needed without changing the essence of the invention.
  • the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 103 may include a speaker, a buzzer, and so on.
  • the A/V input unit 104 is used to receive audio or video signals.
  • the A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 106.
  • the image frame processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the WiFi module 102.
  • the microphone 1042 can receive sound (audio data) in operation modes such as a telephone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data.
  • in the case of a telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output.
  • the microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
  • the mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three-axis), and can detect the magnitude and direction of gravity when it is stationary.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
  • the user input unit 107 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the mobile terminal.
  • the user input unit 107 may include a touch panel 1071 and other input devices 1072.
  • the touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 1071 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends them to the processor 110, and can receive and execute the commands sent by the processor 110.
  • the touch panel 1071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may also include other input devices 1072.
  • other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons and switch buttons), a trackball, a mouse, a joystick, and the like, which are not specifically limited here.
  • the touch panel 1071 can cover the display panel 1061.
  • when the touch panel 1071 detects a touch operation on or near it, it sends the operation to the processor 110 to determine the type of touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of touch event.
  • here, the touch panel 1071 and the display panel 1061 are used as two independent components to implement the input and output functions of the mobile terminal, but in some embodiments, the touch panel 1071 and the display panel 1061 can be integrated to implement the input and output functions of the mobile terminal; this is not specifically limited here.
  • the interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
  • the interface unit 108 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the mobile terminal 100, or can be used to transfer data between the mobile terminal 100 and external devices.
  • the memory 109 may be used to store software programs and various data.
  • the memory 109 may be a computer storage medium, and the memory 109 stores the terminal shooting program of the present application.
  • the memory 109 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function, an image playback function, etc.); the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.).
  • the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the processor 110 is the control center of the mobile terminal. It uses various interfaces and lines to connect the various parts of the entire mobile terminal, runs or executes software programs and/or modules stored in the memory 109, and calls data stored in the memory 109 to perform the various functions of the mobile terminal and process data, thereby monitoring the mobile terminal as a whole. For example, the processor 110 executes the terminal photographing program in the memory 109 to implement the steps of the embodiments of the terminal photographing method of the present application.
  • the processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and
  • the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 110.
  • the mobile terminal 100 may also include a power source 111 (such as a battery) for supplying power to various components.
  • the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
  • the mobile terminal 100 may also include a Bluetooth module, etc., which will not be repeated here.
  • the mobile terminal 100 can be connected with other terminal devices through Bluetooth to realize communication and information interaction.
  • the terminal photographing method includes:
  • Step S10 acquiring and tracking the displacement information of the shooting target in the preview frame of the terminal camera
  • the execution subject of this application may be a terminal shooting application;
  • the terminal camera preview frame refers to the display interface of the mobile terminal camera after the user opens the camera application; when the user opens the camera, the user adjusts the position of the mobile terminal so that the shooting target is displayed in the preview frame according to the user's shooting composition. At this time, the shooting target may move in the preview frame, and its speed and direction may change, that is, it undergoes displacement; the information describing this displacement is the displacement information.
  • Step S20 Determine the predicted focus area according to the displacement information, and perform focusing in the predicted focus area;
  • the predicted focus area refers to an area on the trajectory that the moving target is likely to form on the basis of its displacement; focusing in the predicted focus area means that the terminal shooting application focuses on the predicted focus area in advance, before the target enters it.
  • Step S30 judging whether the shooting target enters the predicted focus area
  • Since the shooting target will be displaced during the shooting process and is not absolutely still, the positional relationship between the shooting target and the predicted focus area changes in real time while the shooting target is moving.
  • step S40 if the shooting target enters the predicted focus area, the shooting of the shooting target is completed.
  • shooting can be performed without the user needing to focus manually.
  • In this embodiment, the displacement information of the shooting target in the preview frame of the terminal camera is acquired and tracked; then the predicted focus area is determined according to the displacement information, and focusing is performed in the predicted focus area; finally, it is determined whether the shooting target enters the predicted focus area, and if so, the shooting of the shooting target is completed.
  • This application focuses in the predicted focus area in advance, and when the shooting target enters the predicted focus area, the user is not required to focus manually. This avoids the inaccurate focus and unclear photos caused by changes in the shooting target's location and improves the shooting effect; at the same time, it simplifies the shooting process, reduces the user's shooting operation steps, and improves the user experience.
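The loop of steps S10 to S40 can be pictured with a short sketch. The following Python is purely illustrative (the patent does not disclose an implementation); the function names, the linear-extrapolation predictor, and the square focus area are all assumptions standing in for the tracking, prediction, and judging modules:

```python
# Hypothetical sketch of steps S10-S40; names and data shapes are
# assumptions, not the patent's actual implementation.

def predict_focus_area(history, half_size=50):
    """S20 (assumed predictor): extrapolate the next position linearly from
    the last two tracked positions and center a square focus area on it."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    nx, ny = x1 + (x1 - x0), y1 + (y1 - y0)   # extrapolated position
    return (nx - half_size, ny - half_size, nx + half_size, ny + half_size)

def target_in_area(pos, area):
    """S30: check whether the target position lies inside the focus area."""
    x, y = pos
    left, top, right, bottom = area
    return left <= x <= right and top <= y <= bottom

def shooting_loop(positions):
    """S10-S40: positions is the tracked per-frame displacement of the
    target. Returns the frame index at which the shot is taken, or None."""
    history = []
    area = None
    for i, pos in enumerate(positions):
        history.append(pos)                     # S10: track displacement
        if len(history) >= 2:
            area = predict_focus_area(history)  # S20: focus in advance
        if area and target_in_area(pos, area):  # S30: entered the area?
            return i                            # S40: shot completed
    return None
```

In practice the "shot" would trigger the camera's capture pipeline; here it simply returns the frame index at which the target first falls inside the pre-focused area.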
  • Step S20 includes:
  • Step S21 Obtain a predicted movement trajectory of the shooting target in the next preset unit duration according to the historical movement trajectory;
  • the historical motion trajectory is part of the displacement information, and refers to the trajectory formed by the displacement of the shooting target in the terminal camera preview frame; the predicted motion trajectory of the next preset unit duration refers to the trajectory formed by the displacement the shooting target is predicted to make within one unit of time after the current moment.
  • Step S22 Determine the predicted focus area according to the position point on the predicted motion track.
  • the position point on the predicted motion trajectory refers to the point that forms the predicted motion trajectory of the shooting target.
  • the information acquisition module first obtains the historical motion trajectory of the shooting target; the predicted motion trajectory is then derived from the historical motion trajectory, and the predicted focus area is determined from position points on the predicted motion trajectory. In this way, the terminal shooting application can focus in the predicted focus area in advance, and shooting can be performed as soon as the shooting target enters the predicted focus area, avoiding inaccurate focus and unclear photos.
  • Step S21 includes:
  • Step S211 Determine the object attribute of the shooting target, and determine the motion rule description table and predicted motion speed of the shooting target according to the object attribute, wherein the motion rule description table includes the motion reference point and the predicted motion direction mapped by the motion reference point;
  • the object attribute of the shooting target is another piece of displacement information, indicating that the shooting target may be one or more of humans, animals, plants, and objects; the motion reference point and the predicted motion direction mapped by the motion reference point are determined according to the object attribute. For example, for a swinging clock, the predicted movement direction at a motion reference point is the tangent direction of the pendulum's movement trajectory;
  • the motion rule description table refers to a mapping relationship table between motion reference points and predicted motion directions for different object attributes; through the motion rule description table, the motion reference point and the predicted motion direction mapped by that reference point can be queried;
  • the predicted motion speed refers to a prediction of the motion speed of the shooting target according to the general motion law of the object attribute, since different object attributes follow different motion laws and speeds.
  • Step S212 according to the motion rule description table, the predicted motion speed and the historical motion trajectory, the predicted motion trajectory of the shooting target in the next preset unit duration is determined.
  • through the motion rule description table, the predicted motion direction that has a mapping relationship with the motion reference point of the shooting target can be queried; once the predicted motion direction and the predicted motion speed are determined, combined with the historical motion trajectory, the predicted motion trajectory of the shooting target can be determined.
  • the object attributes of the shooting target are determined first, and then the motion reference point, predicted motion direction and predicted motion speed of the shooting target are determined according to the object attributes, and finally combined with the historical motion trajectory, it is determined that the shooting target is in the next preset unit duration. Predict movement trajectory.
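The motion rule description table can be pictured as a plain lookup structure. The sketch below is illustrative only; the attribute names, the unit-vector direction encoding, and the pixel-per-frame speeds are assumptions, not values from the patent:

```python
# Illustrative motion rule description table (assumed structure): for each
# object attribute, map a motion reference point label to a predicted motion
# direction (unit vector) and a predicted motion speed in pixels per frame.
MOTION_RULES = {
    "pendulum": {
        "lowest_point": {"direction": (1.0, 0.0), "speed": 12.0},
        "highest_point": {"direction": (0.0, 0.0), "speed": 0.0},
    },
    "walking_person": {
        "default": {"direction": (1.0, 0.0), "speed": 5.0},
    },
}

def lookup_motion_rule(obj_attr, reference_point):
    """Query the predicted motion direction and speed mapped by the
    current motion reference point for the given object attribute."""
    table = MOTION_RULES[obj_attr]
    return table.get(reference_point, table.get("default"))

def extrapolate(position, rule, steps):
    """Extend the trajectory from the current reference point using the
    predicted direction and speed (a straight-line approximation)."""
    x, y = position
    dx, dy = rule["direction"]
    v = rule["speed"]
    return [(x + dx * v * k, y + dy * v * k) for k in range(1, steps + 1)]
```

A real table would hold many reference points per attribute and curved trajectories rather than the straight-line extrapolation used here.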
  • step S212 includes:
  • Step A10 Determine the current motion reference point of the shooting target according to the historical motion trajectory
  • the current motion reference point refers to the position point at the current time on the historical motion trajectory.
  • Step A20 search in the motion rule description table for the predicted motion direction mapped by the current motion reference point
  • Each motion reference point in the motion rule description table has its own corresponding predicted motion direction; only when the direction at the reference point is determined can the motion direction of the shooting target be estimated.
  • Step A30 Determine the predicted movement trajectory according to the predicted movement speed and the predicted movement direction.
  • the motion trajectory of the shooting target after the current motion reference point can be estimated to obtain the predicted motion trajectory.
  • the shooting target is a swinging clock, referred to as a pendulum.
  • the historical movement trajectory of this pendulum (the solid arc) runs from the highest point on the left that the pendulum can reach down to the lowest point, where the pendulum hangs vertical to the ground. When the lowest point of the motion trajectory is selected as the motion reference point, the predicted motion direction corresponding to the lowest point is the tangent direction of the pendulum's trajectory; the predicted motion speed decreases gradually from the maximum speed at the lowest point; and the predicted movement trajectory (the dashed arc) runs from the lowest point to the highest point on the right.
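For the pendulum example, the tangent direction at a point on the circular swing arc can be computed directly as the perpendicular to the pivot-to-bob radius. This is a geometric illustration of the "tangent direction" the text describes, under the assumption of a fixed pivot and screen coordinates; the sign convention is arbitrary:

```python
import math

def pendulum_tangent(pivot, bob):
    """Unit tangent direction of a pendulum's circular arc at the bob's
    current position: perpendicular to the pivot-to-bob radius. The sign
    convention here (90-degree counterclockwise rotation of the radius)
    corresponds to one of the two possible swing directions."""
    rx, ry = bob[0] - pivot[0], bob[1] - pivot[1]
    length = math.hypot(rx, ry)
    return (-ry / length, rx / length)  # rotate the radius 90 degrees
```

At the lowest point (bob directly below the pivot) the tangent comes out horizontal, matching the description above.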
  • step S22 includes:
  • Step S221 taking a preset number of position points on the predicted motion trajectory, and determining the reference position center of the graph formed by each position point on the predicted motion trajectory;
  • Step S222 taking the center of the reference position as the geometric center, determine the predicted focus area.
  • the user can adjust the predicted focus area, for example by zooming in, zooming out, or rotating it, to obtain the final predicted focus area;
  • determining the predicted focus area from position points on the predicted motion trajectory makes it more likely that the predicted focus area lies at the location the shooting target is about to move to.
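Steps S221 and S222 amount to computing the centroid of a preset number of sampled trajectory points and building the focus area around it. The sketch below assumes a rectangular focus area with caller-supplied dimensions; the patent does not prescribe the shape:

```python
def reference_center(points):
    """S221: centroid of the figure formed by the sampled trajectory points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def focus_area_from_center(center, width, height):
    """S222: rectangle whose geometric center is the reference center,
    returned as (left, top, right, bottom)."""
    cx, cy = center
    return (cx - width / 2, cy - height / 2, cx + width / 2, cy + height / 2)
```

The user adjustments mentioned above (zoom, rotate) would then be applied to the rectangle returned by `focus_area_from_center`.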
  • step S222 includes:
  • Step A40 taking the center of the reference position as the geometric center of the overall area formed by each shooting target
  • the center of the reference position may be taken as the geometric center of the entire area formed by the shooting targets; that is, the reference center position of a selection frame covering all the shooting targets is determined first.
  • Step A50 Determine the display size of the entire area according to the screen-to-body ratio of each photographed target relative to the terminal screen to generate a predicted focus area.
  • the entire area at this time is used as the predicted focus area. The size of the generated predicted focus area is therefore moderate: it can include all the shooting targets without being excessively large, and its position lies on the predicted motion track, which is more conducive to subsequent accurate focusing on the shooting target.
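Step A50 scales the overall area with each target's screen-to-body ratio. The patent gives no formula, so the sketch below is one plausible reading: sum the targets' screen-area fractions, add a margin, and scale the screen dimensions by the square root (since a linear side scales with the square root of an area fraction). The margin factor is an assumption:

```python
def focus_area_size(target_ratios, screen_w, screen_h, margin=1.2):
    """Determine the display size of the overall focus area from the
    screen-to-body ratio of each shooting target relative to the screen.
    target_ratios: fraction of the screen area each target occupies.
    The margin (an assumed value) keeps the area large enough to cover
    all targets without being excessively large; the result is clamped
    so it never exceeds the full screen."""
    total = sum(target_ratios)
    scale = min(1.0, (total * margin) ** 0.5)  # side scales with sqrt(area)
    return (screen_w * scale, screen_h * scale)
```

For example, two targets each occupying 10% of a 1080x1920 screen yield a focus area roughly half the screen's width and height.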
  • step S30 includes:
  • it is determined whether the geometric center of the overall area formed by the shooting targets coincides with the geometric center of the predicted focus area; if they coincide, it is determined that each shooting target has entered the predicted focus area.
  • when there is one shooting target, the geometric center is the center of gravity of the shooting target; when there is more than one shooting target, referring to Figure 13, the geometric center is the center of gravity of the overall area formed by the shooting targets. In the multi-target case, the speed and direction of each shooting target may differ during movement, so this center of gravity changes in real time. When the geometric center of the shooting target coincides with the geometric center of the predicted focus area, the two centers are at the same position in the preview frame of the terminal camera, and it can be concluded that the shooting target has entered the predicted focus area.
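The coincidence test described above reduces to comparing the center of the targets' overall bounding area with the focus area's center. The sketch below uses axis-aligned bounding boxes and a small pixel tolerance in place of "coincides", since exact pixel equality rarely holds in practice; both choices are assumptions:

```python
def overall_center(target_boxes):
    """Geometric center of the overall area formed by all target boxes,
    each box given as (left, top, right, bottom)."""
    left = min(b[0] for b in target_boxes)
    top = min(b[1] for b in target_boxes)
    right = max(b[2] for b in target_boxes)
    bottom = max(b[3] for b in target_boxes)
    return ((left + right) / 2, (top + bottom) / 2)

def centers_coincide(c1, c2, tol=5.0):
    """Treat the two geometric centers as coinciding when they lie
    within tol pixels of each other on both axes (assumed tolerance)."""
    return abs(c1[0] - c2[0]) <= tol and abs(c1[1] - c2[1]) <= tol
```

As the targets move, `overall_center` is recomputed each frame and compared against the fixed center of the predicted focus area.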
  • the steps before step S10 include:
  • Step A60: detect whether the shooting target in the preview frame of the terminal camera is a non-stationary living body or a non-stationary object.
  • The shooting target is the non-stationary living body (a person or a living animal or plant) or object that the user wants to shoot, such as a walking person, a lively pet, a plant swaying in the wind, a moving train, or a swinging pendulum clock.
  • Step A70: if so, execute the step of acquiring and tracking the displacement information of the shooting target in the preview frame of the terminal camera.
  • The displacement information of the shooting target is acquired only when the target is a non-stationary living body or a non-stationary object.
  • By first judging whether the shooting target is a non-stationary living body or object, and only then executing the step of acquiring its displacement information, the accuracy of the acquired information is improved.
  • step S40 includes:
  • The user can manually confirm the shot, or the shooting module can be started to shoot automatically.
  • When the shooting target enters the predicted focus area, focusing on it has been completed; the user can then manually tap the confirmation box to complete the shot, or the application can start the shooting module to complete it automatically, and automatic shooting produces better results than manual shooting.
  • The manual mode lets the user choose the moment of the shot, while automatic shooting produces better results than manual shooting.
  • An embodiment of the present application also proposes a terminal shooting apparatus, which includes:
  • an information acquisition module, used to acquire and track the displacement information of the shooting target in the preview frame of the terminal camera;
  • an area determination module, used to determine the predicted focus area according to the displacement information and perform focusing in the predicted focus area;
  • a judgment module, used to judge whether the shooting target enters the predicted focus area;
  • a shooting module, used to complete the shooting of the shooting target if the shooting target enters the predicted focus area.
  • An embodiment of the present application further proposes a terminal shooting apparatus that further includes:
  • a detection module, used to detect whether the shooting target in the preview frame of the terminal camera is a non-stationary living body or a non-stationary object and, if so, trigger the information acquisition module to execute the step of acquiring and tracking the displacement information of the shooting target in the preview frame.
  • the area determination module includes a prediction unit and a determination unit:
  • the prediction unit is configured to obtain the predicted motion trajectory of the shooting target in the next preset unit duration according to the historical motion trajectory;
  • the determination unit is configured to determine the predicted focus area according to position points on the predicted motion trajectory.
  • the prediction unit is also used to:
  • determine the object attribute of the shooting target and, according to the object attribute, the motion-pattern description table and predicted motion speed of the shooting target; and determine the predicted motion trajectory of the shooting target in the next preset unit duration according to the motion-pattern description table, the predicted motion speed, and the historical motion trajectory.
  • the prediction unit is also used to:
  • determine the current motion reference point of the shooting target according to the historical motion trajectory; look up, in the motion-pattern description table, the predicted motion direction mapped to the current motion reference point; and determine the predicted motion trajectory according to the predicted motion speed and the predicted motion direction.
  • the determination unit is also used to:
  • take a preset number of position points on the predicted motion trajectory, and determine the reference position center of the figure formed by these position points;
  • determine the predicted focus area with the reference position center as its geometric center.
  • the determination unit is also used to:
  • take the reference position center as the geometric center of the overall area formed by the shooting targets, and determine the display size of the overall area according to the screen-occupancy ratio of each shooting target relative to the terminal screen, to generate the predicted focus area.
  • the judgment module is also used to:
  • judge whether the geometric center of the overall area formed by the shooting targets coincides with the geometric center of the predicted focus area and, if they coincide, determine that the shooting targets have entered the predicted focus area.
  • For the steps implemented by the functional modules of the terminal shooting apparatus, reference may be made to the embodiments of the terminal shooting method of the present application; they are not repeated here.
  • this application also provides a mobile terminal.
  • the mobile terminal includes a memory 109, a processor 110, and a terminal shooting program stored in the memory 109 and runnable on the processor 110.
  • when the terminal shooting program is executed by the processor 110, the steps of the foregoing embodiments of the terminal shooting method are implemented.
  • The present application also provides a computer-readable storage medium that stores one or more programs, and the one or more programs may be executed by one or more processors to implement the steps of the embodiments of the above terminal shooting method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

This application discloses a terminal shooting method and apparatus, a mobile terminal, and a computer-readable storage medium. The method acquires and tracks displacement information of a shooting target in the preview frame of a terminal camera; determines a predicted focus area according to the displacement information and performs focusing in the predicted focus area; and then judges whether the shooting target has entered the predicted focus area, completing the shooting of the shooting target once it has. When shooting a non-stationary living body (a person, animal, or plant) or a non-stationary object, no manual focusing by the user is required, which avoids blurry photos caused by inaccurate focus and improves the shooting effect; it also simplifies the shooting workflow, reduces the user's operation steps, and improves the user experience.

Description

Terminal shooting method and apparatus, mobile terminal, and readable storage medium
This application claims priority to Chinese patent application No. 201910873337.6, filed with the China National Intellectual Property Administration on September 12, 2019 and entitled "Terminal shooting method and apparatus, mobile terminal, and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of intelligent terminal shooting, and in particular to a terminal shooting method and apparatus, a mobile terminal, and a readable storage medium.
Background
As the shooting functions of mobile terminals are widely used, users demand ever better results from them; conveniently capturing sharp photos is one of the problems existing mobile terminals need to solve. In the prior art, a photo is taken by the user manually focusing on the shooting target and then tapping a confirmation box. When shooting a non-stationary living body (a person, animal, or plant) or object, such as a walking person, a lively pet, a plant swaying in the wind, a moving train, or a swinging pendulum clock, manual focusing takes a certain amount of time, during which the target's position has already changed; the focus is therefore inaccurate, the photo is blurry, the shooting effect is poor, and the user experience suffers.
Summary
The main purpose of this application is to provide a terminal shooting method and apparatus, a mobile terminal, and a computer storage medium, aiming to solve the technical problem in the prior art that a mobile terminal camera focuses inaccurately on a non-stationary shooting target, resulting in unclear shots.
To achieve the above purpose, an embodiment of this application provides a terminal shooting method, which includes the following steps:
acquiring and tracking displacement information of a shooting target in a preview frame of a terminal camera;
determining a predicted focus area according to the displacement information, and performing focusing in the predicted focus area;
judging whether the shooting target enters the predicted focus area;
if the shooting target enters the predicted focus area, completing the shooting of the shooting target.
Optionally, the displacement information includes a historical motion trajectory within a preset unit duration before the current moment and an object attribute, and the step of determining a predicted focus area according to the displacement information includes:
obtaining, according to the historical motion trajectory, a predicted motion trajectory of the shooting target in the next preset unit duration;
determining the predicted focus area according to position points on the predicted motion trajectory.
Optionally, the step of obtaining, according to the historical motion trajectory, a predicted motion trajectory of the shooting target in the next preset unit duration includes:
determining the object attribute of the shooting target, and determining, according to the object attribute, a motion-pattern description table and a predicted motion speed of the shooting target, where the motion-pattern description table includes motion reference points and the predicted motion direction mapped to each motion reference point;
determining the predicted motion trajectory of the shooting target in the next preset unit duration according to the motion-pattern description table, the predicted motion speed, and the historical motion trajectory.
Optionally, the step of determining the predicted motion trajectory according to the motion-pattern description table, the predicted motion speed, and the historical motion trajectory includes:
determining the current motion reference point of the shooting target according to the historical motion trajectory;
looking up, in the motion-pattern description table, the predicted motion direction mapped to the current motion reference point;
determining the predicted motion trajectory according to the predicted motion speed and the predicted motion direction.
Optionally, the step of determining the predicted focus area according to position points on the predicted motion trajectory includes:
taking a preset number of position points on the predicted motion trajectory, and determining the reference position center of the figure formed by these position points;
determining the predicted focus area with the reference position center as its geometric center.
Optionally, the step of determining the predicted focus area with the reference position center as its geometric center includes:
taking the reference position center as the geometric center of the overall area formed by the shooting targets;
determining the display size of the overall area according to the screen-occupancy ratio of each shooting target relative to the terminal screen, to generate the predicted focus area.
Optionally, the step of judging whether the shooting target enters the predicted focus area includes:
judging whether the geometric center of the overall area formed by the shooting targets coincides with the geometric center of the predicted focus area, and if they coincide, determining that the shooting targets have entered the predicted focus area.
Optionally, before the step of acquiring and tracking displacement information of the shooting target in the preview frame of the terminal camera, the method includes:
detecting whether the shooting target in the preview frame of the terminal camera is a non-stationary living body or a non-stationary object;
if so, executing the step of acquiring and tracking the displacement information of the shooting target in the preview frame of the terminal camera.
Optionally, the step of completing the shooting of the shooting target if it enters the predicted focus area includes:
if the shooting target enters the predicted focus area, the user may manually confirm the shot, or the shooting module may be started to shoot automatically.
This application also provides a terminal shooting apparatus, which includes:
an information acquisition module, configured to acquire and track displacement information of a shooting target in a preview frame of a terminal camera;
an area determination module, configured to determine a predicted focus area according to the displacement information and perform focusing in the predicted focus area;
a judgment module, configured to judge whether the shooting target enters the predicted focus area;
a shooting module, configured to complete the shooting of the shooting target if it enters the predicted focus area.
Optionally, the area determination module of the terminal shooting apparatus includes a prediction unit and a determination unit:
the prediction unit is configured to obtain, according to the historical motion trajectory, the predicted motion trajectory of the shooting target in the next preset unit duration;
the determination unit is configured to determine the predicted focus area according to position points on the predicted motion trajectory.
This application also provides a mobile terminal, which includes a memory, a processor, and a terminal shooting program stored in the memory and runnable on the processor; when the terminal shooting program is executed by the processor, the steps of the terminal shooting method described above are implemented.
This application also provides a computer storage medium storing a terminal shooting program; when the terminal shooting program is executed by a processor, the steps of the terminal shooting method described above are implemented.
In this embodiment, when shooting a non-stationary living body (a person, animal, or plant) or a non-stationary object, the displacement information of the shooting target in the preview frame of the terminal camera is first acquired and tracked; a predicted focus area is then determined according to the displacement information and focusing is performed there; finally, whether the shooting target enters the predicted focus area is judged, and if it does, the shooting of the target is completed. Because focusing is done in advance in the predicted focus area and the shot can be taken as soon as the target enters it, no manual focusing is needed, avoiding the inaccurate focus and blurry photos caused by the target's changing position and improving the shooting effect; the shooting workflow is also simplified, the user's operation steps are reduced, and the user experience is improved.
Brief Description of the Drawings
The accompanying drawings are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with this application and, together with the specification, serve to explain its principles.
To describe the technical solutions in the embodiments of this application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, a person of ordinary skill in the art could obtain other drawings from these without creative effort.
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal according to an embodiment of this application;
Fig. 2 is a schematic flowchart of an embodiment of the terminal shooting method of this application;
Fig. 3 is a detailed flowchart of step S20 in an embodiment of the terminal shooting method of this application;
Fig. 4 is a detailed flowchart of step S21 in an embodiment of the terminal shooting method of this application;
Fig. 5 is a detailed flowchart of step S212 in an embodiment of the terminal shooting method of this application;
Fig. 6 is a detailed flowchart of step S22 in an embodiment of the terminal shooting method of this application;
Fig. 7 is a detailed flowchart of step S222 in an embodiment of the terminal shooting method of this application;
Fig. 8 is a schematic diagram of the functional modules of the terminal shooting apparatus of this application;
Fig. 9 is a detailed schematic diagram of the area determination module of the terminal shooting apparatus of this application;
Fig. 10 is a schematic diagram of an application scenario of an embodiment of the terminal shooting method of this application;
Fig. 11 is a schematic diagram of another application scenario of an embodiment of the terminal shooting method of this application;
Fig. 12 is a schematic diagram of yet another application scenario of an embodiment of the terminal shooting method of this application;
Fig. 13 is a schematic diagram of yet another application scenario of an embodiment of the terminal shooting method of this application.
The realization of the purpose, functional features, and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are only intended to facilitate the description of this application and have no specific meaning of their own; therefore, "module", "component", and "unit" may be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the mobile terminals described in this application may include mobile phones, tablet computers, laptop computers, palmtop computers, personal digital assistants (PDAs), and the like.
The following description takes a mobile terminal as an example. Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of this application, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111, among other components. A person skilled in the art will understand that the structure shown in Fig. 1 does not limit the mobile terminal, which may include more or fewer components than shown, combine certain components, or arrange components differently.
The components of the mobile terminal are described below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and transmitting signals during information transmission or a call; specifically, it receives downlink information from a base station and passes it to the processor 110 for processing, and sends uplink data to the base station. The radio frequency unit 101 typically includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, and a duplexer. In addition, the radio frequency unit 101 may communicate with networks and other devices via wireless communication.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a call signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode, or the like. The audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound) and may include a loudspeaker, a buzzer, and so on.
The A/V input unit 104 is used to receive audio or video signals. It may include a graphics processing unit (GPU) 1041 and a microphone 1042. The GPU 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in phone call mode, recording mode, speech recognition mode, and similar operating modes, and process such sound into audio data. In phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, or other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to ambient light, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used in applications that recognize the phone's posture (such as portrait/landscape switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may also be configured on the phone, such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described here.
The display unit 106 is used to display information input by the user or provided to the user. It may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the position of the user's touch and the signal brought by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. The touch panel 1071 may be implemented using resistive, capacitive, infrared, surface acoustic wave, and other types. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, and a joystick; no limitation is imposed here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it passes the operation to the processor 110 to determine the type of touch event, and the processor 110 then provides the corresponding visual output on the display panel 1061 according to that type. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are implemented as two independent components for the input and output functions of the mobile terminal, in some embodiments they may be integrated to implement those functions; no limitation is imposed here.
The interface unit 108 serves as an interface through which at least one external device can connect to the mobile terminal 100. The external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 108 may be used to receive input (for example, data or power) from an external device and transmit it to one or more elements within the mobile terminal 100, or to transfer data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs and various data. The memory 109 may be a computer storage medium and stores the terminal shooting program of this application. It may mainly include a program storage area, which can store the operating system and the applications needed for at least one function (such as sound playback or image playback), and a data storage area, which can store data created according to the use of the phone (such as audio data or a phone book). In addition, the memory 109 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The processor 110 is the control center of the mobile terminal; it connects all parts of the terminal via various interfaces and lines, and performs the terminal's various functions and data processing by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored there, thereby monitoring the terminal as a whole. For example, the processor 110 executes the terminal shooting program in the memory 109 to implement the steps of the embodiments of the terminal shooting method of this application.
The processor 110 may include one or more processing units; optionally, it may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) for powering the components; optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, which manages charging, discharging, power consumption, and other functions.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which are not described here. Through Bluetooth, the mobile terminal 100 can connect with other terminal devices for communication and information exchange.
Based on the above mobile terminal hardware structure, the embodiments of the method of this application are proposed.
This application provides a terminal shooting method applied to a mobile terminal. In an embodiment of the terminal shooting method, referring to Fig. 2, the method includes:
Step S10: acquire and track displacement information of the shooting target in the preview frame of the terminal camera.
The executing entity of this application may be a terminal shooting application. The terminal camera preview frame is the display interface of the mobile terminal camera after the user opens the camera application. When the user opens the camera, the user adjusts the position of the mobile terminal so that the shooting target appears in the preview frame according to the user's composition; the shooting target then moves within the preview frame, its speed and direction change, and displacement occurs. The information describing this displacement is the displacement information.
Step S20: determine a predicted focus area according to the displacement information, and perform focusing in the predicted focus area.
The predicted focus area is an area on the trajectory that the moving shooting target is likely to form next, given the displacement it has already formed. Focusing in the predicted focus area means that the terminal shooting application focuses there in advance, before the shooting target enters it.
Step S30: judge whether the shooting target enters the predicted focus area.
Since the shooting target moves during shooting and is not absolutely stationary, its positional relationship with the predicted focus area changes in real time.
Step S40: if the shooting target enters the predicted focus area, complete the shooting of the shooting target.
When it is determined that the shooting target has entered the predicted focus area, the shot can be taken without the user focusing manually.
In this embodiment, when shooting a non-stationary living body (a person, animal, or plant) or a non-stationary object, the displacement information of the shooting target in the preview frame of the terminal camera is first acquired and tracked; a predicted focus area is determined according to the displacement information and focusing is performed there; finally, whether the shooting target enters the predicted focus area is judged, and if so, the shooting is completed. Because focusing happens in advance in the predicted focus area and the shot is taken as soon as the target enters it, no manual focusing is required, avoiding the inaccurate focus and blurry photos caused by the target's changing position and improving the shooting effect; the shooting workflow is also simplified, the user's operation steps are reduced, and the user experience is improved.
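As an illustration of steps S10 to S40 above, the loop can be sketched in Python. Everything here (the linear one-step extrapolation, the fixed 40-pixel area, and all names) is an invented simplification for exposition, not the patent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class FocusArea:
    cx: float  # geometric center x in preview-frame pixels
    cy: float  # geometric center y
    w: float
    h: float

    def contains(self, x, y):
        return (abs(x - self.cx) <= self.w / 2 and
                abs(y - self.cy) <= self.h / 2)

def predict_focus_area(history):
    """S20 (simplified): extrapolate the last displacement one step
    ahead and center a fixed-size area on the predicted point."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return FocusArea(x1 + (x1 - x0), y1 + (y1 - y0), 40.0, 40.0)

def shoot_when_in_area(history, next_positions):
    """S30/S40: return the frame index at which the target enters the
    pre-focused area, or None if it never does."""
    area = predict_focus_area(history)           # focus in advance
    for i, (x, y) in enumerate(next_positions):  # track the target
        if area.contains(x, y):                  # target entered: shoot
            return i
    return None

# A target moving right by 10 px per frame enters the area immediately.
history = [(0.0, 50.0), (10.0, 50.0)]
print(shoot_when_in_area(history, [(15.0, 50.0), (20.0, 50.0)]))  # → 0
```

The later embodiments replace the naive linear extrapolation with the object-attribute lookup and trajectory prediction of steps S211/S212.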
Further, based on an embodiment of the terminal shooting method of this application, the displacement information includes the historical motion trajectory within a preset unit duration before the current moment. Referring to Fig. 3, step S20 includes:
Step S21: obtain, according to the historical motion trajectory, the predicted motion trajectory of the shooting target in the next preset unit duration.
The historical motion trajectory, one part of the displacement information, is the trajectory formed by the displacement the shooting target has already made in the preview frame of the terminal camera; the predicted motion trajectory for the next preset unit duration is the trajectory the target is likely to form in the preset unit of time after the current moment.
Step S22: determine the predicted focus area according to position points on the predicted motion trajectory.
The position points on the predicted motion trajectory are the points that form the predicted trajectory of the shooting target.
In this embodiment, the information acquisition module first obtains the historical motion trajectory of the shooting target; the predicted motion trajectory is then derived from it, and the predicted focus area is determined from position points on the predicted trajectory. The terminal shooting application can thus focus in the predicted focus area in advance, and the shot can be taken as soon as the target enters the area, avoiding blurry photos caused by inaccurate focus.
Further, based on an embodiment of the terminal shooting method of this application, the displacement information also includes the object attribute of the shooting target. Referring to Fig. 4, step S21 includes:
Step S211: determine the object attribute of the shooting target, and determine, according to the object attribute, the motion-pattern description table and predicted motion speed of the shooting target, where the motion-pattern description table includes motion reference points and the predicted motion direction mapped to each motion reference point.
The object attribute, another part of the displacement information, indicates that the shooting target may be one or more of a human, an animal, a plant, or an object. The predicted motion direction mapped to a motion reference point is the estimated direction of motion of the target at that reference point, derived from the general motion pattern of that kind of object. For a swinging pendulum clock, for example, if the selected motion reference point is the lowest point of the pendulum's trajectory, the predicted motion direction at that point is the tangent direction of the trajectory. The motion-pattern description table describes the mapping between motion reference points and predicted motion directions for different object attributes; from it one can look up a motion reference point and the predicted motion direction mapped to it. The predicted motion speed is the speed of the shooting target predicted from the general motion pattern of its object attribute. Because different object attributes imply different motion patterns and speeds, the object attribute of the shooting target must be determined first, and the motion pattern and predicted motion speed determined from it.
Step S212: determine the predicted motion trajectory of the shooting target in the next preset unit duration according to the motion-pattern description table, the predicted motion speed, and the historical motion trajectory.
The motion-pattern description table can be queried for the predicted motion direction mapped to the target's motion reference point; once the predicted motion direction and speed are determined, they are combined with the historical motion trajectory to determine the predicted motion trajectory of the shooting target.
In this embodiment, the object attribute of the shooting target is determined first; the motion reference point, predicted motion direction, and predicted motion speed are then determined from the object attribute; finally, combined with the historical motion trajectory, the predicted motion trajectory for the next preset unit duration is determined.
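The motion-pattern description table of step S211/S212 can be sketched as a plain mapping from (object attribute, motion reference point) to a predicted direction; the attributes, reference-point names, directions, and speeds below are invented examples for illustration, not values from the patent:

```python
import math

# Illustrative motion-pattern description table:
# attribute -> {motion reference point -> predicted direction (vector)}
MOTION_PATTERN_TABLE = {
    "pendulum": {
        "lowest_point": (1.0, 0.0),   # tangent of the arc at the bottom
        "left_highest": (1.0, -1.0),  # starting to swing down and right
    },
    "train": {
        "on_track": (1.0, 0.0),       # keeps moving along the track
    },
}
PREDICTED_SPEED = {"pendulum": 30.0, "train": 80.0}  # px per unit time

def predict_next_point(attribute, reference_point, current_xy, dt=1.0):
    """Look up the direction mapped to the current reference point and
    advance the current position by speed * dt along that direction."""
    dx, dy = MOTION_PATTERN_TABLE[attribute][reference_point]
    norm = math.hypot(dx, dy)
    speed = PREDICTED_SPEED[attribute]
    x, y = current_xy
    return (x + speed * dt * dx / norm, y + speed * dt * dy / norm)

print(predict_next_point("train", "on_track", (100.0, 200.0)))  # → (180.0, 200.0)
```

Repeating the lookup-and-advance step over several time increments yields the predicted motion trajectory for the next preset unit duration.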
Further, based on an embodiment of the terminal shooting method of this application, referring to Fig. 5, step S212 includes:
Step A10: determine the current motion reference point of the shooting target according to the historical motion trajectory.
The current motion reference point is the position point on the historical motion trajectory at the current moment.
Step A20: look up, in the motion-pattern description table, the predicted motion direction mapped to the current motion reference point.
Each motion reference point in the motion-pattern description table has its own corresponding predicted motion direction; only once the direction at the reference point is determined can the target's motion direction be estimated.
Step A30: determine the predicted motion trajectory according to the predicted motion speed and the predicted motion direction.
Once the predicted motion speed and direction are determined, the target's trajectory after the current motion reference point can be estimated, yielding the predicted motion trajectory.
In this embodiment, referring to Fig. 10, suppose the shooting target is a swinging pendulum clock. The pendulum's historical motion trajectory (the solid arc) runs from the highest point it reaches on the left to the point where it hangs vertically, i.e., the lowest point. When the lowest point of its trajectory is selected as the motion reference point, the predicted motion direction at that point is the tangent direction of the pendulum's trajectory; the predicted motion speed decreases gradually from its maximum at the lowest point; and the predicted motion trajectory (the dashed arc) runs from the lowest point to the highest point on the right.
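The Fig. 10 pendulum example ("speed decreases gradually from its maximum at the lowest point") matches the classical pendulum speed profile; the sketch below uses energy conservation to show that profile, with an illustrative release angle and arm length that are not from the patent:

```python
import math

def pendulum_predicted_speed(theta, theta_max=math.radians(30),
                             length=1.0, g=9.81):
    """Speed along the arc at swing angle theta (0 = lowest point) for a
    pendulum released from theta_max:
    v = sqrt(2 * g * L * (cos(theta) - cos(theta_max)))."""
    return math.sqrt(max(0.0, 2 * g * length *
                         (math.cos(theta) - math.cos(theta_max))))

v_bottom = pendulum_predicted_speed(0.0)               # maximum speed
v_top = pendulum_predicted_speed(math.radians(30))     # zero at the right top
print(round(v_bottom, 3), round(v_top, 3))
```

This is exactly the qualitative behavior the embodiment relies on: maximum speed at the lowest reference point, decreasing to zero as the dashed predicted arc reaches the right highest point.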
Further, based on an embodiment of the terminal shooting method of this application, referring to Fig. 6, step S22 includes:
Step S221: take a preset number of position points on the predicted motion trajectory, and determine the reference position center of the figure formed by these position points.
Referring to Fig. 11, if three position points are taken on the predicted motion trajectory, the center of the triangle formed by these three points is the reference position center.
Step S222: determine the predicted focus area with the reference position center as its geometric center.
After the reference position center is taken as the geometric center of the predicted focus area, the user can adjust the predicted focus area, for example by enlarging, shrinking, or rotating it, to obtain the final predicted focus area.
In this embodiment, by taking the reference position center of the figure formed by a preset number of position points on the predicted motion trajectory as the geometric center of the predicted focus area, the predicted focus area is made more likely to lie at the position the shooting target is about to move to.
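Step S221 can be sketched as a centroid computation over the sampled trajectory points; the three sample points below are illustrative:

```python
def reference_position_center(points):
    """Centroid of the sampled trajectory points; for three points this
    is the center of the triangle they form, as in Fig. 11."""
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)

# Three position points sampled on a predicted arc.
center = reference_position_center([(0.0, 0.0), (6.0, 0.0), (3.0, 3.0)])
print(center)  # → (3.0, 1.0); the predicted focus area is centered here
```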
Further, based on an embodiment of the terminal shooting method of this application, referring to Fig. 7, step S222 includes:
Step A40: take the reference position center as the geometric center of the overall area formed by the shooting targets.
After the reference position center is determined, it can be taken as the geometric center of the overall area formed by the shooting targets; that is, the reference center of the selection box covering all shooting targets is determined first.
Step A50: determine the display size of the overall area according to the screen-occupancy ratio of each shooting target relative to the terminal screen, to generate the predicted focus area.
The display size of the overall area is determined by the proportion of the terminal screen each shooting target occupies: the larger a target's screen-occupancy ratio, the larger the display size of the overall area.
In this embodiment, after the geometric center of the overall area is set to the reference position center and its display size is determined from the screen-occupancy ratio, the overall area at that moment is used as the predicted focus area. The resulting predicted focus area is moderately sized: it can contain all shooting targets without being excessively large, and it lies on the predicted motion trajectory, which facilitates subsequent accurate focusing on the shooting targets.
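Steps A40/A50 can be sketched as centering a box on the reference position center and scaling its side with the targets' combined screen-occupancy ratio; the square shape, scale factor, and margin are assumptions for illustration, not specified by the patent:

```python
def predicted_focus_area(center, target_boxes, screen_w, screen_h,
                         margin=1.2):
    """center: reference position center; target_boxes: (w, h) of each
    shooting target in pixels. The area's side grows with the fraction
    of the screen the targets occupy."""
    occupancy = sum(w * h for w, h in target_boxes) / (screen_w * screen_h)
    side = margin * occupancy ** 0.5 * min(screen_w, screen_h)
    cx, cy = center
    return (cx, cy, side, side)  # (center x, center y, width, height)

# Two targets on a 1080x1920 preview; the larger their screen share,
# the larger the generated focus area.
area = predicted_focus_area((540.0, 960.0), [(200, 300), (100, 150)],
                            1080, 1920)
print(area)
```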
Further, based on an embodiment of the terminal shooting method of this application, step S30 includes:
judging whether the geometric center of the overall area formed by the shooting targets coincides with the geometric center of the predicted focus area; if they coincide, it is determined that the shooting targets have entered the predicted focus area.
When there is a single shooting target, referring to Fig. 12, the geometric center is the target's center of gravity; when there is more than one, referring to Fig. 13, the geometric center is the center of gravity of the overall area formed by the targets. With more than one target, the changes in speed and direction of the individual targets during motion are not necessarily consistent, so the center of gravity changes in real time. When the geometric center of the shooting targets coincides with the geometric center of the predicted focus area, the two are at the same position in the preview frame of the terminal camera, from which it follows that the shooting targets have entered the predicted focus area.
In this embodiment, by judging whether the shooting target has entered the focus area and shooting once it has, the problem of inaccurate focus is avoided and the shooting effect is improved.
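The coincidence test of step S30 can be sketched as comparing the center of the bounding box covering all targets with the focus area's center; the pixel tolerance standing in for exact coincidence is an assumption of this sketch:

```python
def overall_center(target_boxes):
    """Center of the bounding box covering all targets; each target is
    (x_min, y_min, x_max, y_max) in preview-frame coordinates."""
    x_min = min(b[0] for b in target_boxes)
    y_min = min(b[1] for b in target_boxes)
    x_max = max(b[2] for b in target_boxes)
    y_max = max(b[3] for b in target_boxes)
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

def entered_focus_area(target_boxes, area_center, tol=2.0):
    """True when the targets' overall geometric center coincides (within
    tol pixels) with the predicted focus area's geometric center."""
    cx, cy = overall_center(target_boxes)
    ax, ay = area_center
    return abs(cx - ax) <= tol and abs(cy - ay) <= tol

# Two targets whose overall bounding box is centered on the area center.
print(entered_focus_area([(10, 10, 30, 30), (50, 40, 70, 60)], (40.0, 35.0)))  # → True
```

With more than one target, the two bounding-box extremes shift independently as the targets move, which is why the overall center must be recomputed every frame, as the embodiment notes.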
Further, based on an embodiment of the terminal shooting method of this application, the steps before step S10 include:
Step A60: detect whether the shooting target in the preview frame of the terminal camera is a non-stationary living body or a non-stationary object.
The shooting target is the non-stationary living body (a person or a living animal or plant) or object that the user wants to shoot, such as a walking person, a lively pet, a plant swaying in the wind, a moving train, or a swinging pendulum clock.
Step A70: if so, execute the step of acquiring and tracking the displacement information of the shooting target in the preview frame of the terminal camera.
The displacement information of the shooting target is acquired only when the target is a non-stationary living body or a non-stationary object.
In this embodiment, it is first judged whether the shooting target is a non-stationary living body or a non-stationary object, and only then is the step of acquiring its displacement information executed, which improves the accuracy of the acquired information.
Further, based on an embodiment of the terminal shooting method of this application, step S40 includes:
if the shooting target enters the predicted focus area, the user may manually confirm the shot, or the shooting module may be started to shoot automatically.
When the shooting target enters the predicted focus area, focusing on it has been completed. At this point the user can manually tap the confirmation box to complete the shooting of the target, or the application can start the shooting module to complete it automatically; automatic shooting produces better results than manual shooting.
In this embodiment, the shooting of the target can be completed either manually or automatically: the manual mode lets the user choose the moment of the shot, while automatic shooting produces better results than manual shooting.
In addition, referring to Fig. 8, an embodiment of this application also proposes a terminal shooting apparatus, which includes:
an information acquisition module, configured to acquire and track displacement information of the shooting target in the preview frame of the terminal camera;
an area determination module, configured to determine a predicted focus area according to the displacement information and perform focusing in the predicted focus area;
a judgment module, configured to judge whether the shooting target enters the predicted focus area;
a shooting module, configured to complete the shooting of the shooting target if it enters the predicted focus area.
Optionally, referring to Fig. 8, the terminal shooting apparatus further includes:
a detection module, configured to detect whether the shooting target in the preview frame of the terminal camera is a non-stationary living body or a non-stationary object and, if so, trigger the information acquisition module to execute the step of acquiring and tracking the displacement information of the shooting target in the preview frame.
Optionally, referring to Fig. 9, the area determination module includes a prediction unit and a determination unit:
the prediction unit is configured to obtain, according to the historical motion trajectory, the predicted motion trajectory of the shooting target in the next preset unit duration;
the determination unit is configured to determine the predicted focus area according to position points on the predicted motion trajectory.
Optionally, the prediction unit is further configured to:
determine the object attribute of the shooting target, and determine, according to the object attribute, the motion-pattern description table and predicted motion speed of the shooting target, where the motion-pattern description table includes motion reference points and the predicted motion direction mapped to each motion reference point;
determine the predicted motion trajectory of the shooting target in the next preset unit duration according to the motion-pattern description table, the predicted motion speed, and the historical motion trajectory.
Optionally, the prediction unit is further configured to:
determine the current motion reference point of the shooting target according to the historical motion trajectory;
look up, in the motion-pattern description table, the predicted motion direction mapped to the current motion reference point;
determine the predicted motion trajectory according to the predicted motion speed and the predicted motion direction.
Optionally, the determination unit is further configured to:
take a preset number of position points on the predicted motion trajectory, and determine the reference position center of the figure they form;
determine the predicted focus area with the reference position center as its geometric center.
Optionally, the determination unit is further configured to:
take the reference position center as the geometric center of the overall area formed by the shooting targets;
determine the display size of the overall area according to the screen-occupancy ratio of each shooting target relative to the terminal screen, to generate the predicted focus area.
Optionally, the judgment module is further configured to:
judge whether the geometric center of the overall area formed by the shooting targets coincides with the geometric center of the predicted focus area and, if they coincide, determine that the shooting targets have entered the predicted focus area.
For the steps implemented by the functional modules of the terminal shooting apparatus, reference may be made to the embodiments of the terminal shooting method of this application; they are not repeated here.
In addition, this application also provides a mobile terminal, which includes: a memory 109, a processor 110, and a terminal shooting program stored in the memory 109 and runnable on the processor 110; when the terminal shooting program is executed by the processor 110, the steps of the embodiments of the terminal shooting method described above are implemented.
In addition, this application also provides a computer-readable storage medium storing one or more programs, which may be executed by one or more processors to implement the steps of the embodiments of the terminal shooting method described above.
The specific implementations of the apparatus, mobile terminal, and readable storage medium (i.e., computer-readable storage medium) of this application are substantially the same as the embodiments of the terminal shooting method described above and are not elaborated here.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the specific implementations described, which are merely illustrative rather than restrictive. Under the inspiration of this application, a person of ordinary skill in the art may devise many further forms without departing from the spirit of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (13)

  1. A terminal shooting method, wherein the terminal shooting method comprises:
    acquiring and tracking displacement information of a shooting target in a preview frame of a terminal camera;
    determining a predicted focus area according to the displacement information, and performing focusing in the predicted focus area;
    judging whether the shooting target enters the predicted focus area;
    if the shooting target enters the predicted focus area, completing the shooting of the shooting target.
  2. The terminal shooting method according to claim 1, wherein the displacement information comprises a historical motion trajectory within a preset unit duration before the current moment, and the step of determining a predicted focus area according to the displacement information comprises:
    obtaining, according to the historical motion trajectory, a predicted motion trajectory of the shooting target in the next preset unit duration;
    determining the predicted focus area according to position points on the predicted motion trajectory.
  3. The terminal shooting method according to claim 2, wherein the displacement information further comprises an object attribute of the shooting target, and the step of obtaining, according to the historical motion trajectory, a predicted motion trajectory of the shooting target in the next preset unit duration comprises:
    determining the object attribute of the shooting target, and determining, according to the object attribute, a motion-pattern description table and a predicted motion speed of the shooting target, wherein the motion-pattern description table comprises motion reference points and the predicted motion direction mapped to each motion reference point;
    determining the predicted motion trajectory of the shooting target in the next preset unit duration according to the motion-pattern description table, the predicted motion speed, and the historical motion trajectory.
  4. The terminal shooting method according to claim 3, wherein the step of determining the predicted motion trajectory according to the motion-pattern description table, the predicted motion speed, and the historical motion trajectory comprises:
    determining the current motion reference point of the shooting target according to the historical motion trajectory;
    looking up, in the motion-pattern description table, the predicted motion direction mapped to the current motion reference point;
    determining the predicted motion trajectory according to the predicted motion speed and the predicted motion direction.
  5. The terminal shooting method according to claim 2, wherein the step of determining the predicted focus area according to position points on the predicted motion trajectory comprises:
    taking a preset number of position points on the predicted motion trajectory, and determining the reference position center of the figure formed by these position points;
    determining the predicted focus area with the reference position center as its geometric center.
  6. The terminal shooting method according to claim 5, wherein the step of determining the predicted focus area with the reference position center as its geometric center comprises:
    taking the reference position center as the geometric center of the overall area formed by the shooting targets;
    determining the display size of the overall area according to the screen-occupancy ratio of each shooting target relative to the terminal screen, to generate the predicted focus area.
  7. The terminal shooting method according to claim 1, wherein the step of judging whether the shooting target enters the predicted focus area comprises:
    judging whether the geometric center of the overall area formed by the shooting targets coincides with the geometric center of the predicted focus area, and if they coincide, determining that the shooting targets have entered the predicted focus area.
  8. The terminal shooting method according to claim 1, wherein before the step of acquiring and tracking displacement information of the shooting target in the preview frame of the terminal camera, the method comprises:
    detecting whether the shooting target in the preview frame of the terminal camera is a non-stationary living body or a non-stationary object;
    if so, executing the step of acquiring and tracking the displacement information of the shooting target in the preview frame of the terminal camera.
  9. The terminal shooting method according to claim 1, wherein the step of completing the shooting of the shooting target if the shooting target enters the predicted focus area comprises:
    if the shooting target enters the predicted focus area, allowing the user to manually confirm the shot or starting the shooting module to shoot automatically.
  10. A terminal shooting apparatus, wherein the terminal shooting apparatus comprises:
    an information acquisition module, configured to acquire and track displacement information of a shooting target in a preview frame of a terminal camera;
    an area determination module, configured to determine a predicted focus area according to the displacement information and perform focusing in the predicted focus area;
    a judgment module, configured to judge whether the shooting target enters the predicted focus area;
    a shooting module, configured to complete the shooting of the shooting target if the shooting target enters the predicted focus area.
  11. The terminal shooting apparatus according to claim 10, wherein the area determination module of the terminal shooting apparatus comprises a prediction unit and a determination unit:
    the prediction unit is configured to obtain, according to the historical motion trajectory, the predicted motion trajectory of the shooting target in the next preset unit duration;
    the determination unit is configured to determine the predicted focus area according to position points on the predicted motion trajectory.
  12. A mobile terminal, wherein the mobile terminal comprises: a memory, a processor, and a terminal shooting program stored in the memory and runnable on the processor, and when the terminal shooting program is executed by the processor, the following steps are implemented:
    acquiring and tracking displacement information of a shooting target in a preview frame of a terminal camera;
    determining a predicted focus area according to the displacement information, and performing focusing in the predicted focus area;
    judging whether the shooting target enters the predicted focus area;
    if the shooting target enters the predicted focus area, completing the shooting of the shooting target.
  13. A storage medium, wherein a terminal shooting program is stored on the storage medium, and when the terminal shooting program is executed by a processor, the following steps are implemented:
    acquiring and tracking displacement information of a shooting target in a preview frame of a terminal camera;
    determining a predicted focus area according to the displacement information, and performing focusing in the predicted focus area;
    judging whether the shooting target enters the predicted focus area;
    if the shooting target enters the predicted focus area, completing the shooting of the shooting target.
PCT/CN2019/122776 2019-09-12 2019-12-03 Terminal shooting method and apparatus, mobile terminal, and readable storage medium WO2021047070A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910873337.6 2019-09-12
CN201910873337.6A CN110505408B (zh) 2019-09-12 2019-09-12 终端拍摄方法、装置、移动终端及可读存储介质

Publications (1)

Publication Number Publication Date
WO2021047070A1 true WO2021047070A1 (zh) 2021-03-18

Family

ID=68591921

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/122776 WO2021047070A1 (zh) 2019-09-12 2019-12-03 终端拍摄方法、装置、移动终端及可读存储介质

Country Status (2)

Country Link
CN (1) CN110505408B (zh)
WO (1) WO2021047070A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546111A (zh) * 2022-09-13 2022-12-30 武汉海微科技有限公司 曲面屏检测方法、装置、设备及存储介质

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110505408B (zh) * 2019-09-12 2021-07-27 深圳传音控股股份有限公司 终端拍摄方法、装置、移动终端及可读存储介质
CN110933303B (zh) * 2019-11-27 2021-05-18 维沃移动通信(杭州)有限公司 拍照方法及电子设备
CN112312005A (zh) * 2020-02-12 2021-02-02 北京字节跳动网络技术有限公司 图像获取方法及装置
WO2021258321A1 (zh) * 2020-06-24 2021-12-30 华为技术有限公司 一种图像获取方法以及装置
CN114979455A (zh) * 2021-02-25 2022-08-30 北京小米移动软件有限公司 拍摄方法、装置以及存储介质
CN113724338B (zh) * 2021-08-31 2024-05-03 上海西井科技股份有限公司 基于球台拍摄移动对象的方法、系统、设备及存储介质
CN113780214B (zh) * 2021-09-16 2024-04-19 上海西井科技股份有限公司 基于人群进行图像识别的方法、系统、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105554367A (zh) * 2015-09-30 2016-05-04 宇龙计算机通信科技(深圳)有限公司 一种运动拍摄方法及移动终端
JP2017103601A (ja) * 2015-12-01 2017-06-08 株式会社ニコン 焦点検出装置およびカメラ
CN106961552A (zh) * 2017-03-27 2017-07-18 联想(北京)有限公司 一种对焦控制方法及电子设备
US20180007254A1 (en) * 2016-06-30 2018-01-04 Canon Kabushiki Kaisha Focus adjusting apparatus, focus adjusting method, and image capturing apparatus
CN110505408A (zh) * 2019-09-12 2019-11-26 深圳传音控股股份有限公司 终端拍摄方法、装置、移动终端及可读存储介质

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101387812B (zh) * 2007-09-13 2011-03-23 鸿富锦精密工业(深圳)有限公司 相机自动对焦系统及方法
CN101247286B (zh) * 2008-03-21 2011-01-05 中兴通讯股份有限公司 一种对视频分发系统进行服务质量检测方法和系统
CN102056010A (zh) * 2009-11-02 2011-05-11 鸿富锦精密工业(深圳)有限公司 笔记本电脑照相机功能自动测试系统及方法
CN103369227A (zh) * 2012-03-26 2013-10-23 联想(北京)有限公司 一种运动对象的拍照方法及电子设备
CN103929596B (zh) * 2014-04-30 2016-09-14 努比亚技术有限公司 引导拍摄构图的方法及装置
CN104125433A (zh) * 2014-07-30 2014-10-29 西安冉科信息技术有限公司 基于多球机联动结构的视频运动目标监控方法
CN105827928A (zh) * 2015-01-05 2016-08-03 中兴通讯股份有限公司 一种选择对焦区域的方法及装置
CN106060373B (zh) * 2015-04-03 2019-12-20 佳能株式会社 焦点检测装置及其控制方法
US10009536B2 (en) * 2016-06-12 2018-06-26 Apple Inc. Applying a simulated optical effect based on data received from multiple camera sensors
CN106357973A (zh) * 2016-08-26 2017-01-25 深圳市金立通信设备有限公司 一种聚焦的方法及终端
JPWO2018062368A1 (ja) * 2016-09-30 2019-08-15 株式会社ニコン 撮像装置および撮像システム
CN106454135B (zh) * 2016-11-29 2019-11-01 维沃移动通信有限公司 一种拍照提醒方法及移动终端
CN107124556B (zh) * 2017-05-31 2021-03-02 Oppo广东移动通信有限公司 对焦方法、装置、计算机可读存储介质和移动终端

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105554367A (zh) * 2015-09-30 2016-05-04 宇龙计算机通信科技(深圳)有限公司 一种运动拍摄方法及移动终端
JP2017103601A (ja) * 2015-12-01 2017-06-08 株式会社ニコン 焦点検出装置およびカメラ
US20180007254A1 (en) * 2016-06-30 2018-01-04 Canon Kabushiki Kaisha Focus adjusting apparatus, focus adjusting method, and image capturing apparatus
CN106961552A (zh) * 2017-03-27 2017-07-18 联想(北京)有限公司 一种对焦控制方法及电子设备
CN110505408A (zh) * 2019-09-12 2019-11-26 深圳传音控股股份有限公司 终端拍摄方法、装置、移动终端及可读存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546111A (zh) * 2022-09-13 2022-12-30 武汉海微科技有限公司 曲面屏检测方法、装置、设备及存储介质
CN115546111B (zh) * 2022-09-13 2023-12-05 武汉海微科技有限公司 曲面屏检测方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN110505408B (zh) 2021-07-27
CN110505408A (zh) 2019-11-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945085

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945085

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/08/2022)
