CN108307105B - Shooting method, terminal and computer readable storage medium


Info

Publication number
CN108307105B
Authority
CN
China
Prior art keywords
video
shooting
picture
video picture
camera
Prior art date
Legal status
Active
Application number
CN201711449205.8A
Other languages
Chinese (zh)
Other versions
CN108307105A (en)
Inventor
朱艺师
Current Assignee
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201711449205.8A
Publication of CN108307105A
Application granted
Publication of CN108307105B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/62 - Control of parameters via user interfaces
    • H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N 23/80 - Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a shooting method, which comprises the steps of: selecting a first video picture after a panoramic video shooting mode is started, and shooting the first video picture for a preset duration through a camera; moving the shooting direction of the camera according to user operation, and acquiring a plurality of video pictures shot by the camera; and synthesizing all the shot video pictures according to the following preset rules to obtain a panoramic video: acquiring an Nth video picture and an (N+1)th video picture, wherein the Nth video picture and the (N+1)th video picture have an overlapping area; deleting, from the (N+1)th video picture, the part in the overlapping area; and splicing the Nth video picture with the remaining part of the (N+1)th video picture. The invention also provides a terminal and a computer readable storage medium. During shooting, dynamic people and objects are presented in the panoramic video in a dynamic form, the picture effect is more striking and magnificent, and the user experience is improved.

Description

Shooting method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of terminal technologies, and in particular, to a photographing method, a terminal, and a computer-readable storage medium.
Background
Panoramic photography is popular among many photographers because of its ability to capture a wider viewing angle and richer content, and is particularly suitable for photographing large-scale activities, beautiful natural landscapes, and the like.
However, at present, terminals such as mobile phones, tablet computers (PAD), and cameras only support panoramic shooting of still pictures and do not support panoramic shooting of dynamic videos; a still picture cannot visually and vividly reflect the atmosphere at the time of shooting, so the user experience is poor.
Disclosure of Invention
The invention mainly aims to provide a shooting method, a terminal and a computer readable storage medium, and aims to solve the problems that the prior art does not support the function of panoramic shooting of dynamic videos and the user experience is poor.
In order to solve the above technical problem, the present invention provides a shooting method, including the steps of:
after a panoramic video shooting mode is started, selecting a first video picture, and shooting the first video picture for a preset duration through a camera;
moving the shooting direction of the camera according to user operation, and acquiring a plurality of video pictures shot by the camera in the process of moving the shooting direction of the camera;
synthesizing all the shot video pictures according to the following preset rules to obtain a panoramic video:
acquiring an Nth video picture and an (N + 1) th video picture, wherein the Nth video picture and the (N + 1) th video picture have an overlapping area;
deleting the video pictures in the overlapped area in the (N + 1) th video picture;
splicing the Nth video picture with the rest video pictures of the (N + 1) th video picture;
and N is an integer greater than or equal to 1.
Optionally, in the process of moving the shooting direction of the camera, acquiring the plurality of video pictures shot by the camera includes: and shooting the plurality of video pictures with the same at least one shooting parameter in the process of moving the shooting direction of the camera.
Optionally, in the process of moving the shooting direction of the camera, shooting the plurality of video pictures with the same at least one shooting parameter includes: and shooting the plurality of video pictures with the same exposure and white balance in the process of moving the shooting direction of the camera.
Optionally, in the process of shooting the first video frame by the camera for the preset duration, the method further includes the following steps:
acquiring a central point when the first video picture is shot, and recording the central point as a reference central point;
in the process of moving the shooting direction of the camera and acquiring a plurality of video pictures shot by the camera, the method further comprises the following steps:
acquiring a central point when the Nth video picture is shot;
calculating the degree of deviation of the central point of the Nth video picture from the reference central point;
and if the deviation degree is greater than a first preset threshold and less than or equal to a second preset threshold, filling a deviation part with a preset picture.
Optionally, the method further comprises the following steps: and if the deviation degree is greater than the second preset threshold value, stopping shooting and prompting the user to shoot again.
Optionally, the method further comprises the following steps: and if the deviation degree is greater than the second preset threshold value, deleting the whole Nth video picture, synthesizing the first N-1 video pictures, and storing.
Optionally, after synthesizing all the shot video frames to obtain a panoramic video, the method further includes the following steps: and automatically playing the panoramic video.
Optionally, the method further comprises the following steps: when it is detected that the shooting direction of the camera moves in the reverse direction, automatically stopping shooting, and storing the obtained panoramic video.
Further, the present invention provides a terminal comprising a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the photographing method as described above.
Further, the present invention provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the photographing method as described above.
Advantageous effects
The invention provides a shooting method, a terminal and a computer readable storage medium, wherein the shooting method comprises the following steps: after a panoramic video shooting mode is started, selecting a first video picture, and shooting the first video picture for a preset duration through a camera; moving the shooting direction of the camera according to user operation, and acquiring a plurality of video pictures shot by the camera in the process of moving the shooting direction of the camera; synthesizing all the shot video pictures according to the following preset rules to obtain a panoramic video: acquiring an Nth video picture and an (N + 1) th video picture, wherein the Nth video picture and the (N + 1) th video picture have an overlapping area; deleting the video pictures in the overlapped area in the (N + 1) th video picture; splicing the Nth video picture with the rest video pictures of the (N + 1) th video picture; n is an integer greater than or equal to 1; by the scheme, the panoramic video is obtained, dynamic people and objects are presented in the panoramic video in a dynamic mode during shooting, the shooting atmosphere can be visually and vividly reflected, the picture effect of the panoramic video is more shocking and vivid, and the user experience is improved.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a schematic diagram of a hardware structure of an optional terminal for implementing various embodiments of the present invention;
fig. 2 is a flowchart of a photographing method according to a first embodiment of the present invention;
FIG. 3 is a reference diagram of a panoramic video shot according to various embodiments of the present invention;
fig. 4 is a display schematic diagram of displaying, on a shooting preview interface, a mark indicating a position of a center point according to various embodiments of the present invention;
FIG. 5 is a reference diagram of synthesizing a first video picture A1 and a second video picture A2 according to various embodiments of the present invention;
FIG. 6 is a reference diagram of synthesizing a video picture A1A2 and a third video picture A3 according to various embodiments of the present invention;
FIG. 7 is a reference diagram of synthesizing a video picture A1A2A3…AN-1 and an Nth video picture AN according to various embodiments of the present invention;
fig. 8 is a flowchart of a photographing method according to a second embodiment of the present invention;
fig. 9 is a schematic diagram of a terminal according to a third embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention, and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a tablet computer, a camera, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a digital TV, a desktop computer, and the like.
It will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal in addition to elements particularly used for moving purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of an optional terminal for implementing various embodiments of the present invention, the terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and the like. Those skilled in the art will appreciate that the terminal configuration shown in fig. 1 is not intended to be limiting, and that the terminal may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes the various components of the terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA1000(Code Division Multiple Access 1000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), and TDD-LTE (Time Division duplex-Long Term Evolution).
WiFi belongs to short-distance wireless transmission technology, and the terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals. The a/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The terminal 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal 100. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal 100 or may be used to transmit data between the terminal 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The processor 110 is a control center of the terminal 100, connects various parts of the entire terminal 100 using various interfaces and lines, performs various functions of the terminal 100 and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal 100. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
Although not shown in fig. 1, the terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
Based on the above terminal hardware structure, the present invention is described in detail below by specific embodiments.
First embodiment
In this embodiment, a shooting method is provided. The shooting method of this embodiment is applicable to all electronic devices with video shooting functions, such as mobile phones, tablet computers, cameras, and the like. Referring to fig. 2, fig. 2 is a flowchart of the shooting method provided in this embodiment, where the shooting method includes the following steps:
s201: after the panoramic video shooting mode is started, selecting a first video picture, and shooting the first video picture for a preset duration through a camera;
the first video picture is the initial video picture;
the preset duration can be decided by a user;
a value of the preset duration may also be set by default, and the user is reminded to shoot again when the duration for which the user shoots the first video picture is less than this set value; when shooting the first video picture, the user may also shoot for longer than the set preset duration;
optionally, the preset duration is 2 seconds, that is, when the first video picture is shot, the camera stays on it for 2 seconds, and the system then stores the first video picture.
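For illustration, this minimum-duration check can be sketched as follows; the frame rate, the default duration value, and the prompt_reshoot() helper are assumptions used only for this sketch and are not part of this embodiment:

```python
# Illustrative sketch of the minimum-duration check for the first video picture.
# PRESET_DURATION, FRAME_RATE, and prompt_reshoot() are assumed placeholders.

PRESET_DURATION = 2.0   # seconds; assumed default value of the preset duration
FRAME_RATE = 30.0       # assumed camera frame rate


def prompt_reshoot():
    # Hypothetical UI callback; a real terminal would show an on-screen prompt.
    print("The first video picture is too short, please shoot again.")


def check_first_picture(frames):
    """Return True if the first video picture lasts at least the preset duration."""
    duration = len(frames) / FRAME_RATE
    if duration < PRESET_DURATION:
        prompt_reshoot()
        return False
    return True   # shooting longer than the preset duration is also allowed
```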
Optionally, in the process of shooting the first video frame for the preset duration through the camera in S201, the method further includes the following steps:
acquiring a central point when a first video picture is shot and recording the central point as a reference central point;
the center point when the subsequent video picture is taken may be referenced to the reference center point.
Referring to fig. 3, fig. 3 is a reference schematic diagram of panoramic video shooting provided in this embodiment, in fig. 3, the whole shooting range of a panoramic video is shown, a frame on the left side is a picture at one time point of a first video picture, a mobile phone is uniformly moved to the right side to record a video, and a line in the middle of a picture on the right side is a position where a center point is located.
For the user to know the center point during shooting, a mark indicating the position of the center point may be displayed on the shooting preview interface, for example, referring to fig. 4, fig. 4 is a schematic display diagram of displaying the mark indicating the position of the center point on the shooting preview interface according to this embodiment.
S202: moving the shooting direction of the camera according to user operation, and acquiring a plurality of video pictures shot by the camera in the process of moving the shooting direction of the camera;
the method comprises the steps that a user moves the shooting direction of a camera, and a plurality of video pictures shot by the camera are obtained in the process of moving the shooting direction of the camera;
in order to ensure that the shot video effect is good, a user can preferably move uniformly in the process of moving the shooting direction of the camera, wherein the uniform movement means that the deviation degree of the central point from the reference central point is small when the video picture is shot, and the moving speed is moderate.
Because the panoramic video is synthesized by a plurality of video pictures, and the effects of the plurality of video pictures are not the same, in order to ensure that the synthesized panoramic video has a good effect, optionally, in the process of moving the shooting direction of the camera, the step S202 of acquiring the plurality of video pictures shot by the camera includes: in the process of moving the shooting direction of the camera, a plurality of video pictures are shot with the same at least one shooting parameter.
The shooting parameters comprise exposure, white balance, color temperature and the like;
optionally, in the process of moving the shooting direction of the camera, shooting a plurality of video pictures with the same at least one shooting parameter includes: in the process of moving the shooting direction of the camera, shooting the plurality of video pictures with the same exposure and white balance. If all video pictures are shot with the same exposure and white balance settings, brightness and color seams are avoided in the subsequent synthesis, and the final picture effect is good.
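For illustration, a minimal sketch of capturing all video pictures with locked exposure and white balance is given below; the camera object and its measure/set/capture methods are hypothetical placeholders for the terminal's actual camera interface:

```python
# Illustrative sketch of shooting all video pictures with the same exposure and
# white balance. The camera object and its methods are hypothetical placeholders.

def capture_with_locked_parameters(camera, num_pictures, frames_per_picture):
    # Measure exposure and white balance once, on the first video picture, then
    # keep the same values for every later picture so that the synthesized
    # panorama has no brightness or color seams.
    camera.set_exposure(camera.measure_exposure())
    camera.set_white_balance(camera.measure_white_balance())

    pictures = []
    for _ in range(num_pictures):
        frames = [camera.capture_frame() for _ in range(frames_per_picture)]
        pictures.append(frames)
    return pictures
```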
S203: synthesizing all the shot video pictures according to the following preset rules to obtain a panoramic video:
acquiring an Nth video picture and an (N + 1) th video picture, wherein the Nth video picture and the (N + 1) th video picture have an overlapping area;
deleting the video pictures in the overlapped area in the (N + 1) th video picture;
splicing the nth video picture with the rest video pictures of the (N + 1) th video picture;
n is an integer of 1 or more.
During synthesis, video pictures can be synthesized in sequence according to the shooting time sequence;
first, the part of the second video picture (denoted as A2) that overlaps with the first video picture (denoted as A1) is deleted, and then the first video picture A1 and the remaining part of the second video picture A2 are spliced; the video picture obtained by this splicing is denoted as A1A2. For example, referring to fig. 5, fig. 5 is a reference diagram of synthesizing the first video picture A1 and the second video picture A2 provided in this embodiment;
then, the part of the third video picture (denoted as A3) that overlaps with A1A2 is deleted, and then A1A2 and the remaining part of the third video picture A3 are spliced; the video picture obtained by this splicing is denoted as A1A2A3. For example, referring to fig. 6, fig. 6 is a reference diagram of synthesizing the video picture A1A2 and the third video picture A3 provided in this embodiment;
and so on;
finally, the part of the Nth video picture (denoted as AN) that overlaps with A1A2A3…AN-1 is deleted, and then A1A2A3…AN-1 and the remaining part of the Nth video picture AN are spliced; the video picture obtained by this splicing is denoted as A1A2A3…AN-1AN. For example, referring to fig. 7, fig. 7 is a reference diagram of synthesizing the video picture A1A2A3…AN-1 and the Nth video picture AN provided in this embodiment; at this point, the video picture A1A2A3…AN-1AN is the panoramic video.
In fig. 5 to 7, the thickness of the lines is merely used to distinguish different video pictures and has no practical meaning.
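For illustration, the splicing rule above can be sketched as follows, assuming each video picture is a list of frames stored as NumPy arrays of shape (height, width, 3), all pictures contain the same number of frames, and the width of the overlapping area between consecutive pictures is known; these representations are assumptions of the sketch, not requirements of this embodiment:

```python
import numpy as np


def splice_pair(picture_n, picture_n1, overlap_width):
    """Delete the overlapping area from picture N+1 and splice it onto picture N.

    picture_n, picture_n1: lists of frames (H x W x 3 arrays) of equal length.
    overlap_width: width in pixels of the area of picture N+1 that overlaps picture N.
    """
    spliced = []
    for frame_n, frame_n1 in zip(picture_n, picture_n1):
        remaining = frame_n1[:, overlap_width:]                 # remaining part of picture N+1
        spliced.append(np.concatenate([frame_n, remaining], axis=1))
    return spliced


def synthesize_panorama(pictures, overlap_widths):
    """Synthesize all shot video pictures, in shooting order, into one panoramic video."""
    panorama = pictures[0]                                      # A1
    for picture, overlap in zip(pictures[1:], overlap_widths):
        panorama = splice_pair(panorama, picture, overlap)      # A1A2, A1A2A3, ...
    return panorama
```

Applied to the pictures captured in S202 in shooting order, synthesize_panorama would yield the A1A2A3…AN-1AN picture described above.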
Optionally, in the process of moving the shooting direction of the camera and acquiring a plurality of video frames shot by the camera in S202, the method further includes the following steps:
acquiring a central point when the Nth video picture is shot;
calculating the degree of deviation of the center point of the Nth video picture from the reference center point;
and if the deviation degree is greater than the first preset threshold and less than or equal to the second preset threshold, filling the deviation part with a preset picture.
The preset picture may be any picture, either a dynamic video or a static picture; it may be obtained by cropping, according to the size of the deviated area, a piece of dynamic video or a static picture that occupies the whole screen.
Alternatively, the preset picture may be a video picture copied and cropped from the (N-1)th video picture at the same height as the deviated part.
Optionally, the method further comprises the following steps: and if the deviation degree is greater than a second preset threshold value, stopping shooting and prompting the user to shoot again.
If the center point deviates too much from the reference center point, the deviated area is large and is generally displayed as black when the video pictures are synthesized, so the effect of the synthesized video picture is not ideal, and the picture may even be unwatchable; therefore, in the case where the center point deviates too much from the reference center point, shooting is stopped and the user is prompted to shoot again.
Optionally, if it is detected that the degree of deviation of the center point of the Nth video picture from the reference center point is greater than the second preset threshold, then, since the degree of deviation of the center points of the first N-1 video pictures from the reference center point does not exceed the second preset threshold, the deviated parts of the first N-1 video pictures are small; in this case, only the first N-1 video pictures may be synthesized and stored, and the Nth video picture is deleted entirely, so that the synthesized first N-1 video pictures still give a good effect.
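For illustration, the two-threshold deviation handling can be sketched as follows, assuming the degree of deviation is measured as the vertical pixel offset of the current center point from the reference center point, the preset picture is an image of the same size as a frame, and frames are NumPy image arrays; the threshold values are arbitrary placeholders:

```python
# Illustrative sketch of handling center-point deviation with two preset thresholds.
# FIRST_THRESHOLD and SECOND_THRESHOLD are assumed placeholder values in pixels.

FIRST_THRESHOLD = 20
SECOND_THRESHOLD = 60


def handle_deviation(frame, center_y, reference_center_y, preset_picture):
    """Return (frame, ok): ok is False when the deviation exceeds the second threshold,
    in which case shooting should stop (or the Nth picture should be discarded)."""
    deviation = int(abs(center_y - reference_center_y))
    if deviation <= FIRST_THRESHOLD:
        return frame, True                         # small deviation: keep the frame as-is
    if deviation <= SECOND_THRESHOLD:
        filled = frame.copy()
        if center_y > reference_center_y:          # picture drifted one way: fill the top strip
            filled[:deviation] = preset_picture[:deviation]
        else:                                      # drifted the other way: fill the bottom strip
            filled[-deviation:] = preset_picture[-deviation:]
        return filled, True
    return frame, False                            # too much deviation: stop and prompt re-shoot
```

The check could be run on each captured picture; once it returns False, the terminal either prompts the user to shoot again or synthesizes only the first N-1 pictures, as described above.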
Optionally, after S203 synthesizes all the shot video frames to obtain a panoramic video, the method further includes the following steps: and automatically playing the panoramic video. The user can check the shooting effect immediately after the panoramic video is synthesized without manually clicking the panoramic video.
Optionally, the method further comprises the following steps: when it is detected that the shooting direction of the camera moves in the reverse direction, shooting is automatically stopped, and the obtained panoramic video is stored.
For example, when the shooting direction of the camera is being moved horizontally to the right and it is detected that the shooting direction starts to move horizontally to the left, shooting is automatically stopped and the obtained panoramic video is stored.
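For illustration, detecting that the shooting direction has reversed can be sketched as follows, assuming the horizontal position of the shooting direction (for example, an accumulated pan angle or pixel offset obtained from motion sensors or frame registration) is sampled over time; the tolerance value is an assumption of the sketch:

```python
def detect_reverse(positions, tolerance=5):
    """Return True once the shooting direction reverses.

    positions: successive horizontal positions of the shooting direction
    (e.g. accumulated pan angle or pixel offset); tolerance ignores small jitter.
    """
    if len(positions) < 3:
        return False
    initial_direction = positions[1] - positions[0]   # positive: initially moving right
    latest_step = positions[-1] - positions[-2]
    # Reverse detected when the latest movement opposes the initial direction
    # by more than the jitter tolerance.
    return initial_direction * latest_step < 0 and abs(latest_step) > tolerance
```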
Through the implementation of this embodiment, a panoramic video is obtained; during shooting, dynamic people and objects are presented in the panoramic video in a dynamic form, the shooting atmosphere can be visually and vividly reflected, the picture effect of the panoramic video is more striking and spectacular, and the user experience is improved.
Second embodiment
This embodiment provides a shooting method, taking as an example a user shooting a panoramic video with a mobile phone; referring to fig. 8, fig. 8 is a flowchart of the shooting method provided in this embodiment, where the shooting method includes the following steps:
s801: the user clicks a camera function in the mobile phone, then clicks a panoramic video shooting mode in the camera function, operates the camera to select a first video picture, and shoots the first video picture for 2 seconds through the camera;
if dynamic people and objects exist in the first video picture with the duration of 2 seconds, when the panoramic video is synthesized subsequently, the dynamic people and objects still exist in the first video picture of the panoramic video.
S802: the user horizontally and uniformly moves the shooting direction of the camera to the right, and a plurality of video pictures shot by the camera are obtained in the process of moving the shooting direction of the camera;
s803: synthesizing all the shot video pictures according to the following preset rules to obtain a panoramic video:
acquiring an Nth video picture and an (N + 1) th video picture, wherein the Nth video picture and the (N + 1) th video picture have an overlapping area;
deleting the video pictures in the overlapped area in the (N + 1) th video picture;
splicing the nth video picture with the rest video pictures of the (N + 1) th video picture;
n is an integer of 1 or more.
During synthesis, video pictures can be synthesized in sequence according to the shooting time sequence;
first, the part of the second video picture (denoted as A2) that overlaps with the first video picture (denoted as A1) is deleted, and then the first video picture A1 and the remaining part of the second video picture A2 are spliced; the video picture obtained by this splicing is denoted as A1A2. For example, referring to fig. 5, fig. 5 is a reference diagram of synthesizing the first video picture A1 and the second video picture A2 provided in this embodiment;
then, the part of the third video picture (denoted as A3) that overlaps with A1A2 is deleted, and then A1A2 and the remaining part of the third video picture A3 are spliced; the video picture obtained by this splicing is denoted as A1A2A3. For example, referring to fig. 6, fig. 6 is a reference diagram of synthesizing the video picture A1A2 and the third video picture A3 provided in this embodiment;
and so on;
finally, the part of the Nth video picture (denoted as AN) that overlaps with A1A2A3…AN-1 is deleted, and then A1A2A3…AN-1 and the remaining part of the Nth video picture AN are spliced; the video picture obtained by this splicing is denoted as A1A2A3…AN-1AN. For example, referring to fig. 7, fig. 7 is a reference diagram of synthesizing the video picture A1A2A3…AN-1 and the Nth video picture AN provided in this embodiment; at this point, the video picture A1A2A3…AN-1AN is the panoramic video.
In fig. 5 to 7, the thickness of the lines is merely used to distinguish different video pictures and has no practical meaning.
Through the implementation of this embodiment, a panoramic video is obtained; during shooting, dynamic people and objects are presented in the panoramic video in a dynamic form, the shooting atmosphere can be visually and vividly reflected, the picture effect of the panoramic video is more striking and spectacular, and the user experience is improved.
Third embodiment
Fig. 9 is a schematic diagram of a terminal provided in this embodiment, where the terminal includes a processor 901, a memory 902, and a communication bus 903, where:
the communication bus 903 is used for realizing connection communication between the processor 901 and the memory 902;
the processor 901 is configured to execute one or more programs stored in the memory 902 to implement the steps of the photographing method in the first and second embodiments.
Specifically, taking the first embodiment as an example, the processor 901 is configured to execute one or more programs stored in the memory 902 to implement the following steps:
s201: after the panoramic video shooting mode is started, selecting a first video picture, and shooting the first video picture for a preset duration through a camera;
the first video picture is the initial video picture;
the preset duration can be decided by a user;
a value of the preset duration may also be set by default, and the user is reminded to shoot again when the duration for which the user shoots the first video picture is less than this set value; when shooting the first video picture, the user may also shoot for longer than the set preset duration;
optionally, the preset duration is 2 seconds, that is, when the first video picture is shot, the camera stays on it for 2 seconds, and the system then stores the first video picture.
Optionally, in the process of shooting the first video frame for a preset time duration by using the camera in S201, the processor 901 is further configured to execute one or more programs stored in the memory 902, so as to implement the following steps:
acquiring a central point when a first video picture is shot and recording the central point as a reference central point;
the center point when the subsequent video picture is taken may be referenced to the reference center point.
Referring to fig. 3, fig. 3 is a reference schematic diagram of panoramic video shooting provided in this embodiment, in fig. 3, the whole shooting range of a panoramic video is shown, a frame on the left side is a picture at one time point of a first video picture, a mobile phone is uniformly moved to the right side to record a video, and a line in the middle of a picture on the right side is a position where a center point is located.
For the user to know the center point during shooting, a mark indicating the position of the center point may be displayed on the shooting preview interface, for example, referring to fig. 4, fig. 4 is a schematic display diagram of displaying the mark indicating the position of the center point on the shooting preview interface according to this embodiment.
S202: moving the shooting direction of the camera according to user operation, and acquiring a plurality of video pictures shot by the camera in the process of moving the shooting direction of the camera;
the method comprises the steps that a user moves the shooting direction of a camera, and a plurality of video pictures shot by the camera are obtained in the process of moving the shooting direction of the camera;
in order to ensure that the shot video effect is good, a user can preferably move uniformly in the process of moving the shooting direction of the camera, wherein the uniform movement means that the deviation degree of the central point from the reference central point is small when the video picture is shot, and the moving speed is moderate.
Because the panoramic video is synthesized by a plurality of video pictures, and the effects of the plurality of video pictures are not the same, in order to ensure that the synthesized panoramic video has a good effect, optionally, in the process of moving the shooting direction of the camera, the step S202 of acquiring the plurality of video pictures shot by the camera includes: in the process of moving the shooting direction of the camera, a plurality of video pictures are shot with the same at least one shooting parameter.
The shooting parameters comprise exposure, white balance, color temperature and the like;
optionally, in the process of moving the shooting direction of the camera, shooting a plurality of video pictures with the same at least one shooting parameter includes: in the process of moving the shooting direction of the camera, shooting the plurality of video pictures with the same exposure and white balance. If all video pictures are shot with the same exposure and white balance settings, brightness and color seams are avoided in the subsequent synthesis, and the final picture effect is good.
S203: synthesizing all the shot video pictures according to the following preset rules to obtain a panoramic video:
acquiring an Nth video picture and an (N + 1) th video picture, wherein the Nth video picture and the (N + 1) th video picture have an overlapping area;
deleting the video pictures in the overlapped area in the (N + 1) th video picture;
splicing the nth video picture with the rest video pictures of the (N + 1) th video picture;
n is an integer of 1 or more.
During synthesis, video pictures can be synthesized in sequence according to the shooting time sequence;
first, the part of the second video picture (denoted as A2) that overlaps with the first video picture (denoted as A1) is deleted, and then the first video picture A1 and the remaining part of the second video picture A2 are spliced; the video picture obtained by this splicing is denoted as A1A2. For example, referring to fig. 5, fig. 5 is a reference diagram of synthesizing the first video picture A1 and the second video picture A2 provided in this embodiment;
then, the part of the third video picture (denoted as A3) that overlaps with A1A2 is deleted, and then A1A2 and the remaining part of the third video picture A3 are spliced; the video picture obtained by this splicing is denoted as A1A2A3. For example, referring to fig. 6, fig. 6 is a reference diagram of synthesizing the video picture A1A2 and the third video picture A3 provided in this embodiment;
and so on;
finally, the part of the Nth video picture (denoted as AN) that overlaps with A1A2A3…AN-1 is deleted, and then A1A2A3…AN-1 and the remaining part of the Nth video picture AN are spliced; the video picture obtained by this splicing is denoted as A1A2A3…AN-1AN. For example, referring to fig. 7, fig. 7 is a reference diagram of synthesizing the video picture A1A2A3…AN-1 and the Nth video picture AN provided in this embodiment; at this point, the video picture A1A2A3…AN-1AN is the panoramic video.
In fig. 5 to 7, the thickness of the lines is merely used to distinguish different video pictures and has no practical meaning.
Optionally, in S202, in the process of moving the shooting direction of the camera and acquiring a plurality of video pictures shot by the camera, the processor 901 is further configured to execute one or more programs stored in the memory 902, so as to implement the following steps:
acquiring a central point when the Nth video picture is shot;
calculating the degree of deviation of the center point of the Nth video picture from the reference center point;
and if the deviation degree is greater than the first preset threshold and less than or equal to the second preset threshold, filling the deviation part with a preset picture.
The preset picture may be any picture, either a dynamic video or a static picture; it may be obtained by cropping, according to the size of the deviated area, a piece of dynamic video or a static picture that occupies the whole screen.
Alternatively, the preset picture may be a video picture copied and cropped from the (N-1)th video picture at the same height as the deviated part.
Optionally, the processor 901 is further configured to execute one or more programs stored in the memory 902 to implement the following steps: and if the deviation degree is greater than a second preset threshold value, stopping shooting and prompting the user to shoot again.
If the center point deviates too much from the reference center point, the deviated area is large and is generally displayed as black when the video pictures are synthesized, so the effect of the synthesized video picture is not ideal, and the picture may even be unwatchable; therefore, in the case where the center point deviates too much from the reference center point, shooting is stopped and the user is prompted to shoot again.
Optionally, if it is detected that the degree of deviation of the center point of the Nth video picture from the reference center point is greater than the second preset threshold, then, since the degree of deviation of the center points of the first N-1 video pictures from the reference center point does not exceed the second preset threshold, the deviated parts of the first N-1 video pictures are small; in this case, only the first N-1 video pictures may be synthesized and stored, and the Nth video picture is deleted entirely, so that the synthesized first N-1 video pictures still give a good effect.
Optionally, after S203 synthesizes all the captured video frames to obtain a panoramic video, the processor 901 is further configured to execute one or more programs stored in the memory 902 to implement the following steps: and automatically playing the panoramic video. The user can check the shooting effect immediately after the panoramic video is synthesized without manually clicking the panoramic video.
Optionally, the processor 901 is further configured to execute one or more programs stored in the memory 902 to implement the following steps: when it is detected that the shooting direction of the camera moves in the reverse direction, shooting is automatically stopped, and the obtained panoramic video is stored.
For example, when the shooting direction of the camera is being moved horizontally to the right and it is detected that the shooting direction starts to move horizontally to the left, shooting is automatically stopped and the obtained panoramic video is stored.
Through the implementation of this embodiment, a panoramic video is obtained; during shooting, dynamic people and objects are presented in the panoramic video in a dynamic form, the shooting atmosphere can be visually and vividly reflected, the picture effect of the panoramic video is more striking and spectacular, and the user experience is improved.
Fourth embodiment
The present embodiment provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the photographing method in the first and second embodiments.
Specifically, taking the first embodiment as an example, the one or more programs may be executed by one or more processors to implement the following steps:
s201: after the panoramic video shooting mode is started, selecting a first video picture, and shooting the first video picture for a preset duration through a camera;
the first video picture is the initial video picture;
the preset duration can be decided by a user;
a value of the preset duration may also be set by default, and the user is reminded to shoot again when the duration for which the user shoots the first video picture is less than this set value; when shooting the first video picture, the user may also shoot for longer than the set preset duration;
optionally, the preset duration is 2 seconds, that is, when the first video picture is shot, the camera stays on it for 2 seconds, and the system then stores the first video picture.
Optionally, in the process of shooting the first video frame for the preset duration by the camera in S201, the one or more programs may be further executed by the one or more processors to implement the following steps:
acquiring a central point when a first video picture is shot and recording the central point as a reference central point;
the center point when the subsequent video picture is taken may be referenced to the reference center point.
Referring to fig. 3, fig. 3 is a reference schematic diagram of panoramic video shooting provided in this embodiment, in fig. 3, the whole shooting range of a panoramic video is shown, a frame on the left side is a picture at one time point of a first video picture, a mobile phone is uniformly moved to the right side to record a video, and a line in the middle of a picture on the right side is a position where a center point is located.
For the user to know the center point during shooting, a mark indicating the position of the center point may be displayed on the shooting preview interface, for example, referring to fig. 4, fig. 4 is a schematic display diagram of displaying the mark indicating the position of the center point on the shooting preview interface according to this embodiment.
S202: moving the shooting direction of the camera according to user operation, and acquiring a plurality of video pictures shot by the camera in the process of moving the shooting direction of the camera;
the method comprises the steps that a user moves the shooting direction of a camera, and a plurality of video pictures shot by the camera are obtained in the process of moving the shooting direction of the camera;
in order to ensure that the shot video effect is good, a user can preferably move uniformly in the process of moving the shooting direction of the camera, wherein the uniform movement means that the deviation degree of the central point from the reference central point is small when the video picture is shot, and the moving speed is moderate.
Because the panoramic video is synthesized by a plurality of video pictures, and the effects of the plurality of video pictures are not the same, in order to ensure that the synthesized panoramic video has a good effect, optionally, in the process of moving the shooting direction of the camera, the step S202 of acquiring the plurality of video pictures shot by the camera includes: in the process of moving the shooting direction of the camera, a plurality of video pictures are shot with the same at least one shooting parameter.
The shooting parameters comprise exposure, white balance, color temperature and the like;
optionally, in the process of moving the shooting direction of the camera, shooting a plurality of video pictures with the same at least one shooting parameter includes: in the process of moving the shooting direction of the camera, shooting the plurality of video pictures with the same exposure and white balance. If all video pictures are shot with the same exposure and white balance settings, brightness and color seams are avoided in the subsequent synthesis, and the final picture effect is good.
S203: synthesizing all the shot video pictures according to the following preset rules to obtain a panoramic video:
acquiring an Nth video picture and an (N + 1) th video picture, wherein the Nth video picture and the (N + 1) th video picture have an overlapping area;
deleting the video pictures in the overlapped area in the (N + 1) th video picture;
splicing the nth video picture with the rest video pictures of the (N + 1) th video picture;
n is an integer of 1 or more.
During synthesis, video pictures can be synthesized in sequence according to the shooting time sequence;
first, the part of the second video picture (denoted as A2) that overlaps with the first video picture (denoted as A1) is deleted, and then the first video picture A1 and the remaining part of the second video picture A2 are spliced; the video picture obtained by this splicing is denoted as A1A2. For example, referring to fig. 5, fig. 5 is a reference diagram of synthesizing the first video picture A1 and the second video picture A2 provided in this embodiment;
then, the part of the third video picture (denoted as A3) that overlaps with A1A2 is deleted, and then A1A2 and the remaining part of the third video picture A3 are spliced; the video picture obtained by this splicing is denoted as A1A2A3. For example, referring to fig. 6, fig. 6 is a reference diagram of synthesizing the video picture A1A2 and the third video picture A3 provided in this embodiment;
and so on;
finally, the part of the Nth video picture (denoted as AN) that overlaps with A1A2A3…AN-1 is deleted, and then A1A2A3…AN-1 and the remaining part of the Nth video picture AN are spliced; the video picture obtained by this splicing is denoted as A1A2A3…AN-1AN. For example, referring to fig. 7, fig. 7 is a reference diagram of synthesizing the video picture A1A2A3…AN-1 and the Nth video picture AN provided in this embodiment; at this point, the video picture A1A2A3…AN-1AN is the panoramic video.
In fig. 5 to 7, the thickness of the lines is merely used to distinguish different video pictures and has no practical meaning.
Optionally, in the process of moving the shooting direction of the camera and acquiring a plurality of video frames shot by the camera in S202, the one or more programs may be further executed by the one or more processors to implement the following steps:
acquiring a central point when the Nth video picture is shot;
calculating the degree of deviation of the center point of the Nth video picture from the reference center point;
and if the deviation degree is greater than the first preset threshold and less than or equal to the second preset threshold, filling the deviation part with a preset picture.
The preset picture may be any picture, either a dynamic video or a static picture; it may be obtained by cropping, according to the size of the deviated area, a piece of dynamic video or a static picture that occupies the whole screen.
Alternatively, the preset picture may be a video picture copied and cropped from the (N-1)th video picture at the same height as the deviated part.
Optionally, the one or more programs are further executable by the one or more processors to perform the steps of: and if the deviation degree is greater than a second preset threshold value, stopping shooting and prompting the user to shoot again.
If the center point deviates too much from the reference center point, the deviated area is large and is generally displayed as black when the video pictures are synthesized, so the effect of the synthesized video picture is not ideal, and the picture may even be unwatchable; therefore, in the case where the center point deviates too much from the reference center point, shooting is stopped and the user is prompted to shoot again.
Optionally, if it is detected that the degree of deviation of the center point of the Nth video picture from the reference center point is greater than the second preset threshold, then, since the degree of deviation of the center points of the first N-1 video pictures from the reference center point does not exceed the second preset threshold, the deviated parts of the first N-1 video pictures are small; in this case, only the first N-1 video pictures may be synthesized and stored, and the Nth video picture is deleted entirely, so that the synthesized first N-1 video pictures still give a good effect.
Optionally, after S203 synthesizes all the captured video frames to obtain a panoramic video, the one or more programs may be further executed by the one or more processors to implement the following steps: and automatically playing the panoramic video. The user can check the shooting effect immediately after the panoramic video is synthesized without manually clicking the panoramic video.
Optionally, the one or more programs are further executable by the one or more processors to perform the following step: when it is detected that the shooting direction of the camera moves in the reverse direction, automatically stopping shooting and storing the obtained panoramic video.
For example, if the camera has been panning horizontally to the right and its shooting direction is then detected to move horizontally to the left, shooting stops automatically and the obtained panoramic video is stored.
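A minimal sketch of this stop condition follows, assuming the pan direction has already been reduced to a sign (+1 for a rightward pan, -1 for a leftward pan) from motion sensors or image motion; that encoding is an assumption for illustration only.

def should_stop_and_save(initial_direction, current_direction):
    """True once the camera moves opposite to the initial pan direction, e.g.
    an initial pan to the right (+1) followed by motion to the left (-1)."""
    return initial_direction != 0 and current_direction == -initial_direction

Once this returns True, recording stops and the panoramic video synthesized so far is stored.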
Through the implementation of this embodiment, a panoramic video is obtained in which dynamic people and objects are presented in dynamic form during shooting, so that the atmosphere of the scene is reflected intuitively and vividly; the picture effect of the panoramic video is more striking and spectacular, and the user experience is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A photographing method characterized by comprising the steps of:
after a panoramic video shooting mode is started, selecting a first video picture, and shooting the first video picture for a preset duration through a camera; in the process of shooting the first video picture for the preset duration, the method further comprises the following steps:
acquiring a central point when the first video picture is shot, and recording the central point as a reference central point;
after the shooting step is completed, moving the shooting direction of the camera according to user operation, and acquiring a plurality of video pictures shot by the camera in the process of moving the shooting direction of the camera;
synthesizing all the shot video pictures according to the following preset rules to obtain a panoramic video:
acquiring an Nth video picture and an (N + 1) th video picture, wherein the Nth video picture and the (N + 1) th video picture have an overlapping area;
deleting the video pictures in the overlapped area in the (N + 1) th video picture;
splicing the Nth video picture with the rest video pictures of the (N + 1) th video picture;
N is an integer greater than or equal to 1;
in the process of moving the shooting direction of the camera and acquiring a plurality of video pictures shot by the camera, the method further comprises the following steps:
acquiring a central point when the Nth video picture is shot;
calculating a deviation degree between the central point of the Nth video picture and the reference central point;
and if the deviation degree is greater than a first preset threshold and less than or equal to a second preset threshold, filling a deviation part with a preset picture.
2. The shooting method according to claim 1, wherein the acquiring a plurality of video pictures shot by the camera in the process of moving the shooting direction of the camera comprises: and shooting the plurality of video pictures with the same at least one shooting parameter in the process of moving the shooting direction of the camera.
3. The shooting method according to claim 2, wherein said shooting the plurality of video pictures with the same at least one shooting parameter in moving the shooting direction of the camera includes: and shooting the plurality of video pictures with the same exposure and white balance in the process of moving the shooting direction of the camera.
4. The photographing method according to claim 1, wherein the method further comprises the steps of: and if the deviation degree is greater than the second preset threshold value, stopping shooting and prompting the user to shoot again.
5. The photographing method according to claim 1, wherein the method further comprises the steps of: and if the deviation degree is greater than the second preset threshold value, deleting the whole Nth video picture, synthesizing the first N-1 video pictures, and storing.
6. The shooting method according to any one of claims 1 to 5, wherein after synthesizing all the shot video pictures to obtain a panoramic video, the method further comprises the following steps: and automatically playing the panoramic video.
7. The photographing method according to any one of claims 1 to 5, wherein the method further comprises the steps of: and when the shooting direction of the camera moving in the reverse direction is detected, automatically stopping shooting, and storing the obtained panoramic video.
8. A terminal, characterized in that the terminal comprises a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the photographing method according to any one of claims 1 to 7.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs executable by one or more processors to implement the steps of the photographing method according to any one of claims 1 to 7.
CN201711449205.8A 2017-12-27 2017-12-27 Shooting method, terminal and computer readable storage medium Active CN108307105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711449205.8A CN108307105B (en) 2017-12-27 2017-12-27 Shooting method, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711449205.8A CN108307105B (en) 2017-12-27 2017-12-27 Shooting method, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108307105A CN108307105A (en) 2018-07-20
CN108307105B true CN108307105B (en) 2020-07-07

Family

ID=62867878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711449205.8A Active CN108307105B (en) 2017-12-27 2017-12-27 Shooting method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108307105B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522814B (en) * 2018-10-25 2020-10-02 清华大学 Target tracking method and device based on video data
CN110213496A (en) * 2019-03-21 2019-09-06 南京泓众电子科技有限公司 A kind of rotary panorama camera light measuring method of monocular, system, portable terminal
CN111047622B (en) * 2019-11-20 2023-05-30 腾讯科技(深圳)有限公司 Method and device for matching objects in video, storage medium and electronic device
CN112565590A (en) * 2020-11-16 2021-03-26 李诚专 Object 360-degree all-round-looking image generation method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998047291A2 (en) * 1997-04-16 1998-10-22 Isight Ltd. Video teleconferencing
CN101247513A (en) * 2007-12-25 2008-08-20 谢维信 Method for real-time generating 360 degree seamless full-view video image by single camera
CN103561209A (en) * 2013-10-21 2014-02-05 广东明创软件科技有限公司 Method for shooting panoramic photos or videos based on mobile terminal and mobile terminal
CN106204456A (en) * 2016-07-18 2016-12-07 电子科技大学 Panoramic video sequences estimation is crossed the border folding searching method
CN106780305A (en) * 2016-12-07 2017-05-31 景德镇陶瓷大学 A kind of planar design to non-flat design conversion method

Also Published As

Publication number Publication date
CN108307105A (en) 2018-07-20

Similar Documents

Publication Publication Date Title
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
US20220279116A1 (en) Object tracking method and electronic device
CN108184050B (en) Photographing method and mobile terminal
CN108881733B (en) Panoramic shooting method and mobile terminal
CN109246360B (en) Prompting method and mobile terminal
CN109660723B (en) Panoramic shooting method and device
CN108307109B (en) High dynamic range image preview method and terminal equipment
CN110602401A (en) Photographing method and terminal
CN108174103B (en) Shooting prompting method and mobile terminal
CN107248137B (en) Method for realizing image processing and mobile terminal
CN108307105B (en) Shooting method, terminal and computer readable storage medium
CN108038825B (en) Image processing method and mobile terminal
CN108307106B (en) Image processing method and device and mobile terminal
CN108449541B (en) Panoramic image shooting method and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
CN110213485B (en) Image processing method and terminal
CN109348019B (en) Display method and device
CN110602389B (en) Display method and electronic equipment
CN111010523B (en) Video recording method and electronic equipment
CN108335258B (en) Image processing method and device of mobile terminal
CN111597370B (en) Shooting method and electronic equipment
CN107153500B (en) Method and equipment for realizing image display
US11863901B2 (en) Photographing method and terminal
CN108881721B (en) Display method and terminal
CN108924422B (en) Panoramic photographing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant