WO2017054704A1 - Method and device for generating video image - Google Patents


Info

Publication number: WO2017054704A1
Authority: WIPO (PCT)
Prior art keywords: data, picture, video, time, unit
Application number: PCT/CN2016/100334
Other languages: French (fr), Chinese (zh)
Inventors: 刘林汶, 何耀平, 苗雷, 里强
Original assignee: 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Application filed by 努比亚技术有限公司
Publication of WO2017054704A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Definitions

  • This application relates to, but is not limited to, the field of photography.
  • An apparatus for generating a video picture comprising: a picture data acquiring unit, a video data acquiring unit, and a synthesizing unit;
  • the picture data acquiring unit is configured to: acquire picture data;
  • the video data acquiring unit is configured to: acquire video data;
  • the synthesizing unit is configured to encapsulate the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into one file.
  • the synthesizing unit is configured to encapsulate the picture data and the video data into one file by: creating a picture file, writing the picture data and the video data into the created picture file, and writing an identifier in the picture file.
  • the picture data acquiring unit is further configured to: acquire a data length of the picture data
  • the synthesizing unit is configured to write the picture data and the video data into the created picture file by: writing the data length of the picture data, the picture data, and the video data into the created picture file;
  • the picture file further includes one or more of the following: a data length of the picture data, a start location identifier of the picture data, and a start location identifier of the video data.
  • the device further includes:
  • the photographing unit is configured to: when receiving the photographing instruction, take a picture and trigger the picture data acquiring unit to acquire the picture data.
  • the device further includes:
  • the storage unit is configured to: when the photographing unit is activated to perform framing, store the acquired image data.
  • the device further includes:
  • a shooting time acquisition unit configured to: acquire a time T at which the shooting unit takes a picture
  • the video data acquiring unit is configured to acquire video data, including:
  • the image data from time T-T1 to time T+T2 is encoded to generate video data in a video format.
  • the device further includes:
  • a storage unit configured to: when the shooting unit is activated to perform framing, storing the acquired image data
  • An audio data collecting unit configured to: collect audio data synchronized with the image data stored by the storage unit;
  • the storage unit is further configured to: store audio data collected by the audio data collection unit.
  • the device further includes:
  • a shooting time acquisition unit configured to: acquire a time T at which the shooting unit takes a picture
  • the video data acquiring unit is configured to acquire video data, including:
  • the image data and the audio data from time T-T1 to time T+T2 are encoded to generate video data in a video format.
  • the T1 is a first preset time interval
  • the T2 is a second preset time interval.
  • a method of generating a video picture comprising:
  • the encapsulating the picture data and the video data into one file includes: creating a picture file, writing the picture data and the video data into the created picture file, and writing an identifier in the picture file.
  • before the writing of the picture data and the video data into the created picture file, the method further includes: acquiring a data length of the picture data;
  • the writing the picture data and the video data into the created picture file includes:
  • the data length of the picture data, the picture data, and the video data are written into the created picture file.
  • the picture file further includes one or more of the following: a data length of the picture data, a start location identifier of the picture data, and a start location identifier of the video data.
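The file layout described above can be sketched as follows. This is a minimal illustration, not the patent's actual byte format: the marker name, field widths, and trailer layout are all assumptions made for the example.

```python
import struct

MAGIC = b"VPIC"  # hypothetical identifier marking the file as a video picture

def pack_video_picture(picture: bytes, video: bytes) -> bytes:
    """Append the video data and bookkeeping fields after the picture data.

    Assumed layout:
      [picture bytes][video bytes][picture length: 8][video length: 8][MAGIC]
    Because the picture bytes come first and are unmodified, an ordinary
    image viewer still previews the file as a normal picture.
    """
    trailer = struct.pack("<QQ", len(picture), len(video)) + MAGIC
    return picture + video + trailer

def unpack_video_picture(blob: bytes):
    """Return (picture, video); video is None for a plain picture file."""
    if not blob.endswith(MAGIC):
        return blob, None
    pic_len, vid_len = struct.unpack("<QQ", blob[-20:-4])
    return blob[:pic_len], blob[pic_len:pic_len + vid_len]
```

A real implementation would place the extra data where the chosen image format tolerates it; JPEG, for instance, is commonly read only up to its end-of-image marker, so trailing bytes are ignored by typical viewers.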
  • the acquiring picture data includes: when a photographing instruction is received, taking a picture and acquiring the picture data;
  • the method further includes:
  • when the shooting unit is activated for framing, the acquired image data is stored.
  • the method further includes:
  • the obtaining video data includes:
  • the image data from time T-T1 to time T+T2 is encoded to generate video data in a video format.
  • the method further includes:
  • when the shooting unit is activated for framing, the acquired image data is stored;
  • Audio data synchronized with the image data is acquired, and the collected audio data is stored.
  • the method further includes:
  • the obtaining video data includes:
  • the image data and the audio data from time T-T1 to time T+T2 are encoded to generate video data in a video format.
  • the T1 is a first preset time interval
  • T2 is a second preset time interval.
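For the audio branch, selecting the samples synchronized with the same window is straightforward. A sketch, assuming a flat sequence of PCM samples whose first sample corresponds to time zero; the function name and parameters are illustrative, not from the patent.

```python
def clip_audio(samples, sample_rate, t_shot, t1, t2):
    """Return the audio samples covering [t_shot - t1, t_shot + t2].

    samples: sequence of PCM samples; sample i corresponds to time
    i / sample_rate. t1 and t2 are the first and second preset intervals.
    """
    start = max(0, int((t_shot - t1) * sample_rate))
    end = min(len(samples), int((t_shot + t2) * sample_rate))
    return samples[start:end]
```

The clamping to the sequence bounds handles a shot taken less than T1 after recording started, or less than T2 before it ended.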
  • In the method and device for generating a video picture, the picture data acquiring unit acquires picture data, the video data acquiring unit acquires video data, and the synthesizing unit encapsulates the acquired picture data and video data into one file.
  • FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal that implements various embodiments of the present invention;
  • FIG. 2 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of another method for generating a video picture according to an embodiment of the present invention;
  • FIG. 4 is a schematic flowchart of still another method for generating a video picture according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of time selection of video data in a method for generating a video picture according to an embodiment of the present invention;
  • FIG. 6 is a schematic flowchart of still another method for generating a video picture according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of a photographing interface of a mobile terminal in a method for generating a video picture according to an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of a recorded video interface of a mobile terminal in a method for generating a video picture according to an embodiment of the present invention;
  • FIG. 9 is a schematic structural diagram of an apparatus for generating a video picture according to an embodiment of the present invention;
  • FIG. 10 is a schematic structural diagram of another apparatus for generating a video picture according to an embodiment of the present invention;
  • FIG. 11 is a schematic diagram of an electrical structure of a camera in an apparatus for generating a video picture according to an embodiment of the present invention.
  • the mobile terminal can be implemented in various forms.
  • the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Multimedia Player), and a navigation device, as well as fixed terminals such as digital TVs and desktop computers.
  • in the following description, the terminal is assumed to be a mobile terminal.
  • those skilled in the art will appreciate that configurations in accordance with embodiments of the present invention can be applied to fixed type terminals in addition to components that are specifically for mobile purposes.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication device or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may also include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast associated information may exist in various forms, for example, in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of Digital Video Broadcasting-Handheld (DVB-H), and the like.
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast apparatuses.
  • the broadcast receiving module 111 can receive digital broadcasts by using digital broadcasting devices such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), the data broadcasting device of Media Forward Link Only (MediaFLO), Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like.
  • the broadcast receiving module 111 can be constructed to be suitable for various broadcast devices that provide broadcast signals, in addition to the above-described digital broadcast devices.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies involved in the module may include WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless Broadband), Wimax (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technologies include BluetoothTM, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wide Band (UWB), ZigbeeTM, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal.
  • a typical example of a location information module is a GPS (Global Positioning System) module.
  • the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information according to longitude, latitude, and altitude.
  • the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information by using another satellite. Further, the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122; the camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sounds (audio data) via a microphone in operating modes such as a telephone call mode, a recording mode, and a voice recognition mode, and can process such sounds into audio data.
  • the processed audio (voice) data can be converted to a format output that can be transmitted to the mobile communication base station via the mobile communication module 112 in the case of a telephone call mode.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a jog wheel, a jog switch, and the like.
  • in particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, and the acceleration or deceleration movement and direction of the mobile terminal 100, and generates commands or signals for controlling operations of the mobile terminal 100.
  • for example, when the mobile terminal 100 is implemented as a slide type phone, the sensing unit 140 can sense whether the slide phone is opened or closed.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • Sensing unit 140 may include proximity sensor 141 which will be described below in connection with a touch screen.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • the identification module may store various information for verifying the use of the mobile terminal 100 by the user, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • a device having an identification module may take the form of a smart card; therefore, the identification device can be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 can be configured to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • in addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal.
  • various command signals or power input from the cradle can be used as signals for identifying whether the mobile terminal is accurately mounted on the cradle.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alert unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations, and when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (ie, vibration) to notify the user of it. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide an output of the notification event occurrence via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read only memory (ROM), an electrically erasable programmable read only memory (EEPROM), a programmable read only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required for operating the respective elements and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • in the following, a slide type mobile terminal among various types of mobile terminals, such as folding type, bar type, swing type, and slide type mobile terminals, will be described as an example; however, the present invention can be applied to any type of mobile terminal and is not limited to the slide type mobile terminal.
  • FIG. 2 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present invention.
  • the method for generating a video picture provided in this embodiment is applied to an intelligent terminal, where the smart terminal includes, for example, a smart phone, a tablet computer, and the like.
  • the method may include the following steps, namely, S110 to S130:
  • the source of the picture data may be the picture data captured by the shooting unit, the picture saved in the terminal, or the picture stored on the server.
  • the user can open the camera of the mobile terminal and take a photo to obtain the picture data, select a picture saved in the mobile terminal and obtain its picture data through the corresponding module, or read the picture data of a picture stored on the server through the network.
  • the source of the video data may be diverse.
  • the video data may be collected from the camera preview data of the mobile terminal, may be captured through the camera function of the mobile terminal, or may be video data already saved in the mobile terminal (or other memory).
  • In the method for generating a video picture provided by the embodiment of the present invention, picture data and video data are acquired and encapsulated into one file. This technical solution solves the problem in the related art that a picture and a video have independent storage files and display effects, which leads to a single display effect; the function of synthesizing a picture and a video into one file brings more enjoyment to the user and improves the user experience.
  • FIG. 3 is a schematic flowchart of another method for generating a video picture according to an embodiment of the present invention.
  • this embodiment describes an implementation of encapsulating picture data and video data into one file, i.e., of S130 in FIG. 2.
  • S130 in this embodiment may include the following steps, namely, S131 to S133:
  • the created picture file is saved in a standard image format, for example: .jpg, .jpeg, .gif, .png, .bmp, and other formats.
  • this embodiment adds additional data after the data of the picture file; the additional data may include, for example, the video data, and the data length of the picture data, or the start position identifier of the picture data, or the start position identifier of the video data. This guarantees that the standard format of the picture file is not destroyed: the file is still saved in a standard image format (for example, .jpg, .jpeg, .gif, .png, .bmp), so any terminal can preview the original picture just as before the additional data was added.
  • the identifier is used to indicate that the picture file is a video picture. If the file format information of the picture file includes the identifier of the video data, the picture file is a video picture file; if the file format information contains only the file header and the related information of the picture data, the picture file is a normal picture file. In this way, when the terminal reads the identifier and determines that a picture file is a video picture file, the terminal can read the picture data from the video picture file, send the read picture data to the picture player, and prompt the picture player to play it; and, according to the data length of the picture data and/or the start position identifier of the picture data, move to the start position of the video data, read the video data, send it to the video player, and prompt the video player to play it.
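Reading the file back then amounts to checking for the identifier and routing each section to its player. A minimal self-contained sketch, assuming the video section is preceded by a start-location identifier; the marker byte string and the placeholder "players" are invented for illustration.

```python
VIDEO_START = b"VSTART"  # hypothetical start-location identifier of the video data

def split_video_picture(blob: bytes):
    """Split a file into (picture_data, video_data).

    If the identifier is absent, the file is a normal picture: the whole
    blob is the picture and there is no video section.
    """
    pos = blob.find(VIDEO_START)
    if pos == -1:
        return blob, None
    return blob[:pos], blob[pos + len(VIDEO_START):]

def play(blob: bytes):
    """Route each section to the matching player (placeholder strings here)."""
    picture, video = split_video_picture(blob)
    actions = ["picture player <- %d bytes" % len(picture)]
    if video is not None:
        actions.append("video player <- %d bytes" % len(video))
    return actions
```

A real reader would use the stored data length of the picture data instead of scanning, which is why the patent records that length alongside the identifiers.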
  • the embodiment may further include: acquiring a data length of the picture data; correspondingly, S132 in the embodiment may include: writing the data length of the picture data, the picture data, and the video data into the created picture file. The start position identifier of the picture data and/or the start position identifier of the video data may also be written.
  • FIG. 4 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present invention.
  • the method provided in this embodiment may include the following steps, that is, S210-S260:
  • the shooting unit may acquire image data of the photographic subject; the capturing unit sends the acquired image data through the internal interface to the storage unit in the mobile terminal, for use in subsequent steps.
  • the image data may be stored in the memory card of the mobile terminal, or may be temporarily stored in the cache of the mobile terminal, which is not limited in this embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a photographing interface of a mobile terminal, and FIG. 8 is a schematic diagram of a recorded video interface of a mobile terminal, in a method for generating a video picture according to an embodiment of the present invention. As shown in FIG. 7, the display interface of the mobile terminal is the photographing interface in the image capturing mode; as shown in FIG. 8, the display interface of the mobile terminal is the recorded video interface in the image capturing mode.
  • when the mobile terminal receives the photographing instruction, a photograph is taken, and the picture data is obtained from the photographed image.
  • when the mobile terminal receives the photographing instruction and takes a picture, it records the time T at which the picture is taken; when necessary, the record can be read to obtain the photographing time (i.e., the time T). Image data around this time is added to the picture file as video data.
  • FIG. 5 is a schematic diagram of time selection of a video data in a method for generating a video picture according to an embodiment of the present invention.
  • the image data acquired by the shooting unit of the mobile terminal is stored, and the image data from the time T-T1 to the time T+T2 is intercepted according to the time T of the captured picture acquired in S230.
  • the T1 in this embodiment is a first preset value
  • T2 is a second preset value.
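  • The window-selection step above (store the viewfinder frames, then cut out the span from T-T1 to T+T2 around the shot) can be sketched as a simple filter over timestamped frames. This is an illustrative sketch only; the function and data layout are invented, not taken from the patent:

```python
# Illustrative sketch (hypothetical names): the stored image stream is a
# list of (timestamp, frame) pairs; t_photo is the time T of the shot,
# t1/t2 are the first and second preset values.

def select_window(frames, t_photo, t1, t2):
    """Return the frames recorded between T-T1 and T+T2 (inclusive)."""
    return [(ts, frame) for ts, frame in frames
            if t_photo - t1 <= ts <= t_photo + t2]

# Example: one frame per second, photo taken at t=10, window of
# 3 seconds before and 2 seconds after the shot.
stream = [(t, "frame%d" % t) for t in range(20)]
window = select_window(stream, t_photo=10, t1=3, t2=2)
print([ts for ts, _ in window])  # [7, 8, 9, 10, 11, 12]
```

  • In a real implementation the stored stream would likely be a bounded ring buffer, so that only roughly the last T1 seconds of preview frames are kept in memory before the shot.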
  • S250: Encode the image data from time T-T1 to time T+T2 to generate video data in a video format.
  • in this embodiment, the image data may be encoded into video data in a video format by an encoding tool in the mobile terminal, and the encoded video data may be output.
  • the image data may be encoded into a common video format, for example video/avc, video/3gpp, or video/mp4v-es; the video coding mode used may be a general-purpose coding technique in the related art.
  • S260: Encapsulate the picture data and the video data into one file.
  • for S260 in this embodiment, reference may be made to S131 to S133 in the embodiment shown in FIG. 3; details are not described herein again.
  • FIG. 6 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present disclosure.
  • the method for generating a video picture in this embodiment may include the following steps, that is, S310 to S360:
  • the shooting unit may acquire image data of the photographic subject and send it through an internal interface to the storage unit in the mobile terminal, where it is stored for use in the subsequent steps.
  • audio data synchronized with the video data may be collected by an audio device (such as a microphone) of the mobile terminal, and the collected audio data may be stored.
  • the image data may be stored in the memory card of the mobile terminal, or may be temporarily stored in the cache of the mobile terminal, which is not limited in this embodiment of the present invention.
  • when the mobile terminal receives a photographing instruction, it takes a photograph and obtains the picture data from the photographed image.
  • when the mobile terminal receives the photographing instruction and takes the picture, it also records the time T at which the picture is taken; the record can be read later, when needed, to obtain the photographing time (i.e., the time T).
  • S340: Acquire the image data from time T-T1 to time T+T2 in the stored image data, and acquire the audio data from time T-T1 to time T+T2 in the stored audio data.
  • the image data and audio data within this window are encoded into composite video data, which is added to the picture file.
  • the audio data time selection is the same as the video data time selection method shown in FIG. 5, and therefore will not be described herein.
  • the image data acquired by the shooting unit of the mobile terminal is stored, and the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 are intercepted according to the time T of the captured picture acquired in S330.
  • the T1 in this embodiment is a first preset value
  • T2 is a second preset value.
  • S350: Encode the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 to generate video data in a video format.
  • in this embodiment, the image data and the audio data may be encoded into video data in a video format by an encoding tool in the mobile terminal, and the encoded video data may be output.
  • in S350, the foregoing data may be encoded into a common video format, for example video/avc, video/3gpp, or video/mp4v-es; the video coding mode used may be a general-purpose coding technique in the related art.
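  • S340-S350 can be sketched for two synchronized streams: cut the same [T-T1, T+T2] window out of both the image stream and the audio stream, then hand both clips to an encoder. The following is an illustrative sketch only; the encoder is a placeholder (the patent leaves the actual codec to general-purpose techniques such as video/avc), and all function names and the millisecond timestamps are assumptions:

```python
# Illustrative sketch (hypothetical names). Timestamps are in milliseconds.

def clip_stream(samples, t_photo, t1, t2):
    """Keep the (timestamp, payload) samples inside [T-T1, T+T2]."""
    return [s for s in samples if t_photo - t1 <= s[0] <= t_photo + t2]

def encode_video(image_clip, audio_clip, fmt="video/mp4v-es"):
    # Placeholder for a real encoder (video/avc, video/3gpp, ...): it would
    # interleave and compress both clips into one video bitstream.
    return {"format": fmt, "frames": len(image_clip), "audio": len(audio_clip)}

images = [(t * 500, b"img") for t in range(40)]   # one frame every 500 ms
audio = [(t * 100, b"pcm") for t in range(200)]   # one sample every 100 ms
T, T1, T2 = 10_000, 3_000, 2_000                  # shot at 10 s; window 3 s / 2 s
video = encode_video(clip_stream(images, T, T1, T2),
                     clip_stream(audio, T, T1, T2))
print(video["frames"], video["audio"])  # 11 51
```

  • Because both streams are clipped with the same window boundaries, the audio stays synchronized with the frames, which is the point of S340.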
  • an embodiment of the present invention further provides an apparatus for generating a video picture.
  • FIG. 9 is a schematic structural diagram of an apparatus for generating a video picture according to an embodiment of the present invention. The apparatus may be configured in a smart terminal, where the smart terminal may be a smart phone, a tablet computer, or the like.
  • the apparatus for generating a video picture may include a picture data acquiring unit 10, a video data acquiring unit 20, and a synthesizing unit 30.
  • the picture data obtaining unit 10 is configured to: obtain picture data
  • the video data acquiring unit 20 is configured to: acquire video data
  • the synthesizing unit 30 is configured to encapsulate the picture data acquired by the picture data acquiring unit 10 and the video data acquired by the video data acquiring unit 20 into one file.
  • the source of the image data acquired by the image data acquiring unit 10 may be the image data captured by the shooting unit, the image saved in the terminal, or the image stored on the server.
  • for example, the user can open the camera of the terminal and take a photo to obtain the picture data, select a picture saved in the terminal and obtain its picture data through the corresponding module, or read the picture data of a picture stored on a server through the network.
  • the source of the video data acquired by the video data acquiring unit 20 may be various.
  • for example, the video data may be collected from the camera preview data of the terminal, may be video data captured by the camera function of the terminal, or may be video data saved in the terminal (or another memory).
  • the synthesizing unit 30 encapsulates the acquired picture data and video data into one file, associating the picture data with the video data and generating a new file, so that the associated video data can be played when the photo is viewed.
  • the synthesizing unit 30 is configured to encapsulate the image data acquired by the image data acquiring unit 10 and the video data acquired by the video data acquiring unit 20 into a file, including:
  • the created picture file is saved in a standard picture format, for example: .jpg, .jpeg, .gif, .png, .bmp, etc.
  • this embodiment appends additional data after the data of the picture file. The additional data may include, for example, the video data, together with the data length of the picture data, or the start position identifier of the picture data, or the start position identifier of the video data. This ensures that the standard format of the picture file is not destroyed: the file is still saved in a standard picture format (.jpg, .jpeg, .gif, .png, .bmp, etc.), so that any terminal can preview the original picture file just as it could before the additional data was added.
  • the identifier is used to indicate that the picture file is a video picture file. If the file format information of the picture file includes the identifier of the video data, the picture file is a video picture file; if the file format information includes only the file header and related information of the picture data, the picture file is a normal picture file. In this way, when the terminal reads the identifier and determines that a picture file is a video picture file, it can read the picture data from the video picture file, send the read picture data to the picture player, and prompt the picture player to play it; and, according to the data length of the picture data and/or the start position identifier of the picture data, it can move to the start position of the video data, read the video data, send it to the video player, and prompt the video player to play it.
  • the picture data acquiring unit 10 in this embodiment is further configured to obtain the data length of the picture data. Correspondingly, when the synthesizing unit 30 in this embodiment writes the picture data and the video data into the picture file, this may include: writing the picture data and its data length acquired by the picture data acquiring unit 10, together with the video data acquired by the video data acquiring unit 20, into the created picture file.
  • the start position identifier of the picture data and/or the start position identifier of the video data may also be written.
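  • The write and read flow described above can be made concrete with a hedged sketch. The file layout, the marker bytes, and the field sizes below are all invented for illustration; the patent only requires that the picture data, the video data, an identifier, and optionally the picture-data length or start-position identifiers coexist in one picture file whose standard format is preserved. Appending everything after the complete picture data keeps the file previewable as an ordinary picture:

```python
# Illustrative sketch (hypothetical layout):
#   [picture bytes][video bytes][picture_len u32][video_len u32][MAGIC]
# An unaware viewer reads the leading picture bytes as a normal image;
# an aware reader finds MAGIC at the end and splits out the video.
import struct

MAGIC = b"VIDP"  # invented identifier marking a "video picture" file

def pack_video_picture(picture: bytes, video: bytes) -> bytes:
    trailer = struct.pack("<II", len(picture), len(video)) + MAGIC
    return picture + video + trailer

def unpack_video_picture(blob: bytes):
    if blob[-4:] != MAGIC:          # identifier absent: a normal picture file
        return blob, None
    pic_len, vid_len = struct.unpack("<II", blob[-12:-4])
    return blob[:pic_len], blob[pic_len:pic_len + vid_len]

jpeg = b"\xff\xd8 ...jpeg payload... \xff\xd9"
clip = b"...encoded video data..."
pic, vid = unpack_video_picture(pack_video_picture(jpeg, clip))
assert pic == jpeg and vid == clip
assert unpack_video_picture(jpeg) == (jpeg, None)  # plain picture passes through
```

  • A reader that finds the identifier would send the picture bytes to the picture player and, using the recorded picture length as the start position of the video data, send the remaining bytes to the video player.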
  • FIG. 10 is a schematic structural diagram of another apparatus for generating a video picture according to an embodiment of the present invention.
  • the device for generating a video picture according to the embodiment of the present disclosure may further include:
  • the photographing unit 40 is configured to: when receiving the photographing instruction, take a picture and trigger the picture data acquiring unit 10 to acquire the picture data.
  • the apparatus provided in this embodiment may further include: a storage unit 50 configured to store the acquired image data when the photographing unit 40 is activated to perform the framing.
  • the apparatus provided in this embodiment may further include:
  • the photographing time acquisition unit 60 is configured to: acquire the time T at which the photographing unit 40 takes a picture;
  • the video data acquiring unit 20 in this embodiment being configured to acquire video data includes:
  • acquiring the data from time T-T1 to time T+T2 in the image data stored by the storage unit 50, and encoding the image data from time T-T1 to time T+T2 to generate video data in a video format.
  • the apparatus provided in this embodiment may further include: an audio data collecting unit 70;
  • the storage unit 50 in this embodiment is further configured to: when the shooting unit 40 is activated to perform framing, store the acquired image data;
  • the audio data collecting unit 70 is configured to: collect audio data synchronized with the image data stored by the storage unit 50;
  • the storage unit 50 is further configured to store the audio data collected by the audio data collection unit 70.
  • the shooting time acquisition unit 60 in this embodiment is configured to: acquire the time T at which the shooting unit 40 captures a picture;
  • the video data acquiring unit 20 in this embodiment being configured to acquire video data includes:
  • acquiring the data from time T-T1 to time T+T2 in the image data stored by the storage unit 50 and the audio data from time T-T1 to time T+T2 in the audio data stored by the storage unit 50, and encoding them to generate video data in a video format.
  • FIG. 11 is a schematic diagram showing the electrical structure of a camera in an apparatus for generating a video picture according to an embodiment of the present invention.
  • the photographic lens 1211 may include a plurality of optical lenses that form a subject image, and may be a single focus lens or a zoom lens.
  • the photographic lens 1211 is movable in the optical axis direction under the control of the lens driver 1221. The lens driver 1221 controls the focus position of the photographic lens 1211 in accordance with a control signal from the lens driving control circuit 1222; in the case of a zoom lens, the focal distance can also be controlled.
  • the lens drive control circuit 1222 drives and controls the lens driver 1221 in accordance with a control command from the microcomputer 1217.
  • An imaging element 1212 is disposed on the optical axis of the photographic lens 1211 near the position of the subject image formed by the photographic lens 1211.
  • the imaging element 1212 is provided to image the subject image and acquire captured image data.
  • Photodiodes constituting each pixel are arranged two-dimensionally and in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, which is subjected to charge accumulation by a capacitor connected to each photodiode.
  • the front surface of each pixel is provided with a Bayer array of red, green, blue (abbreviation: RGB) color filters.
  • the imaging element 1212 is connected to an imaging circuit 1213, which performs charge accumulation control and image signal readout control in the imaging element 1212, reduces reset noise in the read image signal (an analog image signal), performs waveform shaping, and increases the gain to obtain an appropriate signal level.
  • the imaging circuit 1213 is connected to an analog-to-digital (A/D) converter 1214, which performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 1227.
  • the bus 1227 is a transmission path for various data read out or generated inside the camera.
  • the A/D converter 1214 is connected to the bus 1227, to which an image processor 1215, a JPEG processor 1216, a microcomputer 1217, a Synchronous Dynamic Random Access Memory (SDRAM) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and a liquid crystal display (LCD) driver 1220 are also connected.
  • the image processor 1215 performs OB (output buffer) subtraction processing, white balance adjustment, color matrix calculation, gamma conversion, color difference signal processing, and noise removal processing on the image data output by the imaging element 1212.
  • when image data is recorded on the recording medium 1225, the JPEG processor 1216 compresses the image data read out from the SDRAM 1218 in accordance with the JPEG compression method. In addition, the JPEG processor 1216 decompresses JPEG image data for image reproduction display:
  • the file recorded on the recording medium 1225 is read, decompression processing is performed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226.
  • the JPEG method is adopted as the image compression/decompression method.
  • the compression/decompression method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may be used.
  • the microcomputer 1217 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera.
  • the microcomputer 1217 is connected to the operation unit 1223 and the flash memory 1224.
  • the operation unit 1223 includes, but is not limited to, physical or virtual controls, which may be a power button, a camera button, an edit button, a dynamic image button, a reproduction button, a menu button, a cross key, an OK button, a delete button, an enlarge button, and other input buttons and keys; the operation unit detects the operation state of these controls and outputs the detection result to the microcomputer 1217.
  • in addition, a touch panel is provided on the front surface of the LCD 1226 serving as a display; it detects the position touched by the user and outputs that touch position to the microcomputer 1217.
  • the microcomputer 1217 executes various processing sequences corresponding to the user's operation in accordance with the detection results from the operation unit 1223.
  • the flash memory 1224 stores programs for executing various processing sequences of the microcomputer 1217.
  • the microcomputer 1217 performs overall control of the camera in accordance with the program. Further, the flash memory 1224 stores various adjustment values of the camera, and the microcomputer 1217 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
  • the SDRAM 1218 is provided as an electrically rewritable volatile memory that temporarily stores image data or the like.
  • the SDRAM 1218 temporarily stores image data output from the A/D converter 1214 and image data processed in the image processor 1215, the JPEG processor 1216, and the like.
  • the memory interface 1219 is connected to the recording medium 1225, and performs control for writing image data and a file header attached to the image data to the recording medium 1225 and reading out from the recording medium 1225.
  • the recording medium 1225 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body.
  • the recording medium 1225 is not limited thereto, and may be a hard disk or the like built in the camera body.
  • the LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218, then read out from the SDRAM 1218 and displayed on the LCD 1226; alternatively, the JPEG processor 1216 reads compressed image data stored in the SDRAM 1218, decompresses it, and the decompressed image data is displayed through the LCD 1226.
  • the LCD 1226 is configured to display an image on the back of the camera body.
  • the display is not limited to an LCD; it may also be implemented by other display panels such as an organic electroluminescence (EL) panel.
  • all or part of the steps of the above embodiments may also be implemented using integrated circuits. These steps may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module for implementation.
  • the devices/function modules/functional units in the above embodiments may be implemented by a general-purpose computing device, which may be centralized on a single computing device or distributed over a network of multiple computing devices.
  • when the devices/function modules/functional units in the above embodiments are implemented in the form of software function modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium.
  • the above-mentioned computer-readable storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
  • in summary, picture data is acquired by the picture data acquiring unit, video data is acquired by the video data acquiring unit, and the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit are encapsulated into one file by the synthesizing unit.
  • the technical solution provided by the embodiments of the present invention solves the problem in the related art that pictures and videos have independent storage files and display effects, resulting in a relatively monotonous display effect; it implements the function of synthesizing a picture and a video into one file, bringing more joy to users and improving the user experience.

Abstract

A method and device for generating a video image. The device comprises: an image data acquisition unit configured to acquire image data; a video data acquisition unit configured to acquire video data; and a synthesizing unit configured to encapsulate the image data acquired by the image data acquisition unit and the video data acquired by the video acquisition unit in one file.

Description

Method and device for generating a video picture

Technical field

This application relates to, but is not limited to, the technical field of photography.

Background

With the increasing popularity of smart terminals (such as smart phones), the shooting function has become an indispensable function of smart terminals; people can use it to record everyday moments, capture beautiful memories, and so on. Although taking photos with a smart terminal's camera is already very common in the related art, and the image processing functions of smart terminals are powerful, pictures and videos are usually two independent kinds of stored data, each with its own storage file and display effect.
发明内容Summary of the invention
以下是对本文详细描述的主题的概述。本概述并非是为了限制权利要求的保护范围。The following is an overview of the topics detailed in this document. This Summary is not intended to limit the scope of the claims.
本文提出一种生成视频图片的方法及装置,以解决相关技术中图片和视频分别具有独立的存储文件和显示效果,而导致显示效果较为单一的问题。In this paper, a method and a device for generating a video picture are proposed to solve the problem that the picture and the video respectively have independent storage files and display effects in the related art, and the display effect is relatively simple.
一种生成视频图片的装置,包括:图片数据获取单元、视频数据获取单元和合成单元;An apparatus for generating a video picture, comprising: a picture data acquiring unit, a video data acquiring unit, and a synthesizing unit;
其中,所述图片数据获取单元,设置为:获取图片数据;The image data acquiring unit is configured to: acquire image data;
所述视频数据获取单元,设置为:获取视频数据;The video data acquiring unit is configured to: acquire video data;
所述合成单元,设置为:将所述图片数据获取单元获取的图片数据和所述视频数据获取单元获取的视频数据封装到一个文件中。The synthesizing unit is configured to encapsulate the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into one file.
Optionally, the synthesizing unit being configured to encapsulate the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into one file includes:

creating a picture file;

writing the picture data and the video data into the created picture file; and

writing an identifier into the picture file.

Optionally, the picture data acquiring unit is further configured to acquire the data length of the picture data;

and the synthesizing unit being configured to write the picture data and the video data into the created picture file includes:

writing the picture data and the data length of the picture data acquired by the picture data acquiring unit, together with the video data acquired by the video data acquiring unit, into the created picture file.

Optionally, the picture file further includes one or more of the following: the data length of the picture data, the start position identifier of the picture data, and the start position identifier of the video data.

Optionally, the apparatus further includes:

a photographing unit, configured to take a picture and trigger the picture data acquiring unit to acquire the picture data when a photographing instruction is received.

Optionally, the apparatus further includes:

a storage unit, configured to store the acquired image data when the photographing unit is activated for framing.

Optionally, the apparatus further includes:

a shooting time acquiring unit, configured to acquire the time T at which the photographing unit takes a picture;

and the video data acquiring unit being configured to acquire video data includes:

acquiring the data from time T-T1 to time T+T2 in the image data stored by the storage unit; and

encoding the image data from time T-T1 to time T+T2 to generate video data in a video format.

Optionally, the apparatus further includes:

a storage unit, configured to store the acquired image data when the photographing unit is activated for framing; and

an audio data collecting unit, configured to collect audio data synchronized with the image data stored by the storage unit;

wherein the storage unit is further configured to store the audio data collected by the audio data collecting unit.

Optionally, the apparatus further includes:

a shooting time acquiring unit, configured to acquire the time T at which the photographing unit takes a picture;

and the video data acquiring unit being configured to acquire video data includes:

acquiring the data from time T-T1 to time T+T2 in the image data stored by the storage unit, and acquiring the audio data from time T-T1 to time T+T2 in the audio data stored by the storage unit; and

encoding the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 to generate video data in a video format.

Optionally, T1 is a first preset time interval and T2 is a second preset time interval.
A method for generating a video picture includes:

acquiring picture data;

acquiring video data; and

encapsulating the picture data and the video data into one file.

Optionally, encapsulating the picture data and the video data into one file includes:

creating a picture file;

writing the picture data and the video data into the created picture file; and

writing an identifier into the picture file.

Optionally, before writing the picture data and the video data into the created picture file, the method further includes:

acquiring the data length of the picture data;

and writing the picture data and the video data into the created picture file includes:

writing the picture data, the video data, and the data length of the picture data into the created picture file.

Optionally, the picture file further includes one or more of the following: the data length of the picture data, the start position identifier of the picture data, and the start position identifier of the video data.

Optionally, acquiring picture data includes:

taking a picture and acquiring the picture data when a photographing instruction is received.

Optionally, the method further includes:

storing the acquired image data when the photographing unit is activated for framing.

Optionally, the method further includes:

acquiring the time T at which the picture is taken;

and acquiring video data includes:

acquiring the data from time T-T1 to time T+T2 in the image data; and

encoding the image data from time T-T1 to time T+T2 to generate video data in a video format.

Optionally, the method further includes:

storing the acquired image data when the photographing unit is activated for framing; and

collecting audio data synchronized with the image data and storing the collected audio data.

Optionally, the method further includes:

acquiring the time T at which the picture is taken;

and acquiring video data includes:

acquiring the data from time T-T1 to time T+T2 in the image data, and acquiring the audio data from time T-T1 to time T+T2 in the audio data; and

encoding the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 to generate video data in a video format.

Optionally, T1 is a first preset time interval and T2 is a second preset time interval.
According to the method and device for generating a video picture proposed by the present invention, picture data is acquired by the picture data acquiring unit and video data is acquired by the video data acquiring unit, so that the synthesizing unit can encapsulate the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into one file. The technical solution provided by the embodiments of the present invention solves the problem in the related art that pictures and videos have independent storage files and display effects, resulting in a relatively monotonous display effect; it implements the function of synthesizing a picture and a video into one file, bringing more joy to users and improving the user experience.

Other aspects will be apparent upon reading and understanding the drawings and detailed description.
Brief description of the drawings

FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention;

FIG. 2 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of another method for generating a video picture according to an embodiment of the present invention;

FIG. 4 is a schematic flowchart of still another method for generating a video picture according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of video data time selection in a method for generating a video picture according to an embodiment of the present invention;

FIG. 6 is a schematic flowchart of yet another method for generating a video picture according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of a photographing interface of a mobile terminal in a method for generating a video picture according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of a video recording interface of another mobile terminal in a method for generating a video picture according to an embodiment of the present invention;

FIG. 9 is a schematic structural diagram of an apparatus for generating a video picture according to an embodiment of the present invention;

FIG. 10 is a schematic structural diagram of another apparatus for generating a video picture according to an embodiment of the present invention;

FIG. 11 is a schematic diagram of the electrical structure of a camera in an apparatus for generating a video picture according to an embodiment of the present invention.
本发明的实施方式Embodiments of the invention
应当理解,以下所描述的实施例仅仅用以解释本发明,并不用于限定本发明。It is to be understood that the embodiments described below are merely illustrative of the invention and are not intended to limit the invention.
下文中将结合附图对本发明的实施方式进行详细说明。需要说明的是,在不冲突的情况下,本文中的实施例及实施例中的特征可以相互任意组合。Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that, in the case of no conflict, the features in the embodiments and the embodiments herein may be arbitrarily combined with each other.
在附图的流程图示出的步骤可以在诸根据一组计算机可执行指令的计算机系统中执行。并且,虽然在流程图中示出了逻辑顺序,但是在某些情况 下,可以以不同于此处的顺序执行所示出或描述的步骤。The steps illustrated in the flowchart of the figures may be executed in a computer system in accordance with a set of computer executable instructions. And, although the logical order is shown in the flowchart, in some cases The steps shown or described may be performed in an order different from that herein.
现在将参考附图描述实现本发明各个实施例的移动终端。在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本发明的说明,其本身并没有特定的意义。因此,“模块”与“部件”可以混合地使用。A mobile terminal embodying various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, the use of suffixes such as "module", "component" or "unit" for indicating an element is merely an explanation for facilitating the present invention, and does not have a specific meaning per se. Therefore, "module" and "component" can be used in combination.
移动终端可以以各种形式来实施。例如,本发明中描述的终端可以包括诸如移动电话、智能电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、导航装置等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。下面,假设终端是移动终端。然而,本领域技术人员将理解的是,除了特别用于移动目的的元件之外,根据本发明的实施方式的构造也能够应用于固定类型的终端。The mobile terminal can be implemented in various forms. For example, the terminal described in the present invention may include, for example, a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (Tablet), a PMP (Portable Multimedia Player), a navigation device, etc. Mobile terminals and fixed terminals such as digital TVs, desktop computers, and the like. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will appreciate that configurations in accordance with embodiments of the present invention can be applied to fixed type terminals in addition to components that are specifically for mobile purposes.
图1为实现本发明各个实施例的移动终端的硬件结构示意。FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
移动终端100可以包括无线通信单元110、A/V(音频/视频)输入单元120、用户输入单元130、感测单元140、输出单元150、存储器160、接口单元170、控制器180和电源单元190等等。图1示出了具有各种组件的移动终端，但是应理解的是，并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端的元件。The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented; more or fewer components may be implemented instead. The elements of the mobile terminal will be described in detail below.
无线通信单元110通常包括一个或多个组件,其允许移动终端100与无线通信装置或网络之间的无线电通信。例如,无线通信单元可以包括广播接收模块111、移动通信模块112、无线互联网模块113、短程通信模块114和位置信息模块115中的至少一个。 Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication device or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
广播接收模块111经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和/或地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器或者接收之前生成的广播信号和/或广播相关信息并且将其发送给终端的服务器。广播信号可以包括TV广播信号、无线电广播信号、数据广播信号等等。而且，广播信号还可以包括与TV或无线电广播信号组合的广播信号。广播相关信息也可以经由移动通信网络提供，并且在该情况下，广播相关信息可以由移动通信模块112来接收。广播信号可以以各种形式存在，例如，其可以以数字多媒体广播(DMB)的电子节目指南(EPG)、数字视频广播手持(DVB-H)的电子服务指南(ESG)等等的形式而存在。广播接收模块111可以通过使用各种类型的广播装置接收信号广播。特别地，广播接收模块111可以通过使用诸如多媒体广播-地面(DMB-T)、数字多媒体广播-卫星(DMB-S)、数字视频广播-手持(DVB-H)，前向链路媒体(MediaFLO@)的数据广播装置、地面数字广播综合服务(ISDB-T)等等的数字广播装置接收数字广播。广播接收模块111可以被构造为适合提供广播信号的各种广播装置以及上述数字广播装置。经由广播接收模块111接收的广播信号和/或广播相关信息可以存储在存储器160(或者其它类型的存储介质)中。The broadcast receiving module 111 receives a broadcast signal and/or broadcast-associated information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast-associated information, or a server that receives a previously generated broadcast signal and/or broadcast-associated information and transmits it to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like. Moreover, the broadcast signal may also include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-associated information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB), an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H), and the like. The broadcast receiving module 111 may receive signal broadcasts by using various types of broadcast apparatuses. In particular, the broadcast receiving module 111 may receive digital broadcasts by using digital broadcast apparatuses such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcast apparatus of forward link media (MediaFLO@), integrated services digital broadcasting-terrestrial (ISDB-T), and the like. The broadcast receiving module 111 may be constructed to suit various broadcast apparatuses that provide broadcast signals as well as the above-described digital broadcast apparatuses. The broadcast signal and/or broadcast-associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
移动通信模块112将无线电信号发送到基站(例如,接入点、节点B等等)、外部终端以及服务器中的至少一个和/或从其接收无线电信号。这样的无线电信号可以包括语音通话信号、视频通话信号、或者根据文本和/或多媒体消息发送和/或接收的各种类型的数据。The mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
无线互联网模块113支持移动终端的无线互联网接入。该模块可以内部或外部地耦接到终端。该模块所涉及的无线互联网接入技术可以包括WLAN(无线LAN)(Wi-Fi)、Wibro(无线宽带)、Wimax(全球微波互联接入)、HSDPA(高速下行链路分组接入)等等。The wireless internet module 113 supports wireless internet access of the mobile terminal. The module can be internally or externally coupled to the terminal. The wireless Internet access technologies involved in the module may include WLAN (Wireless LAN) (Wi-Fi), Wibro (Wireless Broadband), Wimax (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), etc. .
短程通信模块114是用于支持短程通信的模块。短程通信技术的一些示例包括蓝牙TM、射频识别(RFID)、红外数据协会(IrDA)、超宽带(UWB)、紫蜂TM等等。The short range communication module 114 is a module for supporting short range communication. Some examples of short-range communication technologies include BluetoothTM, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wide Band (UWB), ZigbeeTM, and the like.
位置信息模块115是用于检查或获取移动终端的位置信息的模块。位置信息模块的典型示例是GPS(全球定位装置)。根据相关的技术,GPS模块115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法,从而根据经度、纬度和高度准确地计算三维当前位置信息。相关技术中,用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外,GPS模块115能够通过实时地连续计算当前位置信息来计算速度信息。The location information module 115 is a module for checking or acquiring location information of the mobile terminal. A typical example of a location information module is a GPS (Global Positioning Device). According to the related art, the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information according to longitude, latitude, and altitude. In the related art, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information by using another satellite. Further, the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
A/V输入单元120用于接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风122，相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160(或其它存储介质)中或者经由无线通信单元110进行发送，可以根据移动终端的构造提供两个或更多相机121。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音(音频数据)，并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由移动通信模块112发送到移动通信基站的格式输出。麦克风122可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。The A/V input unit 120 is configured to receive audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal. The microphone 122 may receive sound (audio data) in operation modes such as a telephone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data. In the case of the telephone call mode, the processed audio (voice) data may be converted for output into a format transmittable to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、锅仔片、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时,可以形成触摸屏。The user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a pot, a touch pad (eg, a touch sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel , rocker, etc. In particular, when the touch panel is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
感测单元140检测移动终端100的当前状态(例如，移动终端100的打开或关闭状态)、移动终端100的位置、用户对于移动终端100的接触(即，触摸输入)的有无、移动终端100的取向、移动终端100的加速或减速移动和方向等等，并且生成用于控制移动终端100的操作的命令或信号。例如，当移动终端100实施为滑动型移动电话时，感测单元140可以感测该滑动型电话是打开还是关闭。另外，感测单元140能够检测电源单元190是否提供电力或者接口单元170是否与外部装置耦接。感测单元140可以包括接近传感器141，将在下面结合触摸屏来对此进行描述。The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled to an external device. The sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如，外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别模块可以存储用于验证用户使用移动终端100的各种信息，并且可以包括用户识别模块(UIM)、客户识别模块(SIM)、通用客户识别模块(USIM)等等。另外，具有识别模块的装置(下面称为“识别装置”)可以采取智能卡的形式，因此，识别装置可以经由端口或其它连接装置与移动终端100连接。接口单元170可以用于接收来自外部装置的输入(例如，数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件，或者可以用于在移动终端和外部装置之间传输数据。The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter referred to as an "identification device") may take the form of a smart card; therefore, the identification device can be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal and the external device.
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端是否准确地安装在底座上的信号。输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、警报信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出模块152、警报单元153等等。In addition, when the mobile terminal 100 is connected to the external base, the interface unit 170 may function as a path through which power is supplied from the base to the mobile terminal 100 or may be used as a transmission of various command signals allowing input from the base to the mobile terminal 100 The path to the terminal. Various command signals or power input from the base can be used as signals for identifying whether the mobile terminal is accurately mounted on the base. Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通信(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为TOLED(透明有机发光二极管)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力以及触摸输入位置和触摸输入面积。Meanwhile, when the display unit 151 and the touch panel are superposed on each other in the form of a layer to form a touch screen, the display unit 151 can function as an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like. According to a particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) . The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
音频输出模块152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时，将无线通信单元110接收的或者在存储器160中存储的音频数据转换为音频信号并且输出为声音。而且，音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出(例如，呼叫信号接收声音、消息接收声音等等)。音频输出模块152可以包括拾音器、蜂鸣器等等。The audio output module 152 may, when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a particular function performed by the mobile terminal 100 (e.g., call signal reception sound, message reception sound, etc.). The audio output module 152 may include a pickup, a buzzer, and the like.
警报单元153可以提供输出以将事件的发生通知给移动终端100。典型的事件可以包括呼叫接收、消息接收、键信号输入、触摸输入等等。除了音频或视频输出之外,警报单元153可以以不同的方式提供输出以通知事件的发生。例如,警报单元153可以以振动的形式提供输出,当接收到呼叫、消息或一些其它进入通信(incoming communication)时,警报单元153可以提供触觉输出(即,振动)以将其通知给用户。通过提供这样的触觉输出,即使在用户的移动电话处于用户的口袋中时,用户也能够识别出各种事件的发生。警报单元153也可以经由显示单元151或音频输出模块152提供通知事件的发生的输出。The alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alert unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations, and when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (ie, vibration) to notify the user of it. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide an output of the notification event occurrence via the display unit 151 or the audio output module 152.
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等,或者可以暂时地存储己经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。The memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static random access memory ( SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. Moreover, the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
控制器180通常控制移动终端的总体操作。例如,控制器180执行与语音通话、数据通信、视频通话等等相关的控制和处理。另外,控制器180可以包括用于再现(或回放)多媒体数据的多媒体模块181,多媒体模块181可以构造在控制器180内,或者可以构造为与控制器180分离。控制器180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate the respective elements and components.
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施，这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施，在一些情况下，这样的实施方式可以在控制器180中实施。对于软件实施，诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施，软件代码可以存储在存储器160中并且由控制器180执行。The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as procedures or functions may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
至此,己经按照其功能描述了移动终端。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种类型的移动终端中的滑动型移动终端作为示例。因此,本发明能够应用于任何类型的移动终端,并且不限于滑动型移动终端。So far, the mobile terminal has been described in terms of its function. Hereinafter, for the sake of brevity, a slide type mobile terminal among various types of mobile terminals such as a folding type, a bar type, a swing type, a slide type mobile terminal, and the like will be described as an example. Therefore, the present invention can be applied to any type of mobile terminal, and is not limited to a slide type mobile terminal.
如图2所示,为本发明实施例提供的一种生成视频图片的方法的流程示意图,本实施例提供的生成视频图片的方法应用于智能终端,该智能终端例如包括智能手机、平板电脑等,该方法可以包括如下步骤,即S110~S130:FIG. 2 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present invention. The method for generating a video picture provided in this embodiment is applied to an intelligent terminal, where the smart terminal includes, for example, a smart phone, a tablet computer, and the like. The method may include the following steps, namely, S110 to S130:
S110、获取图片数据。S110. Acquire image data.
本发明实施例在S110中,图片数据的来源可以是通过拍摄单元进行捕获的图片数据,也可以是终端内保存的图片,还可以是存储在服务器上的图片。例如,用户可以打开移动终端的摄像头,通过摄像头拍摄照片获取图片数据;也可以选取移动终端内保存的图片,通过相应的模块获取图片数据;还可以通过网络获取服务器的上图片,以读取图片数据。In the embodiment of the present invention, the source of the picture data may be the picture data captured by the shooting unit, the picture saved in the terminal, or the picture stored on the server. For example, the user can open the camera of the mobile terminal, take a photo by the camera to obtain the image data, or select the image saved in the mobile terminal, obtain the image data through the corresponding module, and obtain the image of the server through the network to read the image. data.
S120、获取视频数据。S120. Acquire video data.
在本发明实施例中，视频数据的来源可以是多样的，例如，可以通过移动终端的摄像头预览数据来收集视频数据，也可以是通过移动终端的摄像功能拍摄的视频数据，还可以是已保存在移动终端内(或其他存储器内)的视频数据。In the embodiment of the present invention, the source of the video data may be diverse; for example, the video data may be collected from the camera preview data of the mobile terminal, may be video data captured by the camera function of the mobile terminal, or may be video data already saved in the mobile terminal (or other memory).
S130、将图片数据和视频数据封装到一个文件中。S130. Encapsulate picture data and video data into a file.
本发明实施例在S130中，通过将获取到的图片数据和视频数据封装到一个文件中，把图片数据和视频数据相关联起来，生成一个新的文件，达到查看照片的时候可以播放相关联的视频数据的效果。In S130 of the embodiment of the present invention, the acquired picture data and video data are encapsulated into one file, so that the picture data and the video data are associated with each other and a new file is generated, achieving the effect that the associated video data can be played when the photo is viewed.
本发明提出的生成视频图片的方法，通过获取图片数据，以及获取视频数据，从而将图片数据和视频数据封装到一个文件中；通过本发明实施例提供的技术方案，解决了相关技术中图片和视频分别具有独立的存储文件和显示效果，而导致显示效果较为单一的问题，实现了把图片和视频合成一个文件的功能，为用户带来更多欢乐，提高用户的体验。In the method for generating a video picture proposed by the present invention, picture data and video data are acquired and encapsulated into one file. The technical solution provided by the embodiments of the present invention solves the problem in the related art that pictures and videos have separate storage files and display effects, resulting in a relatively monotonous display effect; it realizes the function of combining a picture and a video into one file, bringing more joy to users and improving the user experience.
可选地，图3为本发明实施例提供的另一种生成视频图片的方法的流程示意图，本实施例中详细说明将图片数据和视频数据封装到一个文件中的实现方式，即在图2所示实施例的基础上，本实施例中的S130可以包括如下步骤，即S131~S133：Optionally, FIG. 3 is a schematic flowchart of another method for generating a video picture according to an embodiment of the present invention. This embodiment details an implementation of encapsulating the picture data and the video data into one file; that is, on the basis of the embodiment shown in FIG. 2, S130 in this embodiment may include the following steps, namely S131 to S133:
S131、创建一个图片文件。S131. Create an image file.
创建一个图片文件,创建的图片文件以标准图片格式保存,例如可以保存为:.jpg、.jpeg、.gif、.png、.bmp等格式。Create an image file, the created image file is saved in standard image format, for example, can be saved as: .jpg, .jpeg, .gif, .png, .bmp and other formats.
S132、将图片数据和视频数据写入所创建的图片文件中。S132. Write image data and video data into the created image file.
本实施例在图片文件的数据基础上，添加额外数据，例如可以包括：视频数据，以及图片数据的数据长度，或者图片数据的起始位置标识符，或者视频数据的起始位置标识符，保证了图片文件标准格式没有被破坏，并以标准图片格式保存，例如可以保存为：.jpg、.jpeg、.gif、.png、.bmp等格式，从而使任何终端都可以预览添加额外数据前的原图片文件。In this embodiment, additional data is added on the basis of the data of the picture file; for example, it may include the video data, as well as the data length of the picture data, or a start position identifier of the picture data, or a start position identifier of the video data. This ensures that the standard format of the picture file is not destroyed, and the file is still saved in a standard picture format, e.g., .jpg, .jpeg, .gif, .png, or .bmp, so that any terminal can preview the original picture file as it was before the additional data was added.
S133、在图片文件中写入标识符。S133. Write an identifier in the picture file.
在本实施例中，该标识符用于表明图片文件为视频图片。若图片文件的文件格式信息中包含视频数据的标识符，则该图片文件为视频图片文件；若图片文件的文件格式信息中仅包含文件头以及图片数据的相关信息，则该图片文件是普通图片文件。这样，当终端读取到标识符确定一个图片文件为视频图片文件时，可以从视频图片文件中读取图片数据，将读取到的图片文件的图片数据发送给图片播放器并提示图片播放器进行播放；并且，还可以根据图片数据的数据长度和/或图片数据的起始位置标志符，移动到视频数据的起始位置，读取视频文件数据，将读取到的所述视频文件的数据发送给视频播放器并提示视频播放器进行播放。In this embodiment, the identifier is used to indicate that the picture file is a video picture. If the file format information of the picture file contains the identifier of the video data, the picture file is a video picture file; if the file format information of the picture file contains only the file header and information related to the picture data, the picture file is an ordinary picture file. In this way, when the terminal reads the identifier and determines that a picture file is a video picture file, it can read the picture data from the video picture file, send the read picture data to a picture player, and prompt the picture player to play it; in addition, it can move to the start position of the video data according to the data length of the picture data and/or the start position identifier of the picture data, read the video file data, send the read video file data to a video player, and prompt the video player to play it.
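The branching described here (identifier present → video picture file; identifier absent → ordinary picture file) can be sketched as a small parser. The trailer layout assumed below, an 8-byte little-endian picture-data length followed by a magic string at the end of the file, is invented for illustration; the patent does not fix a concrete byte layout or identifier value.

```python
import struct

MAGIC = b"VIDPIC01"  # hypothetical identifier; the patent does not specify one


def split_video_picture(data):
    """Return (picture_bytes, video_bytes_or_None) for a file's contents.

    An ordinary picture has no trailer, so it is returned unchanged with
    the video part set to None.
    """
    if not data.endswith(MAGIC):
        return data, None                      # ordinary picture file
    trailer = 8 + len(MAGIC)                   # length field + identifier
    (pic_len,) = struct.unpack("<Q", data[-trailer:-len(MAGIC)])
    return data[:pic_len], data[pic_len:-trailer]
```

The picture bytes would be handed to the picture player, and the video bytes (when present) to the video player, as described above.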
可选地,本实施例在S132之前,还可以包括:获取图片数据的数据长度;相应地,本实施例中的S132可以包括:将图片数据、视频数据和图片数据的数据长度写入所创建的图片文件中。另外,本发明实施例中图片文件中还可以写入图片数据的起始位置标示符或/和视频数据的起始位置标识符。Optionally, before the S132, the embodiment may further include: acquiring a data length of the picture data; correspondingly, the S132 in the embodiment may include: writing the data length of the picture data, the video data, and the picture data to be created. In the picture file. In addition, in the picture file in the embodiment of the present invention, the start position identifier of the picture data or/and the start position identifier of the video data may also be written.
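The writing side of S131–S133 (picture data first, then video data, then the picture-data length and an identifier) can be sketched as follows. This is a minimal illustration rather than the patent's actual format: the `VIDPIC01` magic string, the little-endian 8-byte length field, and the function name are all assumptions made for the example. Because the extra bytes come after the end of the standard image data, an ordinary image viewer still displays the picture.

```python
import struct

MAGIC = b"VIDPIC01"  # hypothetical identifier marking a video picture


def write_video_picture(path, picture_data, video_data):
    """Append video bytes and a small trailer after the standard image bytes.

    Image viewers stop at the end of the image data, so the file still
    previews as a normal picture; the trailer lets a reader seek straight
    to the video start without rescanning the image.
    """
    with open(path, "wb") as f:
        f.write(picture_data)                          # standard picture first
        f.write(video_data)                            # video appended after it
        f.write(struct.pack("<Q", len(picture_data)))  # picture-data length
        f.write(MAGIC)                                 # identifier at the end
```

Storing the length at a fixed offset from the end of the file plays the role of the "start position identifier" mentioned above: the video start is simply the picture-data length.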
可选地,图4为本发明实施例提供的又一种生成视频图片的方法的流程示意图,本实施例提供的方法可以包括如下步骤,即S210~S260:Optionally, FIG. 4 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present invention. The method provided in this embodiment may include the following steps, that is, S210-S260:
S210、当启动拍摄单元进行取景时,存储获取到的图像数据。S210. Store the acquired image data when the shooting unit is activated to perform framing.
当检测到移动终端处于拍摄取景状态时，拍摄单元可以获取到拍摄对象的图像数据，拍摄单元获取图像数据的方式为，可以通过内部接口把图像数据发送给移动终端内的存储单元，以供后续执行步骤利用。可以理解的是，本实施例在S210中，可以将图像数据存储到移动终端的存储卡里，也可以暂时存储在移动终端的缓存中，本发明实施例对此不作限制。When it is detected that the mobile terminal is in the shooting/framing state, the shooting unit can acquire image data of the subject; the shooting unit may send the image data through an internal interface to a storage unit in the mobile terminal for use in subsequent steps. It can be understood that, in S210 of this embodiment, the image data may be stored on the memory card of the mobile terminal, or may be temporarily stored in the cache of the mobile terminal, which is not limited in this embodiment of the present invention.
图7为本发明实施例提供的生成视频图片的方法中一种移动终端的拍照界面的示意图，图8为本发明实施例提供的生成视频图片的方法中另一种移动终端的录制视频界面的示意图，如图7所示，示出了移动终端的显示界面为摄像模式下的拍照界面，如图8所示，示出了移动终端的显示界面为摄像模式下的录制视频界面。FIG. 7 is a schematic diagram of a photographing interface of a mobile terminal in the method for generating a video picture according to an embodiment of the present invention, and FIG. 8 is a schematic diagram of a video-recording interface of a mobile terminal in the method for generating a video picture according to an embodiment of the present invention. As shown in FIG. 7, the display interface of the mobile terminal is the photographing interface in camera mode; as shown in FIG. 8, the display interface of the mobile terminal is the video-recording interface in camera mode.
S220、在接收到拍照指令时,拍摄图片并获取图片数据。S220. When receiving a photographing instruction, take a picture and obtain picture data.
在移动终端接收到拍照指令时，拍摄图片，根据拍摄的图片获取到图片数据。When the mobile terminal receives the photographing instruction, a picture is taken, and the picture data is obtained from the captured picture.
S230、获取拍摄图片的时刻T。S230. Obtain a time T at which a picture is taken.
在移动终端接收到拍照指令拍摄图片时,移动终端会记录下拍摄图片的时刻T,当需要时,可以读取记录文件获取此数据。When the mobile terminal receives the photographing instruction to take a picture, the mobile terminal records the time T at which the picture is taken, and when necessary, the recording file can be read to obtain the data.
S240、获取图像数据中T-T1时刻到T+T2时刻的图像数据。S240. Acquire image data of the image data from the time T-T1 to the time T+T2.
当用户触发拍照指令时，通常是由于当前拍摄的景象是其喜欢的，为了避免出现相关技术中的由于拍摄延迟导致错过精彩画面的情况，可以选取拍照时刻(即时刻T)的前后一段时间的图像数据作为视频数据加入到图片文件中。When the user triggers the photographing instruction, it is usually because the user likes the scene currently being photographed. To avoid the related-art situation in which a highlight moment is missed due to shooting delay, image data from a period before and after the photographing time (i.e., time T) may be selected and added to the picture file as video data.
请参见图5,为本发明实施例提供的生成视频图片的方法中一种视频数据时间选取的示意图。本实施例在S210中,已存储了移动终端的拍摄单元取景时获取到的图像数据,根据在S230中获取到的拍摄图片的时刻T,截取从T-T1时刻到T+T2时刻的图像数据。其中,本实施例中的T1为第一预设值,T2为第二预设值。FIG. 5 is a schematic diagram of time selection of a video data in a method for generating a video picture according to an embodiment of the present invention. In the embodiment, in S210, the image data acquired by the shooting unit of the mobile terminal is stored, and the image data from the time T-T1 to the time T+T2 is intercepted according to the time T of the captured picture acquired in S230. . The T1 in this embodiment is a first preset value, and T2 is a second preset value.
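The selection of frames from time T-T1 to time T+T2 out of the continuously stored viewfinder data can be pictured as a rolling buffer of timestamped frames. This is a hedged sketch: the class name, the retention window of 10 seconds, and the use of floating-point second timestamps are assumptions made for the example, not details given in the patent.

```python
from collections import deque


class FrameBuffer:
    """Rolling buffer of timestamped viewfinder frames.

    Only the most recent `keep_seconds` of frames are retained, which
    bounds memory while still covering the window around the shutter time.
    """

    def __init__(self, keep_seconds=10.0):
        self.keep_seconds = keep_seconds
        self.frames = deque()            # (timestamp, frame) pairs, oldest first

    def push(self, t, frame):
        """Store a new viewfinder frame and drop frames outside the window."""
        self.frames.append((t, frame))
        while self.frames and t - self.frames[0][0] > self.keep_seconds:
            self.frames.popleft()

    def clip(self, T, t1, t2):
        """Frames captured between T - t1 and T + t2 around shutter time T."""
        return [f for (t, f) in self.frames if T - t1 <= t <= T + t2]
```

In terms of the figure, `clip(T, T1, T2)` returns exactly the image data that S240 extracts and that S250 then encodes into video data.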
S250. Encode the image data from time T-T1 to time T+T2 to generate video data in a video format.
After the image data from time T-T1 to time T+T2 is acquired, it can be encoded into video data in a video format by an encoding tool in the mobile terminal, and the encoded video data is output. In S250, the image data can be encoded into a common video format, for example video/avc, video/3gpp, or video/mp4v-es; the video encoding method used can be a general-purpose encoding technique in the related art.
S260. Encapsulate the picture data and the video data into one file.
For the implementation of S260 in this embodiment, reference may be made to S131-S133 in the embodiment shown in FIG. 3, so the details are not repeated here.
Optionally, FIG. 6 is a schematic flowchart of still another method for generating a video picture according to an embodiment of the present invention. The method for generating a video picture provided in this embodiment may include the following steps S310-S360:
S310. When the shooting unit is started for framing, store the acquired image data, collect audio data synchronized with the image data, and store the collected audio data.
When it is detected that the mobile terminal is in the framing state, the shooting unit can acquire image data of the subject; the shooting unit does so by sending the image data through an internal interface to a storage unit in the mobile terminal for use in the subsequent steps. In this embodiment, while the image data is being acquired, audio data synchronized with it can be collected through an audio device of the mobile terminal (such as a microphone), and the collected audio data is stored.
It can be understood that, in S310 of this embodiment, the image data may be stored on the memory card of the mobile terminal or temporarily stored in the cache of the mobile terminal; this is not limited in the embodiments of the present invention.
S320. When a photographing instruction is received, take a picture and obtain picture data.
When the mobile terminal receives the photographing instruction, it takes a picture and obtains the picture data from the captured picture.
S330. Obtain the time T at which the picture is taken.
When the mobile terminal receives the photographing instruction and takes a picture, it records the time T at which the picture was taken; when this time is needed, it can be obtained by reading the record file.
S340. Acquire the image data from time T-T1 to time T+T2, and acquire the audio data from time T-T1 to time T+T2.
When a user triggers the photographing instruction, it is usually because the user likes the scene currently being shot. To avoid the situation in the related art where a highlight is missed because of shooting delay, the image data and audio data from a period before and after the photographing time (that is, time T) can be selected, synthesized into video data, and added to the picture file.
The schematic diagram of video-data time selection shown in FIG. 5 also applies here; the audio-data time selection is performed in the same way as the video-data time selection shown in FIG. 5, so it is not repeated. In S310 of this embodiment, the image data acquired while the shooting unit of the mobile terminal was framing has already been stored; according to the time T of the captured picture obtained in S330, the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 are extracted. In this embodiment, T1 is a first preset value and T2 is a second preset value.
S350. Encode the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 to generate video data in a video format.
After the image data and audio data from time T-T1 to time T+T2 are acquired, they can be encoded into video data in a video format by an encoding tool in the mobile terminal, and the encoded video data is output. In S350, the data can be encoded into a common video format, for example video/avc, video/3gpp, or video/mp4v-es; the video encoding method used can be a general-purpose encoding technique in the related art.
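The point of S340-S350 is that both streams are cut over the same window so they stay aligned before encoding. The sketch below is an assumption-laden illustration, not the patent's implementation: the stream layout is invented here, and `encode_stub` merely stands in for the terminal's real encoding tool (which would produce, e.g., video/avc samples muxed with the audio track).

```python
def cut_synchronized(image_frames, audio_chunks, t, t1, t2):
    """Cut both streams over the same [t - t1, t + t2] window so they stay in sync.

    Each stream is a list of (timestamp_seconds, payload) tuples.
    """
    lo, hi = t - t1, t + t2
    images = [f for f in image_frames if lo <= f[0] <= hi]
    audio = [a for a in audio_chunks if lo <= a[0] <= hi]
    return images, audio

def encode_stub(images, audio):
    # Stand-in for the mobile terminal's encoder; it only reports what a
    # real encoder would be handed for this window.
    return {"video_frames": len(images), "audio_chunks": len(audio)}

# Hypothetical streams: image frames every 1 s, audio chunks every 0.5 s.
images = [(float(i), f"img{i}".encode()) for i in range(4)]
audio = [(i * 0.5, b"pcm") for i in range(7)]

win_images, win_audio = cut_synchronized(images, audio, t=2.0, t1=1.0, t2=1.0)
summary = encode_stub(win_images, win_audio)
```

Because a single pair of bounds drives both comprehensions, no image frame can end up in the encoded clip without the audio recorded at the same moment.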
S360. Encapsulate the picture data and the video data into one file.
For the implementation of S360 in this embodiment, reference may be made to S131-S133 in the embodiment shown in FIG. 3, so the details are not repeated here.
Based on the above method embodiments, an embodiment of the present invention further provides an apparatus for generating a video picture. FIG. 9 is a schematic structural diagram of an apparatus for generating a video picture according to an embodiment of the present invention. The apparatus may be provided in a smart terminal, which may be a smartphone, a tablet computer, or the like. The apparatus for generating a video picture provided in this embodiment may include: a picture data acquiring unit 10, a video data acquiring unit 20, and a synthesizing unit 30.
The picture data acquiring unit 10 is configured to acquire picture data.
The video data acquiring unit 20 is configured to acquire video data.
The synthesizing unit 30 is configured to encapsulate the picture data acquired by the picture data acquiring unit 10 and the video data acquired by the video data acquiring unit 20 into one file.
In the embodiments of the present invention, the source of the picture data acquired by the picture data acquiring unit 10 may be picture data captured by a shooting unit, a picture saved in the terminal, or a picture stored on a server. For example, the user may open the camera of the terminal and take a photo to obtain the picture data; may select a picture saved in the terminal and obtain the picture data through a corresponding module; or may obtain a picture from a server through the network and read its picture data.
The sources from which the video data acquiring unit 20 acquires video data may likewise be various: for example, the video data may be collected from the camera preview data of the terminal, may be video data captured by the video-recording function of the terminal, or may be video data already saved in the terminal (or in other storage).
By encapsulating the acquired picture data and video data into one file, the synthesizing unit 30 associates the picture data with the video data and generates a new file, so that the associated video data can be played when the photo is viewed.
Optionally, in the embodiments of the present invention, the synthesizing unit 30 being configured to encapsulate the picture data acquired by the picture data acquiring unit 10 and the video data acquired by the video data acquiring unit 20 into one file includes:
A. Create a picture file.
The created picture file is saved in a standard picture format; for example, it can be saved as .jpg, .jpeg, .gif, .png, or .bmp.
B. Write the picture data and the video data into the created picture file.
In this embodiment, additional data is appended to the data of the picture file; it may include, for example, the video data, together with the data length of the picture data, or a start position identifier of the picture data, or a start position identifier of the video data. This ensures that the standard format of the picture file is not broken and that the file can still be saved in a standard picture format such as .jpg, .jpeg, .gif, .png, or .bmp, so that any terminal can preview the original picture file as it was before the additional data was added.
C. Write an identifier into the picture file.
In this embodiment, the identifier indicates that the picture file is a video picture. If the file format information of a picture file contains the video-data identifier, the picture file is a video picture file; if the file format information contains only the file header and the information related to the picture data, the picture file is an ordinary picture file. In this way, when the terminal reads the identifier and determines that a picture file is a video picture file, it can read the picture data from the video picture file, send the read picture data to a picture player, and prompt the picture player to play it; furthermore, according to the data length of the picture data and/or the start position identifier of the picture data, it can move to the start position of the video data, read the video data, send the read video data to a video player, and prompt the video player to play it.
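The container described in steps A-C can be sketched in a few lines. The byte layout below (picture bytes first, then the video bytes, then a fixed 8-byte picture length and a trailing magic tag) is an assumption for illustration only, not the patent's actual format; it merely demonstrates why a standard viewer still sees a valid picture while an aware reader can use the identifier and the recorded length to locate the appended video.

```python
import struct

MAGIC = b"VIDPIC01"  # hypothetical trailing identifier marking a video picture


def pack_video_picture(picture: bytes, video: bytes) -> bytes:
    # The picture bytes come first, so any viewer that stops at the picture's
    # own end-of-data marker still decodes the file as an ordinary image.
    return picture + video + struct.pack("<Q", len(picture)) + MAGIC


def unpack_video_picture(blob: bytes):
    """Return (picture_bytes, video_bytes); video_bytes is None for plain pictures."""
    if not blob.endswith(MAGIC):
        return blob, None  # no identifier: an ordinary picture file
    (pic_len,) = struct.unpack("<Q", blob[-16:-8])
    return blob[:pic_len], blob[pic_len:-16]


packed = pack_video_picture(b"\xff\xd8...jpeg...\xff\xd9", b"...mp4...")
pic, vid = unpack_video_picture(packed)
```

A reader that does not know the format ignores the trailer entirely; a reader that finds `MAGIC` seeks past `pic_len` bytes to hand the remainder to a video player, which is exactly the dispatch behavior the paragraph above describes.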
Optionally, the picture data acquiring unit 10 in this embodiment is further configured to acquire the data length of the picture data. Correspondingly, the synthesizing unit 30 in this embodiment writing the picture data and the video data into the picture file may include: writing the picture data acquired by the picture data acquiring unit 10, the data length of the picture data, and the video data acquired by the video data acquiring unit 20 into the created picture file. In addition, in the embodiments of the present invention, the start position identifier of the picture data and/or the start position identifier of the video data may also be written into the picture file.
Optionally, FIG. 10 is a schematic structural diagram of another apparatus for generating a video picture according to an embodiment of the present invention. On the basis of the embodiment shown in FIG. 9, the apparatus for generating a video picture provided in this embodiment may further include:
a shooting unit 40, configured to: when a photographing instruction is received, take a picture and trigger the picture data acquiring unit 10 to acquire the picture data.
In an optional implementation of the embodiments of the present invention, the apparatus provided in this embodiment may further include: a storage unit 50, configured to store the acquired image data when the shooting unit 40 is started for framing.
Optionally, the apparatus provided in this embodiment may further include:
a shooting time acquiring unit 60, configured to acquire the time T at which the shooting unit 40 takes a picture.
Correspondingly, the video data acquiring unit 20 in this embodiment being configured to acquire video data includes:
acquiring the data from time T-T1 to time T+T2 in the image data stored by the storage unit 50; and
encoding the image data from time T-T1 to time T+T2 to generate video data in a video format.
In another optional implementation of the embodiments of the present invention, the apparatus provided in this embodiment may further include an audio data collecting unit 70.
The storage unit 50 in this embodiment is further configured to store the acquired image data when the shooting unit 40 is started for framing;
the audio data collecting unit 70 is configured to collect audio data synchronized with the image data stored by the storage unit 50; and
the storage unit 50 is further configured to store the audio data collected by the audio data collecting unit 70.
The shooting time acquiring unit 60 in this embodiment is configured to acquire the time T at which the shooting unit 40 takes a picture.
Correspondingly, the video data acquiring unit 20 in this embodiment being configured to acquire video data includes:
acquiring the data from time T-T1 to time T+T2 in the image data stored by the storage unit 50, and acquiring the audio data from time T-T1 to time T+T2 in the audio data stored by the storage unit 50; and
encoding the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 to generate video data in a video format.
FIG. 11 is a schematic diagram of the electrical structure of a camera in the apparatus for generating a video picture according to an embodiment of the present invention.
The photographic lens 1211 may include a plurality of optical lenses that form a subject image, and may be a single-focus lens or a zoom lens. The photographic lens 1211 can move in the optical-axis direction under the control of a lens driver 1221. The lens driver 1221 controls the focus position of the photographic lens 1211 according to a control signal from a lens drive control circuit 1222 and, in the case of a zoom lens, can also control the focal distance. The lens drive control circuit 1222 drives and controls the lens driver 1221 according to control commands from a microcomputer 1217.
An imaging element 1212 is arranged on the optical axis of the photographic lens 1211, near the position of the subject image formed by the photographic lens 1211. The imaging element 1212 is configured to image the subject and acquire captured image data. Photodiodes constituting the pixels are arranged two-dimensionally in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the charge of this current is accumulated by a capacitor connected to the photodiode. The front surface of each pixel is provided with a Bayer-arranged red, green, blue (RGB) color filter.
The imaging element 1212 is connected to an imaging circuit 1213, which performs charge accumulation control and image signal readout control in the imaging element 1212, reduces reset noise in the read-out image signal (for example, an analog image signal), performs waveform shaping, and then increases the gain to obtain an appropriate signal level.
The imaging circuit 1213 is connected to an analog-to-digital (A/D) converter 1214, which performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to a bus 1227.
The bus 1227 is a transmission path configured to transmit various data read out or generated inside the camera. Connected to the bus 1227 are the above A/D converter 1214, as well as an image processor 1215, a JPEG processor 1216, the microcomputer 1217, a synchronous dynamic random access memory (SDRAM) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and a liquid crystal display (LCD) driver 1220.
The image processor 1215 performs various kinds of image processing on the image data based on the output of the imaging element 1212, such as output buffer (OB) subtraction, white balance adjustment, color matrix operation, gamma conversion, color difference signal processing, noise removal, synchronization, and edge processing. When image data is recorded on a recording medium 1225, the JPEG processor 1216 compresses the image data read out from the SDRAM 1218 according to the JPEG compression method. In addition, the JPEG processor 1216 decompresses JPEG image data for image reproduction and display. During decompression, a file recorded on the recording medium 1225 is read out, decompression is performed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on an LCD 1226. In this implementation, the JPEG method is used as the image compression/decompression method; however, the compression/decompression method is not limited to this, and other compression/decompression methods such as MPEG, TIFF, and H.264 may of course be used.
The microcomputer 1217 functions as the control unit of the whole camera and uniformly controls the various processing sequences of the camera. The microcomputer 1217 is connected to an operation unit 1223 and a flash memory 1224.
The operation unit 1223 includes, but is not limited to, physical or virtual keys. These physical or virtual keys may be operation controls such as various input buttons and input keys, including a power button, a photographing key, an editing key, a moving-image button, a reproduction button, a menu button, a cross key, an OK button, a delete button, and an enlarge button; the operation states of these operation controls are detected.
The detection results are output to the microcomputer 1217. In addition, a touch panel is provided on the front surface of the LCD 1226, which serves as a display; the touch position of the user is detected and output to the microcomputer 1217. The microcomputer 1217 executes various processing sequences corresponding to the user's operation according to the detection results from the operation unit 1223.
The flash memory 1224 stores programs for executing the various processing sequences of the microcomputer 1217. The microcomputer 1217 controls the whole camera according to these programs. In addition, the flash memory 1224 stores various adjustment values of the camera; the microcomputer 1217 reads out the adjustment values and controls the camera according to them.
The SDRAM 1218 is an electrically rewritable volatile memory configured to temporarily store image data and the like. The SDRAM 1218 temporarily stores the image data output from the A/D converter 1214 and the image data processed in the image processor 1215, the JPEG processor 1216, and the like.
The memory interface 1219 is connected to the recording medium 1225 and controls the writing of image data and data such as file headers attached to the image data to the recording medium 1225, as well as their reading from the recording medium 1225. The recording medium 1225 is, for example, a recording medium such as a memory card that can be freely attached to and detached from the camera body; however, it is not limited to this and may also be a hard disk or the like built into the camera body.
The LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218; when display is needed, the image data stored in the SDRAM 1218 is read and displayed on the LCD 1226. Alternatively, image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218; when display is needed, the JPEG processor 1216 reads the compressed image data from the SDRAM 1218 and decompresses it, and the decompressed image data is displayed on the LCD 1226.
The LCD 1226 is arranged on the back of the camera body to display images. The LCD 1226 may be an LCD; however, it is not limited to this, and the display may also be implemented with other display panels such as organic electroluminescent (EL) panels.
The above are only embodiments and optional implementations of the present invention and are not intended to limit the protection scope of the embodiments of the present invention. Any equivalent structural or process transformation made using the contents of this specification and the accompanying drawings, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
A person of ordinary skill in the art will understand that all or some of the steps of the above embodiments can be implemented using a computer program flow. The computer program can be stored in a computer-readable storage medium and executed on a corresponding hardware platform (such as a system, device, or apparatus); when executed, it includes one of or a combination of the steps of the method embodiments.
Optionally, all or some of the steps of the above embodiments can also be implemented using integrated circuits. These steps can each be made into an individual integrated circuit module, or multiple modules or steps among them can be made into a single integrated circuit module.
The devices/function modules/functional units in the above embodiments can be implemented using general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices.
When the devices/function modules/functional units in the above embodiments are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium. The above-mentioned computer-readable storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
Industrial Applicability
In the embodiments of the present invention, picture data is acquired by a picture data acquiring unit, video data is acquired by a video data acquiring unit, and the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit are encapsulated into one file by a synthesizing unit. The technical solution provided by the embodiments of the present invention solves the problem in the related art that pictures and videos each have independent storage files and display effects, which makes the display effect rather monotonous; it implements the function of combining a picture and a video into one file, bringing users more enjoyment and improving the user experience.

Claims (20)

  1. 一种生成视频图片的装置,包括:图片数据获取单元、视频数据获取单元和合成单元;An apparatus for generating a video picture, comprising: a picture data acquiring unit, a video data acquiring unit, and a synthesizing unit;
    其中,所述图片数据获取单元,设置为:获取图片数据;The image data acquiring unit is configured to: acquire image data;
    所述视频数据获取单元,设置为:获取视频数据;The video data acquiring unit is configured to: acquire video data;
    所述合成单元,设置为:将所述图片数据获取单元获取的图片数据和所述视频数据获取单元获取的视频数据封装到一个文件中。The synthesizing unit is configured to encapsulate the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into one file.
  2. 根据权利要求1所述的装置,其中,所述合成单元设置为将所述图片数据获取单元获取的图片数据和所述视频数据获取单元获取的视频数据封装到一个文件中,包括:The apparatus according to claim 1, wherein the synthesizing unit is configured to package the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into a file, including:
    创建一个图片文件;Create an image file;
    将所述图片数据和所述视频数据写入所创建的图片文件中;Writing the picture data and the video data into the created picture file;
    在所述图片文件中写入标识符。An identifier is written in the picture file.
  3. 根据权利要求2所述的装置,其中,所述图片数据获取单元,还设置为:获取所述图片数据的数据长度;The device according to claim 2, wherein the picture data obtaining unit is further configured to: acquire a data length of the picture data;
    所述合成单元设置为将所述图片数据和所述视频数据写入所创建的图片文件中,包括:The synthesizing unit is configured to write the picture data and the video data into the created picture file, including:
    将所述图片数据获取单元获取的所述图片数据和所述图片数据的数据长度,以及所述视频数据获取单元获取的所述视频数据写入所创建的图片文件中。And the data length of the picture data and the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit are written into the created picture file.
  4. 根据权利要求2所述的装置,其中,所述图片文件中还包括以下一项或多项:所述图片数据的数据长度,所述图片数据的起始位置标示符和所述视频数据的起始位置标识符。The device according to claim 2, wherein the picture file further comprises one or more of the following: a data length of the picture data, a start position identifier of the picture data, and a start of the video data. Start location identifier.
  5. 根据权利要求1~4中任一项所述的装置,还包括:The apparatus according to any one of claims 1 to 4, further comprising:
    a photographing unit, configured to: upon receiving a photographing instruction, take a picture and trigger the picture data acquiring unit to acquire the picture data.
  6. The apparatus according to claim 5, further comprising:
    a storage unit, configured to: store the acquired image data when the photographing unit is activated for framing.
  7. The apparatus according to claim 6, further comprising:
    a shooting time acquiring unit, configured to: acquire a time T at which the photographing unit takes a picture;
    wherein the video data acquiring unit being configured to acquire video data comprises:
    acquiring, from the image data stored by the storage unit, the data from time T-T1 to time T+T2;
    encoding the image data from time T-T1 to time T+T2 to generate video data in a video format.
  8. The apparatus according to claim 5, further comprising:
    a storage unit, configured to: store the acquired image data when the photographing unit is activated for framing;
    an audio data collecting unit, configured to: collect audio data synchronized with the image data stored by the storage unit;
    wherein the storage unit is further configured to: store the audio data collected by the audio data collecting unit.
  9. The apparatus according to claim 8, further comprising:
    a shooting time acquiring unit, configured to: acquire a time T at which the photographing unit takes a picture;
    wherein the video data acquiring unit being configured to acquire video data comprises:
    acquiring, from the image data stored by the storage unit, the data from time T-T1 to time T+T2, and acquiring, from the audio data stored by the storage unit, the audio data from time T-T1 to time T+T2;
    encoding the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 to generate video data in a video format.
  10. The apparatus according to claim 7 or 9, wherein T1 is a first preset time interval and T2 is a second preset time interval.
  11. A method for generating a video picture, comprising:
    acquiring picture data;
    acquiring video data;
    encapsulating the picture data and the video data into one file.
  12. The method according to claim 11, wherein encapsulating the picture data and the video data into one file comprises:
    creating a picture file;
    writing the picture data and the video data into the created picture file;
    writing an identifier into the picture file.
  13. The method according to claim 12, wherein before writing the picture data and the video data into the created picture file, the method further comprises:
    acquiring a data length of the picture data;
    wherein writing the picture data and the video data into the created picture file comprises:
    writing the picture data, the video data and the data length of the picture data into the created picture file.
  14. The method according to claim 12, wherein the picture file further comprises one or more of the following: the data length of the picture data, a start position identifier of the picture data, and a start position identifier of the video data.
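Claims 12 to 14 describe a container in which the video data and bookkeeping fields are appended to an ordinary picture file. The following is a minimal illustrative sketch of one possible layout; the field order, the 8-byte little-endian length field, and the `MAGIC` identifier are assumptions for illustration, not values specified by the claims:

```python
import struct

MAGIC = b"VIDPIC01"  # hypothetical identifier marking a combined picture+video file


def encapsulate(path, picture_data, video_data):
    """Write picture data, video data, the picture data length and an
    identifier into a single picture file (cf. claims 12-13)."""
    with open(path, "wb") as f:
        f.write(picture_data)                          # ordinary image decoders read this part
        f.write(video_data)                            # appended video stream
        f.write(struct.pack("<Q", len(picture_data)))  # data length of the picture data
        f.write(MAGIC)                                 # identifier


def extract(path):
    """Recover the two streams by reading the trailer fields from the end."""
    with open(path, "rb") as f:
        blob = f.read()
    if not blob.endswith(MAGIC):
        return blob, None                 # plain picture, no embedded video
    pic_len = struct.unpack("<Q", blob[-16:-8])[0]
    return blob[:pic_len], blob[pic_len:-16]
```

Because the picture data comes first, a standard image viewer that tolerates trailing bytes still displays such a file as a normal photo, while an aware player can detect the identifier and play the embedded clip.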
  15. The method according to any one of claims 11 to 14, wherein acquiring picture data comprises:
    upon receiving a photographing instruction, taking a picture and acquiring the picture data.
  16. The method according to claim 15, further comprising:
    storing the acquired image data when a photographing unit is activated for framing.
  17. The method according to claim 16, further comprising:
    acquiring a time T at which the picture is taken;
    wherein acquiring video data comprises:
    acquiring, from the image data, the data from time T-T1 to time T+T2;
    encoding the image data from time T-T1 to time T+T2 to generate video data in a video format.
  18. The method according to claim 15, further comprising:
    storing the acquired image data when a photographing unit is activated for framing;
    collecting audio data synchronized with the image data, and storing the collected audio data.
  19. The method according to claim 18, further comprising:
    acquiring a time T at which the picture is taken;
    wherein acquiring video data comprises:
    acquiring, from the image data, the data from time T-T1 to time T+T2, and acquiring, from the audio data, the audio data from time T-T1 to time T+T2;
    encoding the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 to generate video data in a video format.
  20. The method according to claim 17 or 19, wherein T1 is a first preset time interval and T2 is a second preset time interval.
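Claims 16 to 19 describe buffering viewfinder frames during framing and, when a photo is taken at time T, selecting the frames (and synchronized audio) from T-T1 to T+T2 for encoding. The frame-selection step can be sketched as follows; the rolling-buffer representation, the interval values, and the fake frame data are illustrative assumptions, and actual encoding into a video format is out of scope here:

```python
from collections import deque


def select_window(buffer, t, t1, t2):
    """Return buffered (timestamp, sample) pairs with T-T1 <= timestamp <= T+T2."""
    return [(ts, sample) for ts, sample in buffer if t - t1 <= ts <= t + t2]


# Rolling viewfinder buffer: deque(maxlen=...) keeps only the most recent frames,
# so framing can run indefinitely without unbounded memory use.
frame_buffer = deque(maxlen=300)
for ts in range(10):                       # fake frames captured at 1-unit intervals
    frame_buffer.append((ts, f"frame-{ts}"))

# Photo taken at T=5; keep frames from T-T1 to T+T2 (here T1=T2=2) for the clip.
clip = select_window(frame_buffer, t=5, t1=2, t2=2)
# clip covers timestamps 3, 4, 5, 6, 7
```

The same window selection would be applied to the stored audio samples (claim 19) so the encoded clip's sound stays synchronized with its frames.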
PCT/CN2016/100334 2015-09-28 2016-09-27 Method and device for generating video image WO2017054704A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510627994.4 2015-09-28
CN201510627994.4A CN105245777A (en) 2015-09-28 2015-09-28 Method and device for generating video image

Publications (1)

Publication Number Publication Date
WO2017054704A1 true WO2017054704A1 (en) 2017-04-06

Family

ID=55043255

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/100334 WO2017054704A1 (en) 2015-09-28 2016-09-27 Method and device for generating video image

Country Status (2)

Country Link
CN (1) CN105245777A (en)
WO (1) WO2017054704A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105245777A (en) * 2015-09-28 2016-01-13 努比亚技术有限公司 Method and device for generating video image
WO2017128288A1 (en) * 2016-01-29 2017-08-03 华为技术有限公司 Processing method and portable electronic device
CN105704387A (en) * 2016-04-05 2016-06-22 广东欧珀移动通信有限公司 Shooting method and device of intelligent terminal and intelligent terminal
CN105847688B (en) * 2016-04-07 2019-03-08 Oppo广东移动通信有限公司 Control method, control device and electronic device
CN106303290B (en) * 2016-09-29 2019-10-08 努比亚技术有限公司 A kind of terminal and the method for obtaining video
CN106375681A (en) * 2016-09-29 2017-02-01 维沃移动通信有限公司 Static-dynamic image production method, and mobile terminal
CN106303292B (en) * 2016-09-30 2019-05-03 努比亚技术有限公司 A kind of generation method and terminal of video data
CN106686298A (en) * 2016-11-29 2017-05-17 努比亚技术有限公司 Post-shooting processing method, post-shooting processing device and mobile terminal
CN106657776A (en) * 2016-11-29 2017-05-10 努比亚技术有限公司 Shooting post-processing method, shooting post-processing device and mobile terminal
CN106911881B (en) * 2017-02-27 2020-10-16 努比亚技术有限公司 Dynamic photo shooting device and method based on double cameras and terminal
CN109922252B (en) * 2017-12-12 2021-11-02 北京小米移动软件有限公司 Short video generation method and device and electronic equipment
CN110248116B (en) * 2019-06-10 2021-10-26 腾讯科技(深圳)有限公司 Picture processing method and device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100092150A1 (en) * 2008-10-13 2010-04-15 Samsung Electronics Co., Ltd. Successive video recording method using udta information and portable device therefor
CN102325237A (en) * 2011-10-26 2012-01-18 天津三星光电子有限公司 Digital camera with picture-in-picture video recording and playing function
CN104065869A (en) * 2013-03-18 2014-09-24 三星电子株式会社 Method for displaying image combined with playing audio in an electronic device
CN104125388A (en) * 2013-04-25 2014-10-29 广州华多网络科技有限公司 Method for shooting and storing photos and device thereof
CN105245777A (en) * 2015-09-28 2016-01-13 努比亚技术有限公司 Method and device for generating video image
CN105354219A (en) * 2015-09-28 2016-02-24 努比亚技术有限公司 File encoding method and apparatus

Also Published As

Publication number Publication date
CN105245777A (en) 2016-01-13

Similar Documents

Publication Publication Date Title
WO2017054704A1 (en) Method and device for generating video image
WO2017071559A1 (en) Image processing apparatus and method
WO2017107629A1 (en) Mobile terminal, data transmission system and shooting method of mobile terminal
US9225905B2 (en) Image processing method and apparatus
KR102314594B1 (en) Image display method and electronic device
WO2017067520A1 (en) Mobile terminal having binocular cameras and photographing method therefor
WO2023015981A1 (en) Image processing method and related device therefor
WO2017118353A1 (en) Device and method for displaying video file
US20140354880A1 (en) Camera with Hall Effect Switch
US20140270688A1 (en) Personal Video Replay
US20090135274A1 (en) System and method for inserting position information into image
US9124548B2 (en) Method for uploading media file, electronic device using the same, and non-transitory storage medium
CN103297682A (en) Moving image shooting apparatus and method of using a camera device
WO2017045647A1 (en) Method and mobile terminal for processing image
WO2017054677A1 (en) Mobile terminal photographing system and mobile terminal photographing method
WO2017084429A1 (en) Image acquisition method and apparatus, and computer storage medium
CN105335458B (en) Preview picture method and device
WO2017088609A1 (en) Image denoising apparatus and method
WO2018059206A1 (en) Terminal, method of acquiring video, and data storage medium
KR20080113698A (en) System for inputting position information in captured image and method thereof
WO2017185866A1 (en) Mobile terminal, exposure method and device therefor, storage medium
WO2017071558A1 (en) Mobile terminal photographing device and method
WO2015180683A1 (en) Mobile terminal, method and device for setting image pickup parameters, and computer storage medium
WO2017088662A1 (en) Focusing method and device
US9609167B2 (en) Imaging device capable of temporarily storing a plurality of image data, and control method for an imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16850328; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16850328; Country of ref document: EP; Kind code of ref document: A1)