WO2017054704A1 - Method and device for generating a video picture


Publication number
WO2017054704A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2016/100334
Other languages
English (en)
Chinese (zh)
Inventor
刘林汶
何耀平
苗雷
里强
Original Assignee
努比亚技术有限公司
Application filed by 努比亚技术有限公司
Publication of WO2017054704A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Definitions

  • This application relates to, but is not limited to, the field of photography.
  • An apparatus for generating a video picture comprising: a picture data acquiring unit, a video data acquiring unit, and a synthesizing unit;
  • the image data acquiring unit is configured to: acquire image data
  • the video data acquiring unit is configured to: acquire video data
  • the synthesizing unit is configured to encapsulate the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into one file.
  • the synthesizing unit is configured to encapsulate the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into a file, including:
  • An identifier is written in the picture file.
  • the picture data acquiring unit is further configured to: acquire a data length of the picture data
  • the synthesizing unit is configured to write the picture data and the video data into the created picture file, including:
  • the picture file further includes one or more of the following: a data length of the picture data, a start location identifier of the picture data, and a start location identifier of the video data.
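The layout described above (picture data first, then video data, plus a data length and start location identifiers) can be sketched as follows. This is a minimal illustration, not the application's actual byte layout: the `VPIC` magic, the little-endian 64-bit fields, and the trailing position of the metadata are all assumptions made for the sketch.

```python
import struct

MAGIC = b"VPIC"  # hypothetical 4-byte identifier marking a "video picture" file

def pack_video_picture(picture: bytes, video: bytes) -> bytes:
    """Append the video data and a fixed-size trailer after the picture data.

    Standard image viewers stop at the end of the picture stream, so the
    file still previews as an ordinary image; aware readers look for the
    trailer at the end of the file.
    """
    trailer = struct.pack(
        "<QQQ4s",
        len(picture),   # data length of the picture data
        0,              # start location identifier of the picture data
        len(picture),   # start location identifier of the video data
        MAGIC,          # identifier written in the picture file
    )
    return picture + video + trailer

def unpack_video_picture(blob: bytes):
    """Return (picture, video); video is None for an ordinary picture file."""
    size = struct.calcsize("<QQQ4s")
    if len(blob) < size:
        return blob, None
    pic_len, pic_off, vid_off, magic = struct.unpack("<QQQ4s", blob[-size:])
    if magic != MAGIC:
        return blob, None
    body = blob[:-size]
    return body[pic_off:pic_off + pic_len], body[vid_off:]
```

Because the trailer is appended after a complete, standards-conformant image stream, a file produced this way still opens in any image viewer, which is the property the claims rely on.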
  • the device further includes:
  • the photographing unit is configured to: when receiving the photographing instruction, take a picture and trigger the picture data acquiring unit to acquire the picture data.
  • the device further includes:
  • the storage unit is configured to: when the photographing unit is activated to perform framing, store the acquired image data.
  • the device further includes:
  • a shooting time acquisition unit configured to: acquire a time T at which the shooting unit takes a picture
  • the video data acquiring unit is configured to acquire video data, including:
  • The image data from time T-T1 to time T+T2 is encoded to generate video data in a video format.
  • the device further includes:
  • a storage unit configured to: when the shooting unit is activated to perform framing, store the acquired image data
  • An audio data collecting unit configured to: collect audio data synchronized with the image data stored by the storage unit;
  • the storage unit is further configured to: store audio data collected by the audio data collection unit.
  • the device further includes:
  • a shooting time acquisition unit configured to: acquire a time T at which the shooting unit takes a picture
  • the video data acquiring unit is configured to acquire video data, including:
  • the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 are encoded to generate video data in a video format.
  • the T1 is a first preset time interval
  • the T2 is a second preset time interval.
  • a method of generating a video picture comprising:
  • the encapsulating the picture data and the video data into a file includes:
  • An identifier is written in the picture file.
  • before the writing of the picture data and the video data into the created picture file, the method further includes:
  • the writing the picture data and the video data into the created picture file includes:
  • the picture data, the video data, and the data length of the picture data are written into the created picture file.
  • the picture file further includes one or more of the following: a data length of the picture data, a start location identifier of the picture data, and a start location identifier of the video data.
  • the acquiring image data includes:
  • the method further includes:
  • when the shooting unit is activated for framing, the acquired image data is stored.
  • the method further includes:
  • the obtaining video data includes:
  • the image data from time T-T1 to time T+T2 is encoded to generate video data in a video format.
  • the method further includes:
  • when the shooting unit is activated for framing, the acquired image data is stored;
  • Audio data synchronized with the image data is acquired, and the collected audio data is stored.
  • the method further includes:
  • the obtaining video data includes:
  • the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 are encoded to generate video data in a video format.
  • the T1 is a first preset time interval
  • T2 is a second preset time interval.
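The T-T1 to T+T2 window above can be illustrated with a rolling buffer of timestamped preview frames kept while the camera is framing. A sketch only: the class name, buffer capacity, and timestamp units are assumptions, and real frames would be encoded image buffers rather than opaque objects.

```python
from collections import deque

class FramingBuffer:
    """Rolling buffer of (timestamp, frame) pairs kept during framing."""

    def __init__(self, max_frames=300):
        # Old frames are discarded automatically once capacity is reached.
        self.frames = deque(maxlen=max_frames)

    def add(self, timestamp, frame):
        """Store one preview frame with its capture timestamp."""
        self.frames.append((timestamp, frame))

    def window(self, t, t1, t2):
        """Frames captured between T-T1 and T+T2 around the shot time T."""
        return [f for ts, f in self.frames if t - t1 <= ts <= t + t2]
```

T1 covers preview frames already buffered before the shutter fired, while T2 requires buffering to continue briefly after the shot, which is why the claims store image data whenever the shooting unit is active for framing.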
  • in the method and device for generating a video picture, picture data is acquired by the picture data acquiring unit and video data is acquired by the video data acquiring unit; the synthesizing unit then encapsulates the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into one file.
  • FIG. 1 is a schematic structural diagram of hardware of a mobile terminal that implements various embodiments of the present invention
  • FIG. 2 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of another method for generating a video picture according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of still another method for generating a video picture according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of time selection of video data in a method for generating a video picture according to an embodiment of the present invention
  • FIG. 6 is a schematic flowchart of still another method for generating a video picture according to an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of a photographing interface of a mobile terminal in a method for generating a video picture according to an embodiment of the present disclosure
  • FIG. 8 is a schematic diagram of a recorded-video interface of another mobile terminal in a method for generating a video picture according to an embodiment of the present disclosure
  • FIG. 9 is a schematic structural diagram of an apparatus for generating a video picture according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of another apparatus for generating a video picture according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of an electrical structure of a camera in an apparatus for generating a video picture according to an embodiment of the present invention.
  • the mobile terminal can be implemented in various forms.
  • the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Multimedia Player), and a navigation device, as well as fixed terminals such as digital TVs and desktop computers.
  • the terminal is a mobile terminal.
  • those skilled in the art will appreciate that configurations according to embodiments of the present invention can also be applied to fixed-type terminals, except for components intended specifically for mobile use.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication device or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may also include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast associated information may exist in various forms, for example, in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB) or an Electronic Service Guide (ESG) of Digital Video Broadcasting-Handheld (DVB-H).
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast apparatuses.
  • the broadcast receiving module 111 can receive digital broadcasts by using digital broadcasting devices such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), the MediaFLO (Media Forward Link Only) data broadcasting device, Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like.
  • the broadcast receiving module 111 may be constructed to be suitable for various broadcast devices providing broadcast signals, in addition to the digital broadcast devices described above.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies involved in the module may include WLAN (Wireless LAN, Wi-Fi), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technologies include BluetoothTM, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wide Band (UWB), ZigbeeTM, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal.
  • a typical example of the location information module is a GPS (Global Positioning System) module.
  • the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information according to longitude, latitude, and altitude.
  • the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information by using another satellite. Further, the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) in operation modes such as a telephone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data.
  • in the telephone call mode, the processed audio (voice) data can be converted into a format that can be transmitted to a mobile communication base station via the mobile communication module 112 and then output.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like.
  • in particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of contact (i.e., touch input) by the user with the mobile terminal 100, and the like, and generates commands or signals for controlling operations of the mobile terminal 100.
  • the sensing unit 140 can sense whether the slide type phone is turned on or off.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • Sensing unit 140 may include proximity sensor 141 which will be described below in connection with a touch screen.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • the identification module may store various information for verifying the use of the mobile terminal 100 by the user, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • a device having an identification module may take the form of a smart card; accordingly, the identification device can be connected to the mobile terminal 100 via a port or other connection means.
  • the interface unit 170 can be configured to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • the interface unit 170 may serve as a path through which power is supplied from the base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal.
  • various command signals or power input from the base can serve as signals for recognizing whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alert unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations, and when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (ie, vibration) to notify the user of it. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide an output of the notification event occurrence via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required for operating the respective elements and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • in the following, a slide-type mobile terminal among various types of mobile terminals, such as folding, bar, swing, and slide types, will be described as an example. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
  • FIG. 2 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present invention.
  • the method for generating a video picture provided in this embodiment is applied to an intelligent terminal, where the smart terminal includes, for example, a smart phone, a tablet computer, and the like.
  • the method may include the following steps, namely, S110 to S130:
  • the source of the picture data may be the picture data captured by the shooting unit, the picture saved in the terminal, or the picture stored on the server.
  • the user can open the camera of the mobile terminal and take a photo to obtain the picture data, select a picture saved in the mobile terminal and obtain its data through the corresponding module, or read picture data from a picture stored on the server through the network.
  • the source of the video data may be diverse.
  • the video data may be collected from the camera preview data of the mobile terminal, captured through the camera function of the mobile terminal, or taken from video data already saved in the mobile terminal (or other memory).
  • in the method for generating a video picture according to the present invention, picture data and video data are acquired and then encapsulated into one file.
  • the technical solution provided by the embodiments of the invention solves the problem in the related art that pictures and videos have independent storage files and display effects, which leads to a single display effect.
  • the function of synthesizing pictures and videos into one file brings more joy to the user and improves the user experience.
  • FIG. 3 is a schematic flowchart of another method for generating a video picture according to an embodiment of the present invention.
  • on the basis of the embodiment shown in FIG. 2, this embodiment describes a specific implementation of encapsulating the picture data and the video data into one file.
  • S130 in this embodiment may include the following steps, namely, S131 to S133:
  • the created picture file is saved in a standard picture format, for example, .jpg, .jpeg, .gif, .png, or .bmp.
  • this embodiment appends additional data after the data of the picture file; the additional data may include, for example, the video data, together with the data length of the picture data, the start position identifier of the picture data, or the start position identifier of the video data.
  • this ensures that the standard format of the picture file is not destroyed: the file is still saved in a standard picture format (for example, .jpg, .jpeg, .gif, .png, or .bmp), so that any terminal can preview it just as it could before the additional data was added.
  • the identifier is used to indicate that the picture file is a video picture. If the file format information of the picture file includes the identifier of the video data, the picture file is a video picture file; if it contains only the file header and the related information of the picture data, the picture file is a normal picture file. In this way, when the terminal reads the identifier and determines that a picture file is a video picture file, it can read the picture data from the video picture file and send it to the picture player for playing; and, according to the data length of the picture data and/or the start position identifier of the picture data, move to the start position of the video data, read the video data, and send it to the video player for playing.
  • the embodiment may further include: acquiring the data length of the picture data; correspondingly, S132 in this embodiment may include: writing the picture data, the video data, and the data length of the picture data into the created picture file.
  • the start position identifier of the picture data and/or the start position identifier of the video data may also be written.
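Reading back such a file, as described above, amounts to checking the identifier and then seeking to each payload by its recorded offset before handing the parts to the picture and video players. A hedged sketch, assuming a hypothetical trailer layout of three little-endian u64 fields (picture length, picture offset, video offset) followed by a 4-byte `b"VPIC"` identifier at the very end of the file:

```python
import struct

TRAILER = "<QQQ4s"  # picture length, picture offset, video offset, identifier

def dispatch_video_picture(blob, picture_player, video_player):
    """Send the picture payload to the picture player and, when the
    identifier is present, the video payload to the video player."""
    size = struct.calcsize(TRAILER)
    if len(blob) >= size and blob[-4:] == b"VPIC":
        pic_len, pic_off, vid_off, _ = struct.unpack(TRAILER, blob[-size:])
        body = blob[:-size]
        picture_player(body[pic_off:pic_off + pic_len])
        video_player(body[vid_off:])
    else:
        picture_player(blob)  # no identifier: an ordinary picture file
```

The fallback branch is what keeps ordinary picture files working unchanged: a reader that finds no identifier simply treats the whole file as picture data.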
  • FIG. 4 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present invention.
  • the method provided in this embodiment may include the following steps, that is, S210-S260:
  • the shooting unit may acquire image data of the photographic subject and send the acquired image data through the internal interface to the storage unit in the mobile terminal for use in subsequent steps.
  • the image data may be stored in the memory card of the mobile terminal, or may be temporarily stored in the cache of the mobile terminal, which is not limited in this embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a photographing interface of a mobile terminal in a method for generating a video picture according to an embodiment of the present invention,
  • and FIG. 8 is a schematic diagram of a recorded-video interface of another mobile terminal in a method for generating a video picture according to an embodiment of the present disclosure.
  • in FIG. 7, the display interface of the mobile terminal is shown as the photographing interface in the image capturing mode;
  • in FIG. 8, the display interface of the mobile terminal is shown as the recorded-video interface in the image capturing mode.
  • when the mobile terminal receives the photographing instruction, a photograph is taken, and the picture data is obtained from the photographed image.
  • when the mobile terminal receives the photographing instruction and takes a picture, it records the time T at which the picture is taken; when necessary, the recording file can be read to obtain the photographing time (i.e., the time T).
  • Image data is added to the image file as video data.
  • FIG. 5 is a schematic diagram of time selection of a video data in a method for generating a video picture according to an embodiment of the present invention.
  • The image data acquired by the shooting unit of the mobile terminal is stored, and the image data from time T-T1 to time T+T2 is intercepted according to the time T of the captured picture acquired in S230, where T1 is a first preset value and T2 is a second preset value.
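The interception of the [T-T1, T+T2] window can be sketched in pure Python. The buffer layout, millisecond timestamps, and all names below are illustrative assumptions, not the claimed implementation:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp_ms: int  # capture time in milliseconds
    data: bytes        # raw image data for one frame

def intercept_window(frames, t_ms, t1_ms, t2_ms):
    """Keep only the frames captured between T-T1 and T+T2 (inclusive)."""
    lo, hi = t_ms - t1_ms, t_ms + t2_ms
    return [f for f in frames if lo <= f.timestamp_ms <= hi]

# Example: one buffered frame every 100 ms, photo taken at T = 2000 ms,
# first preset T1 = 500 ms, second preset T2 = 300 ms.
buffer = [Frame(i * 100, b"") for i in range(50)]
clip = intercept_window(buffer, t_ms=2000, t1_ms=500, t2_ms=300)
```

With these example presets, the clip spans 1500 ms to 2300 ms, i.e. nine of the buffered frames.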
  • S250: Encode the image data from time T-T1 to time T+T2 to generate video data in a video format.
  • The image data may be encoded into video data by an encoding tool in the mobile terminal, and the encoded video data is then output.
  • The image data may be encoded into a common video format, for example video/avc, video/3gpp, or video/mp4v-es, and the video coding mode may be any general-purpose coding technology in the related art.
  • S260 Encapsulate image data and video data into a file.
  • For S260 in this embodiment, reference may be made to S131 to S133 in the embodiment shown in FIG. 3; details are not repeated here.
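The overall S210-S260 flow can be summarized in a short driver sketch. Every function and default value below is a stand-in stub named purely for illustration; the real capture, encoding, and encapsulation steps are the ones described above:

```python
def generate_video_picture(shoot, record_time, intercept, encode, encapsulate,
                           t1_ms=1500, t2_ms=1500):
    """Drive the S210-S260 flow with injected step implementations."""
    picture, buffered_frames = shoot()          # capture picture & buffer frames
    t_ms = record_time()                        # time T at which the picture was taken
    window = intercept(buffered_frames,
                       t_ms - t1_ms, t_ms + t2_ms)  # frames in [T-T1, T+T2]
    video = encode(window)                      # encode into a video format
    return encapsulate(picture, video)          # pack both into one file

# Toy stand-ins just to exercise the flow:
result = generate_video_picture(
    shoot=lambda: (b"pic", [(1000, b"f1"), (2000, b"f2"), (4000, b"f3")]),
    record_time=lambda: 2000,
    intercept=lambda fs, lo, hi: [d for ts, d in fs if lo <= ts <= hi],
    encode=lambda frames: b"".join(frames),
    encapsulate=lambda p, v: p + b"|" + v,
)
```

Injecting each step keeps the sketch independent of any particular camera API or container format.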
  • FIG. 6 is a schematic flowchart of a method for generating a video picture according to an embodiment of the present disclosure.
  • the method for generating a video picture in this embodiment may include the following steps S310 to S360:
  • the shooting unit may acquire image data of the photographic subject and send it, through an internal interface, to the storage unit in the mobile terminal for use in subsequent steps.
  • Audio data synchronized with the image data may be collected by an audio device (such as a microphone) of the mobile terminal, and the collected audio data is stored.
  • the image data may be stored in the memory card of the mobile terminal, or may be temporarily stored in the cache of the mobile terminal, which is not limited in this embodiment of the present invention.
  • When the mobile terminal receives the photographing instruction, it takes a picture and obtains the image data from the captured image.
  • When the mobile terminal receives the photographing instruction and takes a picture, it records the time T at which the picture is taken; when necessary, this record can be read to obtain the photographing time.
  • S340: Acquire the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2.
  • The photographing time (i.e., the time T) determines which image data and audio data are selected; the video data synthesized from the image data and the audio data is then added to the picture file.
  • The time selection for the audio data is the same as the time selection for the video data shown in FIG. 5 and is therefore not described again here.
  • The image data acquired by the shooting unit of the mobile terminal is stored; according to the time T of the captured picture acquired in S330, the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 are intercepted, where T1 is a first preset value and T2 is a second preset value.
  • S350: Encode the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 to generate video data in a video format.
  • The image data and the audio data may be encoded into video data by an encoding tool in the mobile terminal, and the encoded video data is then output.
  • In S350, the foregoing data may be encoded into a common video format, for example video/avc, video/3gpp, or video/mp4v-es, and the video coding mode may be any general-purpose coding technology in the related art.
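Selecting the synchronized image and audio segments for S350 can be sketched as follows; the timestamped-tuple representation and the names are assumptions for illustration, and the actual encoder would consume both selected streams to produce the video-format output:

```python
def clip_streams(image_frames, audio_chunks, t_ms, t1_ms, t2_ms):
    """Pick the image frames and audio chunks falling in [T-T1, T+T2],
    so both streams cover the same interval around the photographing time T."""
    lo, hi = t_ms - t1_ms, t_ms + t2_ms
    video_part = [(ts, d) for ts, d in image_frames if lo <= ts <= hi]
    audio_part = [(ts, d) for ts, d in audio_chunks if lo <= ts <= hi]
    return video_part, audio_part

# Example: image frames every 100 ms, audio chunks every 20 ms,
# T = 1000 ms, T1 = 200 ms, T2 = 200 ms.
frames = [(i * 100, b"frame") for i in range(30)]
chunks = [(i * 20, b"pcm") for i in range(150)]
video_part, audio_part = clip_streams(frames, chunks, 1000, 200, 200)
```

Because both streams are filtered by the same window, the audio stays aligned with the image frames it accompanies.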
  • an embodiment of the present invention further provides an apparatus for generating a video picture.
  • The apparatus, whose schematic structural diagram is provided in an embodiment of the present invention, may be configured in a smart terminal, where the smart terminal may be a smart phone, a tablet computer, or the like.
  • the apparatus for generating a video picture may include a picture data acquiring unit 10, a video data acquiring unit 20, and a synthesizing unit 30.
  • the picture data obtaining unit 10 is configured to: obtain picture data
  • the video data acquiring unit 20 is configured to: acquire video data
  • the synthesizing unit 30 is configured to encapsulate the picture data acquired by the picture data acquiring unit 10 and the video data acquired by the video data acquiring unit 20 into one file.
  • the source of the image data acquired by the image data acquiring unit 10 may be the image data captured by the shooting unit, the image saved in the terminal, or the image stored on the server.
  • The user may open the camera of the terminal and take a photo to obtain the picture data, select a picture saved in the terminal and obtain its data through the corresponding module, or fetch a picture from the server over the network and read its data.
  • the source of the video data acquired by the video data acquiring unit 20 may be various.
  • The video data may be collected from the camera preview data of the terminal, may be video data captured by the camera function of the terminal, or may be video data saved in the terminal (or in another memory).
  • The synthesizing unit 30 encapsulates the acquired picture data and video data into one file, associating the picture data with the video data to generate a new file, so that the associated video data can be played when the photo is viewed.
  • the synthesizing unit 30 is configured to encapsulate the image data acquired by the image data acquiring unit 10 and the video data acquired by the video data acquiring unit 20 into a file, including:
  • The created picture file is saved in a standard image format, for example .jpg, .jpeg, .gif, .png, or .bmp.
  • This embodiment appends additional data to the data of the picture file, for example the video data together with the data length of the picture data, the start position identifier of the picture data, or the start position identifier of the video data. This guarantees that the standard format of the picture file is not destroyed: the file is still saved in a standard image format (for example .jpg, .jpeg, .gif, .png, or .bmp), so any terminal can preview the original picture just as it could before the additional data was added.
  • An identifier is used to indicate that the picture file is a video picture. If the file format information of the picture file includes the identifier of the video data, the picture file is a video picture file; if it includes only the file header and related information of the picture data, the picture file is an ordinary picture file. When the terminal reads the identifier and determines that a picture file is a video picture file, it reads the picture data from the file, sends it to the picture player, and prompts the picture player to play it; then, according to the data length of the picture data and/or the start position identifier of the picture data, it moves to the start position of the video data, reads the video data, sends it to the video player, and prompts the video player to play it.
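The read-out logic above can be sketched as a small parser. The trailer layout below, a magic identifier followed by the picture and video lengths, is a hypothetical encoding of the "identifier / data length / start position" fields, not the format the embodiment actually uses:

```python
import struct

MAGIC = b"VIDPIC1\x00"  # hypothetical video-picture identifier

def split_video_picture(blob: bytes):
    """Return (picture_bytes, video_bytes); video_bytes is None for an
    ordinary picture file that carries no video-data identifier."""
    idx = blob.rfind(MAGIC)
    if idx == -1:
        return blob, None  # ordinary picture: header + picture data only
    pic_len, vid_len = struct.unpack_from("<II", blob, idx + len(MAGIC))
    start = idx + len(MAGIC) + 8  # start position of the video data
    return blob[:pic_len], blob[start:start + vid_len]

# A file assembled in the same hypothetical layout:
picture, video = b"\xff\xd8...jpeg...\xff\xd9", b"...mp4..."
blob = picture + MAGIC + struct.pack("<II", len(picture), len(video)) + video
pic_out, vid_out = split_video_picture(blob)
```

The picture bytes go to the picture player and the video bytes to the video player, matching the two-player dispatch described above.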
  • The picture data acquiring unit 10 in this embodiment is further configured to obtain the data length of the picture data. Correspondingly, when the synthesizing unit 30 writes the picture data and the video data into the created picture file, it may write the data length of the picture data, the picture data acquired by the picture data acquiring unit 10, and the video data acquired by the video data acquiring unit 20; the start position identifier of the picture data and/or the start position identifier of the video data may also be written.
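The write path can be sketched the same way: the picture data goes first so any standard image viewer still decodes the image, and the length fields plus video payload are appended afterwards. The magic bytes and trailer layout are illustrative assumptions, not the embodiment's actual format:

```python
import struct

MAGIC = b"VIDPIC1\x00"  # hypothetical video-picture identifier

def pack_video_picture(picture: bytes, video: bytes) -> bytes:
    """Append [identifier | picture length | video length | video data]
    after the picture data, leaving the standard image format intact."""
    trailer = MAGIC + struct.pack("<II", len(picture), len(video)) + video
    return picture + trailer

out = pack_video_picture(b"\xff\xd8 jpeg payload \xff\xd9", b"avc payload")
```

Because the appended trailer sits after the image's own end marker, a viewer unaware of the extra data simply ignores it.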
  • FIG. 10 is a schematic structural diagram of another apparatus for generating a video picture according to an embodiment of the present invention.
  • the device for generating a video picture according to the embodiment of the present disclosure may further include:
  • the photographing unit 40 is configured to: when receiving the photographing instruction, take a picture and trigger the picture data acquiring unit 10 to acquire the picture data.
  • the apparatus provided in this embodiment may further include: a storage unit 50 configured to store the acquired image data when the photographing unit 40 is activated to perform the framing.
  • the apparatus provided in this embodiment may further include:
  • the photographing time acquisition unit 60 is configured to: acquire the time T at which the photographing unit 40 takes a picture;
  • the video data acquiring unit 20 in this embodiment is configured to acquire video data, including:
  • the image data from time T-T1 to time T+T2 is encoded to generate video data in a video format.
  • the apparatus provided in this embodiment may further include: an audio data collecting unit 70;
  • the storage unit 50 in this embodiment is further configured to: when the shooting unit 40 is activated to perform framing, store the acquired image data;
  • the audio data collecting unit 70 is configured to: collect audio data synchronized with the image data stored by the storage unit 50;
  • the storage unit 50 is further configured to store the audio data collected by the audio data collection unit 70.
  • the shooting time acquisition unit 60 in this embodiment is configured to: acquire the time T at which the shooting unit 40 captures a picture;
  • the video data acquiring unit 20 in this embodiment is configured to acquire video data, including:
  • the image data from time T-T1 to time T+T2 and the audio data from time T-T1 to time T+T2 are encoded to generate video data in a video format.
  • FIG. 11 is a schematic diagram showing the electrical structure of a camera in an apparatus for generating a video picture according to an embodiment of the present invention.
  • the photographic lens 1211 may include a plurality of optical lenses that form a subject image, and may be a single focus lens or a zoom lens.
  • The photographic lens 1211 is movable in the optical-axis direction under the control of the lens driver 1221. The lens driver 1221 controls the focus position of the photographic lens 1211 in accordance with a control signal from the lens drive control circuit 1222 and, in the case of a zoom lens, can also control the focal distance.
  • the lens drive control circuit 1222 drives and controls the lens driver 1221 in accordance with a control command from the microcomputer 1217.
  • An imaging element 1212 is disposed on the optical axis of the photographic lens 1211 near the position of the subject image formed by the photographic lens 1211.
  • the imaging element 1212 is provided to image the subject image and acquire captured image data.
  • Photodiodes constituting the pixels are arranged two-dimensionally in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and this current is accumulated as charge by a capacitor connected to the photodiode.
  • the front surface of each pixel is provided with a Bayer array of red, green, blue (abbreviation: RGB) color filters.
  • The imaging element 1212 is connected to an imaging circuit 1213 that performs charge accumulation control and image signal readout control in the imaging element 1212, reduces the reset noise of the read image signal (an analog image signal), performs waveform shaping, and increases the gain to obtain an appropriate signal level.
  • The imaging circuit 1213 is connected to an analog-to-digital (A/D) converter 1214 that performs analog-to-digital conversion on the analog image signal and outputs the resulting digital image signal (hereinafter referred to as image data) to the bus 1227.
  • The bus 1227 is a transmission path configured to transmit various data read or generated inside the camera.
  • The A/D converter 1214 is connected to the bus 1227, to which an image processor 1215, a JPEG processor 1216, a microcomputer 1217, a Synchronous Dynamic Random Access Memory (SDRAM) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and a liquid crystal display (LCD) driver 1220 are also connected.
  • The image processor 1215 performs optical black (OB) subtraction processing, white balance adjustment, color matrix calculation, gamma conversion, color difference signal processing, and noise removal processing on the image data output from the imaging element 1212.
  • The JPEG processor 1216 compresses the image data read from the SDRAM 1218 in accordance with the JPEG compression method when the image data is recorded on the recording medium 1225. The JPEG processor 1216 also decompresses JPEG image data for image reproduction display: the file recorded on the recording medium 1225 is read, decompression processing is performed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226.
  • In this embodiment, the JPEG method is adopted as the image compression/decompression method, but the method is not limited thereto; other compression/decompression methods such as MPEG, TIFF, and H.264 may also be used.
  • the microcomputer 1217 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera.
  • the microcomputer 1217 is connected to the operation unit 1223 and the flash memory 1224.
  • The operation unit 1223 includes, but is not limited to, physical or virtual controls such as a power button, a camera button, an edit button, a dynamic image button, a reproduction button, a menu button, a cross button, an OK button, a delete button, and an enlarge button; it detects the operation state of these controls and outputs the detection result to the microcomputer 1217. In addition, a touch panel is provided on the front surface of the LCD 1226 serving as the display; it detects the user's touch position and outputs that position to the microcomputer 1217.
  • The microcomputer 1217 executes various processing sequences corresponding to the user's operation in accordance with the detection result from the operation unit 1223.
  • the flash memory 1224 stores programs for executing various processing sequences of the microcomputer 1217.
  • the microcomputer 1217 performs overall control of the camera in accordance with the program. Further, the flash memory 1224 stores various adjustment values of the camera, and the microcomputer 1217 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
  • the SDRAM 1218 is provided as an electrically rewritable volatile memory that temporarily stores image data or the like.
  • the SDRAM 1218 temporarily stores image data output from the A/D converter 1214 and image data processed in the image processor 1215, the JPEG processor 1216, and the like.
  • the memory interface 1219 is connected to the recording medium 1225, and performs control for writing image data and a file header attached to the image data to the recording medium 1225 and reading out from the recording medium 1225.
  • the recording medium 1225 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body.
  • the recording medium 1225 is not limited thereto, and may be a hard disk or the like built in the camera body.
  • The LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218 and then read and displayed on the LCD 1226; alternatively, compressed image data from the JPEG processor 1216 is stored in the SDRAM 1218, read back, decompressed by the JPEG processor 1216, and the decompressed image data is displayed through the LCD 1226.
  • The LCD 1226 is configured to display an image on the back of the camera body. The display is not limited to an LCD; it may be implemented by another display panel, such as an organic electroluminescence (EL) panel.
  • All or part of the steps of the above embodiments may also be implemented by using integrated circuits: these steps may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module.
  • the devices/function modules/functional units in the above embodiments may be implemented by a general-purpose computing device, which may be centralized on a single computing device or distributed over a network of multiple computing devices.
  • When the devices/function modules/functional units in the above embodiments are implemented in the form of software function modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium.
  • The above-mentioned computer-readable storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
  • In summary, the picture data is acquired by the picture data acquiring unit, the video data is acquired by the video data acquiring unit, and the picture data and the video data are encapsulated into one file by the synthesizing unit.
  • The technical solution provided by the embodiments of the present invention solves the problem in the related art that a picture and a video have separate storage files and separate, relatively simple display effects; by synthesizing the picture and the video into one file, it brings users more enjoyment and improves the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention relates to a method and a device for generating a video picture. The device comprises: a picture data acquiring unit configured to acquire picture data; a video data acquiring unit configured to acquire video data; and a synthesizing unit configured to encapsulate the picture data acquired by the picture data acquiring unit and the video data acquired by the video data acquiring unit into one file.
PCT/CN2016/100334 2015-09-28 2016-09-27 Procédé et dispositif pour générer une image vidéo WO2017054704A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510627994.4 2015-09-28
CN201510627994.4A CN105245777A (zh) 2015-09-28 2015-09-28 生成视频图片的方法及装置

Publications (1)

Publication Number Publication Date
WO2017054704A1 true WO2017054704A1 (fr) 2017-04-06

Family

ID=55043255

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/100334 WO2017054704A1 (fr) 2015-09-28 2016-09-27 Procédé et dispositif pour générer une image vidéo

Country Status (2)

Country Link
CN (1) CN105245777A (fr)
WO (1) WO2017054704A1 (fr)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105245777A (zh) * 2015-09-28 2016-01-13 努比亚技术有限公司 生成视频图片的方法及装置
CN107431752B (zh) * 2016-01-29 2020-10-23 华为技术有限公司 一种处理方法及便携式电子设备
CN105704387A (zh) * 2016-04-05 2016-06-22 广东欧珀移动通信有限公司 一种智能终端的拍照方法、装置及智能终端
CN105847688B (zh) * 2016-04-07 2019-03-08 Oppo广东移动通信有限公司 控制方法、控制装置及电子装置
CN106303290B (zh) * 2016-09-29 2019-10-08 努比亚技术有限公司 一种终端及获取视频的方法
CN106375681A (zh) * 2016-09-29 2017-02-01 维沃移动通信有限公司 一种动静结合影像的生成方法和移动终端
CN106303292B (zh) * 2016-09-30 2019-05-03 努比亚技术有限公司 一种视频数据的生成方法和终端
CN106686298A (zh) * 2016-11-29 2017-05-17 努比亚技术有限公司 拍摄后处理方法、拍摄后处理装置及移动终端
CN106657776A (zh) * 2016-11-29 2017-05-10 努比亚技术有限公司 拍摄后处理方法、拍摄后处理装置及移动终端
CN106911881B (zh) * 2017-02-27 2020-10-16 努比亚技术有限公司 一种基于双摄像头的动态照片拍摄装置、方法和终端
CN109922252B (zh) * 2017-12-12 2021-11-02 北京小米移动软件有限公司 短视频的生成方法及装置、电子设备
CN110248116B (zh) * 2019-06-10 2021-10-26 腾讯科技(深圳)有限公司 图片处理方法、装置、计算机设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100092150A1 (en) * 2008-10-13 2010-04-15 Samsung Electronics Co., Ltd. Successive video recording method using udta information and portable device therefor
CN102325237A (zh) * 2011-10-26 2012-01-18 天津三星光电子有限公司 一种具有画中画视频录制播放功能的数码相机
CN104065869A (zh) * 2013-03-18 2014-09-24 三星电子株式会社 在电子装置中与播放音频组合地显示图像的方法
CN104125388A (zh) * 2013-04-25 2014-10-29 广州华多网络科技有限公司 一种拍摄并存储相片的方法和装置
CN105245777A (zh) * 2015-09-28 2016-01-13 努比亚技术有限公司 生成视频图片的方法及装置
CN105354219A (zh) * 2015-09-28 2016-02-24 努比亚技术有限公司 一种文件编码方法及装置


Also Published As

Publication number Publication date
CN105245777A (zh) 2016-01-13

Similar Documents

Publication Publication Date Title
WO2017054704A1 (fr) Procédé et dispositif pour générer une image vidéo
WO2017071559A1 (fr) Appareil et procédé de traitement d'image
WO2017107629A1 (fr) Terminal mobile, système de transmission de données et procédé de prise de vues de terminal mobile
US9225905B2 (en) Image processing method and apparatus
WO2017067520A1 (fr) Terminal mobile ayant des appareils de prise de vues binoculaires et procédé de photographie associé
WO2023015981A1 (fr) Procédé de traitement d'images et son dispositif associé
WO2017118353A1 (fr) Dispositif et procédé d'affichage d'un fichier vidéo
US20090135274A1 (en) System and method for inserting position information into image
US20140354880A1 (en) Camera with Hall Effect Switch
US20140270688A1 (en) Personal Video Replay
WO2017054677A1 (fr) Système de photographie de terminal mobile et procédé de photographie de terminal mobile
CN103297682A (zh) 运动图像拍摄设备和使用摄像机装置的方法
WO2017045647A1 (fr) Procédé et terminal mobile pour traiter une image
WO2017084429A1 (fr) Procédé et appareil d'acquisition d'image, et support de stockage informatique
JP2020510928A (ja) 画像表示方法及び電子デバイス
CN105335458B (zh) 图片预览方法及装置
WO2017088609A1 (fr) Appareil et procédé de débruitage d'image
US20130227082A1 (en) Method for uploading media file, electronic device using the same, and non-transitory storage medium
WO2018059206A1 (fr) Terminal, procédé d'acquisition de vidéo, et support de stockage de données
KR20080113698A (ko) 영상에 위치정보를 입력하는 시스템 및 그 동작 방법
WO2017185866A1 (fr) Terminal mobile, procédé d'exposition et dispositif associé, ainsi que support de stockage
WO2017071558A1 (fr) Dispositif et procédé de photographie de terminal mobile
WO2015180683A1 (fr) Terminal mobile, procédé et dispositif pour le réglage de paramètres de capture d'images et support de stockage informatique
WO2017088662A1 (fr) Procédé et dispositif de mise au point
US9609167B2 (en) Imaging device capable of temporarily storing a plurality of image data, and control method for an imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16850328

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16850328

Country of ref document: EP

Kind code of ref document: A1