WO2022166371A1 - Multi-view video recording method, device, and electronic device - Google Patents

Multi-view video recording method, device, and electronic device

Info

Publication number
WO2022166371A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
camera
file
cameras
recording
Prior art date
Application number
PCT/CN2021/136160
Other languages
English (en)
French (fr)
Inventor
杨丽霞
陈绍君
卞超
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21924364.9A (published as EP4274224A1)
Publication of WO2022166371A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/21805Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/85406Content authoring involving a specific file format, e.g. MP4 format
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • the present application relates to the field of terminal technologies, and in particular, to a multi-view video recording method, device, and electronic device.
  • With a multi-view recording function (such as dual-view recording), multiple cameras on a smart terminal can jointly record videos, and during recording the image captured by each camera can be presented individually on the display screen of the smart terminal.
  • Users can adjust the recording parameters of each camera as needed, such as the focal length. At present, however, it is often difficult for users to edit the video files recorded by each camera after multi-view recording is completed.
  • the present application provides a multi-view video recording method, device and electronic device.
  • A description file can be generated, and the video files recorded by each camera can be stored separately, so that the description file can be used to simultaneously play, edit, or share the video files recorded by each camera on the same interface.
  • In a first aspect, the present application provides a multi-view video recording method. The method includes: determining that a plurality of cameras on a first terminal are in a video recording state, wherein a video recording interface of each camera is displayed on a display screen of the first terminal; and generating a first description file, where the first description file includes the video recording information of each camera and the operation information of the user during video recording, and the first description file is used to simultaneously perform a first operation on the video files recorded by each camera on the same interface. The first operation includes at least one of playing, editing, and sharing, wherein the video files recorded by each camera are stored separately.
  • In this way, a description file can be generated during multi-view recording, and the video files recorded by each camera can be saved separately. The description file can then be used to play, edit, or share the video files recorded by each camera at the same time on the same interface, so that users can operate the video files generated by multi-view recording as needed, which improves the convenience of video file processing.
  • In some examples, the method further includes: determining that a first video file recorded by a first camera is in a playing state, where the plurality of cameras include the first camera; and loading, based on the first description file, a second video file recorded by a second camera onto the playing interface of the first video file, where the plurality of cameras include the second camera, and the first description file, the first video file, and the second video file have a first association relationship. In this way, multi-view video playback is realized.
  • In some examples, the method further includes: determining that a third video file recorded by a third camera is in an editing state, where the plurality of cameras include the third camera; loading, based on the first description file, a fourth video file recorded by a fourth camera onto the editing interface of the third video file, where the plurality of cameras include the fourth camera, and the first description file, the third video file, and the fourth video file have a second association relationship; determining the user's editing information for a target video file, where the target video file includes at least one of the third video file and the fourth video file; and updating the first description file based on the editing information. In this way, multi-view video editing is realized.
  • During editing, the original video file of the target video file is not modified.
  • The editing information includes one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream.
  • In some examples, the method further includes: in response to a video sharing request, synthesizing, based on the first description file, the video files recorded by each camera into a fifth video file, and sharing the fifth video file. In this way, sharing of multi-view videos is realized.
  • The video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong.
  • The operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
  • the present application provides a multi-view video recording device, comprising:
  • a determining module configured to determine that the plurality of cameras on the first terminal are all in a video recording state, wherein the video recording interface of each camera is displayed on the display screen of the first terminal;
  • a processing module, configured to generate a first description file, where the first description file includes the video recording information of each camera and the operation information of the user during video recording, and the first description file is used to simultaneously perform a first operation on the video files recorded by each camera on the same interface, the first operation including at least one of playing, editing, and sharing, wherein the video files recorded by each camera are stored separately.
  • The processing module is further configured to: determine that a first video file recorded by a first camera is in a playing state, where the plurality of cameras include the first camera; and load, based on the first description file, a second video file recorded by a second camera onto the playback interface of the first video file, where the plurality of cameras include the second camera, and the first description file, the first video file, and the second video file have a first association relationship.
  • The processing module is further configured to: determine that a third video file recorded by a third camera is in an editing state, where the plurality of cameras include the third camera; load, based on the first description file, a fourth video file recorded by a fourth camera onto the editing interface of the third video file, where the plurality of cameras include the fourth camera, and the first description file, the third video file, and the fourth video file have a second association relationship; determine the user's editing information for a target video file, where the target video file includes at least one of the third video file and the fourth video file; and update the first description file based on the editing information.
  • During editing, the original video file of the target video file is not modified.
  • The editing information includes one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream.
  • The processing module is further configured to: in response to a video sharing request and based on the first description file, synthesize the video files recorded by the respective cameras into a fifth video file, and share the fifth video file.
  • The video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong.
  • The operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
  • the present application provides an electronic device, comprising:
  • At least one memory for storing programs
  • At least one processor for invoking a program stored in the memory to execute the method provided in the first aspect.
  • the present application provides a computer storage medium, where instructions are stored in the computer storage medium, and when the instructions are executed on the computer, the computer is made to execute the method provided in the first aspect.
  • the present application provides a computer program product comprising instructions that, when executed on a computer, cause the computer to perform the method provided in the first aspect.
  • the present application provides a chip, including at least one processor and an interface;
  • At least one processor obtains program instructions or data through an interface
  • At least one processor is configured to execute the program instructions to implement the method provided in the first aspect.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 3a is a schematic diagram of an interface display of an electronic device provided by an embodiment of the present application.
  • FIG. 3b is a schematic diagram of an interface display of an electronic device provided by an embodiment of the present application.
  • FIG. 3c is a schematic diagram of an interface display of an electronic device provided by an embodiment of the present application.
  • FIG. 4a is a schematic diagram of information contained in a description file provided by an embodiment of the present application.
  • FIG. 4b is a schematic diagram of information contained in a description file provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a drag trajectory curve provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a playback process of a dual-recording video provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of information contained in a description file provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an interface display of an electronic device provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a dual-view video recording process provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a dual-view video browsing/editing process provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of functional modules included in an electronic device provided by an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of a multi-view video recording method provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a multi-view video recording device provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • Words such as "exemplary", "such as", or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described in the embodiments of the present application as "exemplary", "such as", or "for example" should not be construed as preferred or more advantageous than other embodiments or designs. Rather, the use of such words is intended to present the related concepts in a concrete manner.
  • the term "and/or" is only an association relationship for describing associated objects, indicating that there may be three relationships, for example, A and/or B, which may indicate: A alone exists, A alone exists There is B, and there are three cases of A and B at the same time.
  • the term "plurality" means two or more.
  • multiple systems refer to two or more systems
  • multiple terminals refer to two or more terminals
  • multiple video streams refer to two or more video streams.
  • The terms "first" and "second" are used for descriptive purposes only, and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
  • The terms "comprising", "including", "having", and their variants mean "including but not limited to", unless specifically emphasized otherwise.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • the user has enabled the multi-view video recording function on terminal A.
  • The user selects two rear cameras on terminal A to record video and selects the split-screen display mode. In this case, the video recording interfaces of the two rear cameras, namely video recording interfaces 11 and 12, can be displayed on the display screen of terminal A.
  • During recording, this solution can store the video recorded by each camera as a separate video file; that is, the videos recorded by the two rear cameras are stored as two video files rather than combined into one. In other words, the video files recorded by each camera are stored separately.
  • this solution can also generate one or more description files, and the description files can include video recording information of each camera and operation information of the user during the video recording process.
  • The video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong.
  • The operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
  • When any one of the video files is played, terminal A can automatically load the video files recorded by the remaining cameras onto the playback interface based on the description file, thereby restoring the dual-view recording process.
  • Similarly, terminal A can automatically load the video files recorded by the remaining cameras onto the editing interface based on the description file, thereby restoring the dual-view recording process so that the user can simultaneously edit the video files recorded by each camera.
  • Terminal A can also combine the video files recorded by each camera based on the description file and then share the combined video file, where the combined video file can present the dual-view recording process.
  • Terminal A can be a mobile phone, a tablet computer, a digital camera, a personal digital assistant (PDA), a wearable device, a smart TV, a Huawei smart screen, a Raspberry Pi, an IndustriPi, and so on.
  • Exemplary embodiments of terminal A include, but are not limited to, electronic devices running iOS, Android, Windows, HarmonyOS, or other operating systems.
  • the electronic device described above may also be other electronic devices, such as a laptop or the like having a touch-sensitive surface (eg, a touch panel).
  • the embodiment of the present application does not specifically limit the type of the electronic device.
  • the following introduces a schematic diagram of the hardware structure of an electronic device provided in this solution.
  • the electronic device may be the terminal A shown in FIG. 1 .
  • FIG. 2 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • The electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, an antenna 1A, an antenna 1B, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, a sensor module 280, keys 290, a camera 293, a display screen 294, and the like.
  • the sensor module 280 may include a pressure sensor 280A, a gyro sensor 280B, an acceleration sensor 280E, a distance sensor 280F, a touch sensor 280K, an ambient light sensor 280L, and the like.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 200 .
  • the electronic device 200 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • The processor 210 may include one or more processing units. For example, the processor 210 may include one or more of an application processor (AP), a modem, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or may be integrated into one or more processors. Exemplarily, the processor 210 may determine whether each camera on the terminal is in a video recording state, generate the description file mentioned above, or play, edit, and share video files recorded by each camera based on the description file.
  • the controller can generate the operation control signal according to the instruction operation code and the timing signal to complete the control of the instruction and execute the instruction.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • The memory in the processor 210 is a cache memory. This memory may hold instructions or data that the processor 210 has just used or reused. If the processor 210 needs to use the instructions or data again, they can be retrieved directly from this memory, avoiding repeated access, reducing the waiting time of the processor 210, and improving system efficiency.
  • the wireless communication function of the electronic device 200 may be implemented by the antenna 1A, the antenna 1B, the mobile communication module 250, the wireless communication module 260, the modem, the baseband processor, and the like.
  • the antenna 1A and the antenna 1B are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 200 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1A can be multiplexed as a diversity antenna of the wireless local area network. In other examples, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 250 may provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the electronic device 200 .
  • the mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • The mobile communication module 250 can receive electromagnetic waves through at least two antennas including the antenna 1A, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit them to the modem for demodulation.
  • the mobile communication module 250 can also amplify the signal modulated by the modem, and then convert it into electromagnetic waves for radiation through the antenna 1A.
  • at least part of the functional modules of the mobile communication module 250 may be provided in the processor 210 .
  • at least part of the functional modules of the mobile communication module 250 may be provided in the same device as at least part of the modules of the processor 210 .
  • a modem may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 270A, the receiver 270B, etc.), or displays images or videos through the display screen 294 .
  • the modem may be a stand-alone device.
  • the modem may be independent of the processor 210 and provided in the same device as the mobile communication module 250 or other functional modules.
  • the mobile communication module 250 may be a module in a modem.
  • The wireless communication module 260 can provide wireless communication solutions applied on the electronic device 200, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 260 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 260 receives the electromagnetic wave via the antenna 1B, frequency modulates and filters the electromagnetic wave signal, and sends the processed signal to the processor 210 .
  • the wireless communication module 260 can also receive the signal to be sent from the processor 210, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 1B.
  • the electronic device 200 implements a display function through a GPU, a display screen 294, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 294 is used to display images, videos, and the like.
  • Display screen 294 includes a display panel.
  • The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • electronic device 200 may include one or more display screens 294 .
  • the display screen 294 may display a video recording interface, a video playing interface, a video editing interface, a video sharing interface and the like of each camera on the terminal.
  • the electronic device 200 can realize the shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294 and the application processor.
  • the ISP is used to process the data fed back by the camera 293 .
  • When shooting, the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, where it is converted into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 293 .
  • the camera 293 is used for capturing still images or videos, for example, capturing facial feature information, posture feature information of a person, and the like.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • electronic device 200 may include one or more cameras 293 .
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 200 may support one or more video codecs.
  • the electronic device 200 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the external memory interface 220 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 200.
  • The external memory card communicates with the processor 210 through the external memory interface 220 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 221 may be used to store computer executable program code, which includes instructions.
  • the processor 210 executes various functional applications and data processing of the electronic device 200 by executing the instructions stored in the internal memory 221 .
  • the internal memory 221 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 200 and the like.
  • the internal memory 221 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 200 may implement audio functions through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 270 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 270 may also be used to encode and decode audio signals. In some examples, the audio module 270 may be provided in the processor 210 , or some functional modules of the audio module 270 may be provided in the processor 210 .
  • The speaker 270A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 200 can listen to music through the speaker 270A, or listen to a hands-free call.
  • The receiver 270B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • When answering a call, the receiver 270B can be placed close to the human ear to listen to the voice.
  • The microphone 270C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals.
  • When making a call or sending a voice message, the user can speak with the mouth close to the microphone 270C to input the sound signal into the microphone 270C.
  • the electronic device 200 may be provided with at least one microphone 270C.
  • the electronic device 200 may be provided with two microphones 270C, which may implement a noise reduction function in addition to collecting sound signals.
  • the electronic device 200 may further be provided with three, four or more microphones 270C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the sensor module 280 may include a pressure sensor 280A, a gyro sensor 280B, an acceleration sensor 280E, a distance sensor 280F, a touch sensor 280K, an ambient light sensor 280L, and the like.
  • the pressure sensor 280A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • pressure sensor 280A may be provided on display screen 294 .
  • The capacitive pressure sensor may be composed of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 280A, the capacitance between the electrodes changes.
  • the electronic device 200 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 294, the electronic device 200 detects the intensity of the touch operation according to the pressure sensor 280A.
  • the electronic device 200 may also calculate the touched position according to the detection signal of the pressure sensor 280A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
  • the gyro sensor 280B may be used to determine the motion attitude of the electronic device 200 .
  • The angular velocity of the electronic device 200 about three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 280B.
  • Gyro sensor 280B can be used for image stabilization.
  • Exemplarily, the gyro sensor 280B detects the angle at which the electronic device 200 shakes, calculates the distance that the lens module needs to compensate for according to the angle, and allows the lens to counteract the shaking of the electronic device 200 through reverse motion, thereby realizing image stabilization.
  • the acceleration sensor 280E can detect the magnitude of the acceleration of the electronic device 200 in various directions (generally three axes).
  • When the electronic device 200 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor can also be used to identify the posture of the electronic device, and is applicable to landscape/portrait switching, pedometers, and other applications.
  • The distance sensor 280F is used to measure distance; the electronic device 200 can measure distance through infrared or laser. In some examples, when collecting user characteristic information in the environment, the electronic device 200 may use the distance sensor 280F to measure distance to achieve fast focusing.
  • the ambient light sensor 280L is used to sense ambient light brightness.
  • the electronic device 200 can adaptively adjust the brightness of the display screen 294 according to the perceived ambient light brightness.
  • the touch sensor 280K is also called “touch device”.
  • the touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touch screen, also called a "touch screen”.
  • the touch sensor 280K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided via display screen 294 .
  • The keys 290 include a power key, a volume key, an input keyboard, and the like. The keys 290 may be mechanical keys or touch keys.
  • the electronic device 200 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 200 .
  • the user can enable the dual-view recording function on the terminal to activate the dual-view recording mode.
  • After the dual-view recording mode is activated, the cameras on the terminal can be listed for the user to select two video streams from.
  • terminal B is configured with four cameras.
  • As shown in Figure 3a, the display interface of terminal B can list the four cameras, namely the front camera, the rear main camera, the rear ultra-wide-angle camera, and the rear telephoto camera.
  • The user can arbitrarily select two cameras. For example, as shown in Figure 3a, the user can select the rear main camera and the rear telephoto camera, or the rear main camera and the front camera, and so on. When selecting cameras, the user can select by touch or by voice control, which is not limited here.
  • After the cameras are selected, video can be recorded.
  • During recording, the video captured by each camera may be presented in a default mode, for example, split-screen display (such as left-right split-screen or top-bottom split-screen) or picture-in-picture display.
  • As shown in Figure 3b, when the user selects two rear cameras, split-screen display can be the default; as shown in Figure 3c, when the two cameras selected by the user are the rear main camera and the front camera, picture-in-picture display can be the default.
  • During recording, the video collected by each camera can be encoded and stored separately. For example, if the video file recorded by the front camera is video 1 and the video file recorded by the rear main camera is video 2, video 1 and video 2 can be stored separately instead of being combined into one video file. It can be understood that an association relationship may exist between the separately encoded and stored video files; for example, the video files recorded in this session are associated with each other through file names or database records.
  • the description file may include video recording information of each camera and operation information of the user during video recording.
  • The video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong.
  • The operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
  • For example, the description file may include information describing the dual-view video as a whole, and this information may be independent of the recording time axis.
  • For example, the information may include the identification of each camera, such as the main camera (Main-Lens) and the front camera (Front-Lens); the names of the video files recorded by each camera, such as VideoFile0: VID_YYYYMMDD_HHMMSS_Main and VideoFile1: VID_YYYYMMDD_HHMMSS_Front; the presentation method of the video captured by each camera during dual-view recording, such as Picture-In-Picture of Main-Lens (the rear main camera provides the large picture of the picture-in-picture); the landscape/portrait type, such as 90 for portrait and 180 for landscape; and the video shooting ratio, such as 16/9.
  • The description file can also include a record of the recording process along the time axis, which can capture the process of screen changes.
  • For example, the description file may record that a drag of the picture-in-picture frame starts around the 15th second and ends around the 20th second. It can be understood that, since the user's dragging of the picture-in-picture frame is free and irregular, the drag trajectory can be recorded in order to reproduce the user's drag operation later.
  • For example, the center point of the inner picture rectangle of the picture-in-picture can be used as the reference point to draw the drag trajectory curve and record it in the file, as shown in Figure 5, which shows the drag trajectory curve.
  • The description file can be associated with the separately stored video files recorded by each camera, for example, by file name or by database record.
  • When one video file is played, the terminal may superimpose the other video files on it for presentation, based on the association between the description file and the separately stored video files recorded by each camera.
  • The presentation method can be restored from the description file. For example, if the presentation mode recorded in the description file is split-screen display, each video file is superimposed and presented in split-screen mode. That is to say, when one of the video files (such as video file 1) is in the playing state, the other video files (such as video file 2) associated with it can be automatically loaded onto the playback interface of that video file (such as video file 1) based on the generated description file.
  • During loading, the current playback time information can be obtained; based on the current playback time, the playback state of the other video files at that time is looked up in the description file, and the other video files are then loaded onto the playback interface of video file 1 according to that playback state.
  • In the process of playing a dual-view recorded video, when the description file records that a user interaction change (such as adjusting the ratio of a split-screen window or switching the picture-in-picture) occurred at a certain point in time, the description file can also be used to restore that change, and the playback screen is adjusted correspondingly during playback, achieving the same effect as during the user's recording process.
  • For example, if the description file records that the user dragged a picture from the 10th second to the 15th second, then when playback reaches the 10th second, the user's drag operation starts to be reproduced according to the drag trajectory recorded in the description file.
  • each video file can be in a read-only state, and in addition, the generated description file can also be in a read-only state.
  • A video file being "in the playing state" in this solution may mean that the user has selected to play the video file, that is, the terminal has received the play instruction issued by the user but has not yet started to play the video file; this stage can be understood as a ready-to-play state. It may also mean that, after receiving the play instruction, the terminal has started to play the video file; this stage can be understood as a formal playing state.
  • For example, the video file VideoFile0 has not yet formally started playing at 00:00:000, and the other video file VideoFile1 can be loaded at this time.
  • During loading, the presentation mode and the landscape/portrait type can be determined based on the generated description file. When the playback time reaches 00:10:500, the description file records that a screen change was detected; at this time, the trajectory map of the screen change can be read, and the screen-change process can be reproduced based on the trajectory map. At playback time 00:20:300, the description file records that the end of the screen change was detected, and the reproduction of the screen change can end at this time.
  • During editing, the terminal may likewise superimpose the other video files on the video file being edited, based on the association between the description file and the separately stored video files recorded by each camera.
  • The presentation method can be restored from the description file. For example, if the presentation mode recorded in the description file is split-screen display, each video file is superimposed and presented in split-screen mode. That is to say, when one of the video files (such as video file 1) is in the editing state, the other video files (such as video file 2) associated with it can be automatically loaded onto the editing interface of that video file (such as video file 1) based on the generated description file.
  • During editing, the user's editing information for each video file can be recorded synchronously. Afterwards, the description file generated during the dual-view recording process is updated based on the recorded editing information.
  • The editing information may include one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream. It can be understood that one video stream may correspond to one video file.
  • The user's editing operations may include: adjustments to the multi-view interface, such as resizing a split screen and switching or dragging the picture-in-picture; they may also include selecting a certain picture for audio mute processing, and the like.
  • For example, the recorded editing information of the user is: the video file recorded by the main camera was muted at 00:00:08:200; the picture-in-picture switching process started at 00:00:10:500; the picture-in-picture drag processing started at 00:00:15:100 and ended at 00:00:20:300; and the video file recorded by the main camera was unmuted at 00:00:21:100.
  • The user's editing of each video file may also include superimposing filters, text, music, watermark borders, and the like on the video file.
  • The related static resources (such as music and watermark borders) used in the user's editing operations can also be stored together with the description file, so that they can be called directly during later playback.
  • Users can share multi-view video files, and can choose to share a single video file or combine multiple video files for sharing.
  • For example, the user can choose to share the picture-in-picture video, that is, multiple video files; the user can also choose to share the clipped video, that is, the video file edited by the user; and the user can also choose to share the foreground video or the background video, that is, a single video file.
  • In addition to the option of sharing the picture-in-picture video, the user can also select videos in other modes, such as the video in split-screen mode. The picture-in-picture video and the split-screen video can both be called multi-view recorded videos.
  • The logic of merging and sharing video files is similar to that of playing or editing them: the state of each video file during recording, the superimposed editing effects, and so on are read from the description file, and the multiple video files are then combined into one video file and shared to social platforms or other devices. That is to say, after the user sends a video sharing request, the video files recorded by each camera can be synthesized into one video file based on the generated or updated description file, and the synthesized video file can then be shared. Exemplarily, the video files to be synthesized and the corresponding description file may be written into a cache stream, and the cache stream may then be compressed to obtain the combined video file.
  • During synthesis, a correspondence between the video frames in each video file and timestamps may be established first, and the multiple video frames corresponding to the same timestamp may then be synthesized into one video frame, where the position of each video frame within the merged interface is determined from the position recorded for that timestamp in the description file. Finally, the multiple synthesized video frames are processed to obtain the synthesized video file.
  • For the dual-view video recording process shown in Figure 9: first, dual-view video recording is started on the terminal. Next, all camera video streams are listed on the terminal. The user then selects two video streams for capture, as well as a dual-view recording format, such as split-screen or picture-in-picture. After that, the cameras selected by the user capture the dual-view video, and each stream is encoded separately, generating two video files, video file 1 and video file 2. At the same time, during recording, the operation process of the dual-view video is recorded and a recording file (i.e., the description file mentioned above) is generated. Finally, the dual-view video is associated with the recording file.
  • the user starts video browsing on the terminal.
  • Next, the terminal reads the dual-view video association information to determine video file 1, video file 2, and the recording file (i.e., the description file mentioned above).
  • the player in the terminal can combine and play the double-recorded video according to the recording file.
  • For the dual-view video editing process, continuing to refer to FIG. 10: first, the user starts video browsing on the terminal. Next, the terminal reads the dual-view video association information to determine video file 1, video file 2, and the recording file (i.e., the description file mentioned above). The terminal can then reproduce the dual-view video according to the recording file. Next, the user readjusts the video and performs editing on one channel of video. Finally, the recording file (i.e., the description file) is updated based on the user's editing information.
  • each functional module involved in the terminal may be implemented by, but not limited to, the processor 210 in the electronic device 200 described in FIG. 2 above.
  • The terminal may include a camera selection module 31, a video acquisition module 32, a dual-view video description file generation module 33, a dual-view video association module 34, a dual-view video playback module 35, a dual-view video editing module 36, and a dual-view video sharing module 37.
  • the camera selection module 31 can collect video streams of each camera of the terminal, that is, collect the video shot by each camera.
  • the video acquisition module 32 can select two video streams for output based on the camera selection module 31, and encode and save the two video streams separately when outputting.
  • The video acquisition module 32 can present the two video streams in a default mode, such as picture-in-picture or split-screen. In the case of multi-view video recording, the video acquisition module 32 can also output more than two video streams, for example, three or four video streams.
  • The dual-view video description file generation module 33 can generate a description file associated with the dual-view video while the video acquisition module 32 outputs the two video files.
  • The description file may include time, type, mode, landscape/portrait orientation, resolution, and the like.
  • The dual-view video association module 34 can associate, in a specific way, the two separately recorded video files and the description file; the association logic can be based on file names or database records.
  • When the user selects any dual-view video to play, the dual-view video playback module 35 can superimpose the other channel of video on the selected channel based on the association provided by the dual-view video association module 34, and the superimposed presentation mode is restored entirely from the associated description file.
  • the dual-view video editing module 36 can synchronously write the editing operation into the associated description file when the user edits the dual-view video, that is, update the description file generated by the dual-view video description file generation module 33 based on the user's editing information.
  • The dual-view video editing module 36 may also allow the user to edit by selecting a display mode.
  • the dual-view video sharing module 37 enables the user to share on demand, for example, to share one or more video files.
  • When multiple video files are shared together, the dual-view video sharing module 37 can combine them into one video file and share it on a social platform or other devices.
  • a multi-view video recording method provided by an embodiment of the present application is introduced. It can be understood that this method is another way of expressing the multi-view video recording process described above, and the two are to be read together. The method is proposed based on the multi-view video recording process described above; for some or all of the content of the method, refer to the above description of that process.
  • FIG. 12 is a schematic flowchart of a multi-view video recording method provided by an embodiment of the present application. It can be understood that the method can be performed by any apparatus, device, platform, or device cluster with computing and processing capabilities. As shown in FIG. 12, the multi-view video recording method includes:
  • Step S101: Determine that multiple cameras on the first terminal are in a video recording state, wherein a video recording interface of each camera is displayed on a display screen of the first terminal.
  • Step S102: Generate a first description file.
  • the first description file includes the video recording information of each camera and the operation information of the user in the process of recording the video, and the first description file is used to simultaneously perform a first operation on the video files recorded by each camera on the same interface. The first operation includes at least one of playing, editing, and sharing, wherein the video files recorded by each camera are stored separately. It can be understood that the first description file is the description file described above.
  • the method may further determine that the first video file recorded by the first camera is in a playing state, the plurality of cameras including the first camera; and, based on the first description file, load the second video file recorded by the second camera onto the playing interface of the first video file, the plurality of cameras including the second camera, wherein the first description file, the first video file, and the second video file have a first association relationship.
  • the method may further determine that the third video file recorded by the third camera is in an editing state, the plurality of cameras including the third camera; based on the first description file, load the fourth video file recorded by the fourth camera onto the editing interface of the third video file, the plurality of cameras including the fourth camera, wherein the first description file, the third video file, and the fourth video file have a second association relationship; determine the user's editing information on the target video file, the target video file including at least one of the third video file and the fourth video file; and, based on the editing information, update the first description file.
  • the original video file of the target video file is not modified.
  • the editing information includes one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream.
  • the method may further synthesize the video files recorded by each camera into a fifth video file based on the first description file in response to the video sharing request, and share the fifth video file.
  • the video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong;
  • the operation information includes one or more of the following: the size adjustment information of the recording interface of each camera, the display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
  • the video files recorded by each camera are stored separately, and a description file is generated.
  • the description file can be used, during post-recording video processing, to simultaneously play, edit, or share the video files recorded by each camera on the same interface, so that users can operate on the video files produced by multi-view recording based on their own needs, improving the convenience of video file processing.
  • FIG. 13 is a schematic structural diagram of a multi-view video recording apparatus provided by an embodiment of the present application.
  • the multi-view video recording apparatus 300 includes: a determination module 301 and a processing module 302 .
  • the determining module 301 can be used to determine that the multiple cameras on the first terminal are all in the video recording state, wherein the video recording interface of each camera is displayed on the display screen of the first terminal;
  • the processing module 302 can be used to generate a first description file.
  • the first description file includes video recording information of each camera and operation information of the user during the video recording process.
  • the first description file is used to simultaneously perform a first operation on the video files recorded by each camera on the same interface.
  • the first operation includes at least one of playing, editing, and sharing, wherein the video files recorded by each camera are stored separately.
  • the processing module 302 may also be configured to determine that the first video file recorded by the first camera is in a playing state, the multiple cameras including the first camera; and, based on the first description file, load the second video file recorded by the second camera onto the playing interface of the first video file, the plurality of cameras including the second camera, wherein the first description file, the first video file, and the second video file have a first association relationship.
  • the processing module 302 may also be configured to determine that the third video file recorded by the third camera is in an editing state, the multiple cameras including the third camera; based on the first description file, load the fourth video file recorded by the fourth camera onto the editing interface of the third video file, the plurality of cameras including the fourth camera, wherein the first description file, the third video file, and the fourth video file have a second association relationship; determine the user's editing information on the target video file, the target video file including at least one of the third video file and the fourth video file; and, based on the editing information, update the first description file.
  • the original video file of the target video file is not modified.
  • the editing information includes one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream.
  • the processing module 302 may be further configured to, in response to the video sharing request, synthesize the video files recorded by each camera into a fifth video file based on the first description file, and share the fifth video file.
  • the video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong;
  • the operation information includes one or more of the following: the size adjustment information of the recording interface of each camera, the display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
  • an embodiment of the present application further provides a chip.
  • FIG. 14 is a schematic structural diagram of a chip according to an embodiment of the present application.
  • the chip 1400 includes one or more processors 1401 and an interface circuit 1402 .
  • the chip 1400 may further include a bus 1403. Specifically:
  • the processor 1401 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method may be completed by an integrated logic circuit of hardware in the processor 1401 or an instruction in the form of software.
  • the above-mentioned processor 1401 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods and steps disclosed in the embodiments of the present application.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the interface circuit 1402 can be used to send or receive data, instructions or information.
  • the processor 1401 can process the data, instructions, or other information received by the interface circuit 1402, and can send the processed information out through the interface circuit 1402.
  • the chip 1400 further includes a memory, which may include a read-only memory and a random access memory, and provides operation instructions and data to the processor.
  • a portion of the memory may also include non-volatile random access memory (NVRAM).
  • the memory stores executable software modules or data structures, and the processor may perform corresponding operations by calling the operation instructions stored in the memory (the operation instructions may be stored in the operating system).
  • the interface circuit 1402 can be used to output the execution result of the processor 1401 .
  • the functions corresponding to the processor 1401 and the interface circuit 1402 can each be implemented by hardware design, by software design, or by a combination of software and hardware, which is not limited here.
  • the processor in the embodiments of the present application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • a general-purpose processor may be a microprocessor or any conventional processor.
  • the method steps in the embodiments of the present application may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
  • Software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and storage medium may reside in an ASIC.
  • in the above-mentioned embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented in software, it can be realized in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in or transmitted over a computer-readable storage medium.
  • the computer instructions can be sent from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), and the like.

Abstract

This application provides a multi-view video recording method and apparatus, and an electronic device. In the method, a description file can be generated during multi-view recording, and the video files recorded by the individual cameras are stored separately; the description file can then be used to play, edit, or share the video files recorded by the cameras simultaneously on the same interface, so that users can operate on the video files produced by multi-view recording according to their own needs, improving the convenience of video file processing.

Description

Multi-view video recording method and apparatus, and electronic device
This application claims priority to Chinese Patent Application No. 2021101772644, entitled "Multi-view video recording method and apparatus, and electronic device", filed with the China National Intellectual Property Administration on February 7, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of terminal technologies, and in particular, to a multi-view video recording method and apparatus, and an electronic device.
Background
With the rapid development of technology, both the hardware and the software of smart terminals have been upgraded rapidly. At present, a large number of smart terminals (such as mobile phones and tablet computers) have gone from a single camera to multiple cameras. The growing number of cameras on smart terminals has improved their photo-taking and video-shooting capabilities.
At present, when using a smart terminal with multiple cameras, a user can use a multi-view recording function, for example, dual-view recording. When the user uses the multi-view recording function, multiple cameras on the smart terminal can record video together, and during recording, the picture shot by each camera can be presented separately on the display screen of the smart terminal. During multi-view recording, the user can adjust the recording parameters of each camera based on his or her own needs, such as adjusting the focal length. However, after multi-view recording is completed, it is often difficult for the user to edit the video file recorded by each camera.
Summary
This application provides a multi-view video recording method and apparatus, and an electronic device. A description file can be generated during multi-view recording, and the video files recorded by the individual cameras are stored separately, so that the description file can be used to play, edit, or share the video files recorded by the cameras simultaneously on the same interface.
According to a first aspect, this application provides a multi-view video recording method. The method includes: determining that multiple cameras on a first terminal are all in a video recording state, where a video recording interface of each camera is displayed on a display screen of the first terminal; and generating a first description file, where the first description file includes video recording information of each camera and operation information of the user during video recording, the first description file is used to simultaneously perform a first operation on the video files recorded by the cameras on the same interface, and the first operation includes at least one of playing, editing, and sharing, where the video files recorded by the cameras are stored separately.
In this way, a description file can be generated during multi-view recording, and the video files recorded by the individual cameras are stored separately; the description file can then be used to play, edit, or share the video files recorded by the cameras simultaneously on the same interface, so that the user can operate on the video files produced by multi-view recording according to his or her own needs, which improves the convenience of video file processing.
In a possible implementation, the method further includes: determining that a first video file recorded by a first camera is in a playing state, the multiple cameras including the first camera; and, based on the first description file, loading a second video file recorded by a second camera onto the playing interface of the first video file, the multiple cameras including the second camera, where the first description file, the first video file, and the second video file have a first association relationship. Playback of multi-view video is thereby achieved.
In a possible implementation, the method further includes: determining that a third video file recorded by a third camera is in an editing state, the multiple cameras including the third camera; based on the first description file, loading a fourth video file recorded by a fourth camera onto the editing interface of the third video file, the multiple cameras including the fourth camera, where the first description file, the third video file, and the fourth video file have a second association relationship; determining the user's editing information on a target video file, the target video file including at least one of the third video file and the fourth video file; and updating the first description file based on the editing information. Editing of multi-view video is thereby achieved.
In a possible implementation, the original video file of the target video file is not modified.
In a possible implementation, the editing information includes one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream.
In a possible implementation, the method further includes: in response to a video sharing request, synthesizing the video files recorded by the cameras into a fifth video file based on the first description file, and sharing the fifth video file. Sharing of multi-view video is thereby achieved.
In a possible implementation, the video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong; the operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
According to a second aspect, this application provides a multi-view video recording apparatus, including:
a determining module, configured to determine that multiple cameras on a first terminal are all in a video recording state, where a video recording interface of each camera is displayed on a display screen of the first terminal; and
a processing module, configured to generate a first description file, where the first description file includes video recording information of each camera and operation information of the user during video recording, and the first description file is used to simultaneously perform a first operation on the video files recorded by the cameras on the same interface, the first operation including at least one of playing, editing, and sharing, where the video files recorded by the cameras are stored separately.
In a possible implementation, the processing module is further configured to:
determine that a first video file recorded by a first camera is in a playing state, the multiple cameras including the first camera; and
based on the first description file, load a second video file recorded by a second camera onto the playing interface of the first video file, the multiple cameras including the second camera, where the first description file, the first video file, and the second video file have a first association relationship.
In a possible implementation, the processing module is further configured to:
determine that a third video file recorded by a third camera is in an editing state, the multiple cameras including the third camera;
based on the first description file, load a fourth video file recorded by a fourth camera onto the editing interface of the third video file, the multiple cameras including the fourth camera, where the first description file, the third video file, and the fourth video file have a second association relationship;
determine the user's editing information on a target video file, the target video file including at least one of the third video file and the fourth video file; and
update the first description file based on the editing information.
In a possible implementation, the original video file of the target video file is not modified.
In a possible implementation, the editing information includes one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream.
In a possible implementation, the processing module is further configured to:
in response to a video sharing request, synthesize the video files recorded by the cameras into a fifth video file based on the first description file, and share the fifth video file.
In a possible implementation, the video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong;
the operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
According to a third aspect, this application provides an electronic device, including:
at least one memory, configured to store a program; and
at least one processor, configured to call the program stored in the memory to perform the method provided in the first aspect.
According to a fourth aspect, this application provides a computer storage medium storing instructions that, when run on a computer, cause the computer to perform the method provided in the first aspect.
According to a fifth aspect, this application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the method provided in the first aspect.
According to a sixth aspect, this application provides a chip, including at least one processor and an interface;
the at least one processor obtains program instructions or data through the interface; and
the at least one processor is configured to execute the program instructions to implement the method provided in the first aspect.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of this application;
FIG. 2 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application;
FIG. 3a is a schematic diagram of an interface display of an electronic device provided by an embodiment of this application;
FIG. 3b is a schematic diagram of an interface display of an electronic device provided by an embodiment of this application;
[Corrected under Rule 91 23.12.2021]
FIG. 3c is a schematic diagram of an interface display of an electronic device provided by an embodiment of this application;
FIG. 4a is a schematic diagram of information contained in a description file provided by an embodiment of this application;
FIG. 4b is a schematic diagram of information contained in a description file provided by an embodiment of this application;
FIG. 5 is a schematic diagram of a drag trajectory curve provided by an embodiment of this application;
FIG. 6 is a schematic diagram of a playback process of a dual-recorded video provided by an embodiment of this application;
FIG. 7 is a schematic diagram of information contained in a description file provided by an embodiment of this application;
FIG. 8 is a schematic diagram of an interface display of an electronic device provided by an embodiment of this application;
FIG. 9 is a schematic diagram of a dual-view video recording process provided by an embodiment of this application;
FIG. 10 is a schematic diagram of a dual-view video browsing/editing process provided by an embodiment of this application;
FIG. 11 is a schematic diagram of functional modules contained in an electronic device provided by an embodiment of this application;
FIG. 12 is a schematic flowchart of a multi-view video recording method provided by an embodiment of this application;
FIG. 13 is a schematic structural diagram of a multi-view video recording apparatus provided by an embodiment of this application;
FIG. 14 is a schematic structural diagram of a chip provided by an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described below with reference to the accompanying drawings.
In the description of the embodiments of this application, words such as "exemplary", "for example", or "for instance" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary", "for example", or "for instance" in the embodiments of this application should not be construed as being preferred over or more advantageous than other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete manner.
In the description of the embodiments of this application, the term "and/or" merely describes an association between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate three cases: A alone, B alone, and both A and B. In addition, unless otherwise stated, the term "multiple" means two or more. For example, multiple systems means two or more systems, multiple terminals means two or more terminals, and multiple video streams means two or more video streams.
In addition, the terms "first" and "second" are used only for descriptive purposes and should not be understood as indicating or implying relative importance or implicitly indicating the indicated technical features. Therefore, features defined with "first" or "second" may explicitly or implicitly include one or more of those features. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of this application. As shown in FIG. 1, a user enables the multi-view recording function on terminal A. Taking dual-view recording as an example, the user selects two rear cameras on terminal A to record video and selects the split-screen display mode. At this point, the video recording interfaces of the two rear cameras, namely video recording interfaces 11 and 12, can be displayed on the display screen of terminal A. During dual-view recording, this solution can store the video file recorded by each camera as a separate video file, that is, store the video files recorded by the two rear cameras as two video files rather than as one video file; in other words, the video files recorded by the cameras are stored separately. In addition, during dual-view recording, this solution can also generate one or more description files, which can include the video recording information of each camera and the operation information of the user during video recording. The video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong. The operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
After dual-view recording is completed, when the user chooses to play one of the video files, terminal A can, based on the description file, automatically load the video files recorded by the remaining cameras onto the playing interface, thereby reproducing the dual-view recording process. When the user chooses to edit one of the video files, terminal A can also, based on the description file, automatically load the video files recorded by the remaining cameras onto the editing interface, thereby reproducing the dual-view recording process and enabling the user to edit the video files recorded by the cameras at the same time. When the user chooses to share all the video files produced by dual-view recording together, terminal A can also, based on the description file, synthesize the video files recorded by the cameras and then share the synthesized video file, where the synthesized video file can present the dual-view recording process.
It can be understood that, in this solution, terminal A may be a mobile phone, a tablet computer, a digital camera, a personal digital assistant (PDA), a wearable device, a smart TV, a Huawei smart screen, a Raspberry Pi, an IndustriPi, and so on. Exemplary embodiments of terminal A include, but are not limited to, electronic devices running iOS, Android, Windows, Harmony OS, or other operating systems. The electronic device may also be another electronic device, such as a laptop with a touch-sensitive surface (for example, a touch panel). The embodiments of this application do not specifically limit the type of the electronic device.
The following describes the hardware structure of an electronic device provided in this solution. The electronic device may be terminal A shown in FIG. 1.
FIG. 2 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application. As shown in FIG. 2, the electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, an antenna 1A, an antenna 1B, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, a sensor module 280, keys 290, a camera 293, a display screen 294, and so on. The sensor module 280 may include a pressure sensor 280A, a gyroscope sensor 280B, an acceleration sensor 280E, a distance sensor 280F, a touch sensor 280K, an ambient light sensor 280L, and so on.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 200. In other embodiments of this application, the electronic device 200 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 210 may include one or more processing units. For example, the processor 210 may include one or more of an application processor (AP), a modem, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent components or may be integrated into one or more processors. Exemplarily, the processor 210 can determine whether each camera on the terminal is in a video recording state, can generate the description file mentioned above, and can play, edit, and share the video files recorded by the cameras based on the description file.
The controller can generate operation control signals according to instruction operation codes and timing signals to control instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some examples, the memory in the processor 210 is a cache. The memory can hold instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated access, reduces the waiting time of the processor 210, and improves system efficiency.
The wireless communication function of the electronic device 200 can be implemented by the antenna 1A, the antenna 1B, the mobile communication module 250, the wireless communication module 260, the modem, the baseband processor, and the like.
The antenna 1A and the antenna 1B are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 200 can be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, the antenna 1A can be multiplexed as a diversity antenna of a wireless local area network. In other examples, an antenna can be used in combination with a tuning switch.
The mobile communication module 250 can provide wireless communication solutions, including 2G/3G/4G/5G, applied to the electronic device 200. The mobile communication module 250 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and so on. The mobile communication module 250 can receive electromagnetic waves through at least two antennas including the antenna 1A, filter and amplify the received electromagnetic waves, and transmit them to the modem for demodulation. The mobile communication module 250 can also amplify signals modulated by the modem and convert them into electromagnetic waves radiated through the antenna 1A. In some examples, at least some functional modules of the mobile communication module 250 may be provided in the processor 210. In some examples, at least some functional modules of the mobile communication module 250 may be provided in the same component as at least some modules of the processor 210.
The modem may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs sound signals through audio devices (not limited to the speaker 270A, the receiver 270B, etc.) or displays images or videos through the display screen 294. In some examples, the modem may be an independent component. In other examples, the modem may be independent of the processor 210 and be provided in the same component as the mobile communication module 250 or other functional modules. In still other examples, the mobile communication module 250 may be a module in the modem.
The wireless communication module 260 can provide wireless communication solutions applied to the electronic device 200, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on. The wireless communication module 260 may be one or more components integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 1B, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 210. The wireless communication module 260 can also receive signals to be transmitted from the processor 210, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated through the antenna 1B.
The electronic device 200 implements the display function through the GPU, the display screen 294, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display screen 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 294 is used to display images, videos, and the like. The display screen 294 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and so on. In some examples, the electronic device 200 may include one or more display screens 294. Exemplarily, the display screen 294 can display the video recording interfaces of the cameras on the terminal, a video playing interface, a video editing interface, a video sharing interface, and so on.
The electronic device 200 can implement the shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294, the application processor, and the like.
The ISP is used to process the data fed back by the camera 293. For example, when shooting, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the light signal is converted into an electrical signal, and the camera's photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also apply algorithmic optimization to the noise, brightness, and skin tone of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some examples, the ISP may be provided in the camera 293.
The camera 293 is used to capture static images or videos, for example, to capture facial feature information and posture feature information of a person. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then passes the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some examples, the electronic device 200 may include one or more cameras 293.
The video codec is used to compress or decompress digital video. The electronic device 200 can support one or more video codecs. In this way, the electronic device 200 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
The external memory interface 220 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 200. The external memory card communicates with the processor 210 through the external memory interface 220 to implement the data storage function, for example, saving files such as music and videos on the external memory card.
The internal memory 221 can be used to store computer-executable program code, which includes instructions. The processor 210 executes various functional applications and data processing of the electronic device 200 by running the instructions stored in the internal memory 221. The internal memory 221 may include a program storage area and a data storage area. The program storage area can store the operating system, applications required for at least one function (such as a sound playing function and an image playing function), and so on. The data storage area can store data created during use of the electronic device 200 (such as audio data and a phone book). In addition, the internal memory 221 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, universal flash storage (UFS), and so on.
The electronic device 200 can implement audio functions, such as music playing and recording, through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the application processor, and the like.
The audio module 270 is used to convert digital audio information into an analog audio signal output, and is also used to convert an analog audio input into a digital audio signal. The audio module 270 can also be used to encode and decode audio signals. In some examples, the audio module 270 may be provided in the processor 210, or some functional modules of the audio module 270 may be provided in the processor 210.
The speaker 270A, also called the "horn", is used to convert an audio electrical signal into a sound signal. The electronic device 200 can play music or hands-free calls through the speaker 270A.
The receiver 270B, also called the "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 200 answers a call or plays a voice message, the voice can be heard by bringing the receiver 270B close to the ear.
The microphone 270C, also called the "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 270C to input the sound signal into the microphone 270C. The electronic device 200 may be provided with at least one microphone 270C. In other examples, the electronic device 200 may be provided with two microphones 270C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 200 may also be provided with three, four, or more microphones 270C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The sensor module 280 may include a pressure sensor 280A, a gyroscope sensor 280B, an acceleration sensor 280E, a distance sensor 280F, a touch sensor 280K, an ambient light sensor 280L, and so on.
The pressure sensor 280A is used to sense pressure signals and can convert pressure signals into electrical signals. In some examples, the pressure sensor 280A may be provided on the display screen 294. There are many types of pressure sensors 280A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 280A, the capacitance between the electrodes changes, and the electronic device 200 determines the intensity of the pressure based on the change in capacitance. When a touch operation acts on the display screen 294, the electronic device 200 detects the intensity of the touch operation through the pressure sensor 280A. The electronic device 200 can also calculate the position of the touch based on the detection signal of the pressure sensor 280A. In some examples, touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation with an intensity less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyroscope sensor 280B can be used to determine the motion posture of the electronic device 200. In some examples, the angular velocity of the electronic device 200 around three axes (namely, the x, y, and z axes) can be determined through the gyroscope sensor 280B. The gyroscope sensor 280B can be used for image stabilization during shooting. Exemplarily, when the electronic device 200 is used to collect user feature information in the environment, the gyroscope sensor 280B detects the angle at which the electronic device 200 shakes, calculates from the angle the distance the lens module needs to compensate, and lets the lens counteract the shake of the electronic device 200 through reverse motion to achieve image stabilization.
The acceleration sensor 280E can detect the magnitude of the acceleration of the electronic device 200 in various directions (usually three axes). The magnitude and direction of gravity can be detected when the electronic device 200 is stationary. It can also be used to identify the posture of the electronic device and is applied to landscape/portrait switching, pedometers, and other applications.
The distance sensor 280F is used to measure distance. The electronic device 200 can measure distance by infrared or laser. In some examples, when the electronic device is used to collect user feature information of a user in the environment, the electronic device 200 can use the distance sensor 280F to measure distance to achieve fast focusing.
The ambient light sensor 280L is used to sense the brightness of ambient light. The electronic device 200 can adaptively adjust the brightness of the display screen 294 based on the perceived brightness of ambient light.
The touch sensor 280K is also called a "touch device". The touch sensor 280K may be provided on the display screen 294, and the touch sensor 280K and the display screen 294 form a touch screen, also called a "touch-controlled screen". The touch sensor 280K is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through the display screen 294.
The keys 290 include a power key, volume keys, an input keyboard, and so on. The keys 290 may be mechanical keys or touch keys. The electronic device 200 can receive key input and generate key signal input related to the user settings and function control of the electronic device 200.
Next, based on the application scenario shown in FIG. 1 and the hardware structure of the electronic device shown in FIG. 2, the multi-view recording process of this solution is described in detail, taking dual-view recording as an example.
(1) The user selects the dual-view recording mode
The user can enable the dual-view recording function on the terminal to start the dual-view recording mode. After the dual-view recording mode is started, the cameras on the terminal can be listed for the user to select two video streams. For example, as shown in FIG. 3a, terminal B is equipped with four cameras. After the user starts the dual-view recording mode, the four cameras can be listed on the display interface of terminal B, namely the front camera, the rear main camera, the rear ultra-wide-angle camera, and the rear telephoto camera in FIG. 3a.
(2) The user selects the cameras required for dual-view recording
After the terminal lists the cameras, the user can select any two of them. For example, as shown in FIG. 3a, the user can select the rear main camera and the rear telephoto camera, or the rear main camera and the front camera, and so on. When selecting cameras, the user may select by touch or by voice, which is not limited here.
(3) Dual-view recording
After the user selects two cameras, video can be recorded. During dual-view recording, the presentation mode of the videos collected by the cameras can be a default, for example, split-screen display (such as left-right or top-bottom split screen) or picture-in-picture display. Exemplarily, as shown in FIG. 3b, when the two cameras selected by the user are the rear main camera and the rear telephoto camera, the default can be split-screen display; as shown in FIG. 3c, when the two cameras selected by the user are the rear main camera and the front camera, the default can be picture-in-picture display.
During dual-view recording, the video files collected by the cameras can be encoded and stored separately. For example, if the video file recorded by the front camera is video 1 and the video file recorded by the rear main camera is video 2, then video 1 and video 2 can be stored separately rather than being synthesized into one video file for storage. It can be understood that the separately encoded and stored video files can be associated with each other, for example, by associating the video files of this recording through file names or database records.
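As an illustration of the separate-storage step just described, the following Python sketch starts one encoder per selected camera and writes each stream to its own file. StubEncoder is a hypothetical stand-in for a real hardware encoder API, which this application does not name, and the file-naming pattern is an assumption modeled on the example in FIG. 4a.

```python
# Illustrative sketch only: StubEncoder stands in for a real hardware encoder
# API. The point is that each selected camera stream gets its own encoder and
# its own output file, rather than being merged at record time.
from dataclasses import dataclass

@dataclass
class StubEncoder:
    camera_id: str

    def start(self, output: str) -> None:
        # A real implementation would configure and start a hardware encoder.
        print(f"encoding stream of {self.camera_id} -> {output}")

def start_dual_capture(camera_ids: list[str], session_stamp: str) -> list[str]:
    """Start one encoder per selected camera; return the separate file names."""
    output_files = []
    for cam in camera_ids:
        # Naming mirrors the VID_YYYYMMDD_HHMMSS_<camera> example in FIG. 4a.
        out = f"VID_{session_stamp}_{cam}.mp4"
        StubEncoder(cam).start(output=out)
        output_files.append(out)
    return output_files

# Example: start_dual_capture(["Main", "Front"], "20210207_101500")
```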
During dual-view recording, a description file associated with the videos produced by dual-view recording can be generated. The description file can include the video recording information of each camera and the operation information of the user during video recording. The video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong. The operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
For better understanding, the format of the description file is illustrated with an example. As shown in FIG. 4a, the description file can include information describing the dual-view video as a whole, which can be independent of the recording timeline. In FIG. 4a, there are the names of the cameras, such as the main camera (Main-Lens) and the front camera (Front-Lens); the names of the video files recorded by the cameras, such as VideoFile0: VID_YYYYMMDD_HHMMSS_Main and VideoFile1: VID_YYYYMMDD_HHMMSS_Front; the presentation mode of the videos collected by the cameras during dual-view recording, such as Picture-In-Picture of Main-Lens (picture-in-picture with the rear main camera as the large picture); the landscape/portrait type, such as 90 (portrait), with 180 indicating landscape; and the video shooting ratio, such as 16/9. As shown in FIG. 4b, the description file can also include a record of the recording process along the timeline, which can record how the picture changes, for example: a switch between the front and rear camera pictures occurred at around the 10th second, and a drag of the picture-in-picture frame occurred at around the 15th second and ended at around the 20th second. It can be understood that, since the user drags the picture-in-picture frame freely and irregularly, the drag trajectory can be recorded so that the drag operation can be reproduced later; for example, a drag trajectory curve can be drawn with the center point of the picture-in-picture rectangle as the reference point and recorded as a file, as shown in FIG. 5, which shows the drag trajectory curve.
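As an illustration only, the information listed for FIG. 4a and FIG. 4b could be serialized as in the sketch below. The JSON field names and the event schema are assumptions made for this sketch; the application describes the kinds of information recorded, not a concrete file format.

```python
# A hypothetical JSON rendering of the description file of FIG. 4a/4b.
import json

description = {
    # timeline-independent information (FIG. 4a)
    "cameras": ["Main-Lens", "Front-Lens"],
    "VideoFile0": "VID_YYYYMMDD_HHMMSS_Main",
    "VideoFile1": "VID_YYYYMMDD_HHMMSS_Front",
    "mode": "Picture-In-Picture of Main-Lens",  # rear main camera as the large picture
    "orientation": 90,    # 90 = portrait, 180 = landscape
    "ratio": "16/9",
    # timeline record of picture changes (FIG. 4b)
    "events": [
        {"time_s": 10, "type": "switch-front-rear"},
        {"time_s": 15, "type": "pip-drag-start", "trajectory": "drag_curve.dat"},
        {"time_s": 20, "type": "pip-drag-end"},
    ],
}
print(json.dumps(description, indent=2))
```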
After the description file is generated, it can be associated with the separately stored video files recorded by the cameras, for example, by associating them through file names or database records.
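The following sketch illustrates the two association strategies just mentioned, by file name and by database record. The file layout and the table schema are assumptions made for illustration.

```python
# A sketch of tying the two video files and the description file together,
# either through a shared file-name stem or through a database record.
import sqlite3

def associate_by_name(stem: str) -> dict:
    # e.g. stem = "VID_20210207_101500"
    return {
        "videos": [f"{stem}_Main.mp4", f"{stem}_Front.mp4"],
        "description": f"{stem}.json",
    }

def associate_by_db(db: sqlite3.Connection, stem: str) -> None:
    db.execute(
        "CREATE TABLE IF NOT EXISTS dual_video "
        "(stem TEXT PRIMARY KEY, video0 TEXT, video1 TEXT, descr TEXT)"
    )
    db.execute(
        "INSERT OR REPLACE INTO dual_video VALUES (?, ?, ?, ?)",
        (stem, f"{stem}_Main.mp4", f"{stem}_Front.mp4", f"{stem}.json"),
    )
    db.commit()
```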
When the user ends dual-view recording, dual-view recording is completed.
(4) Playing video files produced by dual-view recording
The user can select any video file produced by dual-view recording for playback. When one of the video files is in a playing state, the terminal can, based on the association between the description file and the separately stored video files recorded by the cameras, superimpose the other video files on this video file for presentation. The presentation mode when superimposing can be restored from the description file. For example, if the presentation mode recorded in the description file is split-screen display, the video files are superimposed and presented in split-screen mode. In other words, when one of the video files (such as video file 1) is in a playing state, the other video files associated with it (such as video file 2) can be automatically loaded onto the playing interface of that video file (such as video file 1) based on the generated description file. Exemplarily, during the playback of video file 1, the current playback time information can be obtained, the playback state of the other video files at the current playback time can then be looked up in the description file based on that information, and the other video files can then be loaded onto the playing interface of video file 1 based on their playback state.
During playback of video files produced by dual-view recording, when the description file records that a user interaction change occurred at a certain point in time (such as an adjustment of the split-screen window ratio or a picture-in-picture switch), it can also be restored according to the description file, and the playback picture can be adjusted correspondingly during playback, so that playback matches the user's recording process. For example, if the description file records that the user dragged a picture from the 10th to the 15th second, then when playback reaches the 10th second, the user's drag operation starts to be reproduced according to the drag trajectory recorded in the description file.
It can be understood that during playback of video files produced by dual-view recording, each video file can be in a read-only state, and the generated description file can also be in a read-only state. In addition, a video file being in a playing state, as described in this solution, can mean that the user has chosen to play the video file, that is, the terminal has received the play instruction issued by the user but has not yet started playing the video file, which can be understood as a pre-playing state; it can also mean that after the terminal receives the play instruction issued by the user, the terminal has started playing the video file, which can be understood as a formal playing state. Exemplarily, as shown in FIG. 6, the video file VideoFile0 has not yet formally started playing at 00:00:000; at this point, the other video file VideoFile1 can be loaded, and during loading the landscape/portrait type can be determined based on the generated description file. When the playback time reaches 00:10:500, the description file records that a picture change was detected; at this point, the trajectory of the picture change can be read, and the picture change process can be reproduced based on that trajectory. When the playback time reaches 00:20:300, the description file records that the picture change ended; at this point, the reproduction of the picture change process can end.
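The playback-restore logic described above can be sketched as follows, reusing the illustrative "events" schema from the earlier description-file example. The event types and their handling are assumptions made for this sketch, not the application's actual player implementation.

```python
# A sketch of restoring recorded interactions during playback: on each tick,
# look up the events due at the current time and reproduce them.

def events_due(description: dict, prev_s: float, now_s: float) -> list[dict]:
    """Events whose recorded start time falls within (prev_s, now_s]."""
    return [e for e in description["events"] if prev_s < e["time_s"] <= now_s]

def on_playback_tick(description: dict, prev_s: float, now_s: float) -> None:
    for event in events_due(description, prev_s, now_s):
        if event["type"] == "switch-front-rear":
            print("swap which stream is shown as the large picture")
        elif event["type"] == "pip-drag-start":
            # Replay the recorded drag trajectory (see FIG. 5).
            print(f"replay drag from trajectory file {event['trajectory']}")
        elif event["type"] == "pip-drag-end":
            print("stop replaying the drag")
```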
(5) Editing video files produced by dual-view recording
The user can select any video file produced by dual-view recording for editing. When one of the video files is in an editing state, the terminal can, based on the association between the description file and the separately stored video files recorded by the cameras, superimpose the other video files on this video file for presentation. The presentation mode when superimposing can be restored from the description file. For example, if the presentation mode recorded in the description file is split-screen display, the video files are superimposed and presented in split-screen mode. In other words, when one of the video files (such as video file 1) is in an editing state, the other video files associated with it (such as video file 2) can be automatically loaded onto the editing interface of that video file (such as video file 1) based on the generated description file.
During editing of video files produced by dual-view recording, the user's editing information on each video file can be recorded synchronously. Afterwards, the description file generated during dual-view recording is updated based on the recorded editing information. The editing information can include one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream. It can be understood that one video stream can correspond to one video file. Exemplarily, the user's editing operations can include adjustments to the multi-view interface, such as resizing the split-screen picture and switching and dragging the picture-in-picture; they can also include selecting a certain picture for audio muting, and so on. For example, as shown in FIG. 7, the recorded editing information of the user is: at 00:00:08:200 the video file recorded by the main camera was muted, at 00:00:10:500 a picture-in-picture switch was performed, at 00:00:15:100 a picture-in-picture drag started, at 00:00:20:300 the picture-in-picture drag ended, and at 00:00:21:100 the video file recorded by the main camera was unmuted.
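As a sketch of how the editing operations listed for FIG. 7 could be recorded, the snippet below appends timestamped edit events that would later be merged into the description file. The event structure is an assumption made for illustration; note that only this metadata changes, never the original video files.

```python
# A sketch of recording the edits of FIG. 7 as timestamped events; timestamps
# follow the hh:mm:ss:ms notation used in the text above.

edits: list[dict] = []

def record_edit(time_code: str, action: str, target: str | None = None) -> None:
    edits.append({"time": time_code, "action": action, "target": target})

record_edit("00:00:08:200", "mute", "Main-Lens")
record_edit("00:00:10:500", "pip-switch")
record_edit("00:00:15:100", "pip-drag-start")
record_edit("00:00:20:300", "pip-drag-end")
record_edit("00:00:21:100", "unmute", "Main-Lens")
# ...then rewrite the description file with `edits` merged into its timeline.
```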
It can be understood that the user's editing operations on the video files can also include superimposing filters, text, music, watermark borders, and so on, onto the video files. The related static resources used by the user in editing operations (such as music and watermark borders) can also be stored together with the description file, so that they can be called directly during later playback.
It should be noted that, in this solution, when the user edits the video files, the original video files are not modified. When the edited video files are shared later, the edited video files can be reproduced on the basis of the original video files according to the updated description file.
(6) Sharing video files produced by dual-view recording
The user can share multiple video files. When sharing, the user can choose to share a single video file or to merge multiple video files and share the result. For example, as shown in FIG. 8, the user can choose to share the picture-in-picture video, that is, multiple video files; to share the edited video, that is, the video file edited by the user; or to share the foreground video or the background video, that is, a single video file. It can be understood that in FIG. 8, in addition to sharing the picture-in-picture video, the user can also choose videos in other modes, such as split-screen videos, where picture-in-picture videos and split-screen videos can be called multi-view recording videos.
In this solution, the logic for merging and sharing video files is similar to the logic for playing or editing them: the state of each video file during recording, the superimposed editing effects, and so on, are likewise read from the description file, and the multiple video files are then synthesized into one video file and shared to a social platform or another device. In other words, after the user issues a video sharing request, the video files recorded by the cameras can be synthesized into one video file based on the generated or updated description file, and the synthesized video file is then shared. Exemplarily, the video files to be synthesized and the corresponding description file can be written into a cache stream, after which the cache stream can be compressed to obtain the merged video file. Exemplarily, when synthesizing video files, a correspondence between the video frames in each video file and the timestamps can first be established; then the multiple video frames corresponding to the same timestamp are synthesized into one video frame, where during synthesis the position of each video frame in the merged interface can be determined based on the position of each video frame at that timestamp in the description file; finally, the multiple synthesized video frames are processed to obtain the synthesized video file.
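The frame-level synthesis just described can be sketched as follows: group frames by timestamp across the separately stored files, composite each group at the positions the description file records for that moment, then hand the result to an encoder. The frame representation, the layout_at callback, and the returned structure are schematic assumptions.

```python
# A sketch of the three synthesis steps: (1) build a timestamp -> frames map,
# (2) merge the frames sharing a timestamp at positions taken from the
# description file, (3) feed the composited frames to a video encoder.
from collections import defaultdict
from typing import Callable

def synthesize(files_frames: dict[str, list[tuple[int, object]]],
               layout_at: Callable[[int], dict]) -> list[tuple]:
    """files_frames maps file name -> [(timestamp_ms, frame), ...]."""
    by_ts = defaultdict(list)
    for name, frames in files_frames.items():
        for ts, frame in frames:            # step 1: frame <-> timestamp map
            by_ts[ts].append((name, frame))
    composited = []
    for ts in sorted(by_ts):                # step 2: merge frames per timestamp
        positions = layout_at(ts)           # positions come from the description file
        composited.append((ts, [(positions[name], frame)
                                for name, frame in by_ts[ts]]))
    return composited                       # step 3: pass on to an encoder
```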
To facilitate understanding of the dual-view recording process described above, the following continues with the example of dual-recorded video based on the above description. It can be understood that what follows is another way of expressing the above, and the two are to be read together.
For the dual-view recording process: as shown in FIG. 9, first, dual-record video recording is started on the terminal. Next, all camera video streams on the terminal are listed. After that, the user can select two video streams for collection and select the dual-record video recording form, such as split screen or picture-in-picture. The cameras selected by the user can then collect the dual-record video, which is separately encoded into videos, producing two video files, video file 1 and video file 2. Meanwhile, during recording, the dual-record video operation process is recorded and a record file (that is, the description file mentioned above) is generated. Finally, the dual-recorded video is associated with the record file.
For the dual-view video browsing process: as shown in FIG. 10, first, the user starts video browsing on the terminal. Next, the terminal reads the dual-record video association information to determine video file 1, video file 2, and the record file (that is, the description file mentioned above). Then, the player in the terminal can merge and play the dual-recorded video according to the record file.
For the dual-view video editing process: continuing to refer to FIG. 10, first, the user starts video browsing on the terminal. Next, the terminal reads the dual-record video association information to determine video file 1, video file 2, and the record file (that is, the description file mentioned above). Then, the terminal can reproduce the dual-recorded video according to the record file. Next, the user readjusts the video and performs editing processing on one video channel. Finally, the record file (that is, the description file mentioned above) is updated based on the user's editing information.
Next, based on the above description and taking dual-view recording as an example, the functional modules involved in the terminal mentioned in this solution are introduced. It can be understood that the functional modules introduced below can be used to implement the dual-view recording process described above; for the detailed working process of each functional module, refer to the description of the dual-view recording process above, which is not repeated here one by one. In addition, each functional module involved in the terminal may be implemented by, but not limited to, the processor 210 in the electronic device 200 described in FIG. 2 above.
As shown in FIG. 11, in this solution, the terminal may include a camera selection module 31, a video acquisition module 32, a dual-view video description file generation module 33, a dual-view video association module 34, a dual-view video playback module 35, a dual-view video editing module 36, and a dual-view video sharing module 37.
The camera selection module 31 can collect the video streams of the cameras of the terminal, that is, collect the video shot by each camera.
The video acquisition module 32 can, based on the camera selection module 31, select two video streams for output and encode and save the two video streams separately when outputting. The video acquisition module 32 can present the two video streams in a default way, such as picture-in-picture or split screen. In the case of multi-view recording, the video acquisition module 32 can also output more than two video streams, for example, three or four video streams.
The dual-view video description file generation module 33 can generate a description file associated with the dual-view video while the video acquisition module 32 outputs the two video files. Exemplarily, the description file can include the time, type, mode, landscape/portrait orientation, resolution, and so on.
The dual-view video association module 34 can associate the two separately recorded video files and the description file in a specific way; the association logic can be based on file names or on database records.
The dual-view video playback module 35 can, when the user selects any dual-view video for playback, superimpose the other video channel on the selected one based on the association provided by the dual-view video association module 34, and the superimposed presentation mode is restored entirely from the associated description file.
The dual-view video editing module 36 can, when the user edits the dual-view video, synchronously write the editing operations into the associated description file, that is, update the description file generated by the dual-view video description file generation module 33 based on the user's editing information. Exemplarily, the dual-view video editing module 36 can let the user edit by selecting a display mode.
The dual-view video sharing module 37 enables the user to share on demand, for example, to share one or more video files. When the user chooses to share multiple video files, the dual-view video sharing module 37 can synthesize the multiple video files into one video file and share it to a social platform or another device.
Next, based on the multi-view recording process described above, a multi-view video recording method provided by an embodiment of this application is introduced. It can be understood that this method is another way of expressing the multi-view recording process described above, and the two are to be read together. The method is proposed based on the multi-view recording process described above; for some or all of the content of the method, refer to the above description of that process.
Please refer to FIG. 12, which is a schematic flowchart of a multi-view video recording method provided by an embodiment of this application. It can be understood that the method can be performed by any apparatus, device, platform, or device cluster with computing and processing capabilities. As shown in FIG. 12, the multi-view video recording method includes:
Step S101: Determine that multiple cameras on a first terminal are all in a video recording state, where a video recording interface of each camera is displayed on a display screen of the first terminal.
Step S102: Generate a first description file, where the first description file includes video recording information of each camera and operation information of the user during video recording, and the first description file is used to simultaneously perform a first operation on the video files recorded by the cameras on the same interface, the first operation including at least one of playing, editing, and sharing, where the video files recorded by the cameras are stored separately. It can be understood that the first description file is the description file described above.
In one example, the method may further determine that the first video file recorded by the first camera is in a playing state, the multiple cameras including the first camera; and, based on the first description file, load the second video file recorded by the second camera onto the playing interface of the first video file, the multiple cameras including the second camera, where the first description file, the first video file, and the second video file have a first association relationship.
In one example, the method may further determine that the third video file recorded by the third camera is in an editing state, the multiple cameras including the third camera; based on the first description file, load the fourth video file recorded by the fourth camera onto the editing interface of the third video file, the multiple cameras including the fourth camera, where the first description file, the third video file, and the fourth video file have a second association relationship; determine the user's editing information on the target video file, the target video file including at least one of the third video file and the fourth video file; and update the first description file based on the editing information.
In one example, the original video file of the target video file is not modified.
In one example, the editing information includes one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream.
In one example, the method may further, in response to a video sharing request, synthesize the video files recorded by the cameras into a fifth video file based on the first description file, and share the fifth video file.
In one example, the video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong; the operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
It can be understood that, in this solution, during multi-view recording, the video files recorded by the cameras are stored separately and a description file is generated. Thus, in later video processing, the description file can be used to play, edit, or share the video files recorded by the cameras simultaneously on the same interface, so that the user can operate on the video files produced by multi-view recording according to his or her own needs, which improves the convenience of video file processing.
Based on the method in the above embodiments, an embodiment of this application provides a multi-view video recording apparatus. Please refer to FIG. 13, which is a schematic structural diagram of a multi-view video recording apparatus provided by an embodiment of this application. As shown in FIG. 13, the multi-view video recording apparatus 300 includes a determining module 301 and a processing module 302. The determining module 301 can be used to determine that multiple cameras on a first terminal are all in a video recording state, where a video recording interface of each camera is displayed on a display screen of the first terminal; the processing module 302 can be used to generate a first description file, where the first description file includes video recording information of each camera and operation information of the user during video recording, and the first description file is used to simultaneously perform a first operation on the video files recorded by the cameras on the same interface, the first operation including at least one of playing, editing, and sharing, where the video files recorded by the cameras are stored separately.
In one example, the processing module 302 can also be used to determine that the first video file recorded by the first camera is in a playing state, the multiple cameras including the first camera; and, based on the first description file, load the second video file recorded by the second camera onto the playing interface of the first video file, the multiple cameras including the second camera, where the first description file, the first video file, and the second video file have a first association relationship.
In one example, the processing module 302 can also be used to determine that the third video file recorded by the third camera is in an editing state, the multiple cameras including the third camera; based on the first description file, load the fourth video file recorded by the fourth camera onto the editing interface of the third video file, the multiple cameras including the fourth camera, where the first description file, the third video file, and the fourth video file have a second association relationship; determine the user's editing information on the target video file, the target video file including at least one of the third video file and the fourth video file; and update the first description file based on the editing information.
In one example, the original video file of the target video file is not modified.
In one example, the editing information includes one or more of the following: the size of the display interface of at least one video stream, the position of the display interface of at least one video stream, the volume of at least one video stream, or the start time of each edit performed by the user on at least one video stream.
In one example, the processing module 302 can also be used to, in response to a video sharing request, synthesize the video files recorded by the cameras into a fifth video file based on the first description file, and share the fifth video file.
In one example, the video recording information includes one or more of the following: the start recording time of each camera, the identification information of each camera, the display mode of the recording interface of each camera, the video shooting ratio, or the rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong; the operation information includes one or more of the following: size adjustment information of the recording interface of each camera, display adjustment information of the recording interface of each camera, or the start time of each operation performed by the user on the recording interface of each camera.
It should be understood that the above apparatus is used to perform the method in the above embodiments. The implementation principles and technical effects of the corresponding program modules in the apparatus are similar to those described for the above method; for the working process of the apparatus, refer to the corresponding process in the above method, which is not repeated here.
Based on the method in the above embodiments, an embodiment of this application further provides a chip. Please refer to FIG. 14, which is a schematic structural diagram of a chip provided by an embodiment of this application. As shown in FIG. 14, the chip 1400 includes one or more processors 1401 and an interface circuit 1402. Optionally, the chip 1400 may further include a bus 1403. Specifically:
The processor 1401 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1401 or by instructions in the form of software. The processor 1401 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods and steps disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The interface circuit 1402 can be used to send or receive data, instructions, or information. The processor 1401 can process the data, instructions, or other information received by the interface circuit 1402 and can send the processed information out through the interface circuit 1402.
Optionally, the chip 1400 further includes a memory, which may include a read-only memory and a random access memory and provides operation instructions and data to the processor. A part of the memory may also include non-volatile random access memory (NVRAM).
Optionally, the memory stores executable software modules or data structures, and the processor can perform corresponding operations by calling the operation instructions stored in the memory (the operation instructions may be stored in the operating system).
Optionally, the interface circuit 1402 can be used to output the execution result of the processor 1401.
It should be noted that the functions corresponding to the processor 1401 and the interface circuit 1402 can each be implemented by hardware design, by software design, or by a combination of software and hardware, which is not limited here.
It should be understood that the steps of the above method embodiments can be completed by logic circuits in the form of hardware or by instructions in the form of software in the processor. The chip can be applied to the electronic device shown in FIG. 2.
It can be understood that the processor in the embodiments of this application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general-purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of this application may be implemented by hardware or by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from and write information to the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC.
In the above embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of this application are generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted via a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, integrating one or more available media. The available media may be magnetic media (for example, floppy disks, hard disks, or magnetic tapes), optical media (for example, DVDs), or semiconductor media (for example, solid state disks (SSDs)), and so on.
It can be understood that the various numerical designations involved in the embodiments of this application are merely for ease of description and are not intended to limit the scope of the embodiments of this application.

Claims (18)

  1. A multi-view video recording method, characterized in that the method comprises:
    determining that multiple cameras on a first terminal are all in a video recording state, wherein a video recording interface of each of the cameras is displayed on a display screen of the first terminal;
    generating a first description file, wherein the first description file comprises video recording information of each of the cameras and operation information of a user during video recording, and the first description file is used to simultaneously perform a first operation on the video files recorded by each of the cameras on the same interface, the first operation comprising at least one of playing, editing, and sharing, wherein the video files recorded by each of the cameras are stored separately.
  2. The method according to claim 1, characterized in that the method further comprises:
    determining that a first video file recorded by a first camera is in a playing state, the multiple cameras comprising the first camera;
    based on the first description file, loading a second video file recorded by a second camera onto a playing interface of the first video file, the multiple cameras comprising the second camera, wherein the first description file, the first video file, and the second video file have a first association relationship.
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    determining that a third video file recorded by a third camera is in an editing state, the multiple cameras comprising the third camera;
    based on the first description file, loading a fourth video file recorded by a fourth camera onto an editing interface of the third video file, the multiple cameras comprising the fourth camera, wherein the first description file, the third video file, and the fourth video file have a second association relationship;
    determining editing information of the user on a target video file, the target video file comprising at least one of the third video file and the fourth video file;
    updating the first description file based on the editing information.
  4. The method according to claim 3, characterized in that an original video file of the target video file is not modified.
  5. The method according to claim 3 or 4, characterized in that the editing information comprises one or more of the following: a size of a display interface of at least one video stream, a position of the display interface of at least one video stream, a volume of at least one video stream, or a start time of each edit performed by the user on at least one video stream.
  6. The method according to any one of claims 1-5, characterized in that the method further comprises:
    in response to a video sharing request, synthesizing the video files recorded by each of the cameras into a fifth video file based on the first description file, and sharing the fifth video file.
  7. The method according to any one of claims 1-6, characterized in that the video recording information comprises one or more of the following: a start recording time of each of the cameras, identification information of each of the cameras, a display mode of a recording interface of each of the cameras, a video shooting ratio, or a rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong;
    the operation information comprises one or more of the following: size adjustment information of the recording interface of each of the cameras, display adjustment information of the recording interface of each of the cameras, or a start time of each operation performed by the user on the recording interface of each of the cameras.
  8. A multi-view video recording apparatus, characterized by comprising:
    a determining module, configured to determine that multiple cameras on a first terminal are all in a video recording state, wherein a video recording interface of each of the cameras is displayed on a display screen of the first terminal;
    a processing module, configured to generate a first description file, wherein the first description file comprises video recording information of each of the cameras and operation information of a user during video recording, and the first description file is used to simultaneously perform a first operation on the video files recorded by each of the cameras on the same interface, the first operation comprising at least one of playing, editing, and sharing, wherein the video files recorded by each of the cameras are stored separately.
  9. The apparatus according to claim 8, characterized in that the processing module is further configured to:
    determine that a first video file recorded by a first camera is in a playing state, the multiple cameras comprising the first camera;
    based on the first description file, load a second video file recorded by a second camera onto a playing interface of the first video file, the multiple cameras comprising the second camera, wherein the first description file, the first video file, and the second video file have a first association relationship.
  10. The apparatus according to claim 8 or 9, characterized in that the processing module is further configured to:
    determine that a third video file recorded by a third camera is in an editing state, the multiple cameras comprising the third camera;
    based on the first description file, load a fourth video file recorded by a fourth camera onto an editing interface of the third video file, the multiple cameras comprising the fourth camera, wherein the first description file, the third video file, and the fourth video file have a second association relationship;
    determine editing information of the user on a target video file, the target video file comprising at least one of the third video file and the fourth video file;
    update the first description file based on the editing information.
  11. The apparatus according to claim 10, characterized in that an original video file of the target video file is not modified.
  12. The apparatus according to claim 10 or 11, characterized in that the editing information comprises one or more of the following: a size of a display interface of at least one video stream, a position of the display interface of at least one video stream, a volume of at least one video stream, or a start time of each edit performed by the user on at least one video stream.
  13. The apparatus according to any one of claims 8-12, characterized in that the processing module is further configured to:
    in response to a video sharing request, synthesize the video files recorded by each of the cameras into a fifth video file based on the first description file, and share the fifth video file.
  14. The apparatus according to any one of claims 8-13, characterized in that the video recording information comprises one or more of the following: a start recording time of each of the cameras, identification information of each of the cameras, a display mode of a recording interface of each of the cameras, a video shooting ratio, or a rotation angle, during video recording, of the display screen of the terminal to which the multiple cameras belong;
    the operation information comprises one or more of the following: size adjustment information of the recording interface of each of the cameras, display adjustment information of the recording interface of each of the cameras, or a start time of each operation performed by the user on the recording interface of each of the cameras.
  15. An electronic device, characterized by comprising:
    at least one memory, configured to store a program;
    at least one processor, configured to call the program stored in the memory to perform the method according to any one of claims 1-7.
  16. A computer storage medium, storing instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1-7.
  17. A computer program product containing instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1-7.
  18. A chip, characterized by comprising at least one processor and an interface;
    the at least one processor obtains program instructions or data through the interface; and
    the at least one processor is configured to execute the program instructions to implement the method according to any one of claims 1-7.
PCT/CN2021/136160 2021-02-07 2021-12-07 Multi-view video recording method and apparatus, and electronic device WO2022166371A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21924364.9A EP4274224A1 (en) 2021-02-07 2021-12-07 Multi-scene video recording method and apparatus, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110177264.4A 2021-02-07 Multi-view video recording method and apparatus, and electronic device
CN202110177264.4 2021-02-07

Publications (1)

Publication Number Publication Date
WO2022166371A1 true WO2022166371A1 (zh) 2022-08-11

Family

ID=82741861

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/136160 WO2022166371A1 (zh) 2021-02-07 2021-12-07 多景录像方法、装置及电子设备

Country Status (3)

Country Link
EP (1) EP4274224A1 (zh)
CN (1) CN114915745B (zh)
WO (1) WO2022166371A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115460352B * 2022-11-07 2023-04-07 Moore Threads Intelligent Technology (Beijing) Co., Ltd. Vehicle-mounted video processing method, apparatus, device, storage medium, and program product

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101207807A (zh) * 2007-12-18 2008-06-25 Meng Zhiping Method and system for processing video
WO2015064855A1 (ko) * 2013-11-01 2015-05-07 주식회사 모브릭 Method and apparatus for providing a user interface menu for multi-angle image capture
CN109451178A (zh) * 2018-12-27 2019-03-08 Vivo Mobile Communication Co., Ltd. Video playback method and terminal
CN110072070A (zh) * 2019-03-18 2019-07-30 Huawei Technologies Co., Ltd. Multi-channel video recording method and device
CN110166652A (zh) * 2019-05-28 2019-08-23 Chengdu Yineng Technology Co., Ltd. Multi-track audio and video synchronous editing method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116170527A (zh) * 2023-02-16 2023-05-26 Nanjing Jinzhen Microelectronics Technology Co., Ltd. Message editing method, message editing apparatus, medium, and electronic device
CN116170527B (zh) * 2023-02-16 2023-11-07 Nanjing Jinzhen Microelectronics Technology Co., Ltd. Message editing method, message editing apparatus, medium, and electronic device

Also Published As

Publication number Publication date
CN114915745A (zh) 2022-08-16
EP4274224A1 (en) 2023-11-08
CN114915745B (zh) 2023-11-03

Similar Documents

Publication Publication Date Title
US11669242B2 (en) Screenshot method and electronic device
CN108965980B (zh) Recommended content display method, apparatus, terminal, and storage medium
US20230116044A1 (en) Audio processing method and device
CN112394895B (zh) Cross-device picture display method and apparatus, and electronic device
US9860448B2 (en) Method and electronic device for stabilizing video
WO2021036542A1 (zh) Screen recording method and mobile terminal
US9491367B2 (en) Image data processing method and electronic device supporting the same
WO2022258024A1 (zh) Image processing method and electronic device
US20110117851A1 (en) Method and apparatus for remote controlling bluetooth device
WO2022166371A1 (zh) Multi-view video recording method and apparatus, and electronic device
US20230328429A1 (en) Audio processing method and electronic device
WO2021013147A1 (zh) Video processing method and apparatus, terminal, and storage medium
WO2021037227A1 (zh) Image processing method, electronic device, and cloud server
WO2023160285A9 (zh) Video processing method and apparatus
CN111464830A (zh) Image display method, apparatus, system, device, and storage medium
CN111741366A (zh) Audio playing method, apparatus, terminal, and storage medium
CN109451248B (zh) Video data processing method, apparatus, terminal, and storage medium
WO2022042769A2 (zh) Multi-screen interaction system, method, apparatus, and medium
WO2023160295A9 (zh) Video processing method and apparatus
US20230353862A1 (en) Image capture method, graphic user interface, and electronic device
CN109819314B (zh) Audio and video processing method, apparatus, terminal, and storage medium
CN112822544A (zh) Video material file generation method, video synthesis method, device, and medium
CN110971840A (zh) Video mapping method and apparatus, computer device, and storage medium
JP2017046160A (ja) Image processing apparatus, control method therefor, control program, and storage medium
CN111294509A (zh) Video shooting method, apparatus, terminal, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21924364

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021924364

Country of ref document: EP

Effective date: 20230804

NENP Non-entry into the national phase

Ref country code: DE