WO2017185808A1 - Data processing method, electronic device, and storage medium - Google Patents

Data processing method, electronic device, and storage medium

Info

Publication number
WO2017185808A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
video data
feature parameter
data
target sub
Application number
PCT/CN2016/113980
Other languages
French (fr)
Chinese (zh)
Inventor
刘林汶
Original Assignee
努比亚技术有限公司
Application filed by 努比亚技术有限公司
Publication of WO2017185808A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316 Generation of visual interfaces for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window

Definitions

  • the present invention relates to electronic technologies, and in particular, to a data processing method, an electronic device, and a storage medium.
  • Existing electronic devices offer more and more functions, and many of these functions have become standard, for example the camera function; users can use the camera function of an electronic device to take photos or record video.
  • When a user records video with the camera function, the electronic device usually processes the collected video data with a single video processing strategy and, when presenting it, presents it on the display screen in a single presentation manner. This approach is too uniform to satisfy users' demand for diversified presentation manners and degrades the user experience.
  • In view of this, the embodiments of the present invention provide a data processing method, an electronic device, and a storage medium, which can solve the above problems in the prior art and enrich and improve the user experience.
  • a first aspect of the embodiments of the present invention provides a data processing method, including:
  • the electronic device collects at least one piece of video data in real time in the image capturing area corresponding to at least one image capturing device by using the at least one image capturing device;
  • processes, by using at least a first video processing policy and a second video processing policy, at least part of the sub-data in the at least one piece of video data collected in real time, to obtain at least two target sub-video data, where the first video processing policy is different from the second video processing policy; and
  • generates target video data based on the at least two target sub-video data, so that the target video data includes video data that can be presented in at least two different presentation manners.
  • The video data processed by the first video processing policy and by the second video processing policy may be entirely the same video data, partially the same video data, or completely different video data.
  • The image processing unit is further configured to select different or identical video processing policies for the video data of different display areas, based on a user operation performed in the framing interface of the electronic device.
  • the different presentation manners represent different video feature parameters of the presented video data.
  • the image processing unit is further configured to acquire a video feature parameter input by the user through a user interaction interface, and determine a video processing policy based on the video feature parameter input by the user;
  • Alternatively, based on the size of the display area selected by the user, a video feature parameter matching that area size is selected from a preset relationship list, and the video processing policy is determined from the selected video feature parameter; the preset relationship list records the correspondence between display-area sizes and video feature parameters.
  • Processing at least part of the sub-data in the at least one piece of video data collected in real time by using the first video processing policy includes: reducing the video feature parameter, which characterizes the number of video frames per unit time, of at least part of that video data, and taking the part whose video feature parameter has been reduced as the first target sub-video data; the first target sub-video data is included in the at least two target sub-video data and can be presented in a first presentation manner.
  • Correspondingly, processing with the second video processing policy increases the video feature parameter, which characterizes the number of video frames per unit time, of at least part of the video data, and takes the part whose video feature parameter has been increased as the second target sub-video data; the second target sub-video data is included in the at least two target sub-video data and can be presented in a second presentation manner.
  • Adjusting the video feature parameter of at least part of the sub-data in the at least one piece of video data collected in real time includes: determining a video storage feature parameter based on a collected feature parameter, and deleting video frames from the collected video data according to the determined video storage feature parameter, thereby adjusting the video feature parameter of the video data collected in real time.
  • The method further includes: presenting, based on a user operation, the at least two target sub-video data in the target video data in different presentation manners in at least a first display area and a second display area of the electronic device.
  • a second aspect of the embodiments of the present invention provides an electronic device, including:
  • the image acquisition unit is configured to acquire at least one video data in real time in an image collection area corresponding to the at least one image acquisition device by using at least one image acquisition device;
  • an image processing unit, configured to process at least part of the sub-data in the at least one piece of video data collected in real time by using at least a first video processing policy and a second video processing policy, to obtain at least two target sub-video data;
  • the first video processing policy is different from the second video processing strategy;
  • a video data generating unit, configured to generate target video data based on the at least two target sub-video data, such that the target video data includes video data that can be presented in at least two different presentation manners.
  • The video data processed by the first video processing policy and by the second video processing policy may be entirely the same video data, partially the same video data, or completely different video data.
  • the method further includes:
  • the video processing policy is a preset processing policy or a processing policy set according to a user operation.
  • the method further includes:
  • the video feature parameter matching the size of the display area selected by the user is chosen from a preset relationship list, and the video processing policy is determined based on the selected video feature parameter; the preset relationship list records the correspondence between display-area sizes and video feature parameters.
  • The image processing unit is further configured to reduce the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, the video feature parameter characterizing the number of video frames per unit time, and to take the part of the video data whose video feature parameter has been reduced as the first target sub-video data; the first target sub-video data is included in the at least two target sub-video data and can be presented in a first presentation manner.
  • The image processing unit is further configured to increase the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, the video feature parameter characterizing the number of video frames per unit time, and to take the part of the video data whose video feature parameter has been adjusted as the second target sub-video data; the second target sub-video data is included in the at least two target sub-video data and can be presented in a second presentation manner.
  • The image processing unit is further configured to determine a video storage feature parameter based on a collected feature parameter, delete video frames from the collected video data according to the determined video storage feature parameter, and thereby adjust the video feature parameter of the video data collected in real time; the video storage feature parameter characterizes the number of video frames to be saved per unit time.
  • the electronic device further includes a storage unit and a video display unit;
  • a third aspect of the embodiments of the present invention provides a computer storage medium, wherein the computer storage medium stores a computer program for executing the data processing method described above.
  • With the data processing method, electronic device, and storage medium described above, at least one piece of video data is collected in real time in the image capturing area corresponding to at least one image capturing device by using that image capturing device; at least part of the sub-data in the video data collected in real time is processed by using at least the first video processing strategy and the second video processing strategy to obtain at least two target sub-video data; and target video data is then generated based on the at least two target sub-video data, so that the video data included in the target video data can be presented in different presentation manners. The method described in the embodiments of the present invention therefore enriches and enhances the user experience, and also satisfies users' demand for diversified presentation manners.
  • FIG. 1 is a schematic diagram of the hardware structure of an optional mobile terminal implementing various embodiments of the present invention;
  • FIG. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 1;
  • FIG. 3 is a schematic flowchart of an implementation of a data processing method according to an embodiment of the present invention;
  • FIG. 4 is a first schematic diagram of a specific application of a data processing method according to an embodiment of the present invention;
  • FIG. 5 is a second schematic diagram of a specific application of a data processing method according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the mobile terminal can be implemented in various forms.
  • The terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal.
  • However, those skilled in the art will appreciate that, apart from elements specifically intended for mobile use, the configuration according to the embodiments of the present invention can also be applied to fixed types of terminals.
  • FIG. 1 is a schematic diagram showing the hardware structure of an optional mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • FIG. 1 illustrates a mobile terminal 100 having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal 100 will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal 100.
  • the wireless internet module 113 can be internally or externally coupled to the terminal.
  • The wireless internet access technologies supported by the wireless internet module 113 may include wireless local area network (WLAN), Wi-Fi, wireless broadband (WiBro), worldwide interoperability for microwave access (WiMAX), high speed downlink packet access (HSDPA), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal 100.
  • A typical example of the location information module 115 is a global positioning system (GPS) module.
  • The location information module 115, as a GPS module, calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, so as to accurately calculate three-dimensional current location information based on longitude, latitude, and altitude.
  • the method for calculating position and time information uses three satellites and corrects the calculated position and time information errors by using another satellite.
  • the position information module 115 as a GPS can calculate the speed information by continuously calculating the current position information in real time.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by the image capturing device in a video capturing mode or an image capturing mode, and the processed image frames can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal 100.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • the processed audio (voice) data can be converted to a format output that can be transmitted to the mobile communication base station via the mobile communication module 112 in the case of a telephone call mode.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal 100.
  • The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by contact), a scroll wheel, a rocker, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • The external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port (a typical example is a universal serial bus (USB) port), a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, a headphone port, and so on.
  • The identification module may store various information for authenticating the user of the mobile terminal 100, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module (hereinafter referred to as "identification device”) may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • In addition, the interface unit 170 may serve as a path through which power is supplied from a base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal 100.
  • Various command signals or power input from the base can be used as signals for identifying whether the mobile terminal 100 is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be constructed to be transparent to allow the user to view from the outside; these may be referred to as transparent displays.
  • a typical transparent display can be, for example, a transparent organic light emitting diode (TOLED) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • When the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like, the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alert unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations, and when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (ie, vibration) to notify the user of it. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide an output of the notification event occurrence via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like that performs processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, and the like) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal 100.
  • the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing or playing back multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • For a hardware implementation, the embodiments described herein may be implemented by using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such an embodiment may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • The software code can be implemented by a software application (or program) written in any suitable programming language, and can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal 100 has been described in terms of its function.
  • In the following, a slide-type mobile terminal 100, among the various types of mobile terminals 100 such as folding, bar, swing and slide types, will be described as an example; however, the present invention can be applied to any type of mobile terminal 100 and is not limited to the slide-type mobile terminal 100.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • a communication system in which the mobile terminal 100 according to the present invention can operate will now be described with reference to FIG.
  • Such communication systems may use different air interfaces and/or physical layers.
  • The air interfaces used by communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the BS 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), with each partition covered by a multi-directional antenna or an antenna pointing in a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally mean a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station.”
  • each partition of a particular BS 270 may be referred to as multiple cellular stations.
  • As shown in FIG. 2, a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • several satellites 300 are shown, for example GPS satellites 300 may be employed.
  • the satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the position information module 115 as a GPS as shown in FIG. 1 is generally configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking techniques or in addition to GPS tracking techniques, other techniques that can track the location of the mobile terminal 100 can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular BS 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC 275 provides call resource allocation and coordinated mobility management functions including a soft handoff procedure between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • Similarly, the PSTN 290 interfaces with the MSC 280, the MSC 280 interfaces with the BSC 275, and the BSC 275 in turn controls the BS 270 to transmit forward link signals to the mobile terminal 100.
  • The mobile communication module 112 of the wireless communication unit 110 in the mobile terminal accesses the mobile communication network (such as a 2G/3G/4G mobile communication network) based on the necessary data of the mobile communication network built into the mobile terminal (including user identification information and authentication information), and transmits mobile communication data (including uplink and downlink mobile communication data) for services of the mobile terminal user such as web browsing and network multimedia playback.
  • The wireless internet module 113 of the wireless communication unit 110 implements the function of a wireless hotspot by running the related protocol functions of a wireless hotspot. The wireless hotspot supports the access of multiple mobile terminals (any mobile terminals other than this mobile terminal) and, by multiplexing the mobile communication connection between the mobile communication module 112 and the mobile communication network, transmits mobile communication data (including uplink and downlink mobile communication data) for services of those mobile terminal users such as web browsing and network multimedia playback. Since the mobile terminal essentially multiplexes its own mobile communication connection to transmit this data, the mobile communication traffic consumed is counted against the communication tariff of this mobile terminal by the charging entity on the network side, thereby consuming the data traffic included in this mobile terminal's subscribed communication tariff.
  • FIG. 3 is a schematic flowchart of an implementation of a data processing method according to an embodiment of the present invention. The method is applied to an electronic device, which may specifically be the mobile terminal described above; the electronic device is configured with, or linked to, a display screen and at least one image acquisition device. As shown in FIG. 3, the method includes:
  • Step 301 The electronic device acquires at least one video data in real time in an image capturing area corresponding to the at least one image capturing device by using at least one image capturing device;
  • Here, the video data processed by the first video processing policy and by the second video processing policy may be entirely the same video data, partially the same video data, or completely different video data. For example, suppose the video data collected by a first camera is the first video data; the electronic device may process all of the first video data by using the first video processing policy and the second video processing policy, or may process only part of the first video data. When the electronic device processes parts of the first video data by using the first video processing policy and the second video processing policy, the parts of the video data processed by the different video processing policies may be the same or different.
  • In practice, this may be set arbitrarily according to actual conditions and user requirements; for example, through user operations in the framing interface of the electronic device, different or identical video processing policies may be selected for the video data of different display areas.
  • FIG. 4 is a first schematic diagram of a specific application of the data processing method according to an embodiment of the present invention. As shown in FIG. 4, when the electronic device presents the collected video data in a first display area, the electronic device receives a user operation on the display screen and pops up the dashed box shown in the left part of FIG. 4. The size of the dashed box can be enlarged or reduced by the user through dragging, stretching, and similar operations. After the user confirms the dashed box, the sub-video data corresponding to the dashed box is presented in a second display area of the electronic device in a small-screen presentation manner. In this case, the electronic device may process the video data of the first display area by using the first video processing policy and process the video data of the second display area by using the second video processing policy, so as to obtain two target sub-video data, where the first of the two target sub-video data is normal video data and the second is slow-motion video data. In this way, the target video data contains both video data played at normal speed and video data played in slow motion, thereby enriching and enhancing the user experience. A minimal sketch of this scenario is given after this paragraph.
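  • The following Python sketch illustrates the FIG. 4 scenario under stated assumptions: frames are NumPy arrays, the user-selected dashed box is given as a rectangle, and the slow-down factor, policy names, and data structures are illustrative choices of this sketch, not the patent's required implementation.

```python
# Hypothetical sketch of the FIG. 4 scenario: the full frame is kept as
# normal-speed video, while the user-selected dashed-box region becomes a
# slow-motion sub-video intended for the second (small-screen) display area.
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np

Frame = np.ndarray  # H x W x 3 image


@dataclass
class TargetSubVideo:
    frames: List[Frame]
    fps: float          # "video feature parameter": frames per unit time
    presentation: str   # e.g. "normal", "slow-motion"


def crop(frame: Frame, box: Tuple[int, int, int, int]) -> Frame:
    """Cut out the user-selected dashed-box region (x, y, w, h)."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w]


def process_capture(frames: List[Frame], box: Tuple[int, int, int, int],
                    capture_fps: float = 30.0) -> List[TargetSubVideo]:
    # First video processing policy: keep the full frames at normal speed.
    normal = TargetSubVideo(frames=list(frames), fps=capture_fps,
                            presentation="normal")
    # Second video processing policy: the dashed-box region is stored with a
    # lower playback fps, so the same frames stretch over more time (slow motion).
    slow = TargetSubVideo(frames=[crop(f, box) for f in frames],
                          fps=capture_fps / 4.0,   # assumed 4x slow-down
                          presentation="slow-motion")
    # The "target video data" is simply the collection of both sub-videos,
    # each of which can be presented in its own display area.
    return [normal, slow]
```

  • In this sketch the target video data is just the pair of sub-videos, each carrying its own frames-per-second value so that a player can present them in different display areas at different speeds.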
  • In another embodiment, the electronic device may acquire video data in real time by using at least two image capturing devices, for example at least two second cameras, in the image capturing areas corresponding to those cameras, so that at least two pieces of video data are collected. The electronic device then processes the different video data collected by the at least two second cameras by using at least the first video processing policy and the second video processing policy, obtains at least two target sub-video data, and generates target video data based on the at least two target sub-video data; that is, the video data of different presentation manners contained in the target video data come from different video sources.
  • FIG. 5 is a second schematic diagram of a specific application of the data processing method according to an embodiment of the present invention. As shown in FIG. 5, the electronic device collects first video data by using its own first camera (not shown in FIG. 5) and presents it in real time in a first display area of the electronic device; at the same time, the electronic device collects second video data by using an external second camera (not shown in FIG. 5) and presents it in real time in a second display area. Further, according to the user's "zoom in" or "zoom out" gestures on the first display area and the second display area, the electronic device may select, for the video data presented in each display area, a video processing policy matching the gesture; for example, it processes the first video data corresponding to the first display area by using the first video processing policy and processes the second video data corresponding to the second display area by using the second video processing policy, thereby obtaining at least two target sub-video data, and finally generates target video data based on the at least two target sub-video data. In this way, the first target sub-video data in the target video data (i.e., the video data obtained by processing the first video data with the first video processing policy) and the second target sub-video data (i.e., the video data obtained by processing the second video data with the second video processing policy) can be presented in different presentation manners.
  • Step 302 Process at least part of the sub-data in the at least one video data collected in real time by using at least a first video processing policy and a second video processing policy to obtain at least two target sub-video data;
  • the first video processing policy is different from the second video processing policy;
  • Step 303 Generate target video data based on the at least two target sub-video data, so that the target video data includes video data that can be presented in at least two different presentation manners.
  • Here, the different presentation manners may specifically be characterized by different video feature parameters of the presented video data, where the video feature parameter characterizes the number of video frames per unit time. In other words, the different presentation manners may characterize the presentation speed: for example, presentation in slow motion, in fast motion, or at normal speed.
  • In summary, the data processing method in the embodiment of the present invention collects at least one piece of video data in real time in the image capturing area corresponding to at least one image capturing device by using that image capturing device, processes at least part of the sub-data in the video data collected in real time by using at least the first video processing strategy and the second video processing strategy to obtain at least two target sub-video data, and generates target video data based on the at least two target sub-video data, so that the video data contained in the target video data can be presented in different presentation manners. The method therefore enriches and enhances the user experience, and also satisfies users' demand for diversified presentation manners.
  • This embodiment of the present invention provides three specific video processing strategies. In practice, the three strategies described below may be used simultaneously, or any two of the three may be selected, thus laying the foundation for rich presentation manners.
  • The video processing policy in this embodiment may be preset, or may be set at any time according to user operations. For example, the electronic device may acquire a video feature parameter input by the user through the user interaction interface and determine the video processing policy from it; or, based on the size of the display area selected by the user, it may select a video feature parameter matching that area size from a preset relationship list (i.e., a list recording the correspondence between display-area sizes and video feature parameters) and determine the video processing policy from the selected parameter; a minimal lookup sketch is given after this paragraph. Those skilled in the art will appreciate that the embodiments of the present invention aim to emphasize that different video processing strategies are used to process the same or different video data, and that the processed video data is used to generate target video data containing video data that can be presented in different presentation manners; the policy-setting process described above is therefore only illustrative and is not intended to limit the present invention.
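  • The following Python sketch shows one possible form of the preset relationship list, assuming that display-area sizes are bucketed by pixel count; the list name, thresholds, frame-rate values, and policy names are illustrative assumptions rather than values taken from the patent.

```python
# Hypothetical preset relationship list: the display-area size (in pixels)
# is mapped to a video feature parameter (frames per unit time), which in
# turn determines the video processing policy.
PRESET_RELATIONSHIP_LIST = [
    # (minimum area in pixels, frames per second, policy name)
    (500_000, 30, "normal"),        # larger area: normal playback speed
    (100_000, 60, "fast-motion"),   # medium area: more frames per unit time
    (0,       15, "slow-motion"),   # small area: fewer frames per unit time
]


def policy_for_display_area(width: int, height: int) -> tuple:
    """Select the video feature parameter matching the selected area size."""
    area = width * height
    for min_area, fps, policy in PRESET_RELATIONSHIP_LIST:
        if area >= min_area:
            return fps, policy
    return PRESET_RELATIONSHIP_LIST[-1][1:]  # smallest bucket as fallback


# Example: a small 320x240 sub-window would get the slow-motion policy.
print(policy_for_display_area(320, 240))   # -> (15, 'slow-motion')
```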
  • The first video processing strategy: specifically, the electronic device reduces the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, where the video feature parameter characterizes the number of video frames per unit time. That is, the electronic device reduces the number of video frames expected to be displayed per unit time, which lengthens the display interval between adjacent video frames and implements a slow-motion processing strategy. The electronic device then takes the part of the video data whose video feature parameter has been reduced as the first target sub-video data; the first target sub-video data is included in the at least two target sub-video data and can be presented in a first presentation manner, for example in slow motion.
  • The second video processing strategy: specifically, the electronic device increases the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, where the video feature parameter characterizes the number of video frames per unit time. That is, the electronic device increases the number of video frames expected to be displayed per unit time, which shortens the display interval between adjacent video frames and implements a fast-motion processing strategy. The electronic device then takes the part of the video data whose video feature parameter has been adjusted as the second target sub-video data; the second target sub-video data is included in the at least two target sub-video data and can be presented in a second presentation manner, for example as fast motion. A retiming sketch covering both strategies is given after this paragraph.
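  • The following Python sketch covers both the first (slow-motion) and second (fast-motion) strategies under one assumption: that changing the number of frames presented per unit time is realized by rescaling each frame's presentation timestamp. The speed factors and data layout are illustrative choices of this sketch.

```python
# Hypothetical sketch of the first and second video processing strategies:
# the video feature parameter (frames per unit time) is lowered to lengthen
# the interval between adjacent frames (slow motion) or raised to shorten it
# (fast motion). Here this is modelled by rescaling presentation timestamps.
from typing import List, Tuple

Frame = bytes                      # stand-in for encoded frame data
TimedFrame = Tuple[float, Frame]   # (presentation time in seconds, frame)


def retime(frames: List[TimedFrame], speed: float) -> List[TimedFrame]:
    """speed < 1.0 -> slow motion (fewer frames per unit time),
    speed > 1.0 -> fast motion (more frames per unit time)."""
    if speed <= 0:
        raise ValueError("speed must be positive")
    return [(t / speed, frame) for t, frame in frames]


def first_policy(frames: List[TimedFrame]) -> List[TimedFrame]:
    # First target sub-video data: presented in slow motion (assumed 0.5x).
    return retime(frames, speed=0.5)


def second_policy(frames: List[TimedFrame]) -> List[TimedFrame]:
    # Second target sub-video data: presented as fast motion (assumed 2x).
    return retime(frames, speed=2.0)


# 30 fps capture: frames arrive every 1/30 s.
captured = [(i / 30.0, b"frame-%d" % i) for i in range(90)]
slow = first_policy(captured)    # adjacent frames now 1/15 s apart
fast = second_policy(captured)   # adjacent frames now 1/60 s apart
```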
  • The third video processing strategy: based on the above characteristics, the present embodiment reduces the amount of video data by a frame-dropping process in order to save storage space. Specifically, the electronic device determines a video storage feature parameter based on a collected feature parameter (e.g., the video capture duration); the video storage feature parameter characterizes the number of video frames to be saved per unit time. The electronic device then deletes video frames from the collected video data according to the determined video storage feature parameter, thereby adjusting the video feature parameter of the video data collected in real time and reducing the data amount of the target video data.
  • Moreover, as the capture duration grows, the number of video frames removed by the frame-dropping process increases. For example, when video capture starts, the number of video frames per unit time selected from the currently collected video data as target video data is N, where N is a positive integer greater than or equal to 1 and N is less than or equal to the number of video frames per unit time in normal video (for example, 20 or 30). As the capture duration extends, for example when the video capture duration exceeds a first threshold (say, ten minutes), the number of video frames per unit time selected from the currently collected video data as target video data becomes M, where M is a positive integer greater than or equal to 1 and M is less than N. That is, once the capture duration exceeds a certain threshold, fewer video frames per unit time are selected from the currently collected video data as target video data, thereby saving storage space. The unselected video frames are the discarded video frames. A minimal sketch of this strategy is given after this paragraph.
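  • The following Python sketch illustrates the third strategy, assuming the capture duration is the collected feature parameter; the ten-minute threshold and the values of N and M follow the example above but remain illustrative assumptions, as does the even-spacing rule used to pick which frames survive.

```python
# Hypothetical sketch of the third video processing strategy: the video
# storage feature parameter (frames to save per second) is derived from the
# capture duration, and frames beyond that budget are dropped.
FIRST_THRESHOLD_S = 10 * 60   # assumed first threshold: ten minutes
N = 30                        # frames kept per second at the start of capture
M = 10                        # frames kept per second after the threshold


def video_storage_feature(capture_duration_s: float) -> int:
    """Frames to be saved per unit time, based on the collected feature
    parameter (here: how long the capture has been running)."""
    return N if capture_duration_s <= FIRST_THRESHOLD_S else M


def select_frames(second_of_frames: list, capture_duration_s: float) -> list:
    """Keep an evenly spaced subset of one second's captured frames;
    the unselected frames are the discarded ones."""
    keep = video_storage_feature(capture_duration_s)
    if keep >= len(second_of_frames):
        return list(second_of_frames)
    step = len(second_of_frames) / keep
    return [second_of_frames[int(i * step)] for i in range(keep)]


# Example: 30 frames captured in one second, 12 minutes into the recording.
frames = list(range(30))
print(len(select_frames(frames, capture_duration_s=12 * 60)))  # -> 10
```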
  • the third video processing strategy may be specifically an existing common video processing strategy, which is not described here.
  • The embodiment of the present invention can process the same or different collected video data by using the above three video processing strategies, thereby obtaining multiple kinds of video data that can be presented in different presentation manners. The three strategies can be combined arbitrarily in pairs, or all three can appear in a single video capture process at the same time, thus satisfying users' demand for diversified presentation manners and enriching and improving the user experience.
  • Further, the electronic device stores the target video data and, based on a user operation, presents the at least two target sub-video data in the target video data in different presentation manners in at least a first display area and a second display area of the electronic device.
  • In practice, the first display area and the second display area may be preset areas, or may be display areas determined according to user operations; when they are determined according to user operations, the sizes of the first display area and the second display area can be set arbitrarily by the user.
  • Moreover, the electronic device may determine the presentation speed of the video data presented in a given area according to the size of that display area; that is, the area size of a display area corresponds to a video processing policy. For example, the video data presented in a larger display area corresponds to the normal video processing strategy, while the video data presented in a smaller display area corresponds to the fast-motion or slow-motion video processing strategy. Since the presentation areas may be areas determined according to user operations, this increases the user's sense of participation, satisfies the user's desire for control, and thus enhances the user experience.
  • The functions implemented by the methods in the first to third embodiments can be implemented by a processor in the electronic device calling program code; the program code can be stored in a computer storage medium, so the electronic device includes at least a processor and a storage medium.
  • the embodiment of the invention further describes a computer storage medium, wherein the computer storage medium stores a computer program, and the computer program is used to execute the data processing method described in the embodiment of the invention.
  • Based on the method of any one of Embodiments 1 to 3, an embodiment of the present invention provides an electronic device; the electronic device may specifically be the mobile terminal described above. Specifically, as shown in FIG. 6, the electronic device includes:
  • the image capturing unit 61 is configured to acquire at least one video data in real time in an image capturing area corresponding to the at least one image capturing device by using at least one image capturing device;
  • an image processing unit 62, configured to process at least part of the sub-data in the at least one piece of video data collected in real time by using at least a first video processing policy and a second video processing policy, to obtain at least two target sub-video data, where the first video processing policy is different from the second video processing policy; and
  • a video data generating unit 63, configured to generate target video data based on the at least two target sub-video data, such that the target video data includes video data that can be presented in at least two different presentation manners.
  • The image processing unit is further configured to reduce the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, the video feature parameter characterizing the number of video frames per unit time, and to take the part of the video data whose video feature parameter has been reduced as the first target sub-video data; the first target sub-video data is included in the at least two target sub-video data and can be presented in a first presentation manner.
  • The image processing unit is further configured to increase the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, the video feature parameter characterizing the number of video frames per unit time, and to take the part of the video data whose video feature parameter has been adjusted as the second target sub-video data; the second target sub-video data is included in the at least two target sub-video data and can be presented in a second presentation manner.
  • The image processing unit is further configured to determine a video storage feature parameter based on a collected feature parameter, delete video frames from the collected video data according to the determined video storage feature parameter, and thereby adjust the video feature parameter of the video data collected in real time; the video storage feature parameter characterizes the number of video frames to be saved per unit time.
  • the electronic device further includes a storage unit and a video display unit; wherein
  • the storage unit is configured to store the target video data
  • the video display unit is configured to present, based on a user operation, the at least two target sub-video data in the target video data in different presentation manners in at least a first display area and a second display area of the electronic device.
  • the image processing unit 62 and the video data generating unit 63 may each be implemented by a central processing unit (CPU), or a microprocessor (MPU), or a DSP, or an FPGA.
  • the image acquisition unit 61 can be implemented by a camera; the storage unit is implemented by a memory; and the video display unit is implemented by a display screen.
  • the image processing unit 62 and the video data generating unit 63 may each be specifically implemented by a controller; the image capturing unit 61 may be implemented by the camera 121.
  • the storage unit is implemented by a memory 160, and the video display unit may correspond to the display unit 151.
  • Reference to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic associated with that embodiment is included in at least one embodiment of the present invention. In addition, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • In the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
  • the serial numbers of the embodiments of the present invention are merely for the description, and do not represent the advantages and disadvantages of the embodiments.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of the units is only a logical functional division; in actual implementation there may be other division manners, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as the unit may or may not be physical units; they may be located in one place or distributed on multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • The foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the foregoing method embodiments; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
  • the above-described integrated unit of the present invention may be stored in a computer readable storage medium if it is implemented in the form of a software function module and sold or used as a standalone product.
  • The technical solutions of the embodiments of the present invention may, in essence, be embodied in the form of a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a magnetic disk, or an optical disk.
  • In the embodiments of the present invention, at least one piece of video data is collected in real time in the image capturing area corresponding to at least one image capturing device by using that image capturing device; at least part of the sub-data in the video data collected in real time is processed by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data; and target video data is generated based on the at least two target sub-video data, so that the video data contained in the target video data can be presented in different presentation manners. The method described in the embodiments of the present invention therefore enriches and enhances the user experience, and at the same time satisfies users' demand for diversified presentation manners.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)

Abstract

Disclosed is a data processing method. The method comprises: an electronic device captures at least one piece of video data in real time in an image capturing region corresponding to at least one image capturing means by using the at least one image capturing means; processes, by using at least a first video processing policy and a second video processing policy, at least some sub-data in the at least one piece of video data captured in real time, so as to obtain at least two pieces of target sub-video data, the first video processing policy being different from the second video processing policy; and generates target video data on the basis of the at least two pieces of target sub-video data, so that the target video data comprises video data that can be presented by means of at least two different presentation approaches. Also disclosed are an electronic device and a storage medium.

Description

Data processing method, electronic device, and storage medium
Technical Field
The present invention relates to electronic technologies, and in particular, to a data processing method, an electronic device, and a storage medium.
Background
Existing electronic devices provide more and more functions, and more and more of these functions have become standard features of electronic devices, for example, the camera function; a user can use the camera function of an electronic device to take photos or record video. When the user records video with the camera function, the electronic device usually processes the collected video data with a single video processing policy, and when the video is presented, it is likewise presented on the display screen of the electronic device in a single presentation manner. Such a single approach cannot meet the user's demand for diversified presentation manners and degrades the user experience.
Summary of the Invention
In view of this, in order to solve the existing technical problems, the embodiments of the present invention provide a data processing method, an electronic device, and a storage medium, which can at least solve the problems existing in the prior art, enrich the presentation of captured video, and improve the user experience.
The technical solutions of the embodiments of the present invention are implemented as follows.
A first aspect of the embodiments of the present invention provides a data processing method, including:
capturing, by an electronic device by using at least one image capture device, at least one piece of video data in real time in an image capture area corresponding to the at least one image capture device;
processing, by using at least a first video processing policy and a second video processing policy, at least part of the sub-data in the at least one piece of video data captured in real time, to obtain at least two pieces of target sub-video data, where the first video processing policy is different from the second video processing policy; and
generating target video data based on the at least two pieces of target sub-video data, so that the target video data includes video data that can be presented in at least two different presentation manners.
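To make the three steps above concrete, the following is a minimal sketch in Python; the patent does not prescribe any particular implementation. Frames are modelled as timestamped items, the "video feature parameter" is modelled as frames per unit time, and the policy objects, function names, and numeric values are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    index: int          # position in the captured stream
    timestamp_ms: int   # capture time in milliseconds

@dataclass
class TargetSubVideo:
    frames: List[Frame]
    presentation_fps: float  # "video feature parameter": frames presented per unit time

# A video processing policy is modelled as a function from captured frames
# to one piece of target sub-video data (an assumption for illustration).
Policy = Callable[[List[Frame]], TargetSubVideo]

def capture_in_real_time(num_frames: int = 120, capture_fps: float = 30.0) -> List[Frame]:
    """Stand-in for real-time acquisition by one image capture device."""
    step_ms = int(1000 / capture_fps)
    return [Frame(i, i * step_ms) for i in range(num_frames)]

def generate_target_video(frames: List[Frame],
                          first_policy: Policy,
                          second_policy: Policy) -> List[TargetSubVideo]:
    """Apply two different policies to (at least part of) the captured data and
    bundle the results, so the target video data can be presented in two manners."""
    return [first_policy(frames), second_policy(frames)]

# Two toy policies that only differ in the presentation rate they assign.
first_policy = lambda fs: TargetSubVideo(list(fs), presentation_fps=15.0)   # slower
second_policy = lambda fs: TargetSubVideo(fs[::2], presentation_fps=30.0)   # faster

if __name__ == "__main__":
    captured = capture_in_real_time()
    for i, sub in enumerate(generate_target_video(captured, first_policy, second_policy), 1):
        print(f"target sub-video {i}: {len(sub.frames)} frames at {sub.presentation_fps} fps")
```

The later paragraphs on the first and second policies sketch, individually, what "reducing" and "increasing" the video feature parameter could look like.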
In the above solution, when the video data of different presentation manners included in the target video data comes from the same video source, the video data processed by the first video processing policy and the video data processed by the second video processing policy may be entirely the same, partially the same, or entirely different.
In the above solution, the image processing unit is further configured to select different, or the same, video processing policies for the video data of different display areas through a user operation performed in the framing interface of the electronic device.
In the above solution, the different presentation manners indicate that the video feature parameters of the presented video data are different.
In the above solution, the image processing unit is further configured to acquire a video feature parameter input by a user through a user interaction interface and determine a video processing policy based on the video feature parameter input by the user; or
to select, according to the size of a display area selected by the user, a video feature parameter matching the size of the display area from a preset relationship list, and determine a video processing policy based on the selected video feature parameter, where the preset relationship list represents a list of correspondences between display area sizes and video feature parameters.
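The "preset relationship list" can be pictured as a small lookup table from display-area size to a frames-per-unit-time value. The thresholds and parameter values below are invented for illustration and are not taken from the patent:

```python
# Hypothetical preset relationship list: (minimum area in pixels, frames per second).
# In this illustrative table, larger display areas map to higher presentation rates.
PRESET_RELATIONSHIP_LIST = [
    (0,       15.0),   # small sub-window  -> slow presentation
    (200_000, 30.0),   # medium-sized area -> normal presentation
    (800_000, 60.0),   # large area        -> fast, smooth presentation
]

def video_feature_parameter_for(width_px: int, height_px: int) -> float:
    """Select the video feature parameter whose area threshold matches the
    size of the display area chosen by the user."""
    area = width_px * height_px
    selected = PRESET_RELATIONSHIP_LIST[0][1]
    for min_area, fps in PRESET_RELATIONSHIP_LIST:
        if area >= min_area:
            selected = fps
    return selected

print(video_feature_parameter_for(480, 360))     # 15.0 for a small selected region
print(video_feature_parameter_for(1920, 1080))   # 60.0 for a full-screen region
```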
In the above solution, the processing, by using the first video processing policy, at least part of the sub-data in the at least one piece of video data captured in real time includes:
reducing a video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data captured in real time, where the video feature parameter represents the number of video frames per unit time; and
using, as first target sub-video data, the at least part of the sub-data whose video feature parameter has been reduced, where the first target sub-video data is included in the at least two pieces of target sub-video data and can be presented in a first presentation manner.
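Since the video feature parameter is the number of frames presented per unit time, reducing it while keeping every captured frame stretches the same footage over a longer playback interval, which is what yields the first, slow-motion presentation manner. A minimal sketch of this idea, in which the function name and the 30 fps / 10 fps figures are assumptions:

```python
def apply_first_policy(captured_timestamps_ms, capture_fps=30.0, presentation_fps=10.0):
    """First-policy sketch: keep all captured frames but lower the number of
    frames presented per unit time, so playback takes longer (slow motion)."""
    assert presentation_fps < capture_fps, "the first policy reduces the feature parameter"
    return {
        "frames": list(captured_timestamps_ms),   # every captured frame is kept
        "presentation_fps": presentation_fps,     # reduced video feature parameter
        "playback_seconds": len(captured_timestamps_ms) / presentation_fps,
    }

# 90 frames captured over 3 s at 30 fps play back over 9 s at 10 fps.
captured = [int(i * 1000 / 30) for i in range(90)]
first_target = apply_first_policy(captured)
print(first_target["presentation_fps"], first_target["playback_seconds"])   # 10.0 9.0
```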
In the above solution, the processing, by using the second video processing policy, at least part of the sub-data in the at least one piece of video data captured in real time includes:
increasing a video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data captured in real time, where the video feature parameter represents the number of video frames per unit time; and
using, as second target sub-video data, the at least part of the sub-data whose video feature parameter has been increased, where the second target sub-video data is included in the at least two pieces of target sub-video data and can be presented in a second presentation manner.
In the above solution, the increasing the video feature parameter of at least part of the sub-data in the at least one piece of video data captured in real time includes:
determining a video storage feature parameter based on an acquisition feature parameter, where the video storage feature parameter represents the number of video frames to be saved per unit time; and
deleting, according to the determined video storage feature parameter, video frames from the captured video data, and increasing the video feature parameter of the video data captured in real time.
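One way to read this second policy is: derive the video storage feature parameter (frames to keep per unit of capture time) from the acquisition feature parameter, delete the frames in between, and present the retained frames at a higher rate, which yields the fast-motion manner. The sketch below uses invented parameter values and a naive keep-every-Nth-frame rule:

```python
def apply_second_policy(captured_timestamps_ms, capture_fps=30.0,
                        storage_fps=10.0, presentation_fps=30.0):
    """Second-policy sketch: keep only `storage_fps` frames per second of capture
    time (deleting the rest) and present them at the higher `presentation_fps`,
    so the retained footage plays back faster than it was captured."""
    keep_every = max(1, round(capture_fps / storage_fps))    # e.g. keep 1 frame in every 3
    kept = captured_timestamps_ms[::keep_every]              # frame deletion step
    return {
        "frames": kept,
        "presentation_fps": presentation_fps,                # increased video feature parameter
        "playback_seconds": len(kept) / presentation_fps,
    }

# 90 frames captured over 3 s shrink to 30 kept frames that play back in 1 s.
captured = [int(i * 1000 / 30) for i in range(90)]
second_target = apply_second_policy(captured)
print(len(second_target["frames"]), second_target["playback_seconds"])   # 30 1.0
```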
In the above solution, the method further includes:
storing the target video data; and
presenting, based on a user operation, the at least two pieces of target sub-video data in the target video data in different presentation manners in at least a first display area and a second display area of the electronic device.
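Storing the target video data and later presenting its two target sub-videos in two display areas could be modelled as a container that pairs each sub-video with a presentation rate plus a layout mapping; the field names and numbers below are illustrative assumptions rather than a defined file format:

```python
# A toy "target video data" container: two target sub-videos with their
# presentation rates, and a layout telling the player which display area
# renders which sub-video.
target_video_data = {
    "sub_videos": {
        "first_target":  {"frame_count": 90, "presentation_fps": 10.0},  # slow manner
        "second_target": {"frame_count": 30, "presentation_fps": 30.0},  # fast manner
    },
    "layout": {
        "first_display_area":  "first_target",
        "second_display_area": "second_target",
    },
}

def describe_presentation(data):
    """Print how each display area would present its assigned sub-video."""
    for area, name in data["layout"].items():
        sub = data["sub_videos"][name]
        seconds = sub["frame_count"] / sub["presentation_fps"]
        print(f"{area}: {name} plays {sub['frame_count']} frames "
              f"at {sub['presentation_fps']} fps (~{seconds:.1f} s)")

describe_presentation(target_video_data)
```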
A second aspect of the embodiments of the present invention provides an electronic device, including:
an image acquisition unit configured to capture at least one piece of video data in real time in an image capture area corresponding to at least one image capture device by using the at least one image capture device;
an image processing unit configured to process, by using at least a first video processing policy and a second video processing policy, at least part of the sub-data in the at least one piece of video data captured in real time to obtain at least two pieces of target sub-video data, where the first video processing policy is different from the second video processing policy; and
a video data generation unit configured to generate target video data based on the at least two pieces of target sub-video data, so that the target video data includes video data that can be presented in at least two different presentation manners.
In the above solution, when the video data of different presentation manners included in the target video data comes from the same video source, the video data processed by the first video processing policy and the video data processed by the second video processing policy may be entirely the same, partially the same, or entirely different.
In the above solution, the method further includes:
selecting different, or the same, video processing policies for the video data of different display areas through a user operation performed in the framing interface of the electronic device.
In the above solution, the different presentation manners indicate that the video feature parameters of the presented video data are different.
In the above solution, the video processing policy is a preset processing policy, or a processing policy set according to a user operation.
In the above solution, the method further includes:
acquiring a video feature parameter input by a user through a user interaction interface, and determining a video processing policy based on the video feature parameter input by the user; or
selecting, according to the size of a display area selected by the user, a video feature parameter matching the size of the display area from a preset relationship list, and determining a video processing policy based on the selected video feature parameter, where the preset relationship list represents a list of correspondences between display area sizes and video feature parameters.
In the above solution, the image processing unit is further configured to reduce a video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data captured in real time, where the video feature parameter represents the number of video frames per unit time;
and is further configured to use, as first target sub-video data, the at least part of the sub-data whose video feature parameter has been reduced, where the first target sub-video data is included in the at least two pieces of target sub-video data and can be presented in a first presentation manner.
In the above solution, the image processing unit is further configured to increase a video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data captured in real time, where the video feature parameter represents the number of video frames per unit time;
and is further configured to use, as second target sub-video data, the at least part of the sub-data whose video feature parameter has been increased, where the second target sub-video data is included in the at least two pieces of target sub-video data and can be presented in a second presentation manner.
In the above solution, the image processing unit is further configured to determine a video storage feature parameter based on an acquisition feature parameter, and to delete, according to the determined video storage feature parameter, video frames from the captured video data while increasing the video feature parameter of the video data captured in real time, where the video storage feature parameter represents the number of video frames to be saved per unit time.
In the above solution, the electronic device further includes a storage unit and a video display unit, where:
the storage unit is configured to store the target video data; and
the video display unit is configured to present, based on a user operation, the at least two pieces of target sub-video data in the target video data in different presentation manners in at least a first display area and a second display area of the electronic device.
A third aspect of the embodiments of the present invention provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program is used to execute the data processing method described above.
According to the data processing method, the electronic device, and the storage medium described in the embodiments of the present invention, at least one piece of video data is captured in real time in the image capture area corresponding to at least one image capture device by using the at least one image capture device; at least part of the sub-data in the at least one piece of video data captured in real time is processed by using at least a first video processing policy and a second video processing policy to obtain at least two pieces of target sub-video data; and target video data is then generated based on the at least two pieces of target sub-video data. In this way, the video data included in the target video data can be presented in different presentation manners. Therefore, the method described in the embodiments of the present invention enriches the presentation manners available to the user, improves the user experience, and satisfies the user's demand for diversified presentation manners.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware structure of an optional mobile terminal implementing various embodiments of the present invention;
FIG. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 1;
FIG. 3 is a schematic flowchart of the implementation of a data processing method according to Embodiment 1 of the present invention;
FIG. 4 is a first schematic diagram of a specific application of the data processing method according to an embodiment of the present invention;
FIG. 5 is a second schematic diagram of a specific application of the data processing method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely intended to explain the technical solutions of the present invention and are not intended to limit the scope of protection of the present invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used merely to facilitate the description of the present invention and have no specific meaning in themselves. Therefore, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will appreciate that, except for elements particularly intended for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed types of terminals.
图1为实现本发明各个实施例一可选的移动终端的硬件结构示意,如图1所示,移动终端100可以包括无线通信单元110、音频/视频(A/V)输入单元120、用户输入单元130、感测单元140、输出单元150、存储器160、接口单元170、控制器180和电源单元190等等。图1示出了具有各种组件的移动终端100,但是应理解的是,并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端100的元件。1 is a schematic diagram showing the hardware structure of an optional mobile terminal embodying various embodiments of the present invention. As shown in FIG. 1, the mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, and user input. The unit 130, the sensing unit 140, the output unit 150, the memory 160, the interface unit 170, the controller 180, the power supply unit 190, and the like. FIG. 1 illustrates a mobile terminal 100 having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal 100 will be described in detail below.
无线通信单元110通常包括一个或多个组件,其允许移动终端100与无线通信系统或网络之间的无线电通信。例如,无线通信单元110可以包括广播接收模块111、移动通信模块112、无线互联网模块113、短程通信模块114和位置信息模块115中的至少一个。 Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
广播接收模块111经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和/或地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器或者接收之前生成的广播信号和/或广播相关信息并且将其发送给终端的服务器。广播信号可以包括TV广播信号、无线电广播信号、数据广播信号等等。而且,广播信号可以进一步包括与TV或无线电广播信号组合的广播信号。广播相关信息也可以经由移动通信网络提供,并且在该情况下,广播相关信息可以由移动通信模块112来接收。广播信号可以以各种形式存在,例如,其可以以数字多媒体广播(DMB)的电子节目指南(EPG)、数字视频广播手持(DVB-H)的电子服务指南(ESG)等等的形式而存在。广播接收模块111可以通过使用各种类型的广播系统接收信号广播。特别地,广播接收模块111可以通过使用诸如多媒体广播-地面(DMB-T)、数字多媒体广播-卫星(DMB-S)、DVB-H,前向链路媒体(MediaFLO@)的数据广播系统、地面数字广播综合服务(ISDB-T)等等的数字广播系统接收数字广播。广播 接收模块111可以被构造为适合提供广播信号的各种广播系统以及上述数字广播系统。经由广播接收模块111接收的广播信号和/或广播相关信息可以存储在存储器160(或者其它类型的存储介质)中。The broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel. The broadcast channel can include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like. Moreover, the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112. The broadcast signal may exist in various forms, for example, it may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like. . The broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast systems. In particular, the broadcast receiving module 111 can use a data broadcasting system such as Multimedia Broadcast-Turround (DMB-T), Digital Multimedia Broadcast-Satellite (DMB-S), DVB-H, Forward Link Media (MediaFLO @ ), Digital broadcasting systems such as the terrestrial digital broadcasting integrated service (ISDB-T) and the like receive digital broadcasting. The broadcast receiving module 111 can be constructed as various broadcast systems suitable for providing broadcast signals as well as the above-described digital broadcast system. The broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
移动通信模块112将无线电信号发送到基站(例如,接入点、节点B等等)、外部终端以及服务器中的至少一个和/或从其接收无线电信号。这样的无线电信号可以包括语音通话信号、视频通话信号、或者根据文本和/或多媒体消息发送和/或接收的各种类型的数据。The mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
无线互联网模块113支持移动终端100的无线互联网接入。无线互联网模块113可以内部或外部地耦接到终端。无线互联网模块113所涉及的无线互联网接入技术可以包括无线局域网(WLAN)、无线相容性认证(Wi-Fi)、无线宽带(Wibro)、全球微波互联接入(Wimax)、高速下行链路分组接入(HSDPA)等等。The wireless internet module 113 supports wireless internet access of the mobile terminal 100. The wireless internet module 113 can be internally or externally coupled to the terminal. The wireless internet access technologies involved in the wireless internet module 113 may include wireless local area network (WLAN), wireless compatibility authentication (Wi-Fi), wireless broadband (Wibro), global microwave interconnection access (Wimax), and high speed downlink. Packet Access (HSDPA) and more.
短程通信模块114是用于支持短程通信的模块。短程通信技术的一些示例包括蓝牙TM、射频识别(RFID)、红外数据协会(IrDA)、超宽带(UWB)、紫蜂TM等等。The short range communication module 114 is a module for supporting short range communication. Some examples of short-range communication technology include Bluetooth TM, a radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee, etc. TM.
位置信息模块115是用于检查或获取移动终端100的位置信息的模块。位置信息模块115的典型示例是作为全球定位系统(GPS)的位置信息模块115。根据当前的技术,作为GPS的位置信息模块115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法,从而根据经度、纬度和高度准确地计算三维当前位置信息。当前,用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外,作为GPS的位置信息模块115能够通过实时地连续计算当前位置信息来计算速度信息。The location information module 115 is a module for checking or acquiring location information of the mobile terminal 100. A typical example of the location information module 115 is a location information module 115 as a global positioning system (GPS). According to the current technology, the position information module 115 as a GPS calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate the three-dimensional current based on longitude, latitude, and altitude. location information. Currently, the method for calculating position and time information uses three satellites and corrects the calculated position and time information errors by using another satellite. Further, the position information module 115 as a GPS can calculate the speed information by continuously calculating the current position information in real time.
A/V输入单元120用于接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风122,相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图 像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160(或其它存储介质)中或者经由无线通信单元110进行发送,可以根据移动终端100的构造提供两个或更多相机121。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由移动通信模块112发送到移动通信基站的格式输出。麦克风122可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。The A/V input unit 120 is for receiving an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122 that processes image data of still pictures or video obtained by the image capturing device in a video capturing mode or an image capturing mode. Processed map The image frame can be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal 100. The microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data. The processed audio (voice) data can be converted to a format output that can be transmitted to the mobile communication base station via the mobile communication module 112 in the case of a telephone call mode. The microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端100的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、锅仔片、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时,可以形成触摸屏。The user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal 100. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a pot, a touch pad (eg, a touch sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel , rocker, etc. In particular, when the touch panel is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
感测单元140检测移动终端100的当前状态,(例如,移动终端100的打开或关闭状态)、移动终端100的位置、用户对于移动终端100的接触(即,触摸输入)的有无、移动终端100的取向、移动终端100的加速或减速移动和方向等等,并且生成用于控制移动终端100的操作的命令或信号。例如,当移动终端100实施为滑动型移动电话时,感测单元140可以感测该滑动型电话是打开还是关闭。另外,感测单元140能够检测电源单元190是否提供电力或者接口单元170是否与外部装置耦接。The sensing unit 140 detects the current state of the mobile terminal 100 (eg, the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of contact (ie, touch input) by the user with the mobile terminal 100, and the mobile terminal. The orientation of 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide type mobile phone, the sensing unit 140 can sense whether the slide type phone is turned on or off. In addition, the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口(典型示例是通用串行总线USB端口)、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别模块可以是存储用于 验证用户使用移动终端100的各种信息并且可以包括用户识别模块(UIM)、客户识别模块(SIM)、通用客户识别模块(USIM)等等。另外,具有识别模块的装置(下面称为“识别装置”)可以采取智能卡的形式,因此,识别装置可以经由端口或其它连接装置与移动终端100连接。The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port (a typical example is a universal serial bus USB port), for connection having The port of the device that identifies the module, the audio input/output (I/O) port, the video I/O port, the headphone port, and so on. The identification module can be stored for storage The user is authenticated with various information of the mobile terminal 100 and may include a User Identity Module (UIM), a Customer Identity Module (SIM), a Universal Customer Identity Module (USIM), and the like. In addition, the device having the identification module (hereinafter referred to as "identification device") may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
接口单元170可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以用于在移动终端100和外部装置之间传输数据。The interface unit 170 can be configured to receive input (eg, data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100 or can be used at the mobile terminal 100 and externally Data is transferred between devices.
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端100的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端100是否准确地安装在底座上的信号。In addition, when the mobile terminal 100 is connected to the external base, the interface unit 170 may function as a path through which power is supplied from the base to the mobile terminal 100 or may be used as a transmission of various command signals allowing input from the base to the mobile terminal 100 The path of the terminal 100. Various command signals or power input from the base can be used as signals for identifying whether the mobile terminal 100 is accurately mounted on the base.
输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、警报信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出模块152、警报单元153等等。 Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通信(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显 示器,典型的透明显示器可以例如为透明有机发光二极管(TOLED)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端100可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力以及触摸输入位置和触摸输入面积。Meanwhile, when the display unit 151 and the touch panel are superposed on each other in the form of a layer to form a touch screen, the display unit 151 can function as an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays can be constructed to be transparent to allow the user to view from the outside, which can be referred to as transparent display A typical transparent display can be, for example, a transparent organic light emitting diode (TOLED) display or the like. According to a particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown) ). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
音频输出模块152可以在移动终端100处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将无线通信单元110接收的或者在存储器160中存储的音频数据转换音频信号并且输出为声音。而且,音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出模块152可以包括扬声器、蜂鸣器等等。The audio output module 152 may output audio data received by the wireless communication unit 110 or stored in the memory 160 when the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like. The audio signal is converted and output as sound. Moreover, the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100. The audio output module 152 can include a speaker, a buzzer, and the like.
警报单元153可以提供输出以将事件的发生通知给移动终端100。典型的事件可以包括呼叫接收、消息接收、键信号输入、触摸输入等等。除了音频或视频输出之外,警报单元153可以以不同的方式提供输出以通知事件的发生。例如,警报单元153可以以振动的形式提供输出,当接收到呼叫、消息或一些其它进入通信(incoming communication)时,警报单元153可以提供触觉输出(即,振动)以将其通知给用户。通过提供这样的触觉输出,即使在用户的移动电话处于用户的口袋中时,用户也能够识别出各种事件的发生。警报单元153也可以经由显示单元151或音频输出模块152提供通知事件的发生的输出。The alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alert unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations, and when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (ie, vibration) to notify the user of it. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide an output of the notification event occurrence via the display unit 151 or the audio output module 152.
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等,或者可以暂时地存储已经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。The memory 160 may store a software program or the like that performs processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, and the like) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机 访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), random. Access memory (RAM), static random access memory (SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. Wait. Moreover, the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
控制器180通常控制移动终端100的总体操作。例如,控制器180执行与语音通话、数据通信、视频通话等等相关的控制和处理。另外,控制器180可以包括用于再现或回放多媒体数据的多媒体模块181,多媒体模块181可以构造在控制器180内,或者可以构造为与控制器180分离。控制器180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。The controller 180 typically controls the overall operation of the mobile terminal 100. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing or playing back multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。The various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof. For hardware implementations, the embodiments described herein may be through the use of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays ( An FPGA, a processor, a controller, a microcontroller, a microprocessor, at least one of the electronic units designed to perform the functions described herein, in some cases, such an embodiment may be at the controller 180 Implemented in the middle. For software implementations, implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation. The software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in memory 160 and executed by controller 180.
至此,已经按照其功能描述了移动终端100。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端100等等的各种类型的移动终端100中的滑动型移动终端100作为示例。因此,本发明能够应用于任何类型的移动终端100,并且不限于滑动型移动终端100。 So far, the mobile terminal 100 has been described in terms of its function. Hereinafter, for the sake of brevity, the slide type mobile terminal 100 in various types of mobile terminals 100 such as a folding type, a bar type, a swing type, a slide type mobile terminal 100, and the like will be described as an example. Therefore, the present invention can be applied to any type of mobile terminal 100, and is not limited to the slide type mobile terminal 100.
如图1中所示的移动终端100可以被构造为利用经由帧或分组发送数据的诸如有线和无线通信系统以及基于卫星的通信系统来操作。The mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
现在将参考图2描述其中根据本发明的移动终端100能够操作的通信系统。A communication system in which the mobile terminal 100 according to the present invention can operate will now be described with reference to FIG.
这样的通信系统可以使用不同的空中接口和/或物理层。例如,由通信系统使用的空中接口包括例如频分多址(FDMA)、时分多址(TDMA)、码分多址(CDMA)和通用移动通信系统(UMTS)(特别地,长期演进(LTE))、全球移动通信系统(GSM)等等。作为非限制性示例,下面的描述涉及CDMA通信系统,但是这样的教导同样适用于其它类型的系统。Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)). ), Global System for Mobile Communications (GSM), etc. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
参考图2,CDMA无线通信系统可以包括多个移动终端100、多个基站(BS)270、基站控制器(BSC)275和移动交换中心(MSC)280。MSC 280被构造为与公共电话交换网络(PSTN)290形成接口。MSC 280还被构造为与可以经由回程线路耦接到BS 270的BSC 275形成接口。回程线路可以根据若干已知的接口中的任一种来构造,所述接口包括例如E1/T1、ATM、IP、PPP、帧中继、HDSL、ADSL或xDSL。将理解的是,如图2中所示的系统可以包括多个BSC 275。Referring to FIG. 2, a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with a BSC 275 that can be coupled to the BS 270 via a backhaul line. The backhaul line can be constructed in accordance with any of a number of known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
每个BS 270可以服务一个或多个分区(或区域),由多向天线或指向特定方向的天线覆盖的每个分区放射状地远离BS 270。或者,每个分区可以由用于分集接收的两个或更多天线覆盖。每个BS 270可以被构造为支持多个频率分配,并且每个频率分配具有特定频谱(例如,1.25MHz,5MHz等等)。Each BS 270 can serve one or more partitions (or regions), with each partition covered by a multi-directional antenna or an antenna pointing in a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
分区与频率分配的交叉可以被称为CDMA信道。BS 270也可以被称为基站收发器子系统(BTS)或者其它等效术语。在这样的情况下,术语“基站”可以用于笼统地表示单个BSC 275和至少一个BS 270。基站也可以被称为“蜂窝站”。或者,特定BS 270的各分区可以被称为多个蜂窝站。The intersection of partitioning and frequency allocation can be referred to as a CDMA channel. BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology. In such a case, the term "base station" can be used to generally mean a single BSC 275 and at least one BS 270. A base station can also be referred to as a "cell station." Alternatively, each partition of a particular BS 270 may be referred to as multiple cellular stations.
如图2中所示,广播发射器(BT)295将广播信号发送给在系统内操 作的移动终端100。如图1中所示的广播接收模块111被设置在移动终端100处以接收由BT295发送的广播信号。在图2中,示出了几个卫星300,例如可以采用GPS卫星300。卫星300帮助定位多个移动终端100中的至少一个。As shown in FIG. 2, a broadcast transmitter (BT) 295 transmits a broadcast signal to the system for operation. The mobile terminal 100 is made. A broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295. In Figure 2, several satellites 300 are shown, for example GPS satellites 300 may be employed. The satellite 300 helps locate at least one of the plurality of mobile terminals 100.
在图2中,描绘了多个卫星300,但是理解的是,可以利用任何数目的卫星获得有用的定位信息。如图1中所示的作为GPS的位置信息模块115通常被构造为与卫星300配合以获得想要的定位信息。替代GPS跟踪技术或者在GPS跟踪技术之外,可以使用可以跟踪移动终端100的位置的其它技术。另外,至少一个GPS卫星300可以选择性地或者额外地处理卫星DMB传输。In Figure 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites. The position information module 115 as a GPS as shown in FIG. 1 is generally configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking techniques or in addition to GPS tracking techniques, other techniques that can track the location of the mobile terminal 100 can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
作为无线通信系统的一个典型操作,BS 270接收来自各种移动终端100的反向链路信号。移动终端100通常参与通话、消息收发和其它类型的通信。特定BS 270接收的每个反向链路信号被在特定BS 270内进行处理。获得的数据被转发给相关的BSC 275。BSC 275提供通话资源分配和包括BS 270之间的软切换过程的协调的移动管理功能。BSC 275还将接收到的数据路由到MSC 280,其提供用于与PSTN 290形成接口的额外的路由服务。类似地,PSTN 290与MSC 280形成接口,MSC 280与BSC 275形成接口,并且BSC 275相应地控制BS 270以将正向链路信号发送到移动终端100。As a typical operation of a wireless communication system, BS 270 receives reverse link signals from various mobile terminals 100. Mobile terminal 100 typically participates in calls, messaging, and other types of communications. Each reverse link signal received by a particular BS 270 is processed within a particular BS 270. The obtained data is forwarded to the relevant BSC 275. The BSC 275 provides call resource allocation and coordinated mobility management functions including a soft handoff procedure between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, PSTN 290 interfaces with MSC 280, MSC 280 interfaces with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
移动终端中无线通信单元110的移动通信模块112基于移动终端内置的接入移动通信网络(如2G/3G/4G等移动通信网络)的必要数据(包括用户识别信息和鉴权信息)接入移动通信网络为移动终端用户的网页浏览、网络多媒体播放等业务传输移动通信数据(包括上行的移动通信数据和下行的移动通信数据)。The mobile communication module 112 of the wireless communication unit 110 in the mobile terminal accesses the mobile based on necessary data (including user identification information and authentication information) of the mobile communication network (such as 2G/3G/4G mobile communication network) built in the mobile terminal. The communication network transmits mobile communication data (including uplink mobile communication data and downlink mobile communication data) for services such as web browsing and network multimedia playback of the mobile terminal user.
无线通信单元110的无线互联网模块113通过运行无线热点的相关协议功能而实现无线热点的功能,无线热点支持多个移动终端(移动终端之外的任意移动终端)接入,通过复用移动通信模块112与移动通信网络之 间的移动通信连接为移动终端用户的网页浏览、网络多媒体播放等业务传输移动通信数据(包括上行的移动通信数据和下行的移动通信数据),由于移动终端实质上是复用移动终端与通信网络之间的移动通信连接传输移动通信数据的,因此移动终端消耗的移动通信数据的流量由通信网络侧的计费实体计入移动终端的通信资费,从而消耗移动终端签约使用的通信资费中包括的移动通信数据的数据流量。The wireless internet module 113 of the wireless communication unit 110 implements a function of a wireless hotspot by operating a related protocol function of a wireless hotspot, and the wireless hotspot supports access of a plurality of mobile terminals (any mobile terminal other than the mobile terminal) by multiplexing the mobile communication module. 112 and mobile communication networks The mobile communication connection transmits mobile communication data (including uplink mobile communication data and downlink mobile communication data) for mobile terminal users such as web browsing and network multimedia broadcasting, since the mobile terminal is substantially a multiplexed mobile terminal and a communication network. The mobile communication connection between the mobile communication data is transmitted, so that the traffic of the mobile communication data consumed by the mobile terminal is included in the communication tariff of the mobile terminal by the charging entity on the communication network side, thereby consuming the communication tariff included in the subscription of the mobile terminal. Data traffic for mobile communication data.
Based on the hardware structure of the mobile terminal 100 and the communication system described above, various embodiments of the method of the present invention are proposed.
Embodiment 1
The embodiment of the present invention provides a data processing method. Specifically, FIG. 3 is a schematic flowchart of the implementation of the data processing method according to Embodiment 1 of the present invention. The method is applied to an electronic device, which may specifically be the mobile terminal described above; the electronic device is provided with, or linked to, a display screen and at least one image capture device. As shown in FIG. 3, the method includes:
Step 301: The electronic device captures at least one piece of video data in real time in an image capture area corresponding to at least one image capture device by using the at least one image capture device.
In an embodiment, the electronic device may use one image capture device, for example a first camera, to capture video data in real time in the image capture area corresponding to the first camera; in this case, only one piece of video data is captured. Further, the electronic device processes the same video data captured by the first camera by using at least a first video processing policy and a second video processing policy to obtain at least two pieces of target sub-video data, and finally generates target video data based on the at least two pieces of target sub-video data. That is, the video data of different presentation manners included in the target video data comes from the same video source.
Further, when the video data of different presentation manners included in the target video data comes from the same video source, the video data processed by the first video processing policy and the video data processed by the second video processing policy may be entirely the same, partially the same, or entirely different. For example, if the video data captured by the first camera is first video data, the electronic device may process all of the first video data by using the first video processing policy and the second video processing policy, or may process only part of the first video data. When the electronic device processes part of the first video data by using the first video processing policy and the second video processing policy, the parts of the video data processed by the different video processing policies may be the same or different. In practical applications, this may be set freely according to the actual situation and user needs, for example, by selecting different, or the same, video processing policies for the video data of different display areas through a user operation performed in the framing interface of the electronic device.
Specifically, FIG. 4 is a first schematic diagram of a specific application of the data processing method according to an embodiment of the present invention. As shown in FIG. 4, when the electronic device presents the captured video data in a first display area, the electronic device receives a user operation performed on the display screen and pops up the dashed box shown in the left part of FIG. 4; here, the size of the dashed box can be enlarged or reduced by the user through dragging, stretching, and similar operations. After the user confirms the dashed box, the sub-video data corresponding to the dashed box is presented as a small screen in a second display area of the electronic device. At this point, the electronic device may process the video data in the first display area by using the first video processing policy and, at the same time, process the video data in the second display area by using the second video processing policy, to obtain two pieces of target sub-video data, where the first target sub-video data of the two is normal video data and the second target sub-video data is slow-motion video data. Target video data is then obtained based on the two pieces of target sub-video data, so that the target video data contains both ordinary video data played at normal speed and slow-motion video data, which enriches the presentation manners and improves the user experience.
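The dashed-box interaction of FIG. 4 can be pictured as building a per-region processing plan once the user confirms the box: the full view keeps the first (normal) policy while the boxed sub-data is routed to the second display area with the second (slow-motion) policy. The rectangle type, fps values, and function below are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: int
    y: int
    width: int
    height: int   # the dashed box can be enlarged or shrunk by dragging/stretching

def build_processing_plan(user_box: Box, screen=(1920, 1080)):
    """After the user confirms the dashed box, map each display area to its
    source region, video processing policy, and presentation rate."""
    return {
        "first_display_area": {
            "region": Box(0, 0, *screen),          # the full captured frame
            "policy": "first (normal speed)",
            "presentation_fps": 30.0,
        },
        "second_display_area": {
            "region": user_box,                    # the sub-data inside the dashed box
            "policy": "second (slow motion)",
            "presentation_fps": 10.0,
        },
    }

plan = build_processing_plan(Box(x=600, y=300, width=640, height=360))
for area, cfg in plan.items():
    print(area, "->", cfg["policy"], "at", cfg["presentation_fps"], "fps")
```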
In another embodiment, the electronic device may use at least two image capture devices, for example at least two second cameras, to capture video data in real time in the image capture areas corresponding to the at least two second cameras; in this case, at least two pieces of video data are captured. Further, the electronic device processes the different pieces of video data captured by the at least two second cameras by using at least a first video processing policy and a second video processing policy to obtain at least two pieces of target sub-video data, and finally generates target video data based on the at least two pieces of target sub-video data. That is, the video data of different presentation manners included in the target video data comes from different video sources.
Specifically, FIG. 5 is a second schematic diagram of a specific application of the data processing method according to an embodiment of the present invention. As shown in FIG. 5, the electronic device captures first video data by using its own first camera (not shown in FIG. 5) and presents it in real time in a first display area of the electronic device; at the same time, the electronic device captures second video data by using an external second camera (not shown in FIG. 5) and presents it in real time in a second display area of the electronic device. Further, according to "zoom in" or "zoom out" gestures performed by the user in real time on the first display area and the second display area, the electronic device may select, for the different video data presented in each display area, a video processing policy matching the gesture. For example, the first video data corresponding to the first display area is processed by using the first video processing policy, and the second video data corresponding to the second display area is processed by using the second video processing policy, to obtain at least two pieces of target sub-video data, and target video data is finally generated based on the at least two pieces of target sub-video data. Here, the first target sub-video data in the target video data (that is, the video data obtained by processing the first video data with the first video processing policy) is slow-motion video data, and the second target sub-video data (that is, the video data obtained by processing the second video data with the second video processing policy) may be fast-motion video data.
It should be noted that those skilled in the art will understand that the "zoom in" or "zoom out" gestures described in this embodiment can be set arbitrarily according to actual requirements; for example, they may be the same as the existing gestures for enlarging or reducing the scale of an image.
Step 302: Process at least part of the sub-data in the at least one piece of video data collected in real time by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data, where the first video processing strategy is different from the second video processing strategy.
Step 303: Generate target video data based on the at least two target sub-video data, so that the target video data contains video data that can be presented in at least two different presentation modes.
In this embodiment, the different presentation modes may specifically indicate that the presented video data differ in their video feature parameters; here, the video feature parameter specifically represents the number of video frames per unit time. That is, the different presentation modes may differ in presentation speed, for example presentation in slow motion, in fast motion, or at normal speed.
In this way, in the data processing method of the embodiment of the present invention, at least one piece of video data is collected in real time by at least one image acquisition device in the image acquisition area corresponding to the at least one image acquisition device; at least part of the sub-data in the at least one piece of video data collected in real time is processed by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data; and target video data is then generated based on the at least two target sub-video data. In this way, the video data contained in the target video data can be presented in different presentation modes. Therefore, the method of the embodiment of the present invention enriches and enhances the user experience and also satisfies the user's demand for diversified presentation modes.
Embodiment 2
Based on the method of Embodiment 1, the embodiment of the present invention provides three specific video processing strategies. Here, in a single video capture process, the three specific video processing strategies described below may be used at the same time, or any two of the three may be selected, which lays a foundation for enriching the presentation modes. Further, the video processing strategy in this embodiment may be preset, or may be set at any time according to a user operation. For example, a video feature parameter input by the user is acquired through a user interaction interface, and the video processing strategy is then determined; or, according to the size of the display area selected by the user, a video feature parameter matching the size of the display area is selected from a preset relationship list (i.e., a list of correspondences between display area sizes and video feature parameters), and the video processing strategy is then determined. Here, those skilled in the art will understand that the embodiment of the present invention aims to emphasize processing the same video data, or different video data, with different video processing strategies and generating target video data from the processed video data, so that the target video data contains video data that can be presented in different presentation modes. Therefore, the setting process of the video processing strategies described above is for explanation only and is not intended to limit the present invention.
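The preset relationship list mentioned above could, for instance, be sketched as a small lookup from display-area size to a video feature parameter; the bucket thresholds and frame counts below are illustrative assumptions, not values taken from the patent:

```python
# Illustrative "preset relationship list": display-area size (as a fraction of
# the screen) mapped to a video feature parameter (frames presented per unit time).
PRESET_RELATIONSHIP_LIST = [
    (0.25, 60),   # small region: present 60 frames/s (fast-motion feel)
    (0.50, 15),   # medium region: present 15 frames/s (slow-motion feel)
    (1.00, 30),   # large region: present 30 frames/s (normal playback)
]

def feature_parameter_for_region(region_fraction: float) -> int:
    """Pick the video feature parameter whose size bucket matches the region."""
    for max_fraction, frames_per_unit_time in PRESET_RELATIONSHIP_LIST:
        if region_fraction <= max_fraction:
            return frames_per_unit_time
    return 30  # fall back to normal playback

print(feature_parameter_for_region(0.10))  # 60 -> fast-motion strategy
print(feature_parameter_for_region(0.80))  # 30 -> normal strategy
```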
The first video processing strategy:
The electronic device decreases the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, where the video feature parameter represents the number of video frames per unit time. That is, the electronic device reduces the number of video frames expected to be displayed per unit time, thereby lengthening the display interval between adjacent video frames and implementing a slow-motion processing strategy. Further, the electronic device takes at least part of the sub-data in the at least one piece of video data whose video feature parameter has been decreased as first target sub-video data, where the first target sub-video data is included in the at least two target sub-video data and can be presented in a first presentation mode, for example in slow motion.
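A minimal sketch of this first strategy, assuming the slow-motion effect is obtained purely by re-timing the captured frames (no frames added or removed), might look like this:

```python
def slow_motion_timestamps(num_frames: int, capture_fps: float,
                           presented_fps: float) -> list:
    """Display timestamps (ms) when fewer frames are presented per unit time.

    Every captured frame is kept, but the display interval between adjacent
    frames is stretched from 1000/capture_fps to 1000/presented_fps.
    """
    original_interval = 1000.0 / capture_fps      # spacing at capture time
    stretched_interval = 1000.0 / presented_fps   # lengthened display interval
    assert stretched_interval >= original_interval, "presented_fps must be lower"
    return [round(i * stretched_interval, 2) for i in range(num_frames)]

# Frames captured at 120 fps but presented at 30 per unit time play back
# four times slower: 8.3 ms spacing becomes 33.3 ms spacing.
print(slow_motion_timestamps(num_frames=5, capture_fps=120.0, presented_fps=30.0))
# [0.0, 33.33, 66.67, 100.0, 133.33]
```

In a real pipeline these values would become the presentation timestamps written into the output container, while the capture timestamps would stay untouched.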
The second video processing strategy:
The electronic device increases the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, where the video feature parameter represents the number of video frames per unit time. That is, the electronic device increases the number of video frames expected to be displayed per unit time, thereby shortening the display interval between adjacent video frames and implementing a fast-motion processing strategy. Further, the electronic device takes at least part of the sub-data in the at least one piece of video data whose video feature parameter has been increased as second target sub-video data, where the second target sub-video data is included in the at least two target sub-video data and can be presented in a second presentation mode, for example in fast motion.
In a specific embodiment, because of the temporal masking property of the human eye, i.e., the human eye can hardly perceive the details of fast-moving objects, this embodiment uses frame dropping based on the above characteristic to reduce the amount of video data and thereby save storage space. Specifically, the electronic device determines a video storage feature parameter based on a capture feature parameter (for example, the video capture duration), where the video storage feature parameter represents the number of video frames to be saved per unit time; it then deletes video frames from the collected video data according to the determined video storage feature parameter and increases the video feature parameter of the video data collected in real time, thereby reducing the data amount of the target video data. Here, the longer the video capture lasts, the more video frames are dropped. For example, when video capture starts, the number of video frames selected per unit time from the currently collected video data as target video data is N, where N is a positive integer greater than or equal to 1, and N is less than or equal to the number of video frames per unit time in a normal video, for example 20 or 30. As the video capture duration increases, for example when it exceeds a first threshold, say ten minutes, the number of video frames selected per unit time from the currently collected video data as target video data becomes M, where M is a positive integer greater than or equal to 1 and M is less than N. That is, when the video capture duration exceeds a certain threshold, fewer video frames are selected per unit time from the currently collected video data as target video data, thereby saving storage space. Here, the video frames that are not selected are the dropped video frames.
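The duration-dependent frame-dropping rule can be sketched as follows; N = 20, M = 10, and the ten-minute threshold are illustrative choices consistent with, but not mandated by, the description above:

```python
def frames_to_keep_per_second(elapsed_seconds: float,
                              n_before: int = 20,
                              m_after: int = 10,
                              threshold_seconds: float = 600.0) -> int:
    """Video storage feature parameter as a function of capture duration.

    Keep N frames per unit time at the start of capture and only M (< N)
    once the capture duration passes the first threshold.
    """
    return n_before if elapsed_seconds <= threshold_seconds else m_after

def drop_frames(frames: list, capture_fps: int, elapsed_seconds: float) -> list:
    """Crudely thin out one second's worth of captured frames before storage."""
    keep = frames_to_keep_per_second(elapsed_seconds)
    step = max(1, capture_fps // keep)            # keep every `step`-th frame
    return frames[::step][:keep]

one_second = list(range(30))                      # 30 frames captured in one second
print(len(drop_frames(one_second, 30, elapsed_seconds=30)))    # 20 frames kept
print(len(drop_frames(one_second, 30, elapsed_seconds=900)))   # 10 frames kept
```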
The third video processing strategy may specifically be an existing ordinary video processing strategy, which is not described in detail here.
In this way, the embodiment of the present invention can process the same or different collected video data with the above three video processing strategies, thereby obtaining multiple kinds of video data that can be presented in different presentation modes. Moreover, the above three video processing strategies can be combined in pairs in any way, or all three can be used in a single video capture process, which satisfies the user's demand for diversified presentation modes while enriching and enhancing the user experience.
Embodiment 3
Based on the method of Embodiment 1 or Embodiment 2, in this embodiment, after the electronic device generates the target video data, the electronic device stores the target video data and, based on a user operation, presents the at least two target sub-video data in the target video data in different presentation modes in at least a first display area and a second display area of the electronic device.
Here, in practical applications, the first display area and the second display area may be preset areas, or may be display areas determined according to user operations; further, when the first display area and the second display area are display areas determined according to user operations, their sizes can be set arbitrarily according to the user operations.
In an embodiment, the electronic device may further determine, according to the size of a display area, the presentation speed of the video data presented in that area; that is, the size of a display area corresponds to a video processing strategy. For example, video data presented in a larger display area corresponds to an ordinary video processing strategy, while video data presented in a smaller display area corresponds to a fast-motion or slow-motion video processing strategy.
In this way, presenting video data in different presentation modes in different presentation areas enriches the user's perceptual experience; meanwhile, the above presentation areas may be areas determined according to user operations, which improves the user's sense of participation, satisfies the user's desire for control, and thus enhances the user experience.
Here, it should be noted that the functions implemented by the methods in Embodiments 1 to 3 can be implemented by a processor in the electronic device calling program code; of course, the program code can be stored in a computer storage medium. It can be seen that the electronic device includes at least a processor and a storage medium.
The embodiment of the present invention further provides a computer storage medium storing a computer program, where the computer program is used to perform the data processing method described above in the embodiments of the present invention.
Embodiment 4
Based on the method of any one of Embodiments 1 to 3, an embodiment of the present invention provides an electronic device; the electronic device may specifically be the mobile terminal described above. Specifically, as shown in FIG. 6, the electronic device includes:
an image acquisition unit 61, configured to collect at least one piece of video data in real time by using at least one image acquisition device in the image acquisition area corresponding to the at least one image acquisition device;
an image processing unit 62, configured to process at least part of the sub-data in the at least one piece of video data collected in real time by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data, where the first video processing strategy is different from the second video processing strategy; and
a video data generating unit 63, configured to generate target video data based on the at least two target sub-video data, so that the target video data contains video data that can be presented in at least two different presentation modes.
In an embodiment, the image processing unit is further configured to decrease the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, where the video feature parameter represents the number of video frames per unit time; and further configured to take at least part of the sub-data in the at least one piece of video data whose video feature parameter has been decreased as first target sub-video data, where the first target sub-video data is included in the at least two target sub-video data and can be presented in a first presentation mode.
In another embodiment, the image processing unit is further configured to increase the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, where the video feature parameter represents the number of video frames per unit time; and further configured to take at least part of the sub-data in the at least one piece of video data whose video feature parameter has been increased as second target sub-video data, where the second target sub-video data is included in the at least two target sub-video data and can be presented in a second presentation mode.
In another embodiment, the image processing unit is further configured to determine a video storage feature parameter based on a capture feature parameter, delete video frames from the collected video data according to the determined video storage feature parameter, and increase the video feature parameter of the video data collected in real time, where the video storage feature parameter represents the number of video frames to be saved per unit time.
In a specific embodiment, the electronic device further includes a storage unit and a video display unit, where
the storage unit is configured to store the target video data; and
the video display unit is configured to present, based on a user operation, the at least two target sub-video data in the target video data in different presentation modes in at least a first display area and a second display area of the electronic device.
It should be pointed out here that the description of the above electronic device embodiment is similar to the description of the method above and has the same beneficial effects as the method embodiments, so it is not repeated. For technical details not disclosed in the electronic device embodiment of the present invention, those skilled in the art may refer to the description of the method embodiments of the present invention; to save space, they are not repeated here.
In practical applications, the image processing unit 62 and the video data generating unit 63 may each be implemented by a central processing unit (CPU), a microprocessor (MPU), a DSP, or an FPGA; the image acquisition unit 61 may be implemented by a camera; the storage unit may be implemented by a memory; and the video display unit may be implemented by a display screen.
Further, when the electronic device is specifically a mobile terminal as shown in FIG. 1, the image processing unit 62 and the video data generating unit 63 may each be implemented by a controller; the image acquisition unit 61 may be implemented by the camera 121; the storage unit may be implemented by the memory 160; and the video display unit may correspond to the display unit 151.
It should be understood that references throughout the specification to "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, "in an embodiment" or "in another embodiment" appearing in various places throughout the specification does not necessarily refer to the same embodiment. In addition, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and should not constitute any limitation on the implementation of the embodiments of the present invention. The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Unless otherwise limited, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a division of logical functions; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may be a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The above are only specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art can easily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Industrial Applicability
In the embodiments of the present invention, at least one piece of video data is collected in real time by at least one image acquisition device in the image acquisition area corresponding to the at least one image acquisition device; at least part of the sub-data in the at least one piece of video data collected in real time is processed by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data; and target video data is then generated based on the at least two target sub-video data. In this way, the video data contained in the target video data can be presented in different presentation modes. Therefore, the method of the embodiments of the present invention enriches and enhances the user experience and also satisfies the user's demand for diversified presentation modes.

Claims (20)

  1. An electronic device, comprising:
    an image acquisition unit, configured to collect at least one piece of video data in real time by using at least one image acquisition device in an image acquisition area corresponding to the at least one image acquisition device;
    an image processing unit, configured to process at least part of the sub-data in the at least one piece of video data collected in real time by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data, where the first video processing strategy is different from the second video processing strategy; and
    a video data generating unit, configured to generate target video data based on the at least two target sub-video data, so that the target video data contains video data that can be presented in at least two different presentation modes.
  2. The electronic device according to claim 1, wherein, when the video data of different presentation modes contained in the target video data come from the same video source, the video data processed by the first video processing strategy and by the second video processing strategy may be entirely the same video data, partially the same video data, or entirely different video data.
  3. The electronic device according to claim 1, wherein the image processing unit is further configured to select different, or the same, video processing strategies for the video data of different display areas through a user operation performed in a viewfinder interface of the electronic device.
  4. The electronic device according to claim 1, wherein the different presentation modes indicate that the presented video data differ in their video feature parameters.
  5. The electronic device according to claim 1, wherein the image processing unit is further configured to acquire a video feature parameter input by the user through a user interaction interface and determine the video processing strategy based on the video feature parameter input by the user; or
    to select, according to the size of a display area chosen by the user, a video feature parameter matching the size of the display area from a preset relationship list and determine the video processing strategy based on the selected video feature parameter, where the preset relationship list represents a list of correspondences between display area sizes and video feature parameters.
  6. The electronic device according to claim 1, wherein the image processing unit is further configured to decrease the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, the video feature parameter representing the number of video frames per unit time;
    and further configured to take at least part of the sub-data in the at least one piece of video data whose video feature parameter has been decreased as first target sub-video data, where the first target sub-video data is included in the at least two target sub-video data and can be presented in a first presentation mode.
  7. The electronic device according to claim 1, wherein the image processing unit is further configured to increase the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, the video feature parameter representing the number of video frames per unit time;
    and further configured to take at least part of the sub-data in the at least one piece of video data whose video feature parameter has been increased as second target sub-video data, where the second target sub-video data is included in the at least two target sub-video data and can be presented in a second presentation mode.
  8. The electronic device according to claim 7, wherein the image processing unit is further configured to determine a video storage feature parameter based on a capture feature parameter, delete video frames from the collected video data according to the determined video storage feature parameter, and increase the video feature parameter of the video data collected in real time, wherein
    the video storage feature parameter represents the number of video frames to be saved per unit time.
  9. The electronic device according to claim 6 or 7, wherein the electronic device further comprises a storage unit and a video display unit, wherein
    the storage unit is configured to store the target video data; and
    the video display unit is configured to present, based on a user operation, the at least two target sub-video data in the target video data in different presentation modes in at least a first display area and a second display area of the electronic device.
  10. A data processing method, comprising:
    collecting, by an electronic device, at least one piece of video data in real time by using at least one image acquisition device in an image acquisition area corresponding to the at least one image acquisition device;
    processing at least part of the sub-data in the at least one piece of video data collected in real time by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data, where the first video processing strategy is different from the second video processing strategy; and
    generating target video data based on the at least two target sub-video data, so that the target video data contains video data that can be presented in at least two different presentation modes.
  11. The method according to claim 10, wherein, when the video data of different presentation modes contained in the target video data come from the same video source, the video data processed by the first video processing strategy and by the second video processing strategy may be entirely the same video data, partially the same video data, or entirely different video data.
  12. The method according to claim 10, wherein the method further comprises:
    selecting different, or the same, video processing strategies for the video data of different display areas through a user operation performed in a viewfinder interface of the electronic device.
  13. The method according to claim 10, wherein the different presentation modes indicate that the presented video data differ in their video feature parameters.
  14. The method according to claim 10, wherein the video processing strategy is a preset processing strategy, or a processing strategy set according to a user operation.
  15. The method according to claim 14, wherein the method further comprises:
    acquiring a video feature parameter input by the user through a user interaction interface and determining the video processing strategy based on the video feature parameter input by the user; or
    selecting, according to the size of a display area chosen by the user, a video feature parameter matching the size of the display area from a preset relationship list and determining the video processing strategy based on the selected video feature parameter, where the preset relationship list represents a list of correspondences between display area sizes and video feature parameters.
  16. The method according to claim 10, wherein processing at least part of the sub-data in the at least one piece of video data collected in real time by using the first video processing strategy comprises:
    decreasing the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, the video feature parameter representing the number of video frames per unit time; and
    taking at least part of the sub-data in the at least one piece of video data whose video feature parameter has been decreased as first target sub-video data, where the first target sub-video data is included in the at least two target sub-video data and can be presented in a first presentation mode.
  17. The method according to claim 10, wherein processing at least part of the sub-data in the at least one piece of video data collected in real time by using the second video processing strategy comprises:
    increasing the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data collected in real time, the video feature parameter representing the number of video frames per unit time; and
    taking at least part of the sub-data in the at least one piece of video data whose video feature parameter has been increased as second target sub-video data, where the second target sub-video data is included in the at least two target sub-video data and can be presented in a second presentation mode.
  18. The method according to claim 12, wherein increasing the video feature parameter of at least part of the sub-data in the at least one piece of video data collected in real time comprises:
    determining a video storage feature parameter based on a capture feature parameter, the video storage feature parameter representing the number of video frames to be saved per unit time; and
    deleting video frames from the collected video data according to the determined video storage feature parameter, and increasing the video feature parameter of the video data collected in real time.
  19. The method according to claim 11 or 12, wherein the method further comprises:
    storing the target video data; and
    presenting, based on a user operation, the at least two target sub-video data in the target video data in different presentation modes in at least a first display area and a second display area of the electronic device.
  20. A computer storage medium storing a computer program, wherein the computer program is used to perform the data processing method according to any one of claims 10 to 19.
PCT/CN2016/113980 2016-04-27 2016-12-30 Data processing method, electronic device, and storage medium WO2017185808A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610270463.9 2016-04-27
CN201610270463.9A CN105898158B (en) 2016-04-27 2016-04-27 A kind of data processing method and electronic equipment

Publications (1)

Publication Number Publication Date
WO2017185808A1 true WO2017185808A1 (en) 2017-11-02

Family

ID=56701852

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113980 WO2017185808A1 (en) 2016-04-27 2016-12-30 Data processing method, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN105898158B (en)
WO (1) WO2017185808A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112199987A (en) * 2020-08-26 2021-01-08 北京贝思科技术有限公司 Multi-algorithm combined configuration strategy method in single area, image processing device and electronic equipment

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898158B (en) * 2016-04-27 2019-08-16 努比亚技术有限公司 A kind of data processing method and electronic equipment
CN113079336A (en) * 2020-01-03 2021-07-06 深圳市春盛海科技有限公司 High-speed image recording method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080111881A1 (en) * 2006-11-09 2008-05-15 Innovative Signal Analysis, Inc. Imaging system
CN103926785A (en) * 2014-04-30 2014-07-16 广州视源电子科技股份有限公司 Double-camera implementation method and device
CN104967802A (en) * 2015-04-29 2015-10-07 努比亚技术有限公司 Mobile terminal, recording method of screen multiple areas and recording device of screen multiple areas
CN105208422A (en) * 2014-06-26 2015-12-30 联想(北京)有限公司 Information processing method and electronic device
CN105898158A (en) * 2016-04-27 2016-08-24 努比亚技术有限公司 Data processing method and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5155279B2 (en) * 2009-10-29 2013-03-06 株式会社日立製作所 Centralized monitoring system and centralized monitoring method using multiple surveillance cameras
JP6396682B2 (en) * 2014-05-30 2018-09-26 株式会社日立国際電気 Surveillance camera system
JP6323183B2 (en) * 2014-06-04 2018-05-16 ソニー株式会社 Image processing apparatus and image processing method
CN105611108A (en) * 2015-12-18 2016-05-25 联想(北京)有限公司 Information processing method and electronic equipment

Also Published As

Publication number Publication date
CN105898158B (en) 2019-08-16
CN105898158A (en) 2016-08-24

Similar Documents

Publication Publication Date Title
WO2018019124A1 (en) Image processing method and electronic device and storage medium
CN106454121B (en) Double-camera shooting method and device
WO2017050115A1 (en) Image synthesis method
WO2016029766A1 (en) Mobile terminal and operation method thereof and computer storage medium
WO2017067526A1 (en) Image enhancement method and mobile terminal
CN106909274B (en) Image display method and device
WO2016058458A1 (en) Method for managing electric quantity of battery, mobile terminal and computer storage medium
WO2016173468A1 (en) Combined operation method and device, touch screen operation method and electronic device
WO2017071481A1 (en) Mobile terminal and split-screen implementation method
WO2017143855A1 (en) Device with screen capturing function and screen capturing method
WO2016161986A1 (en) Operation recognition method and apparatus, mobile terminal and computer storage medium
WO2016155509A1 (en) Method and device for determining holding mode of mobile terminal
CN106302651B (en) Social picture sharing method and terminal with social picture sharing system
WO2017012385A1 (en) Method and apparatus for rapidly starting application, and terminal
CN108093019B (en) Member information refreshing method and terminal
CN111314444B (en) Information pushing and displaying method and device
WO2018019128A1 (en) Method for processing night scene image and mobile terminal
CN106657782B (en) Picture processing method and terminal
CN106911881B (en) Dynamic photo shooting device and method based on double cameras and terminal
WO2018050080A1 (en) Mobile terminal, picture processing method and computer storage medium
WO2017071532A1 (en) Group selfie photography method and apparatus
WO2017113893A1 (en) Method and device for network searching
WO2017071592A1 (en) Method and apparatus for focusing, method and apparatus for photographing
WO2017185808A1 (en) Data processing method, electronic device, and storage medium
CN106973226B (en) Shooting method and terminal

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16900312

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16900312

Country of ref document: EP

Kind code of ref document: A1