WO2017202271A1 - Information processing method, terminal, and computer storage medium

Information processing method, terminal, and computer storage medium

Info

Publication number
WO2017202271A1
Authority
WO
WIPO (PCT)
Prior art keywords
multimedia information
information
terminal
video
multimedia
Application number
PCT/CN2017/085412
Other languages
English (en)
French (fr)
Inventor
任春剑
周彬
程平峰
吴兵
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2017202271A1



Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/349 Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • H04N 13/351 Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking for displaying simultaneously
    • H04N 13/361 Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1446 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display, display composed of modules, e.g. video walls
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging

Definitions

  • the present invention relates to communication technologies, and in particular, to an information processing method, a terminal, and a computer storage medium.
  • With the popularity of intelligent terminals, obtaining and sharing information has become more and more convenient; for example, users can share information interactively through social networking websites or social applications.
  • At present, the second multimedia information (such as advertising information) is usually presented to the user before the first multimedia information is played, during pause intervals of the first multimedia information, or after the first multimedia information (such as the video itself) has finished playing.
  • the embodiments of the present invention are intended to provide an information processing method, a terminal, and a computer storage medium, which at least solve the problems existing in the prior art.
  • An information processing method includes:
  • initiating a request for acquiring first multimedia information; acquiring the first multimedia information; initiating a request for acquiring second multimedia information; acquiring the second multimedia information; and separately loading and playing the first multimedia information and the second multimedia information according to a preset play policy;
  • when detecting that the currently played multimedia information is the second multimedia information, enabling a virtual reality play mode, and simulating each frame plane image in the second multimedia information as a corresponding at least one curved surface image, where the to-be-imaged area of each frame plane image is included in a display area of the terminal, and the to-be-imaged area of the at least one curved surface image is larger than the display area of the terminal; and
  • when each of the at least one curved surface image is locally projected to a multimedia information output end for imaging, dividing the play interface of the multimedia output end in two and synchronously displaying the locally projected play content on each 1/2 play interface.
  • a first requesting unit configured to initiate a request for acquiring the first multimedia information
  • a first acquiring unit configured to acquire the first multimedia information
  • a second requesting unit configured to initiate a request for acquiring second multimedia information
  • a second acquiring unit configured to acquire the second multimedia information
  • the playing unit is configured to separately load and play the first multimedia information and the second multimedia information according to a preset playing policy
  • an analog conversion unit configured to: when detecting that the currently played multimedia information is the second multimedia information, enable a virtual reality play mode, and simulate each frame plane image in the second multimedia information as a corresponding at least one curved surface image, where the to-be-imaged area of each frame plane image is included in a display area of the terminal, and the to-be-imaged area of the at least one curved surface image is larger than the display area of the terminal;
  • a projection unit configured to, when each of the at least one curved surface image is locally projected to the multimedia information output end for imaging, divide the play interface of the multimedia output end in two and synchronously display the locally projected play content on each 1/2 play interface.
  • In practical applications, the first requesting unit, the first acquiring unit, the second requesting unit, the second acquiring unit, the playing unit, the analog conversion unit, and the projection unit may each be implemented by a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are configured to execute the information processing method described above.
  • The information processing method of the embodiment of the present invention includes: initiating a request for acquiring the first multimedia information; acquiring the first multimedia information; initiating a request for acquiring the second multimedia information; acquiring the second multimedia information; separately loading and playing the first multimedia information and the second multimedia information according to a preset play policy; when detecting that the currently played multimedia information is the second multimedia information, enabling a virtual reality play mode, and simulating each frame plane image in the second multimedia information as a corresponding at least one curved surface image, where the to-be-imaged area of each frame plane image is included in a display area of the terminal, and the to-be-imaged area of the at least one curved surface image is larger than the display area of the terminal; and, when each of the at least one curved surface image is locally projected to the multimedia information output end for imaging, dividing the play interface of the multimedia output end in two and synchronously displaying the locally projected play content on each 1/2 play interface.
  • With the embodiments of the present invention, the first multimedia information and the second multimedia information are separately loaded and played; when it is detected that the currently played multimedia information is the second multimedia information, the virtual reality play mode is turned on, and each frame plane image in the second multimedia information is simulated as a corresponding at least one curved surface image.
  • The to-be-imaged area of each frame plane image is included in the display area of the terminal, while the to-be-imaged area of the at least one curved surface image is larger than the display area of the terminal, thereby achieving an image enlargement effect, so that the second multimedia information is no longer presented as an ordinary plane image.
  • When each curved surface image is locally projected to the multimedia information output end for imaging, the play interface of the multimedia output end is divided in two, and the locally projected play content is synchronously displayed on each 1/2 play interface, so that the user sees more stereoscopic and intuitive second multimedia information with a larger play image and clearer picture quality, and can be immersed in the content being played.
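The patent describes this processing flow only in prose. The following is a compact, illustrative sketch of that flow; all class and method names are assumptions added for illustration and are not taken from the patent.

```python
# Illustrative sketch (names are assumptions, not the patent's API): acquire
# both kinds of multimedia information, play them according to a preset play
# policy, and switch to the curved, split-screen rendering whenever the
# second multimedia information (e.g. an advertisement) is being played.
class Terminal:
    def __init__(self, server, policy):
        self.server = server        # assumed to offer fetch_video()/fetch_ad()
        self.policy = policy        # preset play policy (when ads are played)
        self.first_multimedia = None
        self.second_multimedia = None

    def acquire(self):
        self.first_multimedia = self.server.fetch_video()    # e.g. a movie
        self.second_multimedia = self.server.fetch_ad()      # e.g. an ad clip

    def play(self, current):
        if current is self.second_multimedia:
            # Second multimedia information: enable the virtual reality mode.
            for frame in current.frames():
                curved = self.simulate_as_curved(frame)      # plane -> curved
                local = self.local_projection(curved)        # larger than screen
                self.show_split_screen(local)                # two 1/2 interfaces
        else:
            # First multimedia information: ordinary flat playback.
            for frame in current.frames():
                self.show_flat(frame)

    # Placeholders for the interpolation, local projection and split-screen
    # display steps described in the text.
    def simulate_as_curved(self, frame): ...
    def local_projection(self, curved): ...
    def show_split_screen(self, content): ...
    def show_flat(self, frame): ...
```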
  • FIG. 1 is a schematic diagram of an optional hardware structure of a mobile terminal implementing various embodiments of the present invention
  • FIG. 2 is a schematic diagram of a communication system of the mobile terminal shown in FIG. 1;
  • FIG. 3 is a schematic diagram of hardware entities of each party performing information interaction in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a UI interface processed by a virtual reality technology according to an embodiment of the present invention
  • FIG. 6 is a schematic view showing the assembly of a terminal and cardboard glasses to which an embodiment of the present invention is applied;
  • FIG. 7 is a schematic view showing the effect of the assembled terminal and cardboard glasses in use according to an embodiment of the present invention;
  • FIG. 8 is a schematic view showing a plane-to-curved-surface conversion to which an embodiment of the present invention is applied;
  • FIG. 9 is a schematic diagram of obtaining a split screen state according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a system of the fourth embodiment
  • FIG. 11 is a schematic diagram of intercepting a video frame and displaying a partial video frame in each screen according to an embodiment of the present invention.
  • It should be understood that although the terms first, second, etc. are used herein to describe various elements (or various multimedia information or various applications or various instructions or various operations), these elements (or multimedia information or applications or instructions or operations) should not be limited by these terms. These terms are only used to distinguish one element (or multimedia information or application or instruction or operation) from another element (or multimedia information or application or instruction or operation).
  • For example, without departing from the scope of the embodiments of the present invention, a first operation may be referred to as a second operation, and similarly a second operation may be referred to as a first operation; the first operation and the second operation are both operations, but they are not the same operation.
  • the steps in the embodiment of the present invention are not necessarily processed in the order of the steps described.
  • the steps may be selectively arranged to be reordered according to requirements, or the steps in the embodiment may be deleted, or the steps in the embodiment may be added.
  • The description of the steps in the embodiments of the present invention is only an optional combination of the steps and does not represent all possible combinations of the steps of the embodiments of the present invention.
  • the order of the steps in the embodiments is not to be construed as limiting the present invention.
  • the intelligent terminal (such as a mobile terminal) of the embodiment of the present invention can be implemented in various forms.
  • The mobile terminal described in the embodiments of the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), a navigation device, and the like, as well as fixed terminals such as a digital TV, a desktop computer, and the like.
  • In the following description, it is assumed that the terminal is a mobile terminal; however, those skilled in the art will appreciate that the configurations according to the embodiments of the present invention can also be applied to fixed-type terminals, except for components used specifically for mobile purposes.
  • FIG. 1 is a schematic diagram of an optional hardware structure of a mobile terminal implementing various embodiments of the present invention.
  • the mobile terminal 100 may include a communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a first request unit 140, a first acquisition unit 141, a second request unit 142, a second acquisition unit 143, The playback unit 144, the analog conversion unit 145, the projection unit 146, the output unit 150, the storage unit 160, the interface unit 170, the processing unit 180, the power supply unit 190, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network (electrical communication can also be made by wire if the mobile terminal is replaced with a fixed terminal).
  • When the communication unit 110 is specifically a wireless communication unit, it may include at least one of a broadcast receiving unit 111, a mobile communication unit 112, a wireless internet unit 113, a short-range communication unit 114, and a location information unit 115; these units are optional and may be added or removed according to different requirements.
  • the broadcast receiving unit 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • The broadcast signals may include TV broadcast signals, radio broadcast signals, data broadcast signals, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via the mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication unit 112.
  • The broadcast signal may exist in various forms; for example, it may exist in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of Digital Video Broadcasting-Handheld (DVB-H), and the like.
  • the broadcast receiving unit 111 can receive a signal broadcast by using various types of broadcast systems.
  • In particular, the broadcast receiving unit 111 can receive digital broadcasts by using digital broadcast systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), the data broadcast system of Media Forward Link Only (MediaFLO), Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like.
  • The broadcast receiving unit 111 can be constructed to be suitable for various broadcast systems that provide broadcast signals as well as the above-described digital broadcast systems.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving unit 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication unit 112 transmits the radio signal to and/or receives a radio signal from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet unit 113 supports wireless internet access of the mobile terminal.
  • the unit can be internally or externally coupled to the terminal.
  • The wireless Internet access technologies involved in the unit may include Wi-Fi (Wireless Local Area Networks, WLAN), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), and the like.
  • the short-range communication unit 114 is a unit for supporting short-range communication.
  • Some examples of short-range communication technologies include Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), Zigbee, and the like.
  • the location information unit 115 is a unit for checking or acquiring location information of the mobile terminal.
  • a typical example of a location information unit is a Global Positioning System (GPS).
  • the position information unit 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current position information according to longitude, latitude, and altitude.
  • the method for calculating position and time information uses three satellites and corrects the calculated position and time information errors by using another satellite. Further, the position information unit 115 can calculate the speed information by continuously calculating the current position information in real time.
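As an aside, the triangulation described above can be illustrated with a minimal least-squares sketch; this is not part of the patent, the satellite positions and ranges below are made-up values, and the receiver clock bias that the extra satellite normally resolves is ignored.

```python
# Illustrative trilateration sketch (not from the patent): estimate a 3-D
# position from measured distances to three or more satellites of known
# position, using a few Gauss-Newton iterations.
import numpy as np

def trilaterate(sat_positions, ranges, iterations=10):
    """sat_positions: (N, 3) array; ranges: (N,) measured distances."""
    x = sat_positions.mean(axis=0)              # rough initial guess
    for _ in range(iterations):
        diffs = x - sat_positions               # (N, 3)
        dists = np.linalg.norm(diffs, axis=1)   # predicted ranges
        residuals = ranges - dists
        jacobian = diffs / dists[:, None]       # d(range)/d(position)
        # Solve the linearized least-squares problem for the position update.
        dx, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        x = x + dx
    return x

sats = np.array([[20e6, 0.0, 0.0], [0.0, 20e6, 0.0],
                 [0.0, 0.0, 20e6], [12e6, 12e6, 12e6]])
true_pos = np.array([1e6, 2e6, 3e6])
measured = np.linalg.norm(sats - true_pos, axis=1)
print(trilaterate(sats, measured))              # approximately true_pos
```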
  • the A/V input unit 120 is for receiving an audio or video signal.
  • The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the storage unit 160 (or other storage medium) or transmitted via the communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • the processed audio (voice) data can be converted to a format output that can be transmitted to the mobile communication base station via the mobile communication unit 112 in the case of a telephone call mode.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • The user input unit 130 allows the user to input various types of information, and may include a keyboard, a mouse, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, and the like caused by contact), a scroll wheel, a joystick, and the like.
  • In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • a first requesting unit 140, configured to initiate a request for acquiring the first multimedia information;
  • a first acquiring unit 141, configured to acquire the first multimedia information;
  • a second requesting unit 142, configured to initiate a request for acquiring the second multimedia information;
  • a second acquiring unit 143, configured to acquire the second multimedia information;
  • a playing unit 144, configured to separately load and play the first multimedia information and the second multimedia information according to the preset play policy;
  • an analog conversion unit 145, configured to: when detecting that the currently played multimedia information is the second multimedia information, enable the virtual reality play mode, and simulate each frame plane image in the second multimedia information as a corresponding at least one curved surface image, where the to-be-imaged area of each frame plane image is included in the display area of the terminal, and the to-be-imaged area of the at least one curved surface image is larger than the display area of the terminal; and
  • a projection unit 146, configured to, when each of the at least one curved surface image is locally projected to the multimedia information output end for imaging, divide the play interface of the multimedia output end in two and synchronously display the locally projected play content on each 1/2 play interface.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification unit, and an audio input/output. (I/O) port, video I/O port, headphone port, and more.
  • The identification unit may store various information for verifying the user of the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the identification device may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • The interface unit 170 may be used to receive input (e.g., data information, power, and the like) from an external device and transmit the received input to one or more components within the mobile terminal 100, or may be used to transfer data between the mobile terminal and an external device.
  • In addition, when the mobile terminal 100 is connected to an external base, the interface unit 170 may serve as a path through which power is supplied from the base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal.
  • Various command signals or power input from the base can be used as signals for identifying whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output unit 152, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100.
  • the mobile terminal 100 can display a related user interface (UI) or a graphical user interface (GUI).
  • the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a Thin Film Transistor LCD (TFT-LCD), an Organic Light-Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a transparent organic light emitting diode (TOLED) display or the like.
  • Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units or other display devices; for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • The audio output unit 152 may convert audio data received by the communication unit 110 or stored in the storage unit 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output unit 152 can provide an audio output (eg, a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the mobile terminal 100.
  • the audio output unit 152 may include a speaker, a buzzer, and the like.
  • the storage unit 160 may store a software program or the like that performs processing and control operations performed by the processing unit 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, and the like) that has been output or is to be output. Moreover, the storage unit 160 may store data regarding various manners of vibration and audio signals that are output when a touch is applied to the touch screen.
  • The storage unit 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (for example, an SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the storage unit 160 through a network connection.
  • Processing unit 180 typically controls the overall operation of the mobile terminal. For example, the processing unit 180 performs the control and processing associated with voice calls, data communications, video calls, and the like. As another example, the processing unit 180 can perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the processing unit 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • For a hardware implementation, the embodiments described herein may be implemented by using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such an implementation may be realized in the processing unit 180.
  • So far, the mobile terminal has been described in terms of its functions. Hereinafter, for the sake of brevity, a slide-type mobile terminal among various types of mobile terminals, such as folding-type, bar-type, swing-type, and slide-type mobile terminals, will be described as an example; however, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • a communication system in which a mobile terminal is operable according to an embodiment of the present invention will now be described with reference to FIG.
  • Such communication systems may use different air interfaces and/or physical layers.
  • The air interface used by such a communication system includes, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a Public Switched Telephone Network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), with each partition covered by a multi-directional antenna or an antenna pointing in a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • A base station may also be referred to as a "cell site"; alternatively, each partition of a particular BS 270 may be referred to as a plurality of cell sites.
  • As shown in FIG. 2, a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system.
  • a broadcast receiving unit 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • several satellites 300 are shown, for example, a Global Positioning System (GPS) satellite 300 can be employed.
  • the satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the location information unit 115 as shown in FIG. 1 is typically configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC provides call resource allocation and coordinated mobility management functions including a soft handoff procedure between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • Based on necessary data (including user identification information and authentication information) built into the mobile terminal, the mobile communication unit 112 of the communication unit 110 accesses a mobile communication network (such as a 2G/3G/4G mobile communication network) and transmits mobile communication data (including uplink mobile communication data and downlink mobile communication data) for services of the mobile terminal user such as web browsing and network multimedia playback.
  • The wireless internet unit 113 of the communication unit 110 implements the function of a wireless hotspot by running the related protocol functions of the wireless hotspot. The wireless hotspot supports the access of a plurality of other mobile terminals (any mobile terminal other than this mobile terminal); by multiplexing the mobile communication connection between the mobile communication unit 112 and the mobile communication network, it transmits mobile communication data (including uplink mobile communication data and downlink mobile communication data) for services of those mobile terminal users such as web browsing and network multimedia playback. Since the mobile terminal essentially multiplexes its own mobile communication connection to transmit the mobile communication data, the mobile communication data traffic consumed in this way is counted by the charging entity on the communication network side into the communication tariff of the mobile terminal, thereby consuming the mobile communication data traffic included in the communication tariff contracted for the mobile terminal.
  • FIG. 3 is a schematic diagram of hardware entities of each party performing information interaction according to an embodiment of the present invention.
  • FIG. 3 includes: a server 11 and a terminal device 21-24.
  • the terminal device 21-24 performs information interaction with a server through a wired network or a wireless network.
  • Terminal equipment includes mobile phones, desktops, PCs, all-in-ones, and the like.
  • the terminal device is installed with an application (such as a video application, a social application, a map navigation application, a high-speed rail application, etc.).
  • Based on the system shown in FIG. 3, the terminal initiates a request for acquiring the first multimedia information to the server and obtains the first multimedia information (such as a video of a variety show, a TV series, or a movie); the terminal also initiates a request for acquiring the second multimedia information (such as advertisement information) to the server and obtains the second multimedia information. According to a preset play policy (for example, playing the advertisement information before the video information is played, during the playing of the video information, when the playing of the video information is paused, or at the end of the playing of the video information), the terminal separately loads and plays the first multimedia information (such as the video of a variety show, TV series, or movie) and the second multimedia information (such as the advertisement information). When detecting that the currently played multimedia information is the second multimedia information (such as the advertisement information), the terminal turns on a virtual reality play mode and simulates each frame plane image in the second multimedia information as a corresponding at least one curved surface image, where the to-be-imaged area of each frame plane image is included in the display area of the terminal, and the to-be-imaged area of the at least one curved surface image is larger than the display area of the terminal.
  • FIG. 3 is only an example of a system architecture that implements an embodiment of the present invention.
  • The embodiment of the present invention is not limited to the system architecture described in FIG. 3. Image enlargement processing may be performed not only on the second multimedia information (such as advertisement information) but also on the first multimedia information (such as video information); however, since the second multimedia information (such as advertisement information) has a short play time, processing it does not occupy too many terminal resources while still achieving the virtual-reality conversion of the image enlargement processing.
  • Based on the hardware structure of the mobile terminal shown in FIG. 1, the communication system described in FIG. 2, and the system architecture described in FIG. 3, various embodiments of the method of the present invention are presented below.
  • Embodiment 1:
  • An embodiment of the present invention provides an information processing method. As shown in FIG. 4, the method includes:
  • Step 101 The terminal initiates a request for acquiring the first multimedia information to the server.
  • Here, the first multimedia information may include video information such as a variety show, a TV series, or a movie. The terminal may obtain a video list or another form of video presentation through various portals, such as a video application or a video website; regardless of which portal is used to obtain the video to be watched, after a touch or click operation is performed on the video to be watched, the terminal sends a request to the server, the server returns the requested video information to the terminal, and step 102 is performed.
  • Step 102 The terminal acquires the first multimedia information.
  • Step 103 The terminal initiates a request for acquiring the second multimedia information to the server.
  • Here, the second multimedia information may include advertisement information such as shopping, product recommendation, and brand promotion; the embodiment of the present invention is not limited to the scenario of sharing advertisement information.
  • In an advertisement information sharing scenario, the terminal first obtains, through various portals such as a video application or a video website, the video to be watched; after a touch or click operation is performed on the video to be watched, the terminal sends a request to the server, and the server returns the requested video information to the terminal. According to the preset play policy (for example, playing the advertisement information before the video information is played, during the playing of the video information, when the playing of the video information is paused, or at the end of the playing of the video information), when the playing of the advertisement information is triggered, the terminal automatically initiates a request to the server, the server returns the requested advertisement information to the terminal, and step 104 is performed.
  • Step 104 The terminal acquires the second multimedia information.
  • Step 105 The terminal separately loads and plays the first multimedia information and the second multimedia information according to a preset play policy.
  • The embodiment of the present invention is not limited to the scenario in which advertisement information is shared. In an advertisement information sharing scenario, the terminal first obtains, through various portals such as a video application or a video website, the video to be watched; after a touch or click operation is performed on the video to be watched, the terminal sends a request to the server, and the server returns the requested video information to the terminal.
  • According to the preset play policy (for example, playing the advertisement information before the video information is played, during the playing of the video information, when the playing of the video information is paused, or at the end of the playing of the video information), when the playing of the advertisement information is triggered, the terminal automatically initiates a request to the server, and the server returns the requested advertisement information to the terminal for loading and playing; when the playing of the video information is triggered, the video information is loaded and played. In this way, the video information and the advertisement information are alternately played according to the preset policy.
  • In this embodiment, the first multimedia information (such as video information) is requested first, and then the second multimedia information (such as advertisement information) is requested; however, the request order is not limited to this sequence. The second multimedia information (such as advertisement information) may be requested first, and then the first multimedia information (such as video information); the first multimedia information (such as video information) and the second multimedia information (such as advertisement information) may also be requested in two separate requests or in a single request.
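The preset play policy above is described only by example. A minimal sketch of how such a policy could be represented follows; the slot names and class are assumptions for illustration, not the patent's implementation.

```python
# Illustrative sketch of a preset play policy (all names are assumptions,
# not taken from the patent): the second multimedia information (ads) can be
# scheduled before, during, on pause of, or after the first multimedia
# information (the main video).
from enum import Enum, auto

class AdSlot(Enum):
    PRE_ROLL = auto()    # before the video starts
    MID_ROLL = auto()    # at a position inside the video
    ON_PAUSE = auto()    # when playback is paused
    POST_ROLL = auto()   # after the video ends

class PlayPolicy:
    def __init__(self, slots):
        self.slots = set(slots)

    def should_play_ad(self, event):
        """event is an AdSlot describing what just happened in the player."""
        return event in self.slots

# Example: ads are shown before the video and whenever playback is paused.
policy = PlayPolicy({AdSlot.PRE_ROLL, AdSlot.ON_PAUSE})
print(policy.should_play_ad(AdSlot.ON_PAUSE))   # True
print(policy.should_play_ad(AdSlot.POST_ROLL))  # False
```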
  • Step 106: When detecting that the currently played multimedia information is the second multimedia information, enable a virtual reality play mode, and simulate each frame plane image in the second multimedia information as a corresponding at least one curved surface image.
  • Here, the embodiment of the present invention considers performing image enlargement processing on the second multimedia information (such as advertisement information); based on the same principle, image enlargement processing may also be performed on the first multimedia information (such as video information).
  • Taking FIG. 5 as an example, the terminal detects a piece of second multimedia information currently being played, specifically a piece of advertisement information in an advertisement information sharing scenario, such as the "car brand promotion" segment identified by A11.
  • The terminal is assembled in advance with the multimedia information output end (such as cardboard glasses, which are not limited to a paper material and may also be made of other materials such as plastic), as shown in FIG. 6. The locally projected play content obtained after the virtual reality play mode processing is projected onto the screen of the mobile phone in a split-half-screen mode, so that a two-half-screen effect is displayed on the screen of the mobile phone.
  • The cardboard glasses include a bracket having a nose support member and two convex lens sheets, and the two convex lens sheets are mounted on the bracket. With the screen of the terminal facing the two convex lens sheets, the terminal is assembled with the cardboard glasses; FIG. 7 shows a rendering of the assembled result in use. The user places the assembled device in front of the eyes for viewing, for example to watch the "car brand promotion" video shown in FIG. 5.
  • As shown in FIG. 8, each frame of the video is converted from a plane to a curved surface; the to-be-imaged area of each frame plane image is included in the display area of the terminal, and the to-be-imaged area of the at least one curved surface image is larger than the display area of the terminal. From the effect of the plane-to-curved-surface conversion it can be seen that, after the conversion to the curved surface, the image is enlarged.
  • Step 107: When each of the at least one curved surface image is locally projected to the multimedia information output end for imaging, divide the play interface of the multimedia output end in two, and synchronously display the locally projected play content on each 1/2 play interface.
  • Here, the multimedia information output end may be cardboard glasses, which are not limited to a paper material and may also be made of other materials such as plastic.
  • In the advertisement information sharing scenario, the terminal detects a piece of second multimedia information currently being played, such as the "car brand promotion" segment identified by A11. With the terminal identified by A12 in FIG. 6 assembled with the cardboard glasses identified by A13, a frame of video in the "car brand promotion" segment identified by A11 is obtained, and interpolation processing is performed on it, so that the video frame is converted from a plane into the curved surface identified by A16. After the curved-surface video is obtained, a part of the curved-surface video is projected onto the screen in a split-screen mode, as shown by A14 and A15; the final projection result of the split-screen display is as follows.
  • The original advertisement video (the video as played by an ordinary mobile phone) is processed into a spatial curved surface (in a virtual coordinate space, not displayed) by an interpolation algorithm, and a part of the curved surface is then projected into two half screens by a projection algorithm; what is finally displayed on the mobile phone is a two-half-screen effect, and the cardboard glasses are used to enhance the stereoscopic sense, allowing the user to get an immersive experience.
  • In this way, each curved surface image is locally projected to the multimedia information output end, the play interface of the multimedia output end is divided in two, and the locally projected play content is synchronously displayed on each 1/2 play interface, so as to obtain the final projection result of the split-screen display.
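The interpolation and local projection steps are not spelled out in code in the patent. The following is a rough sketch under the assumption that the curved surface is a section of a cylinder and that bilinear interpolation is used; all function names and parameter values are made up for illustration.

```python
# Rough illustration (not the patent's algorithm): warp a flat frame onto a
# cylinder section by inverse mapping with bilinear interpolation, then crop
# the central part as the "locally projected" content.
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img (H, W, C) at float coordinates (x, y) with bilinear weights."""
    h, w = img.shape[:2]
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = (x - x0)[..., None], (y - y0)[..., None]
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bot = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

def plane_to_cylinder(frame, fov_deg=120.0):
    """Map a flat frame onto a cylindrical section spanning fov_deg degrees."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Angle of each output column on the cylinder, centred on zero.
    theta = (xs / (w - 1) - 0.5) * np.radians(fov_deg)
    # Inverse mapping for a flat frame placed tangent to the cylinder: near
    # the centre one source column spreads over several output columns,
    # which gives the perceived enlargement of the image.
    focal = (w / 2) / np.tan(np.radians(fov_deg) / 2)
    src_x = np.clip(focal * np.tan(theta) + w / 2, 0, w - 1)
    return bilinear_sample(frame, src_x, ys)

def local_projection(curved, keep_fraction=0.5):
    """Keep only the central part of the curved image for display."""
    h, w = curved.shape[:2]
    lo = int(w * (1 - keep_fraction) / 2)
    return curved[:, lo:w - lo]

frame = np.random.rand(360, 640, 3)         # stand-in for one video frame
curved = plane_to_cylinder(frame)
half_screen_content = local_projection(curved)
print(curved.shape, half_screen_content.shape)
```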
  • Embodiment 2:
  • An embodiment of the present invention provides an information processing method, where the method includes:
  • Step 201 The terminal initiates a request for acquiring the first multimedia information to the server.
  • Here, the first multimedia information may include video information such as a variety show, a TV series, or a movie. The terminal may obtain the video to be watched through various portals, such as a video application or a video website; after a touch or click operation is performed on the video to be watched, the terminal sends a request to the server, the server returns the requested video information to the terminal, and step 202 is performed.
  • Step 202 The terminal acquires the first multimedia information.
  • Step 203 The terminal initiates a request for acquiring the second multimedia information to the server.
  • Here, the second multimedia information may include advertisement information such as shopping, product recommendation, and brand promotion; the embodiment of the present invention is not limited to the scenario of sharing advertisement information.
  • In an advertisement information sharing scenario, the terminal first obtains the video to be watched through various portals, such as a video application or a video website; after a touch or click operation is performed on the video to be watched, the terminal sends a request to the server, and the server returns the requested video information to the terminal. According to the preset play policy (for example, playing the advertisement information before the video information is played, during the playing of the video information, when the playing of the video information is paused, or at the end of the playing of the video information), when the playing of the advertisement information is triggered, the terminal automatically initiates a request to the server, the server returns the requested advertisement information to the terminal, and step 204 is performed.
  • Step 204 The terminal acquires the second multimedia information.
  • Step 205 The terminal separately loads and plays the first multimedia information and the second multimedia information according to a preset play policy.
  • The embodiment of the present invention is not limited to the scenario in which advertisement information is shared. In an advertisement information sharing scenario, the terminal first obtains the video to be watched through various portals, such as a video application or a video website; after a touch or click operation is performed on the video to be watched, the terminal sends a request to the server, and the server returns the requested video information to the terminal.
  • According to the preset play policy (for example, playing the advertisement information before the video information is played, during the playing of the video information, when the playing of the video information is paused, or at the end of the playing of the video information), when the playing of the advertisement information is triggered, the terminal automatically sends a request to the server, and the server returns the requested advertisement information to the terminal for loading and playing; when the playing of the video information is triggered, the video information is loaded and played. In this way, the video information and the advertisement information are alternately played according to the preset policy.
  • In this embodiment, the first multimedia information (such as video information) is requested first, and then the second multimedia information (such as advertisement information) is requested; however, the request order is not limited to this sequence. The second multimedia information (such as advertisement information) may be requested first, and then the first multimedia information (such as video information); the first multimedia information (such as video information) and the second multimedia information (such as advertisement information) may also be requested in two separate requests or in a single request.
  • Step 206: When detecting that the currently played multimedia information is the second multimedia information, obtain a first operation, where the first operation is used to trigger enabling of the virtual reality play mode.
  • Here, the virtual reality play mode may be turned on by a mode conversion application installed on the terminal, or by adding a function to the video application, such as adding a mode conversion entry; when the first operation acts on the mode conversion entry, turning on the virtual reality play mode is triggered. It is also possible to build a processing chip into the terminal and use the processing chip to implement a series of related processing such as loading the virtual reality play mode and split-screen projection processing.
  • Step 207: In response to the first operation, switch from a normal video play mode to the virtual reality play mode, and simulate each frame plane image in the second multimedia information as a corresponding at least one curved surface image.
  • Here, the normal video play mode refers to an ordinary play mode in which the video frames are presented with a planar effect, whereas the virtual reality play mode in the embodiment of the present invention is a mode in which the video frames are converted from a plane to a curved surface, producing a more stereoscopic and intuitive effect.
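As a small illustrative sketch of the mode switch triggered by the first operation (class and method names are assumptions, not from the patent):

```python
# Illustrative sketch of switching between the two play modes on the "first
# operation" (all names are assumptions, not from the patent).
from enum import Enum, auto

class PlayMode(Enum):
    NORMAL = auto()           # video frames shown as flat (plane) images
    VIRTUAL_REALITY = auto()  # frames warped to a curved surface, split screen

class Player:
    def __init__(self):
        self.mode = PlayMode.NORMAL

    def on_first_operation(self):
        """Called when the user acts on the mode-conversion entry."""
        self.mode = PlayMode.VIRTUAL_REALITY

    def describe_rendering(self):
        if self.mode is PlayMode.VIRTUAL_REALITY:
            # e.g. plane_to_cylinder(...) followed by split-screen projection
            return "curved surface, two 1/2 play interfaces"
        return "ordinary flat playback"

player = Player()
print(player.describe_rendering())   # ordinary flat playback
player.on_first_operation()
print(player.describe_rendering())   # curved surface, two 1/2 play interfaces
```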
  • In the embodiment of the present invention, image enlargement processing is performed on the second multimedia information (such as advertisement information); by the same principle, image enlargement processing can also be performed on the first multimedia information (such as video information).
  • Taking FIG. 5 as an example, the terminal detects a piece of second multimedia information currently being played, specifically a piece of advertisement information in an advertisement information sharing scenario, such as the "car brand promotion" segment identified by A11.
  • The terminal is assembled in advance with the multimedia information output end (such as cardboard glasses, which are not limited to a paper material and may also be made of other materials such as plastic), as shown in FIG. 6. The locally projected play content obtained after the virtual reality play mode processing is projected onto the screen of the mobile phone in a split-half-screen mode, so that a two-half-screen effect is displayed on the screen of the mobile phone.
  • The cardboard glasses include a bracket having a nose support member and two convex lens sheets, and the two convex lens sheets are mounted on the bracket. With the screen of the terminal facing the two convex lens sheets, the terminal is assembled with the cardboard glasses; FIG. 7 shows a rendering of the assembled result in use. The user places the assembled device in front of the eyes for viewing, for example to watch the "car brand promotion" video shown in FIG. 5.
  • As shown in FIG. 8, each frame of the video is converted from a plane to a curved surface; the to-be-imaged area of each frame plane image is included in the display area of the terminal, and the to-be-imaged area of the at least one curved surface image is larger than the display area of the terminal. From the effect of the plane-to-curved-surface conversion it can be seen that the image is enlarged after being converted to the curved surface.
  • Step 208: When each of the at least one curved surface image is locally projected to the multimedia information output end for imaging, divide the play interface of the multimedia output end in two, and synchronously display the locally projected play content on each 1/2 play interface.
  • Here, the multimedia information output end may be cardboard glasses, which are not limited to a paper material and may also be made of other materials such as plastic.
  • In the advertisement information sharing scenario, the terminal detects a piece of second multimedia information currently being played, such as the "car brand promotion" segment identified by A11. With the terminal identified by A12 in FIG. 6 assembled with the cardboard glasses identified by A13, a frame of video in the "car brand promotion" segment identified by A11 is obtained, and interpolation processing is performed on it, so that the video frame is converted from a plane into the curved surface identified by A16. After the curved-surface video is obtained, a part of the curved-surface video is projected onto the screen in a split-screen mode, as shown by A14 and A15; the final projection result of the split-screen display is as follows.
  • The original advertisement video (the video as played by an ordinary mobile phone) is processed into a spatial curved surface (in a virtual coordinate space, not displayed) by an interpolation algorithm, and a part of the curved surface is then projected into two half screens by a projection algorithm; what is finally displayed on the mobile phone is a two-half-screen effect, and the cardboard glasses are used to enhance the stereoscopic sense, allowing the user to get an immersive experience.
  • In this way, each curved surface image is locally projected to the multimedia information output end, the play interface of the multimedia output end is divided in two, and the locally projected play content is synchronously displayed on each 1/2 play interface, so as to obtain the final projection result of the split-screen display.
  • In this embodiment, the switch from the normal video play mode to the virtual reality play mode is triggered by acquiring the first operation, i.e. a trigger that requires user intervention; this gives the user more choice over at which point in time, and for which segment of video content, the virtual reality play mode is used.
  • In one implementation of the embodiment of the present invention, simulating each planar frame of the second multimedia information as at least one corresponding curved image includes: sequentially acquiring each planar frame of the second multimedia information, and performing an interpolation operation on each planar frame to generate a corresponding curved image.
  • In one implementation, the method further includes: intercepting the middle part or another local part of the second multimedia information from each planar frame and recording it as first to-be-processed information; after the first to-be-processed information in each frame is interpolated into a corresponding curved image, second to-be-processed information is obtained, and this second to-be-processed information is the locally projected play content.
  • In one implementation, dividing the play interface of the multimedia-information output end in two and synchronously displaying the locally projected play content on each 1/2 play interface includes: using a split-screen play mode to divide the play interface into a first interface and a second interface, both of which are 1/2 play interfaces; displaying the locally projected play content on the first interface, which corresponds to one convex lens sheet of the output end; and displaying the same or similar content on the second interface, which corresponds to the other convex lens sheet, as sketched below.
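  • A minimal sketch of this split-screen composition, assuming the locally projected play content is available as an array; the nearest-neighbour resize and the name `compose_split_screen` are illustrative choices rather than the patent's algorithm.

```python
import numpy as np

def compose_split_screen(projected: np.ndarray,
                         screen_w: int, screen_h: int) -> np.ndarray:
    """Place the locally projected play content on both 1/2 play
    interfaces: the left half for one convex lens sheet, the right
    half (same content) for the other lens sheet."""
    half_w = screen_w // 2
    ph, pw = projected.shape[:2]
    # nearest-neighbour resize of the projected content to one half screen
    ys = np.arange(screen_h) * ph // screen_h
    xs = np.arange(half_w) * pw // half_w
    half = projected[ys[:, None], xs[None, :]]
    screen = np.zeros((screen_h, screen_w, projected.shape[2]),
                      dtype=projected.dtype)
    screen[:, :half_w] = half                # first interface (left lens)
    screen[:, screen_w - half_w:] = half     # second interface (right lens)
    return screen

# usage: one frame of projected content -> a full-screen split image
demo = (np.random.rand(360, 640, 3) * 255).astype(np.uint8)
split = compose_split_screen(demo, screen_w=1920, screen_h=1080)
```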
  • In a practical application with an advertisement video as the source, the advertisement video played during normal use is in the ordinary video mode (Fig. 5). After the virtual reality play mode is turned on, for example when the terminal is placed into the cardboard glasses, the advertisement video is converted from a plane to a curved surface, so the enlarged image is split into left and right halves; placing the cardboard glasses in front of both eyes and fitting them to the head eventually produces an effect close to virtual reality.
  • Embodiment 3:
  • An embodiment of the present invention provides an information processing method, where the method includes:
  • Step 301 The terminal initiates a request for acquiring the first multimedia information to the server.
  • the first multimedia information may include video information such as a variety show, a TV show or a movie
  • The terminal may obtain the list of videos it wants to watch, or another video presentation form, through various entrances such as a video application or by logging in to a video website. Whichever entrance is used, after a touch/click operation is performed on the video to be watched, the terminal sends a request to the server, the server returns the requested video information to the terminal, and step 302 is performed.
  • Step 302 The terminal acquires the first multimedia information.
  • Step 303 The terminal initiates a request for acquiring the second multimedia information to the server.
  • the second multimedia information may include advertisement information including shopping, product recommendation, brand promotion
  • the embodiment of the present invention is not limited to the scenario of sharing the advertisement information.
  • In the advertisement-information sharing scenario, the terminal first obtains the list of videos it wants to watch, or another video presentation form, through an entrance such as a video application or a video website; whichever entrance is used, after the video to be watched is touched and clicked, the terminal sends a request to the server and the server returns the requested video information.
  • According to the preset play policy, such as playing the advertisement information before the video information, during its playback, while playback is paused, or after playback ends, when playback of the advertisement information is triggered, the terminal automatically initiates a request to the server without any touch operation by the user, the server returns the requested advertisement information to the terminal, and step 304 is performed.
  • Step 304 The terminal acquires the second multimedia information.
  • Step 305 The terminal separately loads and plays the first multimedia information and the second multimedia information according to a preset play policy.
  • The embodiment of the present invention is not limited to the advertisement-information sharing scenario. In that scenario, the terminal first obtains the list of videos it wants to watch through an entrance such as a video application or a video website; after the video to be watched is touched and clicked, the terminal sends a request to the server and the server returns the requested video information.
  • According to the preset play policy, such as playing the advertisement information before the video information, during its playback, while playback is paused, or after playback ends, when playback of the advertisement information is triggered the terminal automatically sends a request without any touch operation by the user, and the server returns the requested advertisement information to the terminal for loading and playing; when playback of the video information is triggered, the video information is loaded and played. The terminal thus alternately plays the video information and the advertisement information according to the preset policy.
  • Here the first multimedia information (such as video information) is requested first and the second multimedia information (such as advertisement information) afterwards. In practical applications the order is not limited to this: the second multimedia information may be requested first and the first multimedia information afterwards, and instead of two separate requests the first and second multimedia information may also be requested together in a single request.
  • Step 306: when it is detected that the currently played multimedia information is the second multimedia information, determine whether the user is wearing the multimedia-information output end, the output end supporting virtual reality imaging.
  • Step 307: when it is determined that the user is wearing the multimedia-information output end, trigger the virtual reality play mode, switch from the normal video play mode to the virtual reality play mode, and simulate each planar frame of the second multimedia information as at least one corresponding curved image.
  • Here, the virtual reality play mode may be turned on by a mode-conversion application installed on the terminal, or by adding a function to the video application, such as a mode-conversion entry: when the first operation acts on the mode-conversion entry, the virtual reality play mode is triggered.
  • A processing chip may also be built into the terminal to carry out the related processing, such as loading the virtual reality play mode and performing the split-screen projection; for example, when the processing chip determines that the user is wearing the multimedia-information output end, the virtual reality play mode is triggered, as sketched below.
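  • The following sketch shows how the two trigger variants (a first operation from the user, or detection that the output end is worn) could funnel into the same mode switch; the state names and the function are hypothetical, not part of the patent.

```python
from dataclasses import dataclass

NORMAL, VIRTUAL_REALITY = "normal", "virtual_reality"

@dataclass
class PlaybackState:
    current_item: str          # "first_multimedia" (video) or "second_multimedia" (ad)
    mode: str = NORMAL

def maybe_enable_vr(state: PlaybackState,
                    first_operation: bool,
                    headset_worn: bool) -> PlaybackState:
    """Switch from the normal video play mode to the virtual reality
    play mode when the currently played item is the second multimedia
    information (the advertisement) and either the user issued the
    mode-switch operation or the output end (glasses) is detected."""
    if state.current_item == "second_multimedia" and (first_operation or headset_worn):
        state.mode = VIRTUAL_REALITY
    return state

# e.g. Embodiment 3: no user intervention, the worn glasses are detected
state = maybe_enable_vr(PlaybackState("second_multimedia"),
                        first_operation=False, headset_worn=True)
assert state.mode == VIRTUAL_REALITY
```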
  • Here, the normal video play mode refers to the conventional play mode, in which each video frame is rendered as a flat image; the virtual reality play mode of the embodiment of the present invention converts each video frame from a plane to a curved surface, which gives a stereoscopic, intuitive effect.
  • The embodiment of the present invention applies the image enlargement processing to the second multimedia information (such as advertisement information); by the same principle, the first multimedia information (such as video information) can also be subjected to image enlargement processing.
  • As shown in Fig. 5, the terminal detects that the piece of second multimedia information currently being played is a piece of advertisement information in an advertisement-information sharing scenario, such as the "car brand promotion" video identified by A11.
  • The terminal is first assembled with the multimedia-information output end (for example, cardboard glasses; the material is not limited to paper and may also be plastic or another material), as shown in Fig. 6, and the terminal serves as the information input source: the locally projected play content obtained after the virtual reality play mode processing is projected onto the phone screen in half-screen mode, so that a half-screen effect is displayed on each half of the screen.
  • The cardboard glasses include a bracket with a nose support and two convex lens sheets mounted on the bracket; the terminal is assembled with its screen facing the two convex lens sheets. Fig. 7 shows the assembled device as used; the user places it in front of the eyes to watch a video, such as the "car brand promotion" video of Fig. 5.
  • Because each video frame is converted from a plane to a curved surface, the imaging area of each planar frame is contained within the display area of the terminal, whereas the imaging area of the at least one curved image is larger than the display area of the terminal; Fig. 8 shows the effect of converting the plane to a curved-surface display, from which it can be seen that the image is enlarged after the conversion.
  • Step 308: when each curved image of the at least one curved image is locally projected onto the multimedia-information output end for imaging, the play interface of the output end is divided in two, and the locally projected play content is displayed synchronously on each 1/2 play interface.
  • Here, the multimedia-information output end may be a pair of cardboard glasses; the material is not limited to paper and may also be plastic or another material. The schematic views of the glasses before and after assembly with the terminal are shown in Figs. 6-7.
  • The terminal identified by A12 in Fig. 6 is assembled with the cardboard glasses identified by A13. A frame of video from the "car brand promotion" segment identified by A11 is acquired and, through interpolation processing, converted from a plane onto the curved surface identified by A16. After the curved-surface video is obtained, part of it is projected onto the screen in split-screen mode; A14 and A15 show the final projection result of the split-screen display. That is, the original advertisement video is processed by an interpolation algorithm into a spatial curved surface (a virtual coordinate space that is not itself displayed), a projection algorithm projects part of that surface into two half screens, and what is finally shown on the phone is a two-half-screen effect, while the cardboard glasses enhance the stereoscopic impression and give the user an immersive experience.
  • It can be seen that, when each curved image is locally projected onto the multimedia-information output end for imaging, the play interface of the output end is divided in two and the locally projected play content is displayed synchronously on each 1/2 play interface, yielding the final projection result of the split-screen display.
  • In this embodiment, the switch from the normal video play mode to the virtual reality play mode is triggered automatically when it is detected that the user is wearing the multimedia-information output end (such as the cardboard glasses); no user intervention is required, which frees the user's hands, provides a better viewing experience, and achieves an immersive viewing effect.
  • In one implementation, simulating each planar frame of the second multimedia information as at least one corresponding curved image includes: sequentially acquiring each planar frame of the second multimedia information, and performing an interpolation operation on each planar frame to generate a corresponding curved image.
  • In one implementation, the method further includes: intercepting the middle part or another local part of the second multimedia information from each planar frame and recording it as first to-be-processed information; after the first to-be-processed information in each frame is interpolated into a corresponding curved image, second to-be-processed information is obtained, and this second to-be-processed information is the locally projected play content.
  • In one implementation, dividing the play interface of the multimedia-information output end in two and synchronously displaying the locally projected play content on each 1/2 play interface includes: using a split-screen play mode to divide the play interface into a first interface and a second interface, both of which are 1/2 play interfaces; displaying the locally projected play content on the first interface, which corresponds to one convex lens sheet of the output end; and displaying the same or similar content on the second interface, which corresponds to the other convex lens sheet.
  • In a practical application with an advertisement video as the source, the advertisement video played during normal use is in the ordinary video mode (Fig. 5). After the virtual reality play mode is turned on, for example when the terminal is placed into the cardboard glasses, the advertisement video is converted from a plane to a curved surface, so the enlarged image is split into left and right halves; placing the cardboard glasses in front of both eyes and fitting them to the head eventually produces an effect close to virtual reality.
  • Embodiment 4:
  • An information processing system of the embodiment of the present invention includes a terminal 31, a video server 32, and an advertisement server 33.
  • The terminal 31 includes: a first requesting unit 311, configured to initiate a request for acquiring the first multimedia information; a first obtaining unit 312, configured to acquire the first multimedia information; a second requesting unit 313, configured to initiate a request for acquiring the second multimedia information; and a second obtaining unit 314, configured to acquire the second multimedia information.
  • The playing unit 315 is configured to separately load and play the first multimedia information and the second multimedia information according to a preset play policy.
  • The analog conversion unit 316 is configured to: when it is detected that the currently played multimedia information is the second multimedia information, turn on the virtual reality play mode and simulate each planar frame of the second multimedia information as at least one corresponding curved image, where the imaging area of each planar frame is contained within the display area of the terminal and the imaging area of the at least one curved image is larger than the display area of the terminal.
  • The projection unit 317 is configured to: when each curved image of the at least one curved image is locally projected onto the multimedia-information output end for imaging, divide the play interface of the output end in two and display the locally projected play content synchronously on each 1/2 play interface.
  • The video server 32 is configured to respond to the request for the first multimedia information and feed the first multimedia information (such as video information) back to the terminal 31; the advertisement server 33 is configured to respond to the request for the second multimedia information and feed the second multimedia information (such as advertisement information) back to the terminal 31. A sketch of how these units could be wired together is given below.
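  • A minimal sketch, under assumed names, of how the numbered units of Embodiment 4 could be wired to the two servers; the class and method names and the returned strings are placeholders for illustration, not the patent's interfaces.

```python
class VideoServer:                 # 32: answers requests for the first multimedia information
    def fetch(self) -> str:
        return "first multimedia information (video)"

class AdServer:                    # 33: answers requests for the second multimedia information
    def fetch(self) -> str:
        return "second multimedia information (advertisement)"

class Terminal:                    # 31: holds the units 311-317
    def __init__(self, video_server: VideoServer, ad_server: AdServer):
        self.video_server, self.ad_server = video_server, ad_server

    def request_and_obtain(self):
        # 311/312 and 313/314: first/second requesting and obtaining units
        first = self.video_server.fetch()
        second = self.ad_server.fetch()
        return first, second

    def play(self, first: str, second: str):
        # 315: playing unit - load both items according to the preset policy
        for item in (first, second):
            print("playing:", item)

    def convert_and_project(self, frame):
        # 316: analog conversion unit - planar frame -> curved image (placeholder)
        curved = frame
        # 317: projection unit - same locally projected content on both halves
        return curved, curved

terminal = Terminal(VideoServer(), AdServer())
terminal.play(*terminal.request_and_obtain())
```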
  • the first multimedia information may include video information such as a variety show, a TV show or a movie
  • The terminal may obtain the list of videos it wants to watch, or another video presentation form, through various entrances such as a video application or by logging in to a video website; whichever entrance is used, after a touch/click operation is performed on the video to be watched, the terminal sends a request to the server and the server returns the requested video information to the terminal.
  • In the advertisement-information sharing scenario, the second multimedia information may include advertisement information such as shopping, product recommendation and brand promotion; the embodiment of the present invention is, of course, not limited to this scenario.
  • In that scenario, the terminal first obtains the list of videos it wants to watch through an entrance such as a video application or a video website; after the video to be watched is touched and clicked, the terminal sends a request to the server and the server returns the requested video information to the terminal.
  • According to the preset play policy, such as playing the advertisement information before the video information, during its playback, while playback is paused, or after playback ends, when playback of the advertisement information is triggered the terminal automatically sends a request to the server without any touch operation by the user; the server returns the requested advertisement information to the terminal, which loads and plays it. When playback of the video information is triggered, the video information is loaded and played, so that the video information and the advertisement information are played alternately according to the preset policy.
  • Here the first multimedia information (such as video information) may be requested first and the second multimedia information (such as advertisement information) afterwards; the order is not limited to this, and the second multimedia information may be requested first. The two may also be requested together in a single request rather than in two separate requests.
  • The embodiment of the present invention applies the image enlargement processing to the second multimedia information (such as advertisement information); by the same principle, the first multimedia information (such as video information) can also be subjected to image enlargement processing.
  • As shown in Fig. 5, the terminal detects that the piece of second multimedia information currently being played is a piece of advertisement information in an advertisement-information sharing scenario, such as the "car brand promotion" video identified by A11.
  • The terminal is first assembled with the multimedia-information output end (for example, cardboard glasses; the material is not limited to paper and may also be plastic or another material), as shown in Fig. 6, and the terminal serves as the information input source: the locally projected play content obtained after the virtual reality play mode processing is projected onto the phone screen in half-screen mode, so that a half-screen effect is displayed on each half of the screen.
  • The cardboard glasses include a bracket with a nose support and two convex lens sheets mounted on the bracket; the terminal is assembled with its screen facing the two convex lens sheets. Fig. 7 shows the assembled device as used; the user places it in front of the eyes to watch a video, such as the "car brand promotion" video of Fig. 5.
  • Because each video frame is converted from a plane to a curved surface, the imaging area of each planar frame is contained within the display area of the terminal, whereas the imaging area of the at least one curved image is larger than the display area of the terminal; Fig. 8 shows the effect of converting the plane to a curved-surface display, from which it can be seen that the image is enlarged after the conversion.
  • Here, the multimedia-information output end may be a pair of cardboard glasses; the material is not limited to paper and may also be plastic or another material. The schematic views of the glasses before and after assembly with the terminal are shown in Figs. 6-7.
  • The terminal identified by A12 in Fig. 6 is assembled with the cardboard glasses identified by A13. A frame of video from the "car brand promotion" segment identified by A11 is acquired and, through interpolation processing, converted from a plane onto the curved surface identified by A16. After the curved-surface video is obtained, part of it is projected onto the screen in split-screen mode; A14 and A15 show the final projection result of the split-screen display. That is, the original advertisement video is processed by an interpolation algorithm into a spatial curved surface (a virtual coordinate space that is not itself displayed), a projection algorithm projects part of that surface into two half screens, and what is finally shown on the phone is a two-half-screen effect, while the cardboard glasses enhance the stereoscopic impression and give the user an immersive experience.
  • It can be seen that, when each curved image is locally projected onto the multimedia-information output end for imaging, the play interface of the output end is divided in two and the locally projected play content is displayed synchronously on each 1/2 play interface, yielding the final projection result of the split-screen display.
  • the first multimedia information is video information
  • the second multimedia information is advertisement information
  • The analog conversion unit is further configured to: when it is detected that the currently played multimedia information is the second multimedia information, obtain a first operation used to trigger the virtual reality play mode, and, in response to the first operation, switch from the normal video play mode to the virtual reality play mode.
  • The analog conversion unit is further configured to: when it is detected that the currently played multimedia information is the second multimedia information, determine whether the user is wearing the multimedia-information output end, which supports virtual reality imaging; when it is determined that the user is wearing the output end, trigger the virtual reality play mode and switch from the normal video play mode to the virtual reality play mode.
  • The analog conversion unit is further configured to: sequentially acquire each planar frame of the second multimedia information, and simulate each planar frame into a corresponding curved image by an interpolation operation.
  • The terminal further includes an intercepting unit configured to intercept the middle part or another local part of the second multimedia information from each planar frame and record it as first to-be-processed information, so that after the first to-be-processed information in each frame is simulated into a corresponding curved image, second to-be-processed information is obtained; the second to-be-processed information is the locally projected play content.
  • The projection unit is further configured to divide the play interface of the multimedia-information output end in two using a split-screen play mode, recording the halves as a first interface and a second interface, both of which are 1/2 play interfaces.
  • The locally projected play content is displayed on the first interface, which corresponds to one convex lens sheet of the output end; the second interface displays the same or similar content and corresponds to the other convex lens sheet of the output end.
  • The embodiment of the present invention further provides an information processing system, which includes the terminal of any of the foregoing aspects and a multimedia-information output end assembled integrally with the terminal.
  • The terminal serves as the information input source and is used to project, in a split-screen play mode, the locally projected play content obtained after the second multimedia information played by the terminal is processed in the virtual reality play mode onto the two convex lens sheets for imaging.
  • The multimedia-information output end includes a bracket with a nose support and two convex lens sheets mounted on the bracket; the screen of the terminal is assembled facing the two convex lens sheets.
  • The above terminal may be an electronic device such as a PC, or a portable electronic device such as a PAD, a tablet computer or a laptop computer, or an intelligent mobile terminal such as a mobile phone, and is not limited to the examples given here.
  • The server may be built as a cluster system and either integrated into one electronic device or split into separate devices by unit function; both the terminal and the server include at least a database for storing data and a processor for data processing, or a storage medium set in the server or set separately.
  • When performing processing, the processor for data processing may be implemented by a microprocessor, a central processing unit (CPU), a digital signal processor (DSP, Digital Signal Processor) or a programmable logic array (FPGA, Field-Programmable Gate Array).
  • The storage medium contains operation instructions, which may be computer-executable code; the operation instructions are used to implement the steps of the information processing method of the foregoing embodiments of the present invention.
  • Video advertisements can be divided into traditional video advertisements and mobile video advertisements; traditional video advertisements are placed and served inside the video itself.
  • Mobile video advertisements are divided into traditional patch advertisements and in-app video advertisements, that is, advertisements inserted into video played on mobile devices; patch advertisements refer to advertisements played before, during or after the video segments, as well as background advertisements and the like.
  • Patch advertising can be regarded as an extension of TV advertising, and the operational logic behind it is still the principle of the secondary sale of media.
  • A VR video advertisement refers to video content, produced on the basis of virtual reality technology, that can be viewed on a mobile device through a head-mounted display; the user obtains an immersive experience after putting on the glasses.
  • Current mobile video advertisements are played full screen on the mobile device and viewed with the naked eye; because the screen of the mobile device is small, the perceived quality is poor, the user's attention is limited, and the playback picture is restricted to the screen size.
  • The embodiment of the present invention therefore adopts virtual reality technology: the user wears a pair of glasses to obtain a stereoscopic, immersive look and feel, and an existing advertisement video can be converted directly into an effect close to virtual reality by a processing module that implements the virtual reality playing function.
  • The processing module may take the form of a dedicated application, may be combined with a video application and added as a new function of that application, or may exist in the form of a chip.
  • The processing module converts each video frame from a plane to a curved surface by interpolation processing, and projects the image to be displayed onto the screen in split-screen form by projection processing.
  • The scheme uses a smart mobile device such as an iPhone or an Android phone together with a pair of cardboard-style glasses (not limited to cardboard; they may also be plastic or another material), and takes an ordinary advertisement video as the example video source.
  • The advertisement video played during normal use is in the ordinary video mode (as shown in Fig. 5).
  • After switching to the virtual reality mode, for example by putting the iPhone or Android phone into the cardboard glasses (as shown in Fig. 6), the advertisement video is split into left and right halves; placing the cardboard glasses in front of the eyes and fitting them to the head produces an effect close to virtual reality (as shown in Fig. 7).
  • The above processing module simulates a curved surface from each video frame by interpolation, as shown by the shaded area in Fig. 8.
  • The patch advertisements can be played continuously in the form of a queue, realizing the left-right split-screen display; the left and right split screens stay synchronized with the ordinary advertisement video with no time difference, and the curved-surface video obtained is finally projected onto the screen to form the video actually viewed, so that the user can look around by changing orientation.
  • The left-right split-screen state may be triggered in two ways: 1) recognizing the state of the device and switching to the left-right split screen automatically when the device is detected to be placed in the landscape (horizontal-screen) state; 2) tapping the on-screen split-screen button to switch to the left-right split screen, as sketched below.
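  • A small sketch of the two trigger schemes, assuming the device reports an orientation string and a button state; the names are illustrative only.

```python
def should_split_screen(orientation: str, split_button_pressed: bool) -> bool:
    """Scheme 1: recognise the device state and switch automatically when
    the device is placed in landscape; scheme 2: switch when the on-screen
    split-screen button is tapped."""
    return orientation == "landscape" or split_button_pressed

# examples of the two schemes
assert should_split_screen("landscape", False)    # automatic, by device state
assert should_split_screen("portrait", True)      # manual, by button
assert not should_split_screen("portrait", False)
```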
  • The left-right split-screen state realized by the processing module is displayed as shown at position b in Fig. 11, divided into a first screen and a second screen, and the left and right split screens are played synchronously.
  • The content displayed on the left split screen is the middle part of the source video (the boxed part at position a in Fig. 11), and the content displayed on the right split screen is likewise the middle part of the source video.
  • The content displayed on the first screen and the second screen may both be the middle part intercepted from the source video frame (the boxed part); in this example they are identical, although, because of the interception processing and the precision required, the two may also differ slightly, as long as the two images are close or similar.
  • The iPhone or Android phone senses the movement of the device, and the processing module obtains a shaking value; according to that value the processing module slides the intercepted middle part of the source video (the boxed part) left or right across the source video, as shown in Fig. 11, giving the user the feeling of viewing the advertisement from different perspectives.
  • The region that needs to be displayed is then processed into a curved video frame by interpolation and projected onto the screen to form the displayed video, as sketched below.
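  • The following sketch, assuming a scalar shaking value from the motion sensors and an illustrative gain, shows how the intercepted middle region could slide across the source frame before the plane-to-surface interpolation; the names and constants are not from the patent.

```python
import numpy as np

def pan_crop(frame: np.ndarray, shake_value: float,
             crop_frac: float = 0.6, gain: float = 200.0) -> np.ndarray:
    """Slide the middle region of the source frame left or right
    according to the shake value reported by the motion sensors,
    then return that region for the plane-to-surface interpolation."""
    h, w = frame.shape[:2]
    crop_w = int(w * crop_frac)
    center = w // 2 + int(shake_value * gain)         # move with the head
    left = int(np.clip(center - crop_w // 2, 0, w - crop_w))
    return frame[:, left:left + crop_w]

frame = np.zeros((360, 640, 3), dtype=np.uint8)
region = pan_crop(frame, shake_value=0.3)   # user turned slightly to the right
print(region.shape)                          # (360, 384, 3)
```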
  • The projection processing is illustrated in Fig. 9: pixels that overlap when projected are averaged. It should be pointed out that after the interpolation processing the display area is enlarged to the area indicated by A16; in practical applications the area indicated by A17 must be selected from the area indicated by A16 as the local area.
  • That local area is then projected onto the screen in the split-screen state shown in Fig. 9, as sketched below.
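  • A sketch of the projection step with averaging of overlapping pixels; the forward mapping, the rounding to integer screen positions and the example selection of the local area are assumptions used for illustration only.

```python
import numpy as np

def project_with_averaging(src: np.ndarray,
                           tgt_x: np.ndarray, tgt_y: np.ndarray,
                           out_h: int, out_w: int) -> np.ndarray:
    """Forward-project every source pixel to its target screen position
    (tgt_x/tgt_y have the same shape as src's first two axes) and average
    the pixels that overlap on the same target position."""
    acc = np.zeros((out_h, out_w, src.shape[2]), dtype=np.float64)
    cnt = np.zeros((out_h, out_w, 1), dtype=np.float64)
    xs = np.clip(np.round(tgt_x).astype(int).ravel(), 0, out_w - 1)
    ys = np.clip(np.round(tgt_y).astype(int).ravel(), 0, out_h - 1)
    np.add.at(acc, (ys, xs), src.reshape(-1, src.shape[2]).astype(np.float64))
    np.add.at(cnt, (ys, xs), 1.0)
    return (acc / np.maximum(cnt, 1.0)).astype(src.dtype)

# select the local area ("A17") from the enlarged curved image ("A16"),
# then project it into a smaller screen region; overlapping pixels average
curved = (np.random.rand(400, 800, 3) * 255).astype(np.uint8)   # stands in for A16
local = curved[:, 200:600]                                       # stands in for A17
h, w = local.shape[:2]
gy, gx = np.mgrid[0:h, 0:w]
img = project_with_averaging(local, gx * 0.5, gy * 0.5, out_h=h // 2, out_w=w // 2)
```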
  • With this scheme, the simulated virtual-reality effect is applied to the display of pre-roll, mid-roll and post-roll advertisement information without the advertiser having to re-produce the video advertisement; that is, the advertiser does not need to shoot 360-degree video in advance, which reduces production cost, and an existing ordinary patch advertisement can be converted automatically by the terminal's interpolation processing into a result that simulates virtual reality.
  • For the video feature itself, whether or not it is 360-degree video with a virtual reality effect, the terminal decides according to the source of the video whether to invoke the processing module that implements the virtual reality playing function: for video originating from the advertiser (that is, a patch, which may be a pre-roll, mid-roll or post-roll patch), the processing module is invoked, while other video (such as the body video of a TV series) does not invoke the processing module.
  • The processing module converts ordinary video into virtual-reality-style video: it uses an interpolation algorithm from graphic image processing to convert the ordinary video onto a curved surface, and a projection algorithm from graphic image processing to locally map the curved video onto each half of the screen for display. A sketch of the source-based dispatch follows.
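  • A minimal sketch of the source-based dispatch, with hypothetical source tags; only advertiser patch videos are routed through the processing module.

```python
AD_SOURCES = {"pre_roll", "mid_roll", "post_roll"}   # advertiser patch positions

def choose_pipeline(video_source: str) -> str:
    """Return the playback pipeline for a video according to its source:
    advertiser patch videos go through the virtual-reality processing
    module (interpolation + split-screen projection); feature videos
    (e.g. the body of a TV series) are played back unmodified."""
    if video_source in AD_SOURCES:
        return "processing_module"     # plane -> curved surface -> half screens
    return "normal_playback"

assert choose_pipeline("pre_roll") == "processing_module"
assert choose_pipeline("feature_film") == "normal_playback"
```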
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are configured to execute the information processing method described above.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division.
  • In practice there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection through interfaces, devices or units, and may be electrical, mechanical or of another form.
  • the units described above as separate components may or may not be physically separated, and the components displayed as the unit may or may not be physical units, that is, may be located in one place or distributed to multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit;
  • the unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • All or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
  • Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium.
  • The technical solution of the embodiments of the present invention may, in essence, be embodied in the form of a software product: the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disk.
  • In summary, with the embodiments of the present invention the first multimedia information and the second multimedia information are loaded and played separately, and when it is detected that the currently played multimedia information is the second multimedia information, the virtual reality play mode is turned on and each planar frame of the second multimedia information is simulated as at least one corresponding curved image.
  • The imaging area of each planar frame is contained within the display area of the terminal, while the imaging area of the at least one curved image is larger than the display area of the terminal, which achieves the effect of image enlargement, so that the presentation of the multimedia information is not limited by the screen size of the terminal.

Abstract

The embodiments of the present invention disclose an information processing method, a terminal and a computer storage medium. The method includes: initiating a request for acquiring first multimedia information; acquiring the first multimedia information; initiating a request for acquiring second multimedia information; acquiring the second multimedia information; separately loading and playing the first multimedia information and the second multimedia information according to a preset play policy; when it is detected that the currently played multimedia information is the second multimedia information, turning on a virtual reality play mode and simulating each planar frame image of the second multimedia information as at least one corresponding curved image; and, when each of the at least one curved image is locally projected onto a multimedia-information output end for imaging, dividing the play interface of the output end in two and synchronously displaying the locally projected play content on each 1/2 play interface.

Description

一种信息处理方法及终端、计算机存储介质 技术领域
本发明涉及通讯技术,尤其涉及一种信息处理方法及终端、计算机存储介质。
背景技术
随着互联网技术的发展,智能终端的大量普及,信息分享越来越便捷,比如,用户通过社交网站或者社交应用就可以便携地进行信息分享,又如,用户在通过视频应用或登录视频网站时除了可以收看到第一多媒体信息(如视频本身),还可以收到第二多媒体信息(如广告信息),该第二多媒体信息(如广告信息)可以在第一多媒体信息(如视频本身)播放之前、第一多媒体信息(如视频本身)播放过程中、第一多媒体信息(如视频本身)播放暂停间隙或第一多媒体信息(如视频本身)播放结束后呈现给用户,为了更好的进行信息分享,需要用户看到更立体直观的第二多媒体信息,即:使得第二多媒体信息的播放图像更大,画质更清晰。然而,当用户使用终端比如手机来看第二多媒体信息时,会受限于手机屏幕大小的限制,如何使多媒体信息的呈现不受限于终端屏幕大小的限制,是要解决的技术问题,相关技术中,并未存在解决该问题的方案。
发明内容
有鉴于此,本发明实施例希望提供一种信息处理方法及终端、计算机存储介质,至少解决了现有技术存在的问题。
本发明实施例的技术方案是这样实现的:
本发明实施例的一种信息处理方法,所述方法包括:
发起获取第一多媒体信息的请求;
获取到第一多媒体信息;
发起获取第二多媒体信息的请求;
获取到第二多媒体信息;
根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放;
检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像,所述每一帧平面图像的待成像区域包含在终端的显示区域中,所述至少一个曲面图像的待成像区域大于终端的显示区域;
将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
本发明实施例的一种终端,所述终端包括:
第一请求单元,配置为发起获取第一多媒体信息的请求;
第一获取单元,配置为获取到第一多媒体信息;
第二请求单元,配置为发起获取第二多媒体信息的请求;
第二获取单元,配置为获取到第二多媒体信息;
播放单元,配置为根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放;
模拟转换单元,配置为检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像,所述每一帧平面图像的待成像区域包含在终端的显示区域中,所述至少一个曲面图像的待成像区域大于终端的显示区域;
投影单元,配置为将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
所述第一请求单元、所述第一获取单元、所述第二请求单元、所述第二获取单元、所述播放单元、所述模拟转换单元、所述投影单元在执行处理时,可以采用中央处理器(CPU,Central Processing Unit)、数字信号处理器(DSP,Digital Singnal Processor)或可编程逻辑阵列(FPGA,Field-Programmable Gate Array)实现。
本发明实施例还提供一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,该计算机可执行指令配置为执行上述的信息处理方法。
本发明实施例的信息处理方法包括:发起获取第一多媒体信息的请求;获取到第一多媒体信息;发起获取第二多媒体信息的请求;获取到第二多媒体信息;根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放;检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像,所述每一帧平面图像的待成像区域包含在终端的显示区域中,所述至少一个曲面图像的待成像区域大于终端的显示区域;将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
采用本发明实施例,获取到第一多媒体信息和第二多媒体信息后,分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放,检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图 像,所述每一帧平面图像的待成像区域包含在终端的显示区域中,所述至少一个曲面图像的待成像区域大于终端的显示区域,以达到画面放大的效果,使多媒体信息的呈现不受限于终端屏幕大小的限制,将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容,从而使用户看到更立体直观的第二多媒体信息,第二多媒体信息的播放图像更大,画质更清晰,使用户可以沉浸在所播放内容中。
附图说明
图1为实现本发明各个实施例的移动终端一个可选的硬件结构示意图;
图2为如图1所示的移动终端的通信系统示意图;
图3为本发明实施例中进行信息交互的各方硬件实体的示意图;
图4为本发明实施例一的方法流程图;
图5为应用本发明实施例的未经虚拟现实技术处理的UI界面示意图;
图6为应用本发明实施例的终端与纸盒眼镜的装配示意图;
图7为应用本发明实施例的终端与纸盒眼镜的装配后示意图;
图8为应用本发明实施例的平面转曲面的示意图;
图9为应用本发明实施例的得到分屏状态的示意图;
图10为实施例四的系统组成结构示意图;
图11为应用本发明实施例的截取视频帧及在每一屏中显示截取部分视频帧的示意图。
具体实施方式
下面结合附图对技术方案的实施作进一步的详细描述。
现在将参考附图描述实现本发明各个实施例的移动终端。在后续的描 述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本发明实施例的说明,其本身并没有特定的意义。因此,"模块"与"部件"可以混合地使用。
在下面的详细说明中,陈述了众多的具体细节,以便彻底理解本发明。不过,对于本领域的普通技术人员来说,显然可在没有这些具体细节的情况下实践本发明。在其他情况下,没有详细说明公开的公知方法、过程、组件、电路和网络,以避免不必要地使实施例的各个方面模糊不清。
另外,本文中尽管多次采用术语“第一”、“第二”等来描述各种元件(或各种多媒体信息或各种应用或各种指令或各种操作)等,不过这些元件(或多媒体信息或应用或指令或操作)不应受这些术语的限制。这些术语只是用于区分一个元件(或多媒体信息或应用或指令或操作)和另一个元件(或多媒体信息或应用或指令或操作)。例如,第一操作可以被称为第二操作,第二操作也可以被称为第一操作,而不脱离本发明的范围,第一操作和第二操作都是操作,只是二者并不是相同的操作而已。
本发明实施例中的步骤并不一定是按照所描述的步骤顺序进行处理,可以按照需求有选择的将步骤打乱重排,或者删除实施例中的步骤,或者增加实施例中的步骤,本发明实施例中的步骤描述只是可选的顺序组合,并不代表本发明实施例的所有步骤顺序组合,实施例中的步骤顺序不能认为是对本发明的限制。
本发明实施例中的术语“和/或”指的是包括相关联的列举项目中的一个或多个的任何和全部的可能组合。还要说明的是:当用在本说明书中时,“包括/包含”指定所陈述的特征、整数、步骤、操作、元件和/或组件的存在,但是不排除一个或多个其他特征、整数、步骤、操作、元件和/或组件和/或它们的组群的存在或添加。
本发明实施例的智能终端(如移动终端)可以以各种形式来实施。例 如,本发明实施例中描述的移动终端可以包括诸如移动电话、智能电话、笔记本电脑、数字广播接收器、个人数字助理(PDA,Personal Digital Assistant)、平板电脑(PAD)、便携式多媒体播放器(PMP,Portable Media Player)、导航装置等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。下面,假设终端是移动终端。然而,本领域技术人员将理解的是,除了特别用于移动目的的元件之外,根据本发明的实施方式的构造也能够应用于固定类型的终端。
图1为实现本发明各个实施例的移动终端一个可选的硬件结构示意图。
移动终端100可以包括通信单元110、音频/视频(A/V)输入单元120、用户输入单元130、第一请求单元140、第一获取单元141、第二请求单元142、第二获取单元143、播放单元144、模拟转换单元145、投影单元146、输出单元150、存储单元160、接口单元170、处理单元180和电源单元190等等。图1示出了具有各种组件的移动终端,但是应理解的是,并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端的元件。
通信单元110通常包括一个或多个组件,其允许移动终端100与无线通信系统或网络之间的无线电通信(如果将移动终端用固定终端代替,也可以通过有线方式进行电通信)。例如,通信单元具体为无线通信单元时可以包括广播接收单元111、移动通信单元112、无线互联网单元113、短程通信单元114和位置信息单元115中的至少一个,这些单元是可选的,根据不同需求可以增删。
广播接收单元111经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和/或地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器或者接收之前生成的广播信号和/或广播相关信息并且将其发送给终端的服务器。广播 信号可以包括TV广播信号、无线电广播信号、数据广播信号等等。而且,广播信号可以进一步包括与TV或无线电广播信号组合的广播信号。广播相关信息也可以经由移动通信网络提供,并且在该情况下,广播相关信息可以由移动通信单元112来接收。广播信号可以以各种形式存在,例如,其可以以数字多媒体广播(DMB,Digital Multimedia Broadcasting)的电子节目指南(EPG,Electronic Program Guide)、数字视频广播手持(DVB-H,Digital Video Broadcasting-Handheld)的电子服务指南(ESG,Electronic Service Guide)等等的形式而存在。广播接收单元111可以通过使用各种类型的广播系统接收信号广播。特别地,广播接收单元111可以通过使用诸如多媒体广播-地面(DMB-T,Digital Multimedia Broadcasting-Terrestrial)、数字多媒体广播-卫星(DMB-S,Digital Multimedia Broadcasting-Satellite)、数字视频广播手持(DVB-H),前向链路媒体(MediaFLO,Media Forward Link Only)的数据广播系统、地面数字广播综合服务(ISDB-T,Integrated Services Digital Broadcasting-Terrestrial)等等的数字广播系统接收数字广播。广播接收单元111可以被构造为适合提供广播信号的各种广播系统以及上述数字广播系统。经由广播接收单元111接收的广播信号和/或广播相关信息可以存储在存储器160(或者其它类型的存储介质)中。
移动通信单元112将无线电信号发送到基站(例如,接入点、节点B等等)、外部终端以及服务器中的至少一个和/或从其接收无线电信号。这样的无线电信号可以包括语音通话信号、视频通话信号、或者根据文本和/或多媒体消息发送和/或接收的各种类型的数据。
无线互联网单元113支持移动终端的无线互联网接入。该单元可以内部或外部地耦接到终端。该单元所涉及的无线互联网接入技术可以包括无线局域网络(Wi-Fi,WLAN,Wireless Local Area Networks)、无线宽带(Wibro)、全球微波互联接入(Wimax)、高速下行链路分组接入(HSDPA, High Speed Downlink Packet Access)等等。
短程通信单元114是用于支持短程通信的单元。短程通信技术的一些示例包括蓝牙、射频识别(RFID,Radio Frequency Identification)、红外数据协会(IrDA,Infrared Data Association)、超宽带(UWB,Ultra Wideband)、紫蜂等等。
位置信息单元115是用于检查或获取移动终端的位置信息的单元。位置信息单元的典型示例是全球定位系统(GPS,Global Positioning System)。根据当前的技术,位置信息单元115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法,从而根据经度、纬度和高度准确地计算三维当前位置信息。当前,用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外,位置信息单元115能够通过实时地连续计算当前位置信息来计算速度信息。
A/V输入单元120用于接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风122,相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储单元160(或其它存储介质)中或者经由通信单元110进行发送,可以根据移动终端的构造提供两个或更多相机121。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由移动通信单元112发送到移动通信基站的格式输出。麦克风122可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、鼠标、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时,可以形成触摸屏。
第一请求单元140,用于发起获取第一多媒体信息的请求;第一获取单元141,用于获取到第一多媒体信息;第二请求单元142,用于发起获取第二多媒体信息的请求;第二获取单元143,用于获取到第二多媒体信息;播放单元144,用于根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放;模拟转换单元145,用于检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像,所述每一帧平面图像的待成像区域包含在终端的显示区域中,所述至少一个曲面图像的待成像区域大于终端的显示区域;投影单元146,用于将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别单元的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别单元可以是存储用于验证用户使用移动终端100的各种信息并且可以包括用户识别单元(UIM,User Identify Module)、客户识别单元(SIM,Subscriber Identity Module)、通用客户识别单元(USIM,Universal Subscriber Identity Module)等等。另外,具有识别单元的装置(下面称为" 识别装置")可以采取智能卡的形式,因此,识别装置可以经由端口或其它连接装置与移动终端100连接。接口单元170可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以用于在移动终端和外部装置之间传输数据。
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端是否准确地安装在底座上的信号。输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出单元152等等。
显示单元151可以显示在移动终端100中处理的信息。例如,移动终端100可以显示相关用户界面(UI,User Interface)或图形用户界面(GUI,Graphical User Interface)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD,Liquid Crystal Display)、薄膜晶体管LCD(TFT-LCD,Thin Film Transistor-LCD)、有机发光二极管(OLED,Organic Light-Emitting Diode)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为透明有机发光二极管(TOLED)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更 多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力以及触摸输入位置和触摸输入面积。
音频输出单元152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将通信单元110接收的或者在存储器160中存储的音频数据转换音频信号并且输出为声音。而且,音频输出单元152可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元152可以包括扬声器、蜂鸣器等等。
存储单元160可以存储由处理单元180执行的处理和控制操作的软件程序等等,或者可以暂时地存储已经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储单元160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。
存储单元160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM,Random Access Memory)、静态随机访问存储器(SRAM,Static Random Access Memory)、只读存储器(ROM,Read Only Memory)、电可擦除可编程只读存储器(EEPROM,Electrically Erasable Programmable Read Only Memory)、可编程只读存储器(PROM,Programmable Read Only Memory)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储单元160的存储功能的网络存储装置协作。
处理单元180通常控制移动终端的总体操作。例如,处理单元180执行与语音通话、数据通信、视频通话等等相关的控制和处理。又如,处理单元180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图 片绘制输入识别为字符或图像。
电源单元190在处理单元180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC,Application Specific Integrated Circuit)、数字信号处理器(DSP,Digital Signal Processing)、数字信号处理装置(DSPD,Digital Signal Processing Device)、可编程逻辑装置(PLD,Programmable Logic Device)、现场可编程门阵列(FPGA,Field Programmable Gate Array)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件单元来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。
至此,已经按照其功能描述了移动终端。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种类型的移动终端中的滑动型移动终端作为示例。因此,本发明能够应用于任何类型的移动终端,并且不限于滑动型移动终端。
如图1中所示的移动终端100可以被构造为利用经由帧或分组发送数据的诸如有线和无线通信系统以及基于卫星的通信系统来操作。
现在将参考图2描述其中根据本发明实施例的移动终端能够操作的通信系统。
这样的通信系统可以使用不同的空中接口和/或物理层。例如,由通信系统使用的空中接口包括例如频分多址(FDMA,Frequency Division  Multiple Access)、时分多址(TDMA,Time Division Multiple Access)、码分多址(CDMA,Code Division Multiple Access)和通用移动通信系统(UMTS,Universal Mobile Telecommunications System)(特别地,长期演进(LTE,Long Term Evolution))、全球移动通信系统(GSM)等等。作为非限制性示例,下面的描述涉及CDMA通信系统,但是这样的教导同样适用于其它类型的系统。
参考图2,CDMA无线通信系统可以包括多个移动终端100、多个基站(BS,Base Station)270、基站控制器(BSC,Base Station Controller)275和移动交换中心(MSC,Mobile Switching Center)280。MSC280被构造为与公共电话交换网络(PSTN,Public Switched Telephone Network)290形成接口。MSC280还被构造为与可以经由回程线路耦接到基站270的BSC275形成接口。回程线路可以根据若干已知的接口中的任一种来构造,所述接口包括例如E1/T1、ATM、IP、PPP、帧中继、HDSL、ADSL或xDSL。将理解的是,如图2中所示的系统可以包括多个BSC275。
每个BS 270可以服务一个或多个分区(或区域),由多向天线或指向特定方向的天线覆盖的每个分区放射状地远离BS 270。或者,每个分区可以由用于分集接收的两个或更多天线覆盖。每个BS 270可以被构造为支持多个频率分配,并且每个频率分配具有特定频谱(例如,1.25MHz,5MHz等等)。
分区与频率分配的交叉可以被称为CDMA信道。BS 270也可以被称为基站收发器子系统(BTS,Base Transceiver Station)或者其它等效术语。在这样的情况下,术语“基站”可以用于笼统地表示单个BSC275和至少一个BS 270。基站也可以被称为“蜂窝站”。或者,特定BS 270的各分区可以被称为多个蜂窝站。
如图2中所示,广播发射器(BT,Broadcast Transmitter)295将广播信 号发送给在系统内操作的移动终端100。如图1中所示的广播接收单元111被设置在移动终端100处以接收由BT295发送的广播信号。在图2中,示出了几个卫星300,例如可以采用全球定位系统(GPS)卫星300。卫星300帮助定位多个移动终端100中的至少一个。
在图2中,描绘了多个卫星300,但是理解的是,可以利用任何数目的卫星获得有用的定位信息。如图1中所示的位置信息单元115通常被构造为与卫星300配合以获得想要的定位信息。替代GPS跟踪技术或者在GPS跟踪技术之外,可以使用可以跟踪移动终端的位置的其它技术。另外,至少一个GPS卫星300可以选择性地或者额外地处理卫星DMB传输。
作为无线通信系统的一个典型操作,BS 270接收来自各种移动终端100的反向链路信号。移动终端100通常参与通话、消息收发和其它类型的通信。特定基站270接收的每个反向链路信号被在特定BS 270内进行处理。获得的数据被转发给相关的BSC275。BSC提供通话资源分配和包括BS 270之间的软切换过程的协调的移动管理功能。BSC275还将接收到的数据路由到MSC280,其提供用于与PSTN290形成接口的额外的路由服务。类似地,PSTN290与MSC280形成接口,MSC与BSC275形成接口,并且BSC275相应地控制BS 270以将正向链路信号发送到移动终端100。
移动终端中通信单元110的移动通信单元112基于移动终端内置的接入移动通信网络(如2G/3G/4G等移动通信网络)的必要数据(包括用户识别信息和鉴权信息)接入移动通信网络为移动终端用户的网页浏览、网络多媒体播放等业务传输移动通信数据(包括上行的移动通信数据和下行的移动通信数据)。
通信单元110的无线互联网单元113通过运行无线热点的相关协议功能而实现无线热点的功能,无线热点支持多个移动终端(移动终端之外的任意移动终端)接入,通过复用移动通信单元112与移动通信网络之间的 移动通信连接为移动终端用户的网页浏览、网络多媒体播放等业务传输移动通信数据(包括上行的移动通信数据和下行的移动通信数据),由于移动终端实质上是复用移动终端与通信网络之间的移动通信连接传输移动通信数据的,因此移动终端消耗的移动通信数据的流量由通信网络侧的计费实体计入移动终端的通信资费,从而消耗移动终端签约使用的通信资费中包括的移动通信数据的数据流量。
图3为本发明实施例中进行信息交互的各方硬件实体的示意图,图3中包括:服务器11、终端设备21-24,终端设备21-24通过有线网络或者无线网络与服务器进行信息交互,终端设备包括手机、台式机、PC机、一体机等类型。其中,终端设备中安装有应用(如视频应用,社交应用、地图导航应用,高铁线路应用等),采用本发明实施例,基于上述图3所示的系统架构,终端向服务器发起获取第一多媒体信息(如包括综艺节目、电视剧或电影在内的视频信息)的请求,终端获取到第一多媒体信息(如包括综艺节目、电视剧或电影在内的视频),终端向服务器发起获取第二多媒体信息(如广告信息)的请求,终端获取到第二多媒体信息(如广告信息),根据预设的播放策略,如在视频信息播放之前、或视频信息播放过程中、或暂停播放视频信息、或视频信息播放结束时再播放广告信息等等策略,按需分别加载所述第一多媒体信息(如包括综艺节目、电视剧或电影在内的视频)和所述第二多媒体信息(如广告信息)并进行播放,检测到当前播放的多媒体信息为所述第二多媒体信息(如广告信息)时,开启虚拟现实播放模式,将所述第二多媒体信息(如广告信息)中的每一帧平面图像模拟为对应的至少一个曲面图像,所述每一帧平面图像的待成像区域包含在终端的显示区域中,所述至少一个曲面图像的待成像区域大于终端的显示区域,以达到屏幕图像放大的技术效果,使得图像的播放不受终端屏幕的限制,将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体 信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
上述图3的例子只是实现本发明实施例的一个系统架构实例,本发明实施例并不限于上述图3所述的系统结构,且除了可以对第二多媒体信息(如广告信息)进行图像放大处理,还可以对第一多媒体信息(如视频信息)进行图像放大处理,不过,由于第二多媒体信息(如广告信息)播放时长较短,因此,并不会占用太多的终端资源,也可以达到图像放大处理的虚实转换。基于上述图1所述的移动终端100硬件结构、图2所述的通信系统及图3所述的系统架构,提出本发明方法各个实施例。
实施例一:
本发明实施例提供了一种信息处理方法,如图4所示,所述方法包括:
步骤101、终端向服务器发起获取第一多媒体信息的请求。
这里,第一多媒体信息可以包括综艺节目、电视剧或电影在内的视频信息,终端可以通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式,无论采用哪一种入口来获取想要收看的视频,对所要收看的视频进行触控点击操作后,终端会向服务器发起请求后,服务器将所请求的视频信息返回给终端,执行步骤102。
步骤102、终端获取到第一多媒体信息。
步骤103、终端向服务器发起获取第二多媒体信息的请求。
这里,在广告信息分享的场景下,第二多媒体信息可以包括购物、商品推荐、品牌推广在内的广告信息,当然,本发明实施例并不限于该广告信息分享的场景。在广告信息分享的场景下,终端先通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式,无论采用哪一种入口来获取想要收看的视频,对所要收看的视频进行触控点击操作后,终端会向服务器发起请求后,服务器将所请求的视频信息返回 给终端。根据所述预设的播放策略,如在视频信息播放之前、或视频信息播放过程中、或暂停播放视频信息、或视频信息播放结束时再播放广告信息等等策略,当触发播放广告信息时,无需用户的触控操作,终端自动向会向服务器发起请求后,服务器将所请求的广告信息返回给终端,执行步骤104。
步骤104、终端获取到第二多媒体信息。
步骤105、终端根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放。
本发明实施例并不限于该广告信息分享的场景。在广告信息分享的场景下,终端先通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式,无论采用哪一种入口来获取想要收看的视频,对所要收看的视频进行触控点击操作后,终端会向服务器发起请求后,服务器将所请求的视频信息返回给终端。根据所述预设的播放策略,如在视频信息播放之前、或视频信息播放过程中、或暂停播放视频信息、或视频信息播放结束时再播放广告信息等等策略,当触发播放广告信息时,无需用户的触控操作,终端自动向会向服务器发起请求后,服务器将所请求的广告信息返回给终端进行加载并播放,按照预设策略,当触发播放视频信息时,加载播放视频信息,从而,根据预设策略交替播放视频信息和广告信息。
这里,是先请求第一多媒体信息(如视频信息),后请求第二多媒体信息(如广告信息),在实际应用中,并不限于这种先后顺序,也可以先请求第二多媒体信息(如广告信息),后请求第一多媒体信息(如视频信息)。也可以无需分2次请求第一多媒体信息(如视频信息)和第二多媒体信息(如广告信息),可以一次性请求第一多媒体信息(如视频信息)和第二多媒体信息(如广告信息)。
步骤106、检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像。
这里,由于第二多媒体信息(如广告信息)的播放时长小于第一多媒体信息(视频信息),则终端的播放工作量会小,不占用过多终端的资源,如终端CPU的处理速度等,因此,本发明实施例考虑将第二多媒体信息(如广告信息)进行图像放大处理,当然,采用本发明实施例的同样原理也可以对第一多媒体信息(如视频信息)进行图像放大处理。
如图5所示为终端检测到当前所播放的一段第二多媒体信息,具体为广告信息分享场景下的一段广告信息,如A11所标识的一段“车的品牌推广”视频。先预先将终端与多媒体信息待输出端(如纸盒眼镜,不限于纸的材质,还可以是塑料等其他材质),如图6所示,以纸盒眼镜为例,将A12所标识的终端与A13所标识的纸盒眼镜进行装配,将终端置于纸盒眼镜内一个承载终端的位置上,所述终端作为信息输入源,用于将对终端播放的第二多媒体信息开启虚拟现实播放模式后处理得到的所局部投射的播放内容,并以半屏模式投射在手机屏幕上,在手机屏幕上显示半屏效果。所述纸盒眼镜包括:具备鼻子支撑部件的支架和两个凸透镜片,所述两个凸透镜片安装于所述支架上,将所述终端的屏幕面朝所述两个凸透镜片装配,将终端与纸盒眼镜装配好,如图7所示为装配好的用户使用效果图,用户将其置于眼前进行视频收看,如收看图5所示的一段“车的品牌推广”视频。
这里,由于将每一帧视频由平面转换为曲面,则所述每一帧平面图像的待成像区域包含在终端的显示区域中,而所述至少一个曲面图像的待成像区域大于终端的显示区域,从而达到屏幕放大的效果,如图8所示为将平面转换到曲面显示的效果图,从中可以看出,转换到曲面后,图像会放 大。
步骤107、将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
这里,多媒体信息待输出端可以为纸盒眼镜,不限于纸的材质,还可以是塑料等其他材质。纸盒眼镜为例,其与终端的装配前和装配后的示意图如图6-7所示。如图5所示为终端检测到当前所播放的一段第二多媒体信息,具体为广告信息分享场景下的一段广告信息,如A11所标识的一段“车的品牌推广”视频,以纸盒眼镜为例,将图6所示的A12所标识的终端与A13所标识的纸盒眼镜进行装配,获取A11所标识的一段“车的品牌推广”视频中的一帧视频,通过插值处理将该视频帧由平面转换到A16所标识的曲面上,得到曲面视频后,将曲面视频的部分以分屏模式投影到屏幕上,如A14和A15所示为分屏显示的最终投射结果,即:将原广告视频(普通手机播放的视频),通过插值算法处理成一个空间曲面(虚拟坐标空间,不是显示的),然后通过投影算法将曲面的一部分投影成两个半屏,在手机上最终显示的是两个半屏效果,而纸盒眼镜是用来增强观看立体感的,使用户能获得浸入式的体验。可见,采用本发明实施例,将曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容,以得到分屏显示的最终投射结果。
实施例二:
本发明实施例提供了一种信息处理方法,所述方法包括:
步骤201、终端向服务器发起获取第一多媒体信息的请求。
这里,第一多媒体信息可以包括综艺节目、电视剧或电影在内的视频信息,终端可以通过视频应用或者登录视频网站等各种入口来获取想收看 的视频列表或其他视频呈现形式,无论采用哪一种入口来获取想要收看的视频,对所要收看的视频进行触控点击操作后,终端会向服务器发起请求后,服务器将所请求的视频信息返回给终端,执行步骤202。
步骤202、终端获取到第一多媒体信息。
步骤203、终端向服务器发起获取第二多媒体信息的请求。
这里,在广告信息分享的场景下,第二多媒体信息可以包括购物、商品推荐、品牌推广在内的广告信息,当然,本发明实施例并不限于该广告信息分享的场景。在广告信息分享的场景下,终端先通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式,无论采用哪一种入口来获取想要收看的视频,对所要收看的视频进行触控点击操作后,终端会向服务器发起请求后,服务器将所请求的视频信息返回给终端。根据所述预设的播放策略,如在视频信息播放之前、或视频信息播放过程中、或暂停播放视频信息、或视频信息播放结束时再播放广告信息等等策略,当触发播放广告信息时,无需用户的触控操作,终端自动向会向服务器发起请求后,服务器将所请求的广告信息返回给终端,执行步骤204。
步骤204、终端获取到第二多媒体信息。
步骤205、终端根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放。
本发明实施例并不限于该广告信息分享的场景。在广告信息分享的场景下,终端先通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式,无论采用哪一种入口来获取想要收看的视频,对所要收看的视频进行触控点击操作后,终端会向服务器发起请求后,服务器将所请求的视频信息返回给终端。根据所述预设的播放策略,如在视频信息播放之前、或视频信息播放过程中、或暂停播放视频信息、或视 频信息播放结束时再播放广告信息等等策略,当触发播放广告信息时,无需用户的触控操作,终端自动向会向服务器发起请求后,服务器将所请求的广告信息返回给终端进行加载并播放,按照预设策略,当触发播放视频信息时,加载播放视频信息,从而,根据预设策略交替播放视频信息和广告信息。
这里,是先请求第一多媒体信息(如视频信息),后请求第二多媒体信息(如广告信息),在实际应用中,并不限于这种先后顺序,也可以先请求第二多媒体信息(如广告信息),后请求第一多媒体信息(如视频信息)。也可以无需分2次请求第一多媒体信息(如视频信息)和第二多媒体信息(如广告信息),可以一次性请求第一多媒体信息(如视频信息)和第二多媒体信息(如广告信息)。
步骤206、检测到当前播放的多媒体信息为所述第二多媒体信息时,获取第一操作,所述第一操作用于触发开启虚拟现实播放模式。
这里,开启虚拟现实播放模式可以是通过终端安装的模式转换应用来处理,或者通过增加视频应用的一个功能,如加入模式转换的入口,当第一操作作用于模式转换的入口后,则触发开启虚拟现实播放模式。还可以是在终端内置一个处理芯片,利用该处理芯片来实现加载虚拟现实播放模式,分屏投影处理等一系列相关处理。
步骤207、响应所述第一操作,由正常视频播放模式切换到虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像。
这里,正常视频播放模式指常规的播放模式,视频的视频帧为平面效果,虚拟现实模式是本发明实施例将视频的视频帧由平面转换为曲面的模式,是一种立体直观的效果。
这里，由于第二多媒体信息（如广告信息）的播放时长小于第一多媒体信息（视频信息），则终端的播放工作量会小，不占用过多终端的资源，如终端CPU的处理速度等，因此，本发明实施例考虑将第二多媒体信息（如广告信息）进行图像放大处理，当然，采用本发明实施例的同样原理也可以对第一多媒体信息（如视频信息）进行图像放大处理。
如图5所示为终端检测到当前所播放的一段第二多媒体信息，具体为广告信息分享场景下的一段广告信息，如A11所标识的一段“车的品牌推广”视频。先预先将终端与多媒体信息待输出端（如纸盒眼镜，不限于纸的材质，还可以是塑料等其他材质）进行装配，如图6所示，以纸盒眼镜为例，将A12所标识的终端与A13所标识的纸盒眼镜进行装配，将终端置于纸盒眼镜内一个承载终端的位置上，所述终端作为信息输入源，用于将对终端播放的第二多媒体信息开启虚拟现实播放模式后处理得到的所局部投射的播放内容以半屏模式投射在手机屏幕上，在手机屏幕上显示半屏效果。所述纸盒眼镜包括：具备鼻子支撑部件的支架和两个凸透镜片，所述两个凸透镜片安装于所述支架上，将所述终端的屏幕面朝所述两个凸透镜片装配。将终端与纸盒眼镜装配好后，如图7所示为装配好的用户使用效果图，用户将其置于眼前进行视频收看，如收看图5所示的一段“车的品牌推广”视频。
这里,由于将每一帧视频由平面转换为曲面,则所述每一帧平面图像的待成像区域包含在终端的显示区域中,而所述至少一个曲面图像的待成像区域大于终端的显示区域,从而达到屏幕放大的效果,如图8所示为将平面转换到曲面显示的效果图,从中可以看出,转换到曲面后,图像会放大。
步骤208、将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
这里,多媒体信息待输出端可以为纸盒眼镜,不限于纸的材质,还可以是塑料等其他材质。纸盒眼镜为例,其与终端的装配前和装配后的示意图如图6-7所示。如图5所示为终端检测到当前所播放的一段第二多媒体信息,具体为广告信息分享场景下的一段广告信息,如A11所标识的一段“车的品牌推广”视频,以纸盒眼镜为例,将图6所示的A12所标识的终端与A13所标识的纸盒眼镜进行装配,获取A11所标识的一段“车的品牌推广”视频中的一帧视频,通过插值处理将该视频帧由平面转换到A16所标识的曲面上,得到曲面视频后,将曲面视频的部分以分屏模式投影到屏幕上,如A14和A15所示为分屏显示的最终投射结果,即:将原广告视频(普通手机播放的视频),通过插值算法处理成一个空间曲面(虚拟坐标空间,不是显示的),然后通过投影算法将曲面的一部分投影成两个半屏,在手机上最终显示的是两个半屏效果,而纸盒眼镜是用来增强观看立体感的,使用户能获得浸入式的体验。可见,采用本发明实施例,将曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容,以得到分屏显示的最终投射结果。
这里，本发明实施例触发正常视频播放模式切换到虚拟现实播放模式，是通过获取第一操作触发开启的，是需要用户干预的触发，可以让用户更灵活地选择在哪个时间点、对哪段视频内容采用虚拟现实播放模式进行播放。
在本发明实施例一实施方式中,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像,包括:依次获取所述第二多媒体信息中的每一帧平面图像;对所述每一帧平面图像采用插值运算模拟成对应的一个曲面图像。
在本发明实施例一实施方式中，所述方法还包括：从所述每一帧平面图像中截取得到第二多媒体信息的中间部分或其他局部内容，记为第一待处理信息，使对所述每一帧平面图像中的第一待处理信息采用插值运算模拟成对应的一个曲面图像后，得到第二待处理信息，所述第二待处理信息为所述所局部投射的播放内容。
在本发明实施例一实施方式中,所述将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容,包括:采用分屏播放模式将所述多媒体待输出端的播放界面一分为二,记为第一界面和第二界面,所述第一界面和第二界面皆为1/2播放界面;在所述第一界面显示所述所局部投射的播放内容,所述第一界面对应所述多媒体待输出端的一个凸透镜片;在所述第二界面显示与所述所局部投射的播放内容相同或近似的内容,所述第二界面对应所述多媒体待输出端的另一个凸透镜片。
在一个实际应用中，当视频源为广告视频时，若未开启虚拟现实播放模式，则正常使用时播放的广告视频为普通视频模式，如图5所示；开启虚拟现实播放模式后，如将终端放入纸盒眼镜中，则广告视频由平面转换为曲面，图像放大后会左右分屏，将纸盒眼镜置于双眼前方，贴合头部观看，最终形成接近虚拟现实的效果。
实施例三:
本发明实施例提供了一种信息处理方法,所述方法包括:
步骤301、终端向服务器发起获取第一多媒体信息的请求。
这里,第一多媒体信息可以包括综艺节目、电视剧或电影在内的视频信息,终端可以通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式,无论采用哪一种入口来获取想要收看的视频,对所要收看的视频进行触控点击操作后,终端会向服务器发起请求后,服务器将所请求的视频信息返回给终端,执行步骤302。
步骤302、终端获取到第一多媒体信息。
步骤303、终端向服务器发起获取第二多媒体信息的请求。
这里，在广告信息分享的场景下，第二多媒体信息可以包括购物、商品推荐、品牌推广在内的广告信息，当然，本发明实施例并不限于该广告信息分享的场景。在广告信息分享的场景下，终端先通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式，无论采用哪一种入口来获取想要收看的视频，对所要收看的视频进行触控点击操作后，终端会向服务器发起请求后，服务器将所请求的视频信息返回给终端。根据所述预设的播放策略，如在视频信息播放之前、或视频信息播放过程中、或暂停播放视频信息、或视频信息播放结束时再播放广告信息等等策略，当触发播放广告信息时，无需用户的触控操作，终端自动会向服务器发起请求后，服务器将所请求的广告信息返回给终端，执行步骤304。
步骤304、终端获取到第二多媒体信息。
步骤305、终端根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放。
本发明实施例并不限于该广告信息分享的场景。在广告信息分享的场景下，终端先通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式，无论采用哪一种入口来获取想要收看的视频，对所要收看的视频进行触控点击操作后，终端会向服务器发起请求后，服务器将所请求的视频信息返回给终端。根据所述预设的播放策略，如在视频信息播放之前、或视频信息播放过程中、或暂停播放视频信息、或视频信息播放结束时再播放广告信息等等策略，当触发播放广告信息时，无需用户的触控操作，终端自动会向服务器发起请求后，服务器将所请求的广告信息返回给终端进行加载并播放，按照预设策略，当触发播放视频信息时，加载播放视频信息，从而，根据预设策略交替播放视频信息和广告信息。
这里，是先请求第一多媒体信息（如视频信息），后请求第二多媒体信息（如广告信息）。在实际应用中，并不限于这种先后顺序，也可以先请求第二多媒体信息（如广告信息），后请求第一多媒体信息（如视频信息）；还可以无需分两次请求，而是一次性请求第一多媒体信息（如视频信息）和第二多媒体信息（如广告信息）。
步骤306、检测到当前播放的多媒体信息为所述第二多媒体信息时,判断用户是否佩戴有所述多媒体信息待输出端,所述多媒体信息待输出端支持虚拟现实成像。
步骤307、当判断出用户佩戴有所述多媒体信息待输出端时,触发开启虚拟现实播放模式,并由正常视频播放模式切换到虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像。
这里,开启虚拟现实播放模式可以是通过终端安装的模式转换应用来处理,或者通过增加视频应用的一个功能,如加入模式转换的入口,当第一操作作用于模式转换的入口后,则触发开启虚拟现实播放模式。还可以是在终端内置一个处理芯片,利用该处理芯片来实现加载虚拟现实播放模式,分屏投影处理等一系列相关处理,如,通过该处理芯片判断出用户佩戴有所述多媒体信息待输出端时,则触发开启虚拟现实播放模式。
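对于通过判断用户是否佩戴多媒体信息待输出端来自动触发模式切换的处理，可参考下面的示意性Python草图（is_headset_mounted为说明而假设的检测接口，例如可基于接近传感器或标识识别实现，具体检测手段不在此限定）：

    def maybe_switch_to_vr(player, is_headset_mounted):
        # player.current表示当前播放的多媒体信息类别，player.mode表示播放模式（字段名均为假设）
        if player.current == "second_multimedia" and is_headset_mounted():
            player.mode = "vr"        # 由正常视频播放模式切换到虚拟现实播放模式
        return player.mode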
这里,正常视频播放模式指常规的播放模式,视频的视频帧为平面效果,虚拟现实模式是本发明实施例将视频的视频帧由平面转换为曲面的模式,是一种立体直观的效果。
这里，由于第二多媒体信息（如广告信息）的播放时长小于第一多媒体信息（视频信息），则终端的播放工作量会小，不占用过多终端的资源，如终端CPU的处理速度等，因此，本发明实施例考虑将第二多媒体信息（如广告信息）进行图像放大处理，当然，采用本发明实施例的同样原理也可以对第一多媒体信息（如视频信息）进行图像放大处理。
如图5所示为终端检测到当前所播放的一段第二多媒体信息，具体为广告信息分享场景下的一段广告信息，如A11所标识的一段“车的品牌推广”视频。先预先将终端与多媒体信息待输出端（如纸盒眼镜，不限于纸的材质，还可以是塑料等其他材质）进行装配，如图6所示，以纸盒眼镜为例，将A12所标识的终端与A13所标识的纸盒眼镜进行装配，将终端置于纸盒眼镜内一个承载终端的位置上，所述终端作为信息输入源，用于将对终端播放的第二多媒体信息开启虚拟现实播放模式后处理得到的所局部投射的播放内容以半屏模式投射在手机屏幕上，在手机屏幕上显示半屏效果。所述纸盒眼镜包括：具备鼻子支撑部件的支架和两个凸透镜片，所述两个凸透镜片安装于所述支架上，将所述终端的屏幕面朝所述两个凸透镜片装配。将终端与纸盒眼镜装配好后，如图7所示为装配好的用户使用效果图，用户将其置于眼前进行视频收看，如收看图5所示的一段“车的品牌推广”视频。
这里,由于将每一帧视频由平面转换为曲面,则所述每一帧平面图像的待成像区域包含在终端的显示区域中,而所述至少一个曲面图像的待成像区域大于终端的显示区域,从而达到屏幕放大的效果,如图8所示为将平面转换到曲面显示的效果图,从中可以看出,转换到曲面后,图像会放大。
步骤308、将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
这里，多媒体信息待输出端可以为纸盒眼镜，不限于纸的材质，还可以是塑料等其他材质。纸盒眼镜为例，其与终端的装配前和装配后的示意图如图6-7所示。如图5所示为终端检测到当前所播放的一段第二多媒体信息，具体为广告信息分享场景下的一段广告信息，如A11所标识的一段“车的品牌推广”视频，以纸盒眼镜为例，将图6所示的A12所标识的终端与A13所标识的纸盒眼镜进行装配，获取A11所标识的一段“车的品牌推广”视频中的一帧视频，通过插值处理将该视频帧由平面转换到A16所标识的曲面上，得到曲面视频后，将曲面视频的部分以分屏模式投影到屏幕上，如A14和A15所示为分屏显示的最终投射结果，即：将原广告视频（普通手机播放的视频），通过插值算法处理成一个空间曲面（虚拟坐标空间，不是显示的），然后通过投影算法将曲面的一部分投影成两个半屏，在手机上最终显示的是两个半屏效果，而纸盒眼镜是用来增强观看立体感的，使用户能获得浸入式的体验。可见，采用本发明实施例，将曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时，将所述多媒体待输出端的播放界面一分为二，在1/2播放界面上同步显示所局部投射的播放内容，以得到分屏显示的最终投射结果。
这里,本发明实施例触发正常视频播放模式切换到虚拟现实播放模式,是通过检测用户佩戴了多媒体待输出端(如纸盒眼镜)自动触发开启的,无需用户干预的触发,可以解放用户的双手,有更好的观影体验,达到沉浸式的观影效果。
在本发明实施例一实施方式中,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像,包括:依次获取所述第二多媒体信息中的每一帧平面图像;对所述每一帧平面图像采用插值运算模拟成对应的一个曲面图像。
在本发明实施例一实施方式中，所述方法还包括：从所述每一帧平面图像中截取得到第二多媒体信息的中间部分或其他局部内容，记为第一待处理信息，使对所述每一帧平面图像中的第一待处理信息采用插值运算模拟成对应的一个曲面图像后，得到第二待处理信息，所述第二待处理信息为所述所局部投射的播放内容。
在本发明实施例一实施方式中,所述将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容,包括:采用分屏播放模式将所述多媒体待输出端的播放界面一分为二,记为第一界面和第二界面,所述第一界面和第二界面皆为1/2播放界面;在所述第一界面显示所述所局部投射的播放内容,所述第一界面对应所述多媒体待输出端的一个凸透镜片;在所述第二界面显示与所述所局部投射的播放内容相同或近似的内容,所述第二界面对应所述多媒体待输出端的另一个凸透镜片。
在一个实际应用中，当视频源为广告视频时，若未开启虚拟现实播放模式，则正常使用时播放的广告视频为普通视频模式，如图5所示；开启虚拟现实播放模式后，如将终端放入纸盒眼镜中，则广告视频由平面转换为曲面，图像放大后会左右分屏，将纸盒眼镜置于双眼前方，贴合头部观看，最终形成接近虚拟现实的效果。
实施例四:
本发明实施例的一种信息处理系统，如图10所示，包括终端31、视频服务器32、广告服务器33；其中，终端31包括：第一请求单元311，配置为发起获取第一多媒体信息的请求；第一获取单元312，配置为获取到第一多媒体信息；第二请求单元313，配置为发起获取第二多媒体信息的请求；第二获取单元314，配置为获取到第二多媒体信息；播放单元315，配置为根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放；模拟转换单元316，配置为检测到当前播放的多媒体信息为所述第二多媒体信息时，开启虚拟现实播放模式，将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像，所述每一帧平面图像的待成像区域包含在终端的显示区域中，所述至少一个曲面图像的待成像区域大于终端的显示区域；及投影单元317，配置为将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时，将所述多媒体待输出端的播放界面一分为二，在1/2播放界面上同步显示所局部投射的播放内容。视频服务器32配置为响应请求第一多媒体信息的请求，反馈第一多媒体信息（如视频信息）给终端31，广告服务器33配置为响应请求第二多媒体信息的请求，反馈第二多媒体信息（如广告信息）给终端31。
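下面用一个极简的Python骨架示意图10中终端31内各单元之间的组织方式（类名、方法名均为说明而假设的对应关系，并非对图10结构的限定）：

    class Terminal:
        def __init__(self, video_server, ad_server):
            self.video_server = video_server      # 视频服务器32
            self.ad_server = ad_server            # 广告服务器33

        def request_first(self):                  # 第一请求单元311 / 第一获取单元312
            return self.video_server.fetch()

        def request_second(self):                 # 第二请求单元313 / 第二获取单元314
            return self.ad_server.fetch()

        def play(self, first, second, strategy):  # 播放单元315：按预设播放策略加载并播放
            ...

        def simulate_convert(self, frame):        # 模拟转换单元316：将平面帧经插值模拟为曲面
            ...

        def project(self, curved):                # 投影单元317：局部投射并一分为二同步显示
            ...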
这里，第一多媒体信息可以包括综艺节目、电视剧或电影在内的视频信息，终端可以通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式，无论采用哪一种入口来获取想要收看的视频，对所要收看的视频进行触控点击操作后，终端会向服务器发起请求后，服务器将所请求的视频信息返回给终端。在广告信息分享的场景下，第二多媒体信息可以包括购物、商品推荐、品牌推广在内的广告信息，当然，本发明实施例并不限于该广告信息分享的场景。在广告信息分享的场景下，终端先通过视频应用或者登录视频网站等各种入口来获取想收看的视频列表或其他视频呈现形式，无论采用哪一种入口来获取想要收看的视频，对所要收看的视频进行触控点击操作后，终端会向服务器发起请求后，服务器将所请求的视频信息返回给终端。根据所述预设的播放策略，如在视频信息播放之前、或视频信息播放过程中、或暂停播放视频信息、或视频信息播放结束时再播放广告信息等等策略，当触发播放广告信息时，无需用户的触控操作，终端自动会向服务器发起请求后，服务器将所请求的广告信息返回给终端。终端在接收到所请求的广告信息后进行加载并播放，按照预设策略，当触发播放视频信息时，加载播放视频信息，从而，根据预设策略交替播放视频信息和广告信息。
可以是先请求第一多媒体信息（如视频信息），后请求第二多媒体信息（如广告信息）。在实际应用中，并不限于这种先后顺序，也可以先请求第二多媒体信息（如广告信息），后请求第一多媒体信息（如视频信息）；还可以无需分两次请求，而是一次性请求第一多媒体信息（如视频信息）和第二多媒体信息（如广告信息）。
由于第二多媒体信息(如广告信息)的播放时长小于第一多媒体信息(视频信息),则终端的播放工作量会小,不占用过多终端的资源,如终端CPU的处理速度等,因此,本发明实施例考虑将第二多媒体信息(如广告信息)进行图像放大处理,当然,采用本发明实施例的同样原理也可以对第一多媒体信息(如视频信息)进行图像放大处理。
如图5所示为终端检测到当前所播放的一段第二多媒体信息，具体为广告信息分享场景下的一段广告信息，如A11所标识的一段“车的品牌推广”视频。先预先将终端与多媒体信息待输出端（如纸盒眼镜，不限于纸的材质，还可以是塑料等其他材质）进行装配，如图6所示，以纸盒眼镜为例，将A12所标识的终端与A13所标识的纸盒眼镜进行装配，将终端置于纸盒眼镜内一个承载终端的位置上，所述终端作为信息输入源，用于将对终端播放的第二多媒体信息开启虚拟现实播放模式后处理得到的所局部投射的播放内容以半屏模式投射在手机屏幕上，在手机屏幕上显示半屏效果。所述纸盒眼镜包括：具备鼻子支撑部件的支架和两个凸透镜片，所述两个凸透镜片安装于所述支架上，将所述终端的屏幕面朝所述两个凸透镜片装配。将终端与纸盒眼镜装配好后，如图7所示为装配好的用户使用效果图，用户将其置于眼前进行视频收看，如收看图5所示的一段“车的品牌推广”视频。
这里,由于将每一帧视频由平面转换为曲面,则所述每一帧平面图像的待成像区域包含在终端的显示区域中,而所述至少一个曲面图像的待成像区域大于终端的显示区域,从而达到屏幕放大的效果,如图8所示为将平面转换到曲面显示的效果图,从中可以看出,转换到曲面后,图像会放大。
这里,多媒体信息待输出端可以为纸盒眼镜,不限于纸的材质,还可以是塑料等其他材质。纸盒眼镜为例,其与终端的装配前和装配后的示意图如图6-7所示。如图5所示为终端检测到当前所播放的一段第二多媒体信息,具体为广告信息分享场景下的一段广告信息,如A11所标识的一段“车的品牌推广”视频,以纸盒眼镜为例,将图6所示的A12所标识的终端与A13所标识的纸盒眼镜进行装配,获取A11所标识的一段“车的品牌推广”视频中的一帧视频,通过插值处理将该视频帧由平面转换到A16所标识的曲面上,得到曲面视频后,将曲面视频的部分以分屏模式投影到屏幕上,如A14和A15所示为分屏显示的最终投射结果,即:将原广告视频(普通手机播放的视频),通过插值算法处理成一个空间曲面(虚拟坐标空间,不是显示的),然后通过投影算法将曲面的一部分投影成两个半屏,在手机上最终显示的是两个半屏效果,而纸盒眼镜是用来增强观看立体感的,使用户能获得浸入式的体验。可见,采用本发明实施例,将曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容,以得到分屏显示的最终投射结果。
在本发明实施例一实施方式中,所述第一多媒体信息为视频信息;所述第二多媒体信息为广告信息。
在本发明实施例一实施方式中，所述模拟转换单元，进一步配置为：检测到当前播放的多媒体信息为所述第二多媒体信息时，获取第一操作，所述第一操作用于触发开启虚拟现实播放模式；响应所述第一操作，由正常视频播放模式切换到虚拟现实播放模式。
在本发明实施例一实施方式中,所述模拟转换单元,进一步配置为:检测到当前播放的多媒体信息为所述第二多媒体信息时,判断用户是否佩戴有所述多媒体信息待输出端,所述多媒体信息待输出端支持虚拟现实成像;当判断出用户佩戴有所述多媒体信息待输出端时,触发开启虚拟现实播放模式,并由正常视频播放模式切换到虚拟现实播放模式。
在本发明实施例一实施方式中,所述模拟转换单元,进一步配置为:依次获取所述第二多媒体信息中的每一帧平面图像;对所述每一帧平面图像采用插值运算模拟成对应的一个曲面图像。
在本发明实施例一实施方式中,所述终端还包括:截取单元,配置为:从所述每一帧平面图像中截取得到第二多媒体信息的中间部分或其他局部内容,记为第一待处理信息,使在对所述每一帧平面图像中的第一待处理信息采用插值运算模拟成对应的一个曲面图像后,得到第二待处理信息,所述第二待处理信息为所述所局部投射的播放内容。
在本发明实施例一实施方式中,所述投影单元,进一步配置为:采用分屏播放模式将所述多媒体待输出端的播放界面一分为二,记为第一界面和第二界面,所述第一界面和第二界面皆为1/2播放界面;在所述第一界面显示所述所局部投射的播放内容,所述第一界面对应所述多媒体待输出端的一个凸透镜片;在所述第二界面显示与所述所局部投射的播放内容相同或近似的内容,所述第二界面对应所述多媒体待输出端的另一个凸透镜片。
本发明实施例还提供了一种信息处理系统，所述信息处理系统包括：上述方案中任一项所述的终端、与所述终端外接于一体以配合使用的多媒体信息待输出端；其中，所述终端作为信息输入源，用于将对终端播放的第二多媒体信息开启虚拟现实播放模式后处理得到的所局部投射的播放内容，分别投射到两个凸透镜片上进行分屏播放模式的成像；所述多媒体信息待输出端包括：具备鼻子支撑部件的支架和两个凸透镜片，所述两个凸透镜片安装于所述支架上，将所述终端的屏幕面朝所述两个凸透镜片装配。
这里需要指出的是，上述终端可以为PC这种电子设备，还可以为如PAD、平板电脑、手提电脑这种便携电子设备，还可以为如手机这种智能移动终端，不限于这里的描述；所述服务器可以是通过集群系统构成的、为实现各单元功能而合并为一或各单元功能分体设置的电子设备，终端和服务器都至少包括用于存储数据的数据库和用于数据处理的处理器，或者包括设置于服务器内的存储介质或独立设置的存储介质。
其中，对于用于数据处理的处理器而言，在执行处理时，可以采用微处理器、中央处理器（CPU，Central Processing Unit）、数字信号处理器（DSP，Digital Signal Processor）或现场可编程门阵列（FPGA，Field-Programmable Gate Array）实现；对于存储介质来说，包含操作指令，该操作指令可以为计算机可执行代码，通过所述操作指令来实现上述本发明实施例信息处理方法流程中的各个步骤。
这里需要指出的是：以上涉及终端和服务器的描述，与上述方法描述是类似的，其有益效果与方法的有益效果描述相同，不做赘述。对于本发明终端和服务器实施例中未披露的技术细节，请参照本发明方法流程描述的实施例所描述的内容。
以一个现实应用场景为例对本发明实施例阐述如下:
本应用场景采用本发明实施例，在广告信息分享的场景下，具体为一种普通贴片转虚拟现实广告的技术方案。首先对本场景中涉及的技术名称进行解释说明：1)视频广告：分为传统视频广告和移动视频广告两类。传统视频广告是在视频内对广告进行设置和投放，而移动视频广告分为传统贴片广告和In-App视频广告，是指在移动设备内进行插播视频的模式。2)视频贴片广告：指的是在视频片头、片中插播及片尾播放的广告，以及背景广告等。作为最早的网络视频营销方式，贴片广告可以算是电视广告的延伸，其背后的运营逻辑依然是媒介的二次售卖原理。3)VR视频广告：指的是基于虚拟现实技术，在移动设备上产生可用于头戴式眼镜观看的视频内容，用户戴上眼镜后产生沉浸式的用户体验。
现有技术中，目前的移动视频广告可以全屏形式在移动设备上播放、裸眼观看，但因移动设备屏幕小，观感较差，用户关注度不够，播放画面受限于屏幕大小。而采用本发明实施例，利用虚拟现实技术，用户戴上眼镜后产生立体沉浸式观感，可以直接将现有广告视频转换为接近虚拟现实观感的效果；该转换通过具备虚拟现实播放功能的处理模块来实现，该处理模块可以是一个专门应用的形式，也可以与视频应用结合并作为视频应用的新增功能，还可以以芯片的形式存在。该处理模块用于通过插值处理将视频帧由平面转换为曲面，通过投影处理将待投射画面以分屏的形式投射在屏幕上。采用本发明实施例，还需要智能移动设备（例如iPhone、Android手机），以及类似纸盒眼镜（不限于纸盒材质，可以是塑料等）作为视频输出端使用。
采用本发明实施例，以视频源为普通的广告视频为例，正常使用时播放的广告视频为普通视频模式（如图5所示）。若要切换成虚拟现实模式，如将iPhone、Android手机放入纸盒眼镜（如图6所示），则广告视频被左右分屏，将纸盒眼镜置于双眼前方，贴合头部观看，将形成接近虚拟现实的效果（如图7所示）。为了营造虚拟现实的效果，利用上述处理模块，对每一帧视频采用插值的方法模拟成一个曲面，如图8中阴影区域所示，即：实时处理普通视频的每一帧画面，通过插值的方式处理成曲面，曲面是平面的放大效果，曲面大于手机自身屏幕大小。采用上述处理模块，可以以队列形式连续播放贴片广告，实现左右分屏状态的显示效果，左右分屏能够同步播放普通广告视频，无时差，最终将得到的曲面视频投射至屏幕，形成最终观看视频，使得用户可通过变换方位观看视频。其中，左右分屏状态的触发可以是以下两种方案：1)识别设备的状态，在横屏垂直放置时，自动切换左右分屏；2)用户点击屏幕分屏按钮，切换左右分屏。
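对于以队列形式连续播放贴片广告以及上述两种左右分屏触发方案，可参考下面的示意性Python草图（get_orientation、split_button_clicked、render_split、render_normal均为说明而假设的接口）：

    from collections import deque

    def should_split(get_orientation, split_button_clicked):
        # 方案1：识别设备状态，横屏垂直放置时自动分屏；方案2：用户点击分屏按钮手动分屏
        return get_orientation() == "landscape" or split_button_clicked()

    def play_ad_queue(ads, get_orientation, split_button_clicked, render_split, render_normal):
        queue = deque(ads)                         # 以队列形式连续播放贴片广告
        while queue:
            ad_frame_source = queue.popleft()
            if should_split(get_orientation, split_button_clicked):
                render_split(ad_frame_source)      # 左右分屏同步播放，无时差
            else:
                render_normal(ad_frame_source)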
就处理模块而言,其所实现的左右分屏状态,如图11中b所指示的位置进行分屏显示,分为第一屏和第二屏。左右分屏可以同步播放,左分屏显示的内容为:图11中a所指示的位置的源视频中间部分(框图中的部分),右分屏显示的内容同为源视频中间部分,图11中c所指示的位置的源视频中间部分(框图中的部分)。也就是说,第一屏和第二屏所显示的内容可以为所截取的视频帧中源视频中间部分(框图中的部分),二者在本例子中完全相同,当然,由于截取处理和精确度要求的不同,二者也可以不尽相同,只要二者图像接近或相似即可。当用户左右晃动眼镜,iPhone、Android手机感知到设备的移动,处理模块获得晃动数值,处理模块将源视频根据晃动数值做左右移动显示,形成用户从不同视角观看广告的感觉,即滑动图11中源视频中间部分(框图中的部分)在源视频上的位置,将当前需要显示的区域通过插值的方式处理成曲面视频帧,并投射至屏幕形成显示视频,投影技术方式如图9所示,投影时重叠的像素采用均值的做法。这里需要指出的是,由于进行插值处理后,会将显示区域放大至A16所示的区域,在实际应用中,还需要从A16所示的区域选取出A17所示的区域作为局部区域,将该局部区域以分屏状态投影到屏幕上,如图9所示。
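对于根据晃动数值平移源视频上的截取窗口、以及投影时重叠像素取均值的做法，可参考下面的示意性Python草图（yaw_offset等参数为假设的晃动数值，投影取样坐标的计算从略，仅示意均值处理）：

    import numpy as np

    def shift_crop(frame, crop_w, yaw_offset):
        # 根据晃动数值左右移动截取窗口，模拟用户从不同视角观看广告（假设crop_w不大于帧宽）
        h, w = frame.shape[:2]
        x0 = int(np.clip((w - crop_w) / 2 + yaw_offset, 0, w - crop_w))
        return frame[:, x0:x0 + crop_w]

    def accumulate_projection(canvas, counts, samples, ys, xs):
        # canvas为浮点累加缓冲，counts记录每个像素被投射的次数；
        # 投影时落到同一像素上的多个取样先累加，最后取均值
        np.add.at(canvas, (ys, xs), samples.astype(np.float64))
        np.add.at(counts, (ys, xs), 1)
        return canvas, counts

    def finalize(canvas, counts):
        return (canvas / np.maximum(counts, 1)[..., None]).astype(np.uint8)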
采用本发明实施例，在前贴、中插、后贴的广告信息投放显示流程之上，实现了模拟的虚拟现实效果，广告主不需要重新制作视频广告，即：不需要广告主事先提前拍摄360度视频，以降低制作成本，通过插值处理终端可以将现有普通贴片自动转换成可模拟虚拟现实的结果。其中，根据视频正片的模式（是否360度视频），决定广告是否要利用插值处理转换成虚拟现实效果。终端根据视频的来源，判断是否调取实现虚拟现实播放功能的处理模块，对来源于广告主的视频（也就是贴片，可能为前、中、后贴片），会调用处理模块，对其他视频（比如：电视剧中的正文视频），不会调用处理模块。该处理模块用于将普通视频转变成虚拟现实视频，具体采用图形图像处理中的插值算法以将普通视频换算成曲面，采用图形图像处理中的投影算法，将曲面视频局部映射到屏幕中的半屏进行显示。
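对于终端根据视频来源判断是否调用虚拟现实播放处理模块的逻辑，可参考下面的示意性Python草图（segment的字段名以及vr_module、normal_player均为说明而假设）：

    def dispatch(segment, vr_module, normal_player):
        # 来源于广告主的贴片（前贴/中插/后贴）调用虚拟现实播放处理模块；
        # 其他视频（例如电视剧中的正文视频）按普通模式播放
        if segment.get("source") == "advertiser":
            vr_module.play(segment)          # 插值转曲面 + 局部投影成左右半屏
        else:
            normal_player.play(segment)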
本发明实施例还提供一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,该计算机可执行指令配置为执行上述的信息处理方法。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本发明各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解：实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成，前述的程序可以存储于一计算机可读取存储介质中，该程序在执行时，执行包括上述方法实施例的步骤；而前述的存储介质包括：移动存储设备、只读存储器（ROM，Read-Only Memory）、随机存取存储器（RAM，Random Access Memory）、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本发明上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本发明各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。
工业实用性
采用本发明实施例，获取到第一多媒体信息和第二多媒体信息后，分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放，检测到当前播放的多媒体信息为所述第二多媒体信息时，开启虚拟现实播放模式，将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像，所述每一帧平面图像的待成像区域包含在终端的显示区域中，所述至少一个曲面图像的待成像区域大于终端的显示区域，以达到画面放大的效果，使多媒体信息的呈现不受终端屏幕大小的限制，将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时，将所述多媒体待输出端的播放界面一分为二，在1/2播放界面上同步显示所局部投射的播放内容，从而使用户看到更立体直观的第二多媒体信息，第二多媒体信息的播放图像更大，画质更清晰，使用户可以沉浸在所播放内容中。

Claims (15)

  1. 一种信息处理方法,所述方法包括:
    发起获取第一多媒体信息的请求;
    获取到第一多媒体信息;
    发起获取第二多媒体信息的请求;
    获取到第二多媒体信息;
    根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放;
    检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像,所述每一帧平面图像的待成像区域包含在终端的显示区域中,所述至少一个曲面图像的待成像区域大于终端的显示区域;
    将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
  2. 根据权利要求1所述的方法,其中,所述第一多媒体信息为视频信息;
    所述第二多媒体信息为广告信息。
  3. 根据权利要求1或2所述的方法,其中,所述检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,包括:
    检测到当前播放的多媒体信息为所述第二多媒体信息时,获取第一操作,所述第一操作用于触发开启虚拟现实播放模式;
    响应所述第一操作,由正常视频播放模式切换到虚拟现实播放模式。
  4. 根据权利要求1或2所述的方法,其中,所述检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,包括:
    检测到当前播放的多媒体信息为所述第二多媒体信息时,判断用户是否佩戴有所述多媒体信息待输出端,所述多媒体信息待输出端支持虚拟现实成像;
    当判断出用户佩戴有所述多媒体信息待输出端时,触发开启虚拟现实播放模式,并由正常视频播放模式切换到虚拟现实播放模式。
  5. 根据权利要求1或2所述的方法,其中,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像,包括:
    依次获取所述第二多媒体信息中的每一帧平面图像;
    对所述每一帧平面图像采用插值运算模拟成对应的一个曲面图像。
  6. 根据权利要求1或2所述的方法,其中,所述方法还包括:
    从所述每一帧平面图像中截取得到第二多媒体信息的中间部分或其他局部内容,记为第一待处理信息,使对所述每一帧平面图像中的第一待处理信息采用插值运算模拟成对应的一个曲面图像后,得到第二待处理信息,所述第二待处理信息为所述所局部投射的播放内容。
  7. 根据权利要求1或2所述的方法,其中,所述将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容,包括:
    采用分屏播放模式将所述多媒体待输出端的播放界面一分为二,记为第一界面和第二界面,所述第一界面和第二界面皆为1/2播放界面;
    在所述第一界面显示所述所局部投射的播放内容;
    在所述第二界面显示与所述所局部投射的播放内容相同或近似的内容。
  8. 一种终端,所述终端包括:
    第一请求单元,配置为发起获取第一多媒体信息的请求;
    第一获取单元,配置为获取到第一多媒体信息;
    第二请求单元,配置为发起获取第二多媒体信息的请求;
    第二获取单元,配置为获取到第二多媒体信息;
    播放单元,配置为根据预设的播放策略分别加载所述第一多媒体信息和所述第二多媒体信息并进行播放;
    模拟转换单元,配置为检测到当前播放的多媒体信息为所述第二多媒体信息时,开启虚拟现实播放模式,将所述第二多媒体信息中的每一帧平面图像模拟为对应的至少一个曲面图像,所述每一帧平面图像的待成像区域包含在终端的显示区域中,所述至少一个曲面图像的待成像区域大于终端的显示区域;
    投影单元,配置为将所述至少一个曲面图形中的每一个曲面图形局部投射到多媒体信息待输出端进行成像时,将所述多媒体待输出端的播放界面一分为二,在1/2播放界面上同步显示所局部投射的播放内容。
  9. 根据权利要求8所述的终端,其中,所述第一多媒体信息为视频信息;
    所述第二多媒体信息为广告信息。
  10. 根据权利要求8或9所述的终端,其中,所述模拟转换单元,进一步配置为:
    检测到当前播放的多媒体信息为所述第二多媒体信息时,获取第一操作,所述第一操作用于触发开启虚拟现实播放模式;
    响应所述第一操作,由正常视频播放模式切换到虚拟现实播放模式。
  11. 根据权利要求8或9所述的终端,其中,所述模拟转换单元,进一步配置为:
    检测到当前播放的多媒体信息为所述第二多媒体信息时,判断用户是否佩戴有所述多媒体信息待输出端,所述多媒体信息待输出端支持虚拟现实成像;
    当判断出用户佩戴有所述多媒体信息待输出端时,触发开启虚拟现实播放模式,并由正常视频播放模式切换到虚拟现实播放模式。
  12. 根据权利要求8或9所述的终端,其中,所述模拟转换单元,进一步配置为:
    依次获取所述第二多媒体信息中的每一帧平面图像;
    对所述每一帧平面图像采用插值运算模拟成对应的一个曲面图像。
  13. 根据权利要求8或9所述的终端,其中,所述终端还包括:截取单元,配置为:
    从所述每一帧平面图像中截取得到第二多媒体信息的中间部分或其他局部内容,记为第一待处理信息,使在对所述每一帧平面图像中的第一待处理信息采用插值运算模拟成对应的一个曲面图像后,得到第二待处理信息,所述第二待处理信息为所述所局部投射的播放内容。
  14. 根据权利要求8或9所述的终端,其中,所述投影单元,进一步配置为:
    采用分屏播放模式将所述多媒体待输出端的播放界面一分为二,记为第一界面和第二界面,所述第一界面和第二界面皆为1/2播放界面;
    在所述第一界面显示所述所局部投射的播放内容;
    在所述第二界面显示与所述所局部投射的播放内容相同或近似的内容。
  15. 一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,该计算机可执行指令配置为执行权利要求1所述的信息处理方法。
PCT/CN2017/085412 2016-05-27 2017-05-22 一种信息处理方法及终端、计算机存储介质 WO2017202271A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610364845.8 2016-05-27
CN201610364845.8A CN107438179B (zh) 2016-05-27 2016-05-27 一种信息处理方法及终端

Publications (1)

Publication Number Publication Date
WO2017202271A1 true WO2017202271A1 (zh) 2017-11-30

Family

ID=60412059

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085412 WO2017202271A1 (zh) 2016-05-27 2017-05-22 一种信息处理方法及终端、计算机存储介质

Country Status (2)

Country Link
CN (1) CN107438179B (zh)
WO (1) WO2017202271A1 (zh)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108768832B (zh) * 2018-05-24 2022-07-12 腾讯科技(深圳)有限公司 客户端间的交互方法和装置、存储介质、电子装置
CN109587470A (zh) * 2018-10-23 2019-04-05 嘉兴玄视信息科技有限公司 一种基于虚拟现实一体机的3d电视及一体机控制系统
CN109769112B (zh) * 2019-01-07 2021-04-09 上海临奇智能科技有限公司 具有多种屏幕效果的虚拟屏幕一体机的组装设置方法
CN110677689A (zh) * 2019-09-29 2020-01-10 杭州当虹科技股份有限公司 一种基于用户视角的vr视频广告无缝插播方法
CN112423052A (zh) * 2019-11-04 2021-02-26 青岛海信激光显示股份有限公司 显示系统及显示方法
CN111683281A (zh) * 2020-06-04 2020-09-18 腾讯科技(深圳)有限公司 视频播放方法、装置、电子设备及存储介质
CN117041508B (zh) * 2023-10-09 2024-01-16 杭州罗莱迪思科技股份有限公司 一种分布式投影方法、投影系统、设备和介质


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750724B (zh) * 2012-04-13 2018-12-21 广东赛百威信息科技有限公司 一种基于图像的三维和全景系统自动生成方法
JP6245652B2 (ja) * 2012-10-11 2017-12-13 田原 博史 映像観察システム
CN103703789B (zh) * 2013-06-28 2018-02-02 华为技术有限公司 一种数据展示的方法、终端及系统
CN103543831A (zh) * 2013-10-25 2014-01-29 梁权富 头戴式全景播放装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101035294A (zh) * 2007-04-13 2007-09-12 深圳市融合视讯科技有限公司 在视频节目中插播网络广告的方法
CN101072164A (zh) * 2007-05-29 2007-11-14 腾讯科技(深圳)有限公司 一种网络广告的显示方法及系统
US20160071546A1 (en) * 2014-09-04 2016-03-10 Lev NEYMOTIN Method of Active-View Movie Technology for Creating and Playing Multi-Stream Video Files

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, JING: "The Application of Virtual Reality Technology in Advertisement", MODERN BUSINESS, 27 January 2013 (2013-01-27), pages 273 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112130475A (zh) * 2020-09-22 2020-12-25 北京字节跳动网络技术有限公司 设备控制方法、装置、终端和存储介质

Also Published As

Publication number Publication date
CN107438179A (zh) 2017-12-05
CN107438179B (zh) 2019-09-20


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17802117

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17802117

Country of ref document: EP

Kind code of ref document: A1