WO2018059332A1 - Information processing method, terminal, and computer storage medium - Google Patents


Info

Publication number
WO2018059332A1
WO2018059332A1 (PCT/CN2017/103031)
Authority
WO
WIPO (PCT)
Prior art keywords
target object
center
user interface
materials
terminal
Prior art date
Application number
PCT/CN2017/103031
Other languages
English (en)
French (fr)
Inventor
李烈强
张旭博
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to EP17854782.4A priority Critical patent/EP3511828B1/en
Publication of WO2018059332A1 publication Critical patent/WO2018059332A1/zh
Priority to US16/207,749 priority patent/US10776562B2/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G06F16/972 - Access to data in other repository systems, e.g. legacy data or dynamic Web page generation
    • G06F40/00 - Handling natural language data
    • G06F40/10 - Text processing
    • G06F40/103 - Formatting, i.e. changing of presentation of documents
    • G06F40/106 - Display of layout of documents; Previewing
    • G06F40/117 - Tagging; Marking up; Designating a block; Setting of attributes
    • G06F40/12 - Use of codes for handling textual entities
    • G06F40/131 - Fragmentation of text files, e.g. creating reusable text-blocks; Linking to fragments, e.g. using XInclude; Namespaces
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483 - Interaction with page-structured environments, e.g. book metaphor

Definitions

  • the present invention relates to network interaction technologies, and in particular, to an information processing method, a terminal, and a computer storage medium.
  • an application scenario is: customized information is sent to a user, and after the user receives it, the information is presented in a static form, such as a piece of text, a picture, etc.
  • the information is presented in a relatively simple manner.
  • static information presentation lacks interactivity, whereas the original intention, or at least a main purpose, of information sharing is to promote the spread and dissemination of information through interaction.
  • the embodiments of the present invention provide an information processing method, a terminal, and a computer storage medium, so as to at least solve the above problems existing in the prior art.
  • An information processing method includes:
  • the second state being used to represent that the target object follows the first operation and is dynamically rendered in a multi-level unsynchronized manner in the terminal user interface.
  • a display unit configured to present a target object in a first state in the terminal user interface
  • a matching unit configured to trigger a first operation in a browsing page where the target object is located, determine the displacement motion direction currently generated by the browsing page according to a parameter generated by the first operation, and select a material displacement motion condition matching the displacement motion direction;
  • a synthesizing unit configured to acquire at least two materials obtained from the target object, and generate a dynamic rendering style of the target object according to the at least two materials and the material displacement motion condition;
  • a switching processing unit configured to switch to a second state when the target object is rendered according to the dynamic presentation style, the second state being used to represent that the target object follows the first operation and is dynamically rendered in a multi-level unsynchronized manner in the terminal user interface.
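The units recited above form a small pipeline: the matching unit maps the first operation's parameters to a material displacement motion condition, which the synthesizing unit then applies to the decomposed materials. The sketch below is purely illustrative; the names (`MotionCondition`, `match_condition`) and the modeling of the condition as per-layer speed factors are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class MotionCondition:
    """A material displacement motion condition, modeled here (as an
    assumption) as one speed factor per material layer."""
    direction: str
    layer_factors: Tuple[float, ...]

# Hypothetical library of conditions, keyed by scroll direction.
CONDITIONS: Dict[str, MotionCondition] = {
    "up":    MotionCondition("up",    (0.2, 0.6, 1.0)),
    "down":  MotionCondition("down",  (0.2, 0.6, 1.0)),
    "left":  MotionCondition("left",  (0.4, 1.0)),
    "right": MotionCondition("right", (0.4, 1.0)),
}

def match_condition(dx: float, dy: float) -> MotionCondition:
    """Determine the dominant displacement direction of the browsing
    page from the first operation's parameters, then select the
    matching material displacement motion condition."""
    if abs(dy) >= abs(dx):
        direction = "up" if dy < 0 else "down"
    else:
        direction = "left" if dx < 0 else "right"
    return CONDITIONS[direction]
```

A swipe parameter of `(dx=0, dy=8)` would select the "down" condition, whose factors are then handed to the synthesizing step.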
  • in the computer storage medium provided by an embodiment of the present invention, a computer program for executing the above information processing method is stored.
  • with the technical solution of the embodiments of the present invention, the target object in the first state (static) is presented in the terminal user interface; the first operation is triggered in the browsing page where the target object is located, and the displacement motion direction currently generated by the browsing page is determined according to the parameter generated by the first operation;
  • a material displacement motion condition matching the displacement motion direction is selected; at least two materials obtained from the target object are acquired, and a dynamic presentation style of the target object is generated according to the at least two materials and the material displacement motion condition;
  • from the materials and the corresponding material displacement motion condition, the dynamic rendering style of the target object finally presented on the terminal can be obtained, thereby providing the basis for changing the target object from static to dynamic according to the first operation;
  • in response to the first operation, the target object is switched to the second state when rendered in the dynamic presentation style, the second state being used to characterize that the target object follows the first operation in the terminal user interface;
  • the dynamic presentation is carried out in a multi-level unsynchronized manner, and this interaction based on user operations yields the final form of dynamic information presentation, which promotes the sharing and dissemination of information.
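The "multi-level unsynchronized" rendering described above is, in effect, a parallax treatment: each material layer follows the same operation at its own rate, so the layers move out of step with one another. A minimal sketch (the per-layer speed factors are invented for illustration):

```python
def layer_offsets(displacement, factors):
    """Each decomposed material layer moves by its own fraction of the
    page displacement, so the layers stay unsynchronized with one
    another while all following the same first operation."""
    return [displacement * f for f in factors]

# One scroll step of 100 px with three layers: the foreground layer
# tracks the operation fully while the background layers lag behind.
offsets = layer_offsets(100.0, [0.2, 0.6, 1.0])
```

For a 100-pixel scroll with factors 0.2, 0.6, and 1.0, the offsets are 20, 60, and 100 pixels; the differing per-layer motion is what makes the presentation appear dynamic and layered rather than static.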
  • FIG. 1 is a schematic diagram of an optional hardware structure of a mobile terminal implementing various embodiments of the present invention
  • FIG. 2 is a schematic diagram of a communication system of the mobile terminal shown in FIG. 1;
  • FIG. 3 is a schematic diagram of hardware entities of each party performing information interaction in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of interaction between a terminal and a server according to Embodiment 1 of the present invention.
  • FIG. 5 is a schematic diagram of interaction between a terminal and a server according to Embodiment 2 of the present invention.
  • FIG. 6 is a schematic diagram of interaction between a terminal and a server according to Embodiment 3 of the present invention.
  • FIG. 7 is a schematic diagram of interaction between a terminal and a server according to Embodiment 4 of the present invention.
  • FIG. 8 is a schematic structural diagram of a system according to Embodiment 5 of the present invention.
  • FIG. 9 to FIG. 17 are diagrams showing material decomposition and the final rendering effects in response to user operations in multiple application scenarios according to an embodiment of the present invention.
  • the terms first, second, etc. are used herein to describe various elements (or various thresholds or various applications or various instructions or various operations), but these elements (or thresholds or applications or instructions or operations) should not be limited by these terms. These terms are only used to distinguish one element (or threshold or application or instruction or operation) from another.
  • for example, a first operation may be referred to as a second operation, and similarly, a second operation may be referred to as a first operation; the first operation and the second operation are both operations, but they are not the same operation.
  • the steps in the embodiments of the present invention are not necessarily processed in the order described.
  • the steps may be selectively reordered according to requirements, and steps in an embodiment may be deleted or added.
  • the description of the steps in the embodiments is only one optional combination of steps and does not represent the only possible combination, and the order of the steps is not to be construed as limiting the present invention.
  • the intelligent terminal (such as a mobile terminal) of the embodiment of the present invention can be implemented in various forms.
  • the mobile terminal described in the embodiments of the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), a navigation device, and the like, as well as fixed terminals such as a digital TV and a desktop computer.
  • in the following description, it is assumed that the terminal is a mobile terminal.
  • however, those skilled in the art will appreciate that configurations according to embodiments of the present invention can also be applied to fixed-type terminals, except for any components specifically intended for mobile purposes.
  • FIG. 1 is a schematic diagram of an optional hardware structure of a mobile terminal implementing various embodiments of the present invention.
  • the mobile terminal 100 may include a communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a matching unit 140, a synthesizing unit 141, a switching processing unit 142, an output unit 150, a display unit 151, and a storage unit 160.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network (electrical communication can also be made by wire if the mobile terminal is replaced with a fixed terminal).
  • when the communication unit 110 is specifically a wireless communication unit, it may include at least one of a broadcast receiving unit 111, a mobile communication unit 112, a wireless internet unit 113, a short-range communication unit 114, and a location information unit 115; these units are optional and may be added or removed according to different demands.
  • the broadcast receiving unit 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal. Broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication unit 112.
  • the broadcast signal may exist in various forms; for example, it may exist in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of Digital Video Broadcasting-Handheld (DVB-H), and the like.
  • the broadcast receiving unit 111 can receive a signal broadcast by using various types of broadcast systems.
  • specifically, the broadcast receiving unit 111 can receive digital broadcasts by using digital broadcast systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), the data broadcast system of Media Forward Link Only (MediaFLO), Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like.
  • the broadcast receiving unit 111 may be adapted to various broadcast systems that provide broadcast signals, in addition to the above-described digital broadcast systems.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving unit 111 may be stored in the storage unit 160 (or other type of storage medium).
  • the mobile communication unit 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet unit 113 supports wireless internet access of the mobile terminal.
  • the unit can be internally or externally coupled to the terminal.
  • the wireless internet access technologies involved in this unit may include Wireless Local Area Networks (Wi-Fi, WLAN), Wireless Broadband (Wibro), Worldwide Interoperability for Microwave Access (Wimax), High Speed Downlink Packet Access (HSDPA), and so on.
  • the short-range communication unit 114 is a unit for supporting short-range communication.
  • examples of short-range communication technologies include Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), Zigbee, and the like.
  • the location information unit 115 is a unit for checking or acquiring location information of the mobile terminal.
  • a typical example of a location information unit is a Global Positioning System (GPS).
  • the GPS unit 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information according to longitude, latitude, and altitude.
  • the method for calculating position and time information uses three satellites and corrects the calculated position and time information errors by using another satellite. Further, the GPS unit 115 can calculate the speed information by continuously calculating the current position information in real time.
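The position computation described above can be illustrated with a simplified two-dimensional analogue: subtracting the range (circle) equations pairwise turns the problem into a linear system. This is a hedged sketch of the general trilateration idea, not the GPS unit's actual algorithm; the helper names are invented.

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) from three distance measurements.
    Subtracting the circle equations pairwise yields a 2x2 linear
    system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

def speed(pos_a, pos_b, dt):
    """Speed from two consecutive position fixes, mirroring how the
    GPS unit derives speed by continuously computing positions."""
    return math.dist(pos_a, pos_b) / dt
```

With reference points at (0, 0), (10, 0), and (0, 10) and ranges measured from the point (3, 4), the solver recovers (3, 4); in three dimensions a fourth satellite plays the error-correcting role the text describes.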
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the storage unit 160 (or other storage medium) or transmitted via the communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sounds (audio data) in operating modes such as a telephone call mode, a recording mode, and a voice recognition mode, and can process such sounds into audio data.
  • in the case of the telephone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the mobile communication unit 112 and then output.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a mouse, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like.
  • in particular, when a touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the matching unit 140 is configured to trigger a first operation in the browsing page where the target object is located, determine the displacement motion direction currently generated by the browsing page according to the parameter generated by the first operation, and select a material displacement motion condition matching the displacement motion direction;
  • the synthesizing unit 141 is configured to acquire at least two materials obtained from the target object, and generate a dynamic rendering style of the target object according to the at least two materials and the material displacement motion condition;
  • the switching processing unit 142 is configured to switch to the second state when the target object is rendered according to the dynamic presentation style, where the second state is used to represent that the target object follows the first operation and is dynamically rendered in a multi-level unsynchronized manner in the terminal user interface.
  • the display unit 151 is configured to present the target object in the first state in the terminal user interface and, after a series of processing by the matching unit 140, the synthesizing unit 141, and the switching processing unit 142, to present the target object in the second state. At this time, the target object dynamically presents information in the terminal user interface in a multi-level unsynchronized manner.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification unit, an audio input/output (I/O) port, a video I/O port, a headphone port, and more.
  • the identification unit may store various information used to verify the user of the mobile terminal 100 and may include a User Identification Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification unit may take the form of a smart card; thus, it may be connected to the mobile terminal 100 via a port or other connection means.
  • in addition, the interface unit 170 can be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • when the mobile terminal 100 is connected to an external base, the interface unit 170 may serve as a path through which power is supplied from the base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the terminal.
  • Various command signals or power input from the base can be used as signals for identifying whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output unit 152, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100.
  • the mobile terminal 100 can display a related user interface (UI) or a graphical user interface (GUI).
  • the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a Liquid Crystal Display (LCD), a Thin Film Transistor LCD (TFT-LCD), an Organic Light-Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • some of these displays may be configured to be transparent to allow a user to view from the outside; these may be referred to as transparent displays, and a typical transparent display may be, for example, a Transparent Organic Light-Emitting Diode (TOLED) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output unit 152 may convert audio data received by the communication unit 110 or stored in the storage unit 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like.
  • the audio output unit 152 can provide an audio output (eg, a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the mobile terminal 100.
  • the audio output unit 152 may include a speaker, a buzzer, and the like.
  • the storage unit 160 may store software programs for processing and control operations performed by the processing unit 180, or may temporarily store data that has been output or is to be output (for example, a phone book, messages, still images, video, and the like). Moreover, the storage unit 160 may store data regarding the various forms of vibration and audio signals that are output when a touch is applied to the touch screen.
  • the storage unit 160 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or XD memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the storage unit 160 through a network connection.
  • Processing unit 180 typically controls the overall operation of the mobile terminal. For example, processing unit 180 performs the control and processing associated with voice calls, data communications, video calls, and the like. As another example, the processing unit 180 can perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the processing unit 180 and provides the appropriate power required to operate the various elements and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • for a hardware implementation, the embodiments described herein may be implemented using at least one of Application Specific Integrated Circuits (ASIC), Digital Signal Processing (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field Programmable Gate Arrays (FPGA), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such an implementation may be realized in the processing unit 180.
  • the mobile terminal has been described in terms of its function.
  • in the following, a slide-type mobile terminal among various types of mobile terminals, such as folding, bar, swing, and slide types, will be described as an example. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • a communication system in which a mobile terminal is operable according to an embodiment of the present invention will now be described with reference to FIG.
  • Such communication systems may use different air interfaces and/or physical layers.
  • the air interfaces used by such communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), the Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), the Global System for Mobile Communications (GSM), and the like.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a Public Switched Telephone Network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), with each partition covered by a multi-directional antenna or an antenna pointing in a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station" may be used to refer collectively to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site"; alternatively, each partition of a particular BS 270 may be referred to as a cell site.
  • a broadcast transmitter (BT, Broadcast Transmitter) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving unit 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • satellites 300 are available, for example a Global Positioning System (GPS) satellite 300 can be employed. The satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the location information unit 115 as shown in FIG. 1 is typically configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC provides call resource allocation and coordinated mobility management functions including a soft handoff procedure between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • The mobile communication unit 112 of the communication unit 110 in the mobile terminal accesses the mobile communication network (such as a 2G/3G/4G network) based on necessary data built into the mobile terminal (including user identification information and authentication information), and transmits mobile communication data (both uplink and downlink) for services used by the mobile terminal user, such as web browsing and network multimedia playback.
  • The wireless internet unit 113 of the communication unit 110 implements the function of a wireless hotspot by running the related hotspot protocol functions. The hotspot supports access by multiple other mobile terminals by multiplexing the mobile communication connection between the mobile communication unit 112 and the mobile communication network, transmitting mobile communication data (both uplink and downlink) for those users' web browsing, network multimedia playback, and the like.
  • Because the terminal essentially multiplexes its own mobile communication connection to transmit this data, the consumed data traffic is counted by the charging entity on the network side against the mobile terminal's communication tariff, thereby consuming part of the data traffic included in the terminal's subscribed tariff.
  • FIG. 3 is a schematic diagram of hardware entities of each party performing information interaction in the embodiment of the present invention.
  • FIG. 3 includes: a server 11, terminal devices 21-24, and an advertiser terminal 31 that provides original materials or the target object to be finally delivered to the terminal devices 21-24.
  • The terminal devices 21-24 exchange information with the server 11 through a wired or wireless network, and the server 11 connects to the advertiser terminal 31 to acquire the original materials or the target object to be delivered.
  • The terminal devices include mobile phones, desktop computers, PCs, all-in-one machines, and the like.
  • The server serves as a data source: the original data obtained from the advertiser terminal 31 (such as the original materials or the target object to be finally delivered to the terminal device) and a preset policy are provided to the terminal device.
  • According to the displacement motion caused by the first operation, the terminal device selects from the preset policy the material displacement motion condition matching the current operation scene, and finally obtains, from the original data and that condition,
  • a dynamic target object on the terminal user interface to replace the target object that was static before the first operation.
  • the dynamic presentation style of the generated target object may be the processing on the terminal side, or may be directly provided to the terminal device after being preprocessed on the server side.
  • the terminal device 23 is taken as an example to describe how the terminal side generates a dynamic presentation style of the target object and a processing logic that finally renders the dynamic target object.
  • The processing logic 10 includes: S1, presenting a target object in a first state in the terminal user interface; S2, triggering a first operation in the browsing page where the target object is located, determining, according to the parameter generated by the first operation, the displacement motion direction currently produced by the browsing page, and selecting a material displacement motion condition matching that direction; S3, acquiring at least two materials obtained from the target object, and generating a dynamic presentation style of the target object according to the at least two materials and the material displacement motion condition; S4, switching the target object to a second state when it is presented according to the dynamic presentation style, the second state representing that the target object, following the first operation, is dynamically rendered in the terminal user interface in a multi-level asynchronous manner.
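The four steps S1-S4 can be sketched as a small state machine. All names below are hypothetical illustrations; the patent does not prescribe an implementation or API.

```python
# Hypothetical sketch of processing logic 10 (S1-S4): a static target
# object becomes a dynamically rendered one in response to the first
# operation.

class TargetObject:
    def __init__(self, materials):
        self.materials = materials      # decomposed layers of the object
        self.state = "first"            # S1: presented in the first state

    def on_first_operation(self, delta_y, policy):
        # S2: derive the displacement direction and pick a matching condition.
        direction = "up" if delta_y < 0 else "down"
        condition = policy[direction]
        # S3: generate the dynamic presentation style from the materials
        # and the selected material displacement motion condition.
        style = {"condition": condition, "layers": list(self.materials)}
        # S4: switch to the second state (multi-level asynchronous rendering).
        self.state = "second"
        return style

obj = TargetObject(["layer1", "layer2"])
style = obj.on_first_operation(-10, {"up": "move_up", "down": "move_down"})
```

The sketch only captures the control flow; actual rendering of the second state would happen in the terminal's UI layer.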
  • FIG. 3 is only a system architecture example of the embodiment of the present invention.
  • the embodiment of the present invention is not limited to the system structure described in FIG. 3, and the hardware structure of the mobile terminal 100 described in FIG. 1 is as described above.
  • the communication system and the system architecture described in FIG. 3 present various embodiments of the method of the present invention.
  • The material displacement motion condition, as a strategy for synthesizing multiple displacement-based materials into an image that can be dynamically rendered, can be configured in the terminal and applied according to the interaction between the user and the terminal (such as the first operation).
  • Alternatively, the policy may be configured on the server: after receiving the request generated by the user's interaction with the terminal (such as the first operation), the server responds to the request and returns the policy to the terminal for use.
  • FIG. 4 An information processing method according to an embodiment of the present invention is shown in FIG. 4, where the method includes:
  • Step 101 Present a target object in a first state in a terminal user interface.
  • The first state may be a stationary state, the initial state before the first operation is triggered; that is, in the first state (static) the user first sees a still picture.
  • After the first operation is triggered, the still picture exhibits a multi-layer unsynchronized perspective effect, i.e., transitions from the first state (static) to the second state (dynamic). In other words, the initially still picture, driven by the first operation, finally yields a dynamic rendering result formed by applying a viewing-angle displacement parallax in a multi-level manner.
  • The dynamic presentation includes changes in displacement, direction, angle, and the like; it may further include presentation-state effects such as color transparency, translucency, and gradation, combined with the changes in displacement, direction, angle, and so on.
  • Step 102 Trigger a first operation in a browsing page where the target object is located, determine a displacement motion direction currently generated by the browsing page according to the parameter generated by the first operation, and select a material displacement that matches the displacement motion direction. Movement conditions.
  • the parameter corresponding to the displacement caused by the first operation is received, and the parameter is used as a parameter generated based on the first operation.
  • the first operation includes a gesture sliding operation or a mouse scrolling operation.
  • An example: while the browsing page containing the target object is being browsed, the first operation is triggered, causing the browsing page (or the target object) to move up or down.
  • The change may also be one other than the displacement mentioned above, such as a change in direction or angle, or a presentation-state change such as color transparency, translucency, or gradation.
  • The displacement change is not limited to moving up or down; it also includes moving left or right.
  • Since the first operation causes the displacement motion, the material displacement motion condition in the preset policy that matches the current operation scene is selected according to that motion, so that the dynamic target object finally rendered on the terminal user interface can be obtained from the original data and the material displacement motion condition.
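As a minimal sketch of step 102 (all names are hypothetical; the patent does not prescribe an API), the displacement motion direction might be derived from the delta reported by the first operation, and a matching condition looked up from the preset policy:

```python
# Hypothetical sketch: derive the displacement motion direction from the
# parameter generated by the first operation (e.g., a swipe or scroll
# delta), then select the matching material displacement motion condition.

def displacement_direction(delta_x: float, delta_y: float) -> str:
    """Map the operation's delta to one of four directions."""
    if abs(delta_y) >= abs(delta_x):
        return "up" if delta_y < 0 else "down"
    return "left" if delta_x < 0 else "right"

# Preset policy: one material displacement motion condition per direction
# (the condition names are illustrative, not from the patent).
PRESET_POLICY = {
    "up": "move_layers_up",
    "down": "move_layers_down",
    "left": "move_layers_left",
    "right": "move_layers_right",
}

def select_condition(delta_x: float, delta_y: float) -> str:
    return PRESET_POLICY[displacement_direction(delta_x, delta_y)]
```

A gesture slide and a mouse scroll would feed the same function; only the source of the delta differs.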
  • Step 103 Acquire at least two materials obtained by the target object, and generate a dynamic presentation style of the target object according to the at least two materials and the material displacement motion condition.
  • The terminal device selects, according to the displacement motion caused by the first operation, a material displacement motion condition matching the current operation scene, so that a dynamic target object can subsequently be obtained from the original data and the material displacement motion condition and finally presented on the end user interface, replacing the target object that was static before the first operation.
  • a dynamic rendering style of the target object needs to be generated, and then a dynamic target object presented on the terminal user interface is finally obtained based on the dynamic rendering style.
  • The dynamic rendering style of the generated target object may be produced by processing on the terminal side, or may be preprocessed on the server side and provided directly to the terminal device.
  • the original data may be a plurality of materials obtained by decomposing the target object in advance, and the plurality of materials are preconditions for finally forming a multi-level asynchronous presentation mode.
  • Step 104: The target object is switched to a second state when it is presented according to the dynamic presentation style, and the second state is used to represent that the target object, following the first operation, is dynamically rendered in the terminal user interface in a multi-level asynchronous manner.
  • In the first state, the user first sees a still picture.
  • After the first operation is triggered, the still picture exhibits multiple layers of unsynchronized perspective effects, i.e., transitions from the first state (static) to the second state (dynamic).
  • Specifically, this state change from static to dynamic is caused by the displacement motion produced by the interactive first operation; the second state therefore characterizes the target object as following the first operation in a multi-level asynchronous manner in the end user interface.
  • The dynamic image rendering is finally obtained in the browsing page.
  • The rendering effect is formed by applying a viewing-angle displacement parallax in a multi-level manner.
  • a plurality of materials and a dynamic mechanism are used to obtain a dynamic rendering effect of multiple layers of asynchronous.
  • multiple materials can be synthesized according to a dynamically combined strategy or algorithm.
  • The compositing is performed according to the comparison between the center of the screen and the center of the advertising area (or the center of the area where the target object is located).
  • the terminal device senses the first operation on the device, and then presents the corresponding generated dynamic picture to the user based on the pre-conditions and specific policies of the foregoing synthesis, so as to achieve a multi-layer unsynchronized perspective effect of the picture. That is: 1) perceive user operations; 2) generate and finally render dynamic pictures according to user operations; wherein generating dynamic pictures is generated according to multiple materials and policies.
  • Obtaining a dynamic rendering style of the target object provides the basis for changing the target object from static to dynamic according to the first operation. Responding interactively to the first operation, the target object is then switched to the second state when presented according to the dynamic rendering style, the second state representing that the target object follows the first operation and is dynamically rendered in the terminal user interface in a multi-level asynchronous manner.
  • The dynamic form of information finally obtained through this interaction promotes the sharing and dissemination of information.
  • FIG. 5 An information processing method according to an embodiment of the present invention is shown in FIG. 5, where the method includes:
  • Step 201 Present a target object in a first state in a terminal user interface.
  • The first state may be a stationary state, the initial state before the first operation is triggered; that is, in the first state (static) the user first sees a still picture.
  • After the first operation is triggered, the still picture exhibits a multi-layer unsynchronized perspective effect, i.e., transitions from the first state (static) to the second state (dynamic). In other words, the initially still picture, driven by the first operation, finally yields a dynamic rendering result formed by applying a viewing-angle displacement parallax in a multi-level manner.
  • The dynamic presentation includes changes in displacement, direction, angle, and the like; it may further include presentation-state effects such as color transparency, translucency, and gradation, combined with the changes in displacement, direction, angle, and so on.
  • Step 202 Determine, according to the parameter generated by the first operation, a direction of displacement motion currently generated by the browsing page.
  • Step 203 Match corresponding material displacement motion conditions according to the displacement motion direction.
  • Steps 202-203: The first operation is triggered in the browsing page where the target object is located, and the corresponding material displacement motion condition is matched from the preset policy according to the parameter generated by the first operation.
  • The material displacement motion condition differs according to the up, down, left, or right displacement caused by the first operation, and according to the materials selected for a given target object; the final dynamic rendering effect obtained from the condition is accordingly diverse.
  • The first operation is triggered while the browsing page containing the target object is browsed, causing the browsing page (or the target object) to move up or down.
  • The first operation is not limited to a gesture sliding operation triggered during page browsing (such as swiping up or down); it may also be a mouse scrolling operation triggered during page browsing (such as scrolling the wheel up or down).
  • Different first operations cause the browsing page (or the target object) to move up or down, and correspondingly different material displacement motion conditions in the preset policy need to be selected.
  • The material displacement motion condition can be synthesized according to the comparison between the center of the screen and the center of the advertisement area (or the center of the area where the target object is located).
  • The material queue a is cyclically read and assembled to form the advertisement image; the split mode is based on the magnitude of the difference between b and c (the screen center and the center of the target-object area, per the comparison above).
  • The layer-by-layer Y-coordinate adjustment of the materials in queue a is performed in the forward or reverse direction, and the larger the difference, the larger the displacement distance. If the first operation triggered by the user's swipe or scroll during page browsing changes the value of b, the above three-step process is repeated.
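A minimal sketch of this loop follows. The names `a`, `b`, and `c` follow the text; the per-layer scaling is an assumption, since the patent only states that a larger difference yields a larger displacement distance.

```python
# Hypothetical sketch of the assembly step: materials in queue `a` are
# stacked into one advertisement image, and each layer's Y coordinate is
# shifted in proportion to the difference between the screen center `b`
# and the target-object area center `c`.

def layer_offsets(a: list, b: float, c: float) -> list:
    """Return a per-layer Y offset for each material in queue a.

    Assumption: layers earlier in the queue move farther; the sign of
    (b - c) selects forward vs. reverse adjustment.
    """
    diff = b - c
    n = len(a)
    # Layer i gets a fraction of the difference; larger |diff| -> larger shift.
    return [diff * (n - i) / n for i in range(n)]

offsets = layer_offsets(["a1", "a2", "a3"], b=400.0, c=300.0)
# a1 shifts the most, a3 the least, all proportional to b - c.
```

When a swipe or scroll changes `b`, the offsets are simply recomputed and the layers re-assembled, which is the repetition of the three-step process described above.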
  • the parameter corresponding to the displacement caused by the first operation is received, and the parameter is used as a parameter generated based on the first operation.
  • the first operation includes a gesture sliding operation or a mouse scrolling operation.
  • An example: the first operation is triggered while browsing the page where the target object is located, causing the browsing page (or the target object) to move up or down.
  • The change may also be one other than the displacement mentioned above, such as a change in direction or angle, or a presentation-state change such as color transparency, translucency, or gradation.
  • The displacement change is not limited to moving up or down; it also includes moving left or right.
  • Since the first operation causes the displacement motion, the material displacement motion condition in the preset policy that matches the current operation scene is selected according to that motion, so that the dynamic target object finally rendered on the terminal user interface can be obtained from the original data and the material displacement motion condition.
  • Step 204 Acquire at least two materials obtained by the target object, and generate a dynamic presentation style of the target object according to the at least two materials and the material displacement motion condition.
  • The terminal device selects, according to the displacement motion caused by the first operation, a material displacement motion condition matching the current operation scene, so that a dynamic target object can subsequently be obtained from the original data and the material displacement motion condition and finally presented on the end user interface, replacing the target object that was static before the first operation.
  • a dynamic rendering style of the target object needs to be generated, and then a dynamic target object presented on the terminal user interface is finally obtained based on the dynamic rendering style.
  • the dynamic presentation style of the generated target object may be the processing on the terminal side, or may be directly provided to the terminal device after being preprocessed on the server side.
  • the original data may be a plurality of materials obtained by decomposing the target object in advance, and the plurality of materials are preconditions for finally forming a multi-level asynchronous presentation mode.
  • Step 205 The target object is switched to a second state when the target object is rendered according to the dynamic presentation style, and the second state is used to represent that the target object follows the first operation and is not synchronized in a multi-level in the terminal user interface. The way to do dynamic rendering.
  • In the first state, the user first sees a still picture. After the first operation is triggered, the still picture exhibits multiple layers of unsynchronized perspective effects, i.e., transitions from the first state (static) to the second state (dynamic). Specifically, this state change from static to dynamic is caused by the displacement motion produced by the interactive first operation; the second state therefore represents that the target object follows the first operation and is dynamically presented in the terminal user interface in a multi-level asynchronous manner. This dynamic rendering is a multi-layer unsynchronized perspective effect; in the browsing page, the image is initially still.
  • the dynamic image rendering is finally obtained in the browsing page.
  • the rendering effect is formed by performing a parallax of the viewing angle displacement in a multi-level manner.
  • a plurality of materials and a dynamic mechanism are used to obtain a dynamic rendering effect of multiple layers of asynchronous.
  • multiple materials can be synthesized according to a dynamically combined strategy or algorithm.
  • The compositing is performed according to the comparison between the center of the screen and the center of the advertising area (or the center of the area where the target object is located).
  • the terminal device senses the first operation on the device, and then presents the corresponding generated dynamic picture to the user based on the pre-conditions and specific policies of the foregoing synthesis, so as to achieve a multi-layer unsynchronized perspective effect of the picture. That is: 1) perceive user operations; 2) generate and finally render dynamic pictures according to user operations; wherein generating dynamic pictures is generated according to multiple materials and policies.
  • The dynamic rendering style of the target object finally presented on the terminal can thus be obtained, providing the basis for changing the target object from static to dynamic according to the first operation.
  • Responding interactively to the first operation, the target object is switched to the second state when presented according to the dynamic presentation style; the second state characterizes the target object as following the first operation and being dynamically presented in the end user interface in a multi-level unsynchronized manner. The dynamic form of information obtained through this interaction promotes the sharing and dissemination of information.
  • The material displacement motion condition is selected according to the scene, and a specific synthesized dynamic rendering effect is produced; as shown in FIG. 6, the process includes:
  • Step 301 Determine, according to the parameter generated by the first operation, a direction of displacement motion currently generated by the browsing page.
  • Step 302: When selecting a material displacement motion condition matching the displacement motion direction, if the direction is upward movement, determine whether the center of the terminal user interface coincides with the center of the target-object area. If so, step 303 is performed; otherwise, step 304 is performed.
  • If the first operation is an upward movement, the dynamic rendering effect is that the target object moves up; conversely, if the first operation is a downward movement, the target object moves down.
  • Depending on the case, the same adjustment threshold or different per-layer adjustment thresholds are adopted.
  • For example, if the first operation is a leftward movement and the center of the terminal user interface coincides with the center of the region where the target object is located, the material displacement motion condition that moves the at least two materials to the left according to the same adjustment threshold is selected.
  • Step 303 When the center of the terminal user interface coincides with the center of the area where the target object is located, select a first material displacement motion condition that moves the at least two materials upward according to the same adjustment threshold.
  • Step 304 When the center of the terminal user interface does not coincide with the center of the area where the target object is located, select a second material displacement motion condition that moves the at least two materials upward by layer according to different adjustment thresholds.
  • If the center of the terminal user interface coincides with the center of the area where the target object is located,
  • the material displacement motion condition that directly superimposes the at least two materials and moves them upward as a whole is selected.
  • If the center of the terminal user interface does not coincide with the center of the area where the target object is located, the material displacement motion condition that moves the layers upward layer by layer with different thresholds, based on the different priorities of the layers to which the at least two materials belong, is selected.
  • For example, the higher the priority of a layer, the lower its position at composition time and the larger its adjustment threshold; the lower the priority, the higher its position and the smaller (or zero) its threshold.
  • In the adjustment, the displacement of each layer is adjusted according to a different adjustment threshold; other display effects of each layer, such as transparency or color, may also be adjusted according to different thresholds.
  • An example: there are three materials, identified as material a1, material a2, and material a3, where material a1 is the layer with the highest priority and material a3 the layer with the lowest priority.
  • The hierarchical order of the materials is: material a3, material a2, material a1; that is, the lowest-priority
  • layer, material a3, has the highest position at composition time.
  • The adjustment threshold of material a1 is the maximum value,
  • that of material a2 the next largest value,
  • and that of material a3 the minimum value or zero (material a3 is not adjusted).
  • For example, the adjustment threshold of material a1 can be 2 cm,
  • that of material a2 1 cm,
  • and that of material a3 0.5 cm or 0 cm; this is only a schematic example and does not limit the specific values. According to the different adjustment thresholds corresponding to materials a1-a3, the layers are moved upward layer by layer.
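The a1-a3 example can be sketched as follows. The data structures and the use of centimeters as the unit follow the schematic example in the text; everything else is an illustrative assumption.

```python
# Hypothetical sketch of the second material displacement motion condition:
# when the centers do not coincide, each layer moves upward by its own
# adjustment threshold, and higher-priority layers move farther.

# (material, priority, adjustment threshold in cm) -- threshold values
# taken from the schematic example in the text.
LAYERS = [
    ("a1", "high", 2.0),
    ("a2", "mid", 1.0),
    ("a3", "low", 0.5),  # could also be 0.0, i.e., a3 is not adjusted
]

def move_up(layers):
    """Return each layer's new Y offset after the upward adjustment."""
    # Negative Y is taken as upward, assuming screen coordinates.
    return {name: -threshold for name, _priority, threshold in layers}

shifts = move_up(LAYERS)
```

Under the first condition (centers coinciding), all three layers would instead share one threshold and move as a whole.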
  • A corresponding material displacement motion condition and a specific synthesized dynamic rendering effect are selected according to the scenario; as shown in FIG. 7, the process includes:
  • Step 401 Determine, according to the parameter generated by the first operation, a direction of displacement motion currently generated by the browsing page.
  • Step 402 When selecting a material displacement motion condition that matches the displacement motion direction, if the displacement motion direction is a downward movement, determine whether the center of the terminal user interface and the center of the target object area coincide. If yes, go to step 403; otherwise, go to step 404.
  • If the first operation is a downward movement, the dynamic rendering effect is that the target object moves down; conversely, if the first operation is an upward movement, the target object moves up. It should be pointed out that, in addition to the upward and downward directions, movement to the left or right, or an angular displacement, also falls within the protection scope of the present invention; correspondingly, the same adjustment threshold or different per-layer adjustment thresholds are adopted for the corresponding
  • material displacement motion condition. For example, if the first operation is a leftward movement and the center of the terminal user interface coincides with the center of the region where the target object is located, the material displacement motion condition that moves the at least two materials to the left according to the same adjustment threshold is selected.
  • Step 403 When the center of the end user interface coincides with the center of the area where the target object is located, select a third material displacement motion condition that moves the at least two materials downward according to the same adjustment threshold.
  • Step 404 When the center of the end user interface does not coincide with the center of the area where the target object is located, select a fourth material displacement motion condition that moves the at least two materials downward by layer according to different adjustment thresholds.
  • If the center of the end user interface coincides with the center of the area where the target object is located, the material displacement motion condition that directly superimposes the at least two materials and moves them downward as a whole is selected.
  • If the centers do not coincide, the material displacement motion condition that moves the layers downward layer by layer with different thresholds, based on the different priorities of the layers to which the at least two materials belong, is selected. For example, the higher the priority of a layer, the lower its position at composition time and the larger its adjustment threshold; the lower the priority, the higher its position at composition time,
  • and the smaller (or zero) the adjustment threshold for that lower-priority layer (i.e., the lower-priority layer may not be adjusted).
  • the displacement of each layer is adjusted according to different adjustment thresholds, and other display effects such as transparency or color of each layer may be adjusted according to different adjustment thresholds.
  • An example: there are three materials, identified as material b1, material b2, and material b3, where material b1 is the layer with the highest priority and material b3 the layer with the lowest priority.
  • The order of the layers is: material b3, material b2, material b1; that is, the lowest-priority layer, material b3, has the highest position at composition time.
  • The adjustment threshold of material b1 is the maximum value,
  • that of material b2 the next largest value,
  • and that of material b3 the minimum value or zero (material b3 is not adjusted).
  • For example, the adjustment threshold of material b1 can be 3 cm,
  • that of material b2 1.5 cm,
  • and that of material b3 1 cm or 0 cm.
  • This is only a schematic example and does not limit the specific values; according to the different adjustment thresholds corresponding to materials b1-b3, the layers are moved downward layer by layer.
  • Generating a dynamic presentation style of the target object according to the at least two materials and the material displacement motion condition includes: arranging the at least two materials according to material priority to obtain an arrangement result; when the center of the end user interface does not coincide with the center of the area where the target object is located, acquiring a first coordinate value identifying the center of the terminal user interface and a second coordinate value identifying the center of the target-object area; and determining the difference between the first coordinate value and the second coordinate value as an adjustment base, and, according to the adjustment base, performing a forward or reverse layer-by-layer coordinate value adjustment on the at least two
  • materials in the arrangement result by priority.
  • The adjustment threshold used for the layer-by-layer coordinate value adjustment is generated from the adjustment base; the same or different adjustment thresholds are adopted for the different layers to which the at least two materials belong.
  • Forward: add the displacement along the Y coordinate.
  • Reverse: subtract the displacement along the Y coordinate.
  • The forward adjustment can produce an upward dynamic effect of the target object; the reverse adjustment can produce a downward one.
  • The different algorithms (forward or reverse) are selected according to the desired dynamic effect.
  • the method further includes:
  • 1) If the first coordinate value is greater than the second coordinate value, the target object is located at an upper position of the terminal user interface, and the forward layer-by-layer coordinate value adjustment is performed. 2) If the first coordinate value is smaller than the second coordinate value, the target object is located at a lower position of the terminal user interface, and the reverse layer-by-layer coordinate value adjustment is performed.
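The adjustment-base logic above might be sketched as follows. The coordinate convention and the way thresholds are derived from the base are assumptions; the text only states that the thresholds are generated from the adjustment base and may differ per layer.

```python
# Hypothetical sketch: the difference between the first coordinate value
# (center of the terminal user interface) and the second coordinate value
# (center of the target-object area) serves as the adjustment base; its
# sign selects forward (add to Y) or reverse (subtract from Y) adjustment.

def adjust_layers(layer_ys, first_coord, second_coord):
    """Return new Y coordinates, one per layer ordered by priority.

    Assumption: per-layer thresholds shrink with decreasing priority,
    all derived from the adjustment base |first - second|.
    """
    base = first_coord - second_coord
    n = len(layer_ys)
    new_ys = []
    for i, y in enumerate(layer_ys):
        threshold = abs(base) * (n - i) / n  # higher priority -> larger shift
        if base > 0:   # target object at an upper position: forward (add)
            new_ys.append(y + threshold)
        else:          # target object at a lower position: reverse (subtract)
            new_ys.append(y - threshold)
    return new_ys

ys = adjust_layers([0.0, 0.0, 0.0], first_coord=500.0, second_coord=440.0)
```

A larger difference between the two centers yields larger per-layer shifts, matching the statement above that the displacement distance grows with the difference.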
  • The first operation is triggered in the browsing page where the target object is located; the specific parameters of the corresponding material displacement motion condition are matched from the preset policy according to the parameter generated by the first operation; at least two materials obtained by decomposing the target object in advance are acquired; and the dynamic presentation style of the target object is generated according to the at least two materials and the material displacement motion condition. For the various changes involved, refer to the description in the above embodiment.
  • The method further includes: acquiring a preset newly added material (a pre-embedded "easter egg"), where the newly added material is different from the at least two materials obtained by decomposing the target object in advance; and generating the dynamic presentation style of the target object according to the newly added material, the at least two materials, and the material displacement motion condition, so that the target object displays or hides the newly added material in the terminal user interface following the first operation.
  • The target object may follow the specific manner of the first operation, corresponding to its speed, action pressure, and action distance; that is, the change caused by the first operation presents an unsynchronized trajectory motion effect. Because the speed of the first operation, the pressure value of the action or its change, the action distance of the first operation, and so on differ in each user operation, accurately capturing these differences improves the interaction precision and the interaction effect, yielding the corresponding unsynchronized trajectory motion effect.
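How the speed and distance of the first operation might drive per-layer displacement can be illustrated as follows; the scaling scheme (the `sensitivity` factor and the halving per depth) is an assumption, chosen only to show how the layers drift out of step as the gesture varies.

```python
def layer_offsets(op_distance, op_speed, num_layers, sensitivity=0.1):
    """Return one displacement per layer, scaled by gesture distance and speed.

    Layer 0 (foreground) moves the most; each deeper layer moves half as
    much, so different gestures produce visibly different, unsynchronized
    trajectories across the layers.
    """
    amplitude = op_distance * (1 + sensitivity * op_speed)
    return [amplitude / (2 ** depth) for depth in range(num_layers)]
```

A slow 100-pixel swipe yields offsets of 100, 50, and 25 for three layers; a faster swipe over the same distance amplifies all three proportionally.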
  • An information processing system according to an embodiment of the present invention includes a terminal 41, a server 51, and an advertiser terminal 61 that provides original data.
  • The terminal 41 includes: a display unit 411, configured to present a target object in a first state in the terminal user interface; a matching unit 412, configured to trigger a first operation in the browsing page where the target object is located, determine, according to a parameter generated by the first operation, the displacement motion direction currently produced by the browsing page, and select a material displacement motion condition matching the displacement motion direction;
  • a synthesizing unit 413, configured to acquire at least two materials obtained from the target object and generate the dynamic presentation style of the target object according to the at least two materials and the material displacement motion condition;
  • a switching processing unit 414, configured to switch to a second state when the target object is presented according to the dynamic presentation style, the second state representing that the target object is dynamically presented in the terminal user interface in a multi-level unsynchronized manner following the first operation.
  • The first state may be a stationary state: as the initial state before the first operation is triggered, the first state (static) means that the user first sees a still picture.
  • After the first operation is triggered, the still picture seen by the user exhibits a multi-layered unsynchronized perspective effect, that is, it changes from the first state (static) to the second state (dynamic). In other words, the initially still picture, based on the first operation, finally yields a dynamic presentation result, which is formed by performing viewing-angle displacement in a multi-level manner to create parallax.
  • The dynamic presentation includes: changes in displacement, changes in direction, changes in angle, and the like; it may further include changes in the presentation state such as color transparency, translucency, and color gradients, combined with the changes in displacement, direction, angle, and so on.
  • the first operation includes a gesture sliding operation or a mouse scrolling operation.
  • An example: when the browsing page containing the target object is browsed, the first operation is triggered, and the first operation causes the browsing page (or the target object) to move up or down.
  • The change may also be one other than the displacement mentioned in the above steps, such as a change in direction, a change in angle, or a change in the presentation state such as color transparency, translucency, or a color gradient.
  • The displacement change is not limited to moving upward or downward; it also includes moving to the left or right.
  • For example, if the first operation causes displacement motion, the material displacement motion condition conforming to the current operation scene is selected from the preset strategy according to that displacement motion, so that a dynamic target object can finally be presented on the terminal user interface according to the original data and the material displacement motion condition.
  • That is, after the first interactive operation is triggered, the terminal device selects a material displacement motion condition matching the current operation scene according to the displacement motion caused by the first operation; subsequently, according to the original data and the material displacement motion condition, a dynamic presentation style is generated, and based on that style a dynamic target object is finally presented on the terminal user interface, replacing the target object that was static before the first operation.
  • The generation of the dynamic presentation style of the target object may be performed on the terminal side, or may be preprocessed on the server side and provided directly to the terminal device.
  • The original data may be a plurality of materials obtained by decomposing the target object in advance; these materials are the precondition for finally forming the multi-level unsynchronized presentation mode.
  • A plurality of materials and a dynamic mechanism are used to obtain the multi-level unsynchronized dynamic presentation effect.
  • The multiple materials can be synthesized according to a dynamic combination strategy or algorithm.
  • Initially, the user sees a still picture, i.e., the target object is in the first state (static).
  • After the first operation is triggered, the still picture seen by the user presents a multi-layer unsynchronized perspective effect; that is, the target object changes from the first state (static) to the second state (dynamic). Specifically, the state change from static to dynamic is caused by the displacement motion produced by the first interactive operation; therefore, the second state represents that the target object follows the first operation in the terminal user interface and is dynamically presented in a multi-level unsynchronized manner. This dynamic presentation is a multi-layered unsynchronized perspective effect.
  • In the browsing page, the image is initially still. Based on the first operation (such as a gesture slide or mouse scroll, or any operation that displaces the target object on the page), a dynamic image presentation is finally obtained in the browsing page; the presentation effect is formed by performing viewing-angle displacement in a multi-level manner to create parallax.
  • The compositing is performed according to the comparison between the center of the screen and the center of the advertising area (or the center of the area where the target object is located).
  • The terminal device senses the first operation on the device and then presents the correspondingly generated dynamic picture to the user based on the preconditions and specific policies of the foregoing compositing, achieving a multi-layer unsynchronized perspective effect. That is: 1) perceive the user operation; 2) generate and finally present the dynamic picture according to the user operation, where the dynamic picture is generated according to the multiple materials and the policies.
  • With multiple materials and corresponding material displacement motion conditions, the dynamic presentation style of the target object finally presented on the terminal can be obtained, providing the basis for changing the target object from static to dynamic according to the first operation. The first operation is then answered through an interactive response, switching the target object to the second state when it is presented according to the dynamic presentation style; the second state represents that the target object follows the first operation in the terminal user interface and is dynamically presented in a multi-level unsynchronized manner. The interaction thus yields the final dynamic presentation form of the information, which promotes the sharing and dissemination of information.
  • The matching unit further includes: a displacement determining subunit, configured to determine, according to the parameter generated by the first operation, the displacement motion direction currently produced by the browsing page; and a selecting subunit, configured to match the corresponding material displacement motion condition from the preset strategy according to the displacement motion direction.
  • the first operation includes: a gesture sliding operation or a mouse scrolling operation.
  • The displacement determining subunit is further configured to: when the displacement motion direction is upward movement, determine whether the center of the terminal user interface coincides with the center of the area where the target object is located.
  • The selecting subunit is further configured to: when the center of the terminal user interface coincides with the center of the area where the target object is located, select a first material displacement motion condition that moves the at least two materials upward as a whole according to the same adjustment threshold; when the centers do not coincide, select a second material displacement motion condition that moves the at least two materials upward layer by layer according to different adjustment thresholds.
  • The displacement determining subunit is further configured to: when the displacement motion direction is downward movement, determine whether the center of the terminal user interface coincides with the center of the area where the target object is located.
  • The selecting subunit is further configured to: when the center of the terminal user interface coincides with the center of the area where the target object is located, select a third material displacement motion condition that moves the at least two materials downward as a whole according to the same adjustment threshold; when the centers do not coincide, select a fourth material displacement motion condition that moves the at least two materials downward layer by layer according to different adjustment thresholds.
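The four-way selection performed by the selecting subunit can be sketched as a simple dispatch. The condition names and function signature here are illustrative, not taken from the patent.

```python
def select_condition(direction, ui_center_y, target_center_y):
    """Pick one of the four material displacement motion conditions.

    direction: "up" or "down", the displacement motion direction.
    Centers coinciding -> whole-block movement (same threshold);
    not coinciding -> layer-by-layer movement (different thresholds).
    """
    coincide = (ui_center_y == target_center_y)
    if direction == "up":
        return "first" if coincide else "second"
    if direction == "down":
        return "third" if coincide else "fourth"
    raise ValueError("unsupported displacement motion direction")
```

So an upward swipe over a centered target picks the first condition (move all materials up together), while the same swipe over an off-center target picks the second (move layer by layer).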
  • The synthesizing unit is further configured to: arrange the at least two materials according to material priority to obtain an arrangement result; when the center of the terminal user interface does not coincide with the center of the area where the target object is located, acquire a first coordinate value identifying the center of the terminal user interface and a second coordinate value identifying the center of the area where the target object is located; determine the difference between the first coordinate value and the second coordinate value as an adjustment base; and perform, according to the adjustment base, a layer-by-layer coordinate value adjustment of forward or reverse displacement on the at least two materials arranged by priority in the arrangement result.
  • The adjustment threshold used for the layer-by-layer coordinate value adjustment is generated according to the adjustment base; the same adjustment threshold or different adjustment thresholds may be adopted for the different layers to which the at least two materials belong.
  • The synthesizing unit is further configured to: if the first coordinate value is greater than the second coordinate value, the target object is located in the upper part of the terminal user interface, and the layer-by-layer coordinate value adjustment of forward displacement is performed; if the first coordinate value is smaller than the second coordinate value, the target object is located in the lower part of the terminal user interface, and the layer-by-layer coordinate value adjustment of reverse displacement is performed.
  • The terminal further includes: a newly added material acquiring unit, configured to acquire a preset newly added material, where the newly added material is different from the at least two materials obtained by decomposing the target object in advance.
  • The synthesizing unit is further configured to generate the dynamic presentation style of the target object according to the newly added material, the at least two materials, and the material displacement motion condition, so that the target object displays or hides the newly added material in the terminal user interface following the first operation.
  • The processor used for data processing may be implemented, when performing processing, by a microprocessor, a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
  • The storage medium includes operation instructions, which may be computer-executable code; the steps in the flow of the information processing method of the embodiments of the present invention are implemented through the operation instructions.
  • the information can be advertising information or other multimedia information.
  • The information (advertisement information) can be switched from the original still-picture form to a three-dimensional stereoscopic form; the switch is triggered by the user's interactive operation on the page, such as a sliding gesture or a mouse scrolling operation, which causes the original still image to be displaced in a multi-level manner to form parallax.
  • A processing flow using the embodiment of the present invention includes: 1) the creative material is output as multiple layers of separated elements; 2) the layers are uploaded to the server side and dynamically combined by the background code; 3) the advertisement image is displayed on a mobile-terminal or computer page; 4) the user slides up or down by gesture to scroll the page; 5) at the same time, the advertisement image area follows the speed and distance of the operation to make an unsynchronized trajectory motion; 6) the advertisement image seen by the user shows a changing stereo-parallax effect.
  • First, the material pictures stored layer by layer on the server are loaded into a queue a; the Y coordinate value, within the page, of the middle of the screen display area is obtained as b; the Y coordinate value, within the page, of the center of the advertisement image is assigned to c; then b and c are compared: if b is greater than c, the advertisement image is in the upper part of the display screen; if b is smaller than c, the advertisement image is in the lower part of the display screen; if b equals c, the advertisement image is in the middle of the screen. According to the difference between b and c, the materials in queue a are combined to form the advertisement image: a flattening method adjusts the materials of queue a forward or reverse, layer by layer, in the Y coordinate according to the magnitude of the difference between b and c.
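The queue-a/b/c flow above can be sketched roughly as follows. The per-layer divisor used to derive each layer's step from the b−c difference is an assumption; the text only says the materials are adjusted forward or reverse layer by layer according to that difference.

```python
def compose_ad(layers, b, c):
    """Combine the layered materials (queue a) into the advertisement image.

    layers: list of (name, y) tuples ordered by material priority.
    b: page Y coordinate of the middle of the screen display area.
    c: page Y coordinate of the center of the advertisement image.
    diff > 0 -> ad is above screen center, shift layers forward (Y added);
    diff < 0 -> ad is below, shift layers in reverse (Y subtracted).
    """
    diff = b - c
    if diff == 0:
        return list(layers)  # ad centered on screen: no parallax offset
    composed = []
    for depth, (name, y) in enumerate(layers):
        step = diff / (depth + 1)          # illustrative per-layer threshold
        composed.append((name, y + step))  # forward or reverse by sign of diff
    return composed
```

With b=30 and c=10, the two layers S1 and S2 receive steps of 20 and 10 respectively, so the foreground layer outruns the background one, which is what produces the parallax when the page scrolls.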
  • FIGS. 9-17 show the final presentation effects obtained by using the embodiments of the present invention in the above scenario.
  • FIG. 9 includes the target object in the first state, such as the original still picture identified by A, together with a plurality of materials (identified by S1, S2, ..., Sn) obtained by decomposing the target object (the original still picture identified by A) in advance, so that the dynamic presentation style of the target object can subsequently be generated according to the plurality of materials and the material displacement motion condition.
  • Another schematic diagram of pre-decomposing the target object is shown in FIG.
  • After the first operation is triggered, the terminal device selects a material displacement motion condition matching the current operation scene according to the displacement motion caused by the first operation, generates the dynamic presentation style of the target object according to the plurality of materials and the material displacement motion condition, and obtains a dynamic target object that can finally be presented on the terminal user interface, as shown in FIGS. 10-11, replacing the static target object (the original still picture identified by A) that existed before the first operation.
  • A dynamic presentation style of the target object needs to be generated first; a dynamic target object presented on the terminal user interface is then finally obtained based on the dynamic presentation style.
  • The generation of the dynamic presentation style of the target object may be performed on the terminal side, or may be preprocessed on the server side and provided directly to the terminal device.
  • The original data may be a plurality of materials obtained by decomposing the target object in advance; these materials are the precondition for finally forming the multi-level unsynchronized presentation mode.
  • When the first operation is an upward finger-sliding operation in the browsing page and D>d, the upward sliding operation is triggered and the display switches from the first state on the left to the second state on the right, so that the target object shows the dynamic presentation style; that is, the target object follows the upward finger-sliding operation and moves in the terminal user interface in a multi-level unsynchronized manner.
  • Another state-switching diagram caused by triggering the upward finger-sliding operation is shown in FIG.
  • When the first operation is a downward finger-sliding operation in the browsing page and D>d, the downward sliding operation is triggered and the display switches from the first state on the left to the second state on the right, so that the target object shows the dynamic presentation style; that is, the target object follows the downward finger-sliding operation and moves in the terminal user interface in a multi-level unsynchronized manner.
  • The state-switching diagram caused by triggering the downward finger-sliding operation is shown in FIG.
  • FIG. 12 includes the target object in the first state, such as the original still picture identified by A1; this static picture is initially displayed in the terminal user interface and presented to the user. It also includes a plurality of materials (identified by S1, S2, ..., Sn, with S2 containing the embedded easter egg identified by B, "click to receive the red envelope") obtained by decomposing the target object (the original still picture identified by A1), so that the dynamic presentation style of the target object can subsequently be generated according to the plurality of materials and the material displacement motion condition.
  • The first operation is an upward finger-sliding operation in the browsing page, and FIG. 12 further shows the case where the first state switches to the second state after the upward sliding operation is triggered.
  • Triggering the upward finger-sliding operation switches the display from the first state on the left to the second state on the right, so that the target object shows the dynamic presentation style; that is, the target object follows the upward finger-sliding operation and moves upward in the terminal user interface in a multi-level unsynchronized manner. Because the upward movement reveals the originally hidden embedded easter egg, the information identified by B, "click to receive the red envelope", is displayed.
  • Triggering the downward finger-sliding operation hides the easter egg that was displayed to the user, i.e., the information identified by B, "click to receive the red envelope".
  • Another schematic diagram of pre-decomposing the target object is shown in FIG. 16.
  • The pre-embedded easter egg is the "limited gift, claim for free" information carried by layered material (2).
  • The upward finger-sliding operation causes the state to switch, and the originally hidden easter egg "limited gift, claim for free" is displayed, as shown in FIG.
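A minimal sketch of the easter-egg show/hide behavior described above. The threshold-based visibility rule is an assumption, illustrating only that an upward movement past some offset reveals the hidden layer and a downward movement hides it again.

```python
def egg_visible(scroll_offset, reveal_threshold=40):
    """Return whether the pre-embedded easter-egg layer is shown.

    Positive scroll_offset means the layers have moved upward; once the
    offset reaches the (assumed) reveal threshold, the hidden easter-egg
    material becomes visible. A downward slide reduces the offset and
    hides it again.
    """
    return scroll_offset >= reveal_threshold
```

For example, an upward slide that accumulates an offset of 50 reveals the egg; sliding back down to an offset of 10 hides it.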
  • the above dynamic rendering is a multi-layered unsynchronized perspective effect.
  • The initially static image in the browsing page, based on the first operation (such as a gesture slide or mouse scroll; any operation that can displace the target object is feasible), finally yields a dynamic image presentation result in the browsing page; the presentation effect is formed by performing viewing-angle displacement in a multi-level manner to create parallax.
  • the "limited gift free collar” in FIG. 17 is based on the interaction between the user and the terminal (such as the first Operation) triggering the currently displayed picture to change state from “static” to "moving”, and after obtaining the rendering effect of the dynamic image in the browsing page in a multi-level manner, when the execution of the first operation ends, the rendering effect of the dynamic image becomes As a still image, at this time, the user can click on the interactive control to enter the interaction between the terminal and the background server, and obtain a new browsing page pointed by the interactive control.
  • the user can also click on the entire image to enter the interaction between the terminal and the background server, and obtain a new browsing page pointed by the interactive control.
  • This article does not limit the specific interaction mode.
  • When the image changes back to a still image, its position on the screen (such as position C2) differs from its position before the state switch (such as position C1); for example, position C1 is the original position on the screen, and position C2 is the position after the image has moved upward on the screen.
  • Embodiments of the present invention also provide a computer storage medium, such as a memory including a computer program, which is executable by a processor of a data processing apparatus to perform the steps described above.
  • The computer storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM, or may be any of various devices including one of, or any combination of, the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant.
  • The computer-readable storage medium provided by the embodiment of the present invention stores a computer program; when the computer program is executed by a processor, the following information processing method steps are performed.
  • The target object is switched to a second state when presented according to the dynamic presentation style, the second state representing that the target object is dynamically presented in the terminal user interface in a multi-level unsynchronized manner following the first operation.
  • When executed by the processor, the computer program further performs the following:
  • A second material displacement motion condition that moves the at least two materials upward layer by layer according to different adjustment thresholds is selected.
  • When executed by the processor, the computer program further performs the following:
  • For the layer-by-layer coordinate value adjustment, the adjustment threshold used for the adjustment is generated according to the adjustment base, and the same adjustment threshold or different adjustment thresholds are adopted for the different layers to which the at least two materials belong.
  • When executed by the processor, the computer program further performs the following:
  • If the first coordinate value is greater than the second coordinate value, the target object is located in the upper part of the terminal user interface, and the layer-by-layer coordinate value adjustment of forward displacement is performed.
  • If the first coordinate value is smaller than the second coordinate value, the target object is located in the lower part of the terminal user interface, and the layer-by-layer coordinate value adjustment of reverse displacement is performed.
  • When executed by the processor, the computer program further performs the following:
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of the units is only a logical function division; in actual implementation there may be other ways of dividing, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • Each functional unit in each embodiment of the present invention may be integrated into one processing unit, each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes: a mobile storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.
  • the above-described integrated unit of the present invention may be stored in a computer readable storage medium if it is implemented in the form of a software functional unit and sold or used as a standalone product.
  • Based on this understanding, the technical solutions of the embodiments of the present invention may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium and including a plurality of instructions for causing a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the methods described in the various embodiments of the present invention. The foregoing storage medium includes: a mobile storage device, a ROM, a RAM, a magnetic disk, an optical disc, or any other medium that can store program code.
  • In summary: the target object in the first state is presented in the terminal user interface; the first operation is triggered in the browsing page where the target object is located; the displacement motion direction currently produced by the browsing page is determined according to the parameter generated by the first operation, and a material displacement motion condition matching the displacement motion direction is selected; at least two materials obtained from the target object are acquired, and the dynamic presentation style of the target object is generated according to the at least two materials and the material displacement motion condition. With multiple materials and corresponding material displacement motion conditions, the dynamic presentation style of the target object finally presented on the terminal can be obtained, providing the basis for changing the target object from static to dynamic according to the first operation. Thereafter, the first operation is answered through an interactive response, switching the target object to the second state when it is presented according to the dynamic presentation style; the second state represents that the target object follows the first operation in the terminal user interface and is dynamically presented in a multi-level unsynchronized manner, yielding the final dynamic presentation form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

An information processing method, a terminal, and a computer storage medium. The method includes: presenting a target object in a first state in a terminal user interface (101); triggering a first operation in the browsing page where the target object is located, determining, according to a parameter generated by the first operation, the displacement motion direction currently produced by the browsing page, and selecting a material displacement motion condition matching the displacement motion direction (102); acquiring at least two materials obtained from the target object, and generating a dynamic presentation style of the target object according to the at least two materials and the material displacement motion condition (103); and switching the target object to a second state when it is presented according to the dynamic presentation style, the second state representing that the target object is dynamically presented in the terminal user interface in a multi-level unsynchronized manner following the first operation (104).

Description

Information Processing Method, Terminal, and Computer Storage Medium

Cross-Reference to Related Applications

This application is based on and claims priority to Chinese patent application No. 201610877820.8, filed on September 30, 2016, the entire contents of which are incorporated herein by reference.
Technical Field

The present invention relates to network interaction technologies, and in particular to an information processing method, a terminal, and a computer storage medium.

Background

With the development of Internet technology, sending, receiving, and presenting information are common means of information sharing between users. For example, in one application scenario, customized information is sent to a user; after receiving it, the user presents the information in a static form, such as a piece of text or a picture. On the one hand, this presentation form is rather monotonous; on the other hand, static presentation lacks interactivity, while the original intention, or a main purpose, of information sharing is to promote the sharing and dissemination of information through interaction. However, the related art provides no effective solution to this problem.
Summary

In view of this, embodiments of the present invention provide an information processing method, a terminal, and a computer storage medium, which at least solve the problems existing in the prior art.

The technical solutions of the embodiments of the present invention are implemented as follows:

An information processing method according to an embodiment of the present invention includes:

presenting a target object in a first state in a terminal user interface;

triggering a first operation in the browsing page where the target object is located, determining, according to a parameter generated by the first operation, the displacement motion direction currently produced by the browsing page, and selecting a material displacement motion condition matching the displacement motion direction;

acquiring at least two materials obtained from the target object, and generating a dynamic presentation style of the target object according to the at least two materials and the material displacement motion condition; and

switching to a second state when the target object is presented according to the dynamic presentation style, the second state representing that the target object is dynamically presented in the terminal user interface in a multi-level unsynchronized manner following the first operation.
A terminal according to an embodiment of the present invention includes:

a display unit, configured to present a target object in a first state in a terminal user interface;

a matching unit, configured to trigger a first operation in the browsing page where the target object is located, determine, according to a parameter generated by the first operation, the displacement motion direction currently produced by the browsing page, and select a material displacement motion condition matching the displacement motion direction;

a synthesizing unit, configured to acquire at least two materials obtained from the target object and generate a dynamic presentation style of the target object according to the at least two materials and the material displacement motion condition; and

a switching processing unit, configured to switch to a second state when the target object is presented according to the dynamic presentation style, the second state representing that the target object is dynamically presented in the terminal user interface in a multi-level unsynchronized manner following the first operation.

An embodiment of the present invention provides a computer storage medium storing a computer program for executing the above information processing method.

According to the embodiments of the present invention, a target object in a first state (static) is presented in a terminal user interface; a first operation is triggered in the browsing page where the target object is located; the displacement motion direction currently produced by the browsing page is determined according to a parameter generated by the first operation, and a material displacement motion condition matching the displacement motion direction is selected; at least two materials obtained from the target object are acquired, and a dynamic presentation style of the target object is generated according to the at least two materials and the material displacement motion condition. With multiple materials and corresponding material displacement motion conditions, the dynamic presentation style of the target object finally presented on the terminal can be obtained, providing the basis for changing the target object from static to dynamic according to the first operation. Thereafter, the first operation is answered through an interactive response, so that the target object is switched to a second state when presented according to the dynamic presentation style, the second state representing that the target object follows the first operation in the terminal user interface and is dynamically presented in a multi-level unsynchronized manner. The final dynamic presentation form of the information is thus obtained through interaction, promoting the sharing and dissemination of information.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of an optional hardware structure of a mobile terminal implementing embodiments of the present invention;

FIG. 2 is a schematic diagram of a communication system of the mobile terminal shown in FIG. 1;

FIG. 3 is a schematic diagram of the hardware entities of the parties performing information interaction in an embodiment of the present invention;

FIG. 4 is a schematic diagram of interaction between a terminal and a server according to Embodiment 1 of the present invention;

FIG. 5 is a schematic diagram of interaction between a terminal and a server according to Embodiment 2 of the present invention;

FIG. 6 is a schematic diagram of interaction between a terminal and a server according to Embodiment 3 of the present invention;

FIG. 7 is a schematic diagram of interaction between a terminal and a server according to Embodiment 4 of the present invention;

FIG. 8 is a schematic structural diagram of a system according to Embodiment 5 of the present invention;

FIGS. 9-17 are diagrams of material decomposition in multiple application scenarios and the corresponding final presentation effects obtained in response to user operations according to embodiments of the present invention.
Detailed Description

The implementation of the technical solutions is further described in detail below with reference to the accompanying drawings.

Mobile terminals implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the description of the embodiments of the present invention and have no specific meaning in themselves. Therefore, "module" and "component" may be used interchangeably.

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to those of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.

In addition, although the terms "first", "second", etc. are used herein multiple times to describe various elements (or various thresholds, applications, instructions, or operations), these elements (or thresholds, applications, instructions, or operations) should not be limited by these terms. These terms are only used to distinguish one element (or threshold, application, instruction, or operation) from another. For example, a first operation may be called a second operation, and a second operation may be called a first operation, without departing from the scope of the present invention; the first operation and the second operation are both operations, they are simply not the same operation.

The steps in the embodiments of the present invention are not necessarily processed in the described order; the steps may be selectively rearranged as required, steps may be deleted from an embodiment, or steps may be added to an embodiment. The step descriptions in the embodiments of the present invention are only optional order combinations and do not represent all possible order combinations of the steps; the step order in the embodiments should not be regarded as limiting the present invention.

The term "and/or" in the embodiments of the present invention refers to any and all possible combinations of one or more of the associated listed items. It should also be noted that, as used in this specification, "comprise/include" specifies the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or components, and/or groups thereof.

The intelligent terminal (e.g., mobile terminal) of the embodiments of the present invention may be implemented in various forms. For example, the mobile terminal described in the embodiments of the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable multimedia players (PMP), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will understand that, except for elements particularly intended for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed-type terminals.
图1为实现本发明各个实施例的移动终端一个可选的硬件结构示意图。
移动终端100可以包括通信单元110、音频/视频(A/V)输入单元120、用户输入单元130、匹配单元140、合成单元141、切换处理单元142、输出单元150、显示单元151、存储单元160、接口单元170、处理单元180和电源单元190等等。图1示出了具有各种组件的移动终端,但是应理解的是,并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端的元件。
通信单元110通常包括一个或多个组件,其允许移动终端100与无线通信系统或网络之间的无线电通信(如果将移动终端用固定终端代替,也可以通过有线方式进行电通信)。例如,通信单元具体为无线通信单元时可以包括广播接收单元111、移动通信单元112、无线互联网单元113、短程通信单元114和位置信息单元115中的至少一个,这些单元是可选的,根据不同需求可以增删。
广播接收单元111经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和/或地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器或者接收之前生成的广播信号和/或广播相关信息并且将其发送给终端的服务器。广播信号可以包括TV广播信号、无线电广播信号、数据广播信号等等。而且,广播信号可以进一步包括与TV或无线电广播信号组合的广播信号。广播相 关信息也可以经由移动通信网络提供,并且在该情况下,广播相关信息可以由移动通信单元112来接收。广播信号可以以各种形式存在,例如,其可以以数字多媒体广播(DMB,Digital Multimedia Broadcasting)的电子节目指南(EPG,Electronic Program Guide)、数字视频广播手持(DVB-H,Digital Video Broadcasting-Handheld)的电子服务指南(ESG,Electronic Service Guide)等等的形式而存在。广播接收单元111可以通过使用各种类型的广播系统接收信号广播。特别地,广播接收单元111可以通过使用诸如多媒体广播-地面(DMB-T,Digital Multimedia Broadcasting-Terrestrial)、数字多媒体广播-卫星(DMB-S,Digital Multimedia Broadcasting-Satellite)、数字视频广播手持(DVB-H),前向链路媒体(MediaFLO,Media Forward Link Only)的数据广播系统、地面数字广播综合服务(ISDB-T,Integrated Services Digital Broadcasting-Terrestrial)等等的数字广播系统接收数字广播。广播接收单元111可以被构造为适合提供广播信号的各种广播系统以及上述数字广播系统。经由广播接收单元111接收的广播信号和/或广播相关信息可以存储在存储器160(或者其它类型的存储介质)中。
移动通信单元112将无线电信号发送到基站(例如,接入点、节点B等等)、外部终端以及服务器中的至少一个和/或从其接收无线电信号。这样的无线电信号可以包括语音通话信号、视频通话信号、或者根据文本和/或多媒体消息发送和/或接收的各种类型的数据。
无线互联网单元113支持移动终端的无线互联网接入。该单元可以内部或外部地耦接到终端。该单元所涉及的无线互联网接入技术可以包括无线局域网络(Wi-Fi,WLAN,Wireless Local Area Networks)、无线宽带(Wibro)、全球微波互联接入(Wimax)、高速下行链路分组接入(HSDPA,High Speed Downlink Packet Access)等等。
短程通信单元114是用于支持短程通信的单元。短程通信技术的一些 示例包括蓝牙、射频识别(RFID,Radio Frequency Identification)、红外数据协会(IrDA,Infrared Data Association)、超宽带(UWB,Ultra Wideband)、紫蜂等等。
位置信息单元115是用于检查或获取移动终端的位置信息的单元。位置信息单元的典型示例是全球定位系统(GPS,Global Positioning System)。根据当前的技术,GPS单元115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法,从而根据经度、纬度和高度准确地计算三维当前位置信息。当前,用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外,GPS单元115能够通过实时地连续计算当前位置信息来计算速度信息。
A/V输入单元120用于接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风122,相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储单元160(或其它存储介质)中或者经由通信单元110进行发送,可以根据移动终端的构造提供两个或更多相机121。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由移动通信单元112发送到移动通信基站的格式输出。麦克风122可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息,并 且可以包括键盘、鼠标、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时,可以形成触摸屏。
匹配单元140,用于在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数判断所述浏览页面当前产生的位移运动方向,选择与所述位移运动方向匹配的素材位移运动条件;合成单元141,用于获取由所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式;切换处理单元142,用于在所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。通过显示单元151,用于在终端用户界面中呈现处于第一状态的目标对象,在经过匹配单元140、合成单元141、切换处理单元142的一系列处理后,再通过显示单元151呈现处于第二状态的目标对象,此时,该目标对象以多层次不同步的方式在终端用户界面中进行信息的动态呈现。
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别单元的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别单元可以是存储用于验证用户使用移动终端100的各种信息的芯片,并且可以包括用户识别单元(UIM,User Identity Module)、客户识别单元(SIM,Subscriber Identity Module)、通用客户识别单元(USIM,Universal Subscriber Identity Module)等等。另外,具有识别单元的装置(下面称为“识别装置”)可以采取智能卡的形式,因此,识别装置可以经由端口或其它连接装置与移动终端100连接。接口单元170可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以用于在移动终端和外部装置之间传输数据。
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端是否准确地安装在底座上的信号。输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出单元152等等。
显示单元151可以显示在移动终端100中处理的信息。例如,移动终端100可以显示相关用户界面(UI,User Interface)或图形用户界面(GUI,Graphical User Interface)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD,Liquid Crystal Display)、薄膜晶体管LCD(TFT-LCD,Thin Film Transistor-LCD)、有机发光二极管(OLED,Organic Light-Emitting Diode)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为透明有机发光二极管(TOLED)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力以及触 摸输入位置和触摸输入面积。
音频输出单元152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将通信单元110接收的或者在存储器160中存储的音频数据转换音频信号并且输出为声音。而且,音频输出单元152可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元152可以包括扬声器、蜂鸣器等等。
存储单元160可以存储由处理单元180执行的处理和控制操作的软件程序等等,或者可以暂时地存储已经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储单元160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。
存储单元160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM,Random Access Memory)、静态随机访问存储器(SRAM,Static Random Access Memory)、只读存储器(ROM,Read Only Memory)、电可擦除可编程只读存储器(EEPROM,Electrically Erasable Programmable Read Only Memory)、可编程只读存储器(PROM,Programmable Read Only Memory)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储单元160的存储功能的网络存储装置协作。
处理单元180通常控制移动终端的总体操作。例如,处理单元180执行与语音通话、数据通信、视频通话等等相关的控制和处理。又如,处理单元180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。
电源单元190在处理单元180的控制下接收外部电力或内部电力并且 提供操作各元件和组件所需的适当的电力。
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC,Application Specific Integrated Circuit)、数字信号处理器(DSP,Digital Signal Processor)、数字信号处理装置(DSPD,Digital Signal Processing Device)、可编程逻辑装置(PLD,Programmable Logic Device)、现场可编程门阵列(FPGA,Field Programmable Gate Array)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在处理单元180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件单元来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储单元160中并且由处理单元180执行。
至此,已经按照其功能描述了移动终端。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种类型的移动终端中的滑动型移动终端作为示例。因此,本发明能够应用于任何类型的移动终端,并且不限于滑动型移动终端。
如图1中所示的移动终端100可以被构造为利用经由帧或分组发送数据的诸如有线和无线通信系统以及基于卫星的通信系统来操作。
现在将参考图2描述其中根据本发明实施例的移动终端能够操作的通信系统。
这样的通信系统可以使用不同的空中接口和/或物理层。例如,由通信系统使用的空中接口包括例如频分多址(FDMA,Frequency Division Multiple Access)、时分多址(TDMA,Time Division Multiple Access)、码分多址(CDMA,Code Division Multiple Access)和通用移动通信系统 (UMTS,Universal Mobile Telecommunications System)(特别地,长期演进(LTE,Long Term Evolution))、全球移动通信系统(GSM)等等。作为非限制性示例,下面的描述涉及CDMA通信系统,但是这样的教导同样适用于其它类型的系统。
参考图2,CDMA无线通信系统可以包括多个移动终端100、多个基站(BS,Base Station)270、基站控制器(BSC,Base Station Controller)275和移动交换中心(MSC,Mobile Switching Center)280。MSC280被构造为与公共电话交换网络(PSTN,Public Switched Telephone Network)290形成接口。MSC280还被构造为与可以经由回程线路耦接到基站270的BSC275形成接口。回程线路可以根据若干已知的接口中的任一种来构造,所述接口包括例如E1/T1、ATM、IP、PPP、帧中继、HDSL、ADSL或xDSL。将理解的是,如图2中所示的系统可以包括多个BSC275。
每个BS 270可以服务一个或多个分区(或区域),由多向天线或指向特定方向的天线覆盖的每个分区放射状地远离BS 270。或者,每个分区可以由用于分集接收的两个或更多天线覆盖。每个BS 270可以被构造为支持多个频率分配,并且每个频率分配具有特定频谱(例如,1.25MHz,5MHz等等)。
分区与频率分配的交叉可以被称为CDMA信道。BS 270也可以被称为基站收发器子系统(BTS,Base Transceiver Station)或者其它等效术语。在这样的情况下,术语“基站”可以用于笼统地表示单个BSC275和至少一个BS 270。基站也可以被称为“蜂窝站”。或者,特定BS 270的各分区可以被称为多个蜂窝站。
如图2中所示,广播发射器(BT,Broadcast Transmitter)295将广播信号发送给在系统内操作的移动终端100。如图1中所示的广播接收单元111被设置在移动终端100处以接收由BT295发送的广播信号。在图2中,示 出了几个卫星300,例如可以采用全球定位系统(GPS)卫星300。卫星300帮助定位多个移动终端100中的至少一个。
在图2中,描绘了多个卫星300,但是理解的是,可以利用任何数目的卫星获得有用的定位信息。如图1中所示的位置信息单元115通常被构造为与卫星300配合以获得想要的定位信息。替代GPS跟踪技术或者在GPS跟踪技术之外,可以使用可以跟踪移动终端的位置的其它技术。另外,至少一个GPS卫星300可以选择性地或者额外地处理卫星DMB传输。
作为无线通信系统的一个典型操作,BS 270接收来自各种移动终端100的反向链路信号。移动终端100通常参与通话、消息收发和其它类型的通信。特定基站270接收的每个反向链路信号被在特定BS 270内进行处理。获得的数据被转发给相关的BSC275。BSC提供通话资源分配和包括BS 270之间的软切换过程的协调的移动管理功能。BSC275还将接收到的数据路由到MSC280,其提供用于与PSTN290形成接口的额外的路由服务。类似地,PSTN290与MSC280形成接口,MSC与BSC275形成接口,并且BSC275相应地控制BS 270以将正向链路信号发送到移动终端100。
移动终端中通信单元110的移动通信单元112基于移动终端内置的接入移动通信网络(如2G/3G/4G等移动通信网络)的必要数据(包括用户识别信息和鉴权信息)接入移动通信网络为移动终端用户的网页浏览、网络多媒体播放等业务传输移动通信数据(包括上行的移动通信数据和下行的移动通信数据)。
通信单元110的无线互联网单元113通过运行无线热点的相关协议功能而实现无线热点的功能,无线热点支持多个移动终端(移动终端之外的任意移动终端)接入,通过复用移动通信单元112与移动通信网络之间的移动通信连接为移动终端用户的网页浏览、网络多媒体播放等业务传输移动通信数据(包括上行的移动通信数据和下行的移动通信数据),由于移动 终端实质上是复用移动终端与通信网络之间的移动通信连接传输移动通信数据的,因此移动终端消耗的移动通信数据的流量由通信网络侧的计费实体计入移动终端的通信资费,从而消耗移动终端签约使用的通信资费中包括的移动通信数据的数据流量。
图3为本发明实施例中进行信息交互的各方硬件实体的示意图,图3中包括:服务器11、终端设备21-24,提供原始素材或者提供最终要投放到终端设备21-24上目标对象的广告主终端31;终端设备21-24通过有线网络或者无线网络与服务器11进行信息交互,服务器11与广告主终端31相连,以便获取原始素材或者提供最终要投放到终端设备21-24上目标对象。其中,终端设备包括手机、台式机、PC机、一体机等类型。采用本发明实施例,服务器作为数据源,将从广告主终端31得到的原始数据(如原始素材或者提供最终要投放到终端设备上的目标对象)以及预设策略提供给终端设备使用,以便终端设备在互动的第一操作触发后,根据第一操作引起的位移运动来针对性的选择预设策略中符合当前操作场景的素材位移运动条件,根据该原始数据和素材位移运动条件得到最终能呈现在终端用户界面上的动态的目标对象,以替代之前在触发第一操作前处于静态的目标对象。这里,根据该原始数据和素材位移运动条件需生成目标对象的动态呈现样式,之后基于该动态呈现样式最终得到呈现在终端用户界面上的动态的目标对象。当然,生成目标对象的动态呈现样式可以是终端侧的处理,也可以是在服务器侧预处理后,直接提供给终端设备使用的。在终端设备侧,以终端设备23为例,对终端侧是如何生成目标对象的动态呈现样式及最终呈现动态的目标对象的处理逻辑进行描述。该处理逻辑10包括:S1、在终端用户界面中呈现处于第一状态的目标对象;S2、在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数判断所述浏览页面当前产生的位移运动方向,选择与所述位移运动方向匹配的素材位移运动条件;S3、获取由所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式;S4、所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。
上述图3的例子只是实现本发明实施例的一个系统架构实例,本发明实施例并不限于上述图3所述的系统结构,基于上述图1所述的移动终端100硬件结构、图2所述的通信系统及图3所述的系统架构,提出本发明方法各个实施例。
本文中,所述素材位移运动条件,作为将多个素材基于位移合成为最终能进行动态呈现的图像的一种策略,可以在终端配置该策略,根据用户与终端的交互(如第一操作)来提取该策略以进行处理,也可以把该策略配置在服务器,收到用户与终端交互(如第一操作)而生成的请求后,服务器对请求进行响应,在响应时把该策略给到终端使用。
本发明实施例的一种信息处理方法,如图4所示,所述方法包括:
步骤101、在终端用户界面中呈现处于第一状态的目标对象。
这里,第一状态可以是静止状态,作为在触发第一操作之前的一个初始状态存在,第一状态(静态的),即首先用户看到静止的图片。触发第一操作后,用户看到的静止的图片会呈现多层不同步的透视效果,即由第一状态(静态的)变为第二状态(动态的)。也就是说,该最初静止的图片,基于该第一操作,最终得到动态的呈现结果,该呈现效果是以多层次的方式进行视角位移形成视差所形成的。其中,该动态呈现包括:位移的变化,方向的变化,角度的变化等;动态呈现还可以包括:在呈现状态上如颜色透明、颜色半透明、颜色明暗的渐变等;也可以与上述位移的变化,方向的变化,角度的变化等进行结合呈现。
步骤102、在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数判断所述浏览页面当前产生的位移运动方向,选择与所述位移运动方向匹配的素材位移运动条件。
这里,在所述目标对象所在的浏览页面中触发第一操作后,接收由第一操作导致的位移所对应产生的所述参数,将所述参数作为基于所述第一操作产生的参数。
这里,第一操作包括:手势滑动操作或鼠标滚动操作。一个示例为:浏览该目标对象所在浏览页面时触发第一操作,第一操作会导致浏览页面(或称目标对象)向上移动或者向下移动。当然,也可以是如上述步骤中提及的除了位移之外的变化,如方向的变化,角度的变化,在呈现状态上如颜色透明、颜色半透明、颜色明暗的渐变等。其中,位移变化不限于向上移动或者向下移动,还包括向左或向右移动。
这里,比如,第一操作引起了位移运动,则根据位移运动来针对性的选择预设策略中符合当前操作场景的素材位移运动条件,以便后续根据该原始数据和素材位移运动条件得到最终能呈现在终端用户界面上的动态的目标对象。
步骤103、获取由所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式。
这里,终端设备在互动的第一操作触发后,根据第一操作引起的位移运动来针对性的选择预设策略中符合当前操作场景的素材位移运动条件,以便后续根据该原始数据和素材位移运动条件得到最终能呈现在终端用户界面上的动态的目标对象,以替代之前在触发第一操作前处于静态的目标对象。首先需要根据该原始数据和素材位移运动条件生成目标对象的动态呈现样式,之后基于该动态呈现样式最终得到呈现在终端用户界面上的动态的目标对象。当然,生成目标对象的动态呈现样式可以是终端侧的处理,也可以是在服务器侧预处理后,直接提供给终端设备使用的。其中,原始数据可以为预先分解所述目标对象得到的多个素材,多个素材是最终能形成多层次不同步呈现方式的前提条件。
步骤104、所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。
这里,第一状态(静态的),即首先用户看到静止的图片。触发第一操作后,用户看到的静止的图片会呈现多层不同步的透视效果,即由第一状态(静态的)变为第二状态(动态的),具体的,由于本申请的状态变化,由静态到动态,是基于互动的第一操作引起的位移运动导致的,因此,第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。该动态呈现是多层不同步的透视效果。在浏览页面中最初是静止的图片,基于该第一操作(如手势滑动或滚动鼠标等,只要能造成页面上的目标对象出现位移的操作都可行),最终在浏览页面中得到动态的图片呈现结果,该呈现效果是以多层次的方式进行视角位移形成视差所形成的。
采用本发明实施例,采用多个素材,动态的机制,得到多层不同步的动态呈现效果,具体的,可以将多个素材按照动态结合的策略或称算法进行合成。其中,一个示例为:该合成的先决条件是根据屏幕中心和广告区域中心(或称目标对象所在区域的中心)的比对,来进行合成。首先终端设备感应作用于该设备上的第一操作,再基于上述合成的先决条件和具体策略,为用户呈现对应生成的动态图片,达到图片的多层不同步的透视效果。即:1)感知用户操作;2)根据用户操作来生成和最终呈现动态图片;其中,生成动态图片是根据多个素材和策略所生成的。
有多个素材和对应的素材位移运动条件,可以得到最终呈现在终端上 目标对象的动态呈现样式,从而,具备了将目标对象根据第一操作由静态变化至动态的基础,之后,通过交互响应,对第一操作进行响应,从而将所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现,基于互动操作的交互得到最终信息动态的呈现形式,促进了信息的分享和传播。
本发明实施例的一种信息处理方法,如图5所示,所述方法包括:
步骤201、在终端用户界面中呈现处于第一状态的目标对象。
这里,第一状态可以是静止状态,作为在触发第一操作之前的一个初始状态存在,第一状态(静态的),即首先用户看到静止的图片。触发第一操作后,用户看到的静止的图片会呈现多层不同步的透视效果,即由第一状态(静态的)变为第二状态(动态的)。也就是说,该最初静止的图片,基于该第一操作,最终得到动态的呈现结果,该呈现效果是以多层次的方式进行视角位移形成视差所形成的。其中,该动态呈现包括:位移的变化,方向的变化,角度的变化等;动态呈现还可以包括:在呈现状态上如颜色透明、颜色半透明、颜色明暗的渐变等;也可以与上述位移的变化,方向的变化,角度的变化等进行结合呈现。
步骤202、根据所述第一操作产生的参数,判断所述浏览页面当前产生的位移运动方向。
步骤203、根据所述位移运动方向匹配对应的素材位移运动条件。
通过步骤202-203,在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数从预设策略中匹配对应的素材位移运动条件,素材位移运动条件根据第一操作带来的向上、下、左、右位移,或者合成某个目标对象所选取素材的不同,是有所不同的,则根据该素材位移运动条件得到的最终信息动态呈现效果也是多样化的。
在一个实际应用中,浏览该目标对象所在浏览页面时触发第一操作,第一操作会导致浏览页面(或称目标对象)向上移动或者向下移动。其中,第一操作,不限定为进行页面浏览时触发的手势滑动操作,如手势上滑或手势下滑的操作,也可以是在进行页面浏览时触发的鼠标滚动操作,如向上滚动鼠标或者向下滚动鼠标的操作。不同的第一操作带来浏览页面(或称目标对象)向上移动或向下移动,需针对性的采取预设策略中的不同素材位移运动条件。该素材位移运动条件可以根据屏幕中心和广告区域中心(或称目标对象所在区域的中心)的比对,来进行最终的合成。如,1)载入服务器端分层存储的素材图片,将其加入队列a,获取屏幕显示区域正中的页面Y坐标值赋值为b,获取广告图中心位置所处的页面Y坐标值赋值为c,对比判断b值与c值的大小。2)如果b大于c则说明广告图处于显示屏的偏上位置;如果b小于c则说明广告图处于显示屏的偏下位置;如果b等于c则说明广告图位于屏幕正中。对于这几个不同的场景,采用的素材位移运动条件是不同的,最终根据该素材位移运动条件得到的最终信息动态呈现效果也是不同的,具备多样性。3)根据不同场景选定了对应的素材位移运动条件后,根据b、c的差值大小,对队列a进行循环取值,拼合组成广告图,拼合方式根据b、c差值的大小作为基数分别对队列a中素材做正向或反向的逐层Y坐标调整,差值越大则逐层位移距离越大。当用户滑动或滚动页面浏览时所触发的第一操作导致b值发生变化,则重复执行上面的三步流程。其中,如何根据不同场景选定对应的素材位移运动条件,在后续实施例中进行描述。
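上述1)至3)的流程可以用如下示意性代码来理解(仅为便于理解的草图:队列a以列表表示,逐层位移与b、c差值成正比的线性计算方式及系数k均为本文描述之外的假设,并非实际实现):

```python
def layer_offsets(queue_a, b, c, k=0.1):
    """根据b(屏幕显示区域正中的页面Y坐标)与c(广告图中心的页面Y坐标)
    的差值,为队列a中的分层素材计算逐层Y坐标调整量。

    queue_a: 按优先级从高到低排列的素材标识列表(即文中的队列a);
    返回与queue_a等长的偏移量列表:差值越大,逐层位移距离越大。
    """
    diff = b - c  # b > c: 广告图偏上,做正向调整; b < c: 偏下,做反向调整
    n = len(queue_a)
    # 优先级越高(下标越小)的层,位移越大;权重取法为假设的线性方案
    return [diff * k * (n - i) / n for i in range(n)]
```

当用户滑动页面使b值变化时,重新调用该函数即可得到新的逐层偏移,对应文中"重复执行上面的三步流程"。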
这里,在所述目标对象所在的浏览页面中触发第一操作后,接收由第一操作导致的位移所对应产生的所述参数,将所述参数作为基于所述第一操作产生的参数。
这里,第一操作包括:手势滑动操作或鼠标滚动操作。一个示例为: 浏览该目标对象所在浏览页面时触发第一操作,第一操作会导致浏览页面(或称目标对象)向上移动或者向下移动。当然,也可以是如上述步骤中提及的除了位移之外的变化,如方向的变化,角度的变化,在呈现状态上如颜色透明、颜色半透明、颜色明暗的渐变等。其中,位移变化不限于向上移动或者向下移动,还包括向左或向右移动。
这里,比如,第一操作引起了位移运动,则根据位移运动来针对性的选择预设策略中符合当前操作场景的素材位移运动条件,以便后续根据该原始数据和素材位移运动条件得到最终能呈现在终端用户界面上的动态的目标对象。
步骤204、获取由所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式。
这里,终端设备在互动的第一操作触发后,根据第一操作引起的位移运动来针对性的选择预设策略中符合当前操作场景的素材位移运动条件,以便后续根据该原始数据和素材位移运动条件得到最终能呈现在终端用户界面上的动态的目标对象,以替代之前在触发第一操作前处于静态的目标对象。首先需要根据该原始数据和素材位移运动条件生成目标对象的动态呈现样式,之后基于该动态呈现样式最终得到呈现在终端用户界面上的动态的目标对象。当然,生成目标对象的动态呈现样式可以是终端侧的处理,也可以是在服务器侧预处理后,直接提供给终端设备使用的。其中,原始数据可以为预先分解所述目标对象得到的多个素材,多个素材是最终能形成多层次不同步呈现方式的前提条件。
步骤205、所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。
这里,第一状态(静态的),即首先用户看到静止的图片。触发第一操作后,用户看到的静止的图片会呈现多层不同步的透视效果,即由第一状态(静态的)变为第二状态(动态的),具体的,由于本申请的状态变化,由静态到动态,是基于互动的第一操作引起的位移运动导致的,因此,第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。该动态呈现是多层不同步的透视效果。在浏览页面中最初是静止的图片,基于该第一操作(如手势滑动或滚动鼠标等,只要能造成页面上的目标对象出现位移的操作都可行),最终在浏览页面中得到动态的图片呈现结果,该呈现效果是以多层次的方式进行视角位移形成视差所形成的。
采用本发明实施例,采用多个素材,动态的机制,得到多层不同步的动态呈现效果,具体的,可以将多个素材按照动态结合的策略或称算法进行合成。其中,一个示例为:该合成的先决条件是根据屏幕中心和广告区域中心(或称目标对象所在区域的中心)的比对,来进行合成。首先终端设备感应作用于该设备上的第一操作,再基于上述合成的先决条件和具体策略,为用户呈现对应生成的动态图片,达到图片的多层不同步的透视效果。即:1)感知用户操作;2)根据用户操作来生成和最终呈现动态图片;其中,生成动态图片是根据多个素材和策略所生成的。
有多个素材和对应的素材位移运动条件,可以得到最终呈现在终端上目标对象的动态呈现样式,从而,具备了将目标对象根据第一操作由静态变化至动态的基础,之后,通过交互响应,对第一操作进行响应,从而将所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现,基于互动操作的交互得到最终信息动态的呈现形式,促进了信息的分享和传播。
基于上述实施例,本发明实施例的一种信息处理方法中,针对根据不 同场景选定对应的素材位移运动条件及具体合成动态的呈现特效,如图6所示,包括:
步骤301、根据所述第一操作产生的参数,判断所述浏览页面当前产生的位移运动方向。
步骤302、选择与所述位移运动方向匹配的素材位移运动条件时,若所述位移运动方向为向上移动,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合,如果是,则执行步骤303;否则,执行步骤304。这里,第一操作是向上移动,该动态的呈现特效是目标对象向上跑。反之,第一操作是向下移动,该动态的呈现特效是目标对象向下跑。需要指出的是,除了向上,向下,还包括向左,向右移动或者呈现一定角度的位移,都在本发明的保护范围中,相应的,采取同一个调整阈值或者不同层级的调整阈值进行对应的素材位移运动条件。比如,第一操作是向左移动,当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,将所述至少两个素材按照同一个调整阈值整体向左移动的素材位移运动条件。
步骤303、当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向上移动的第一素材位移运动条件。
步骤304、当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,选择将所述至少两个素材按照不同调整阈值逐层向上移动的第二素材位移运动条件。
采用本发明实施例,终端用户界面的中心(屏幕显示区域)与所述目标对象所在区域的中心重合时,简单来说,是选择将所述至少两个素材直接叠加并整体向上移动的素材位移运动条件。终端用户界面的中心(屏幕显示区域)与所述目标对象所在区域的中心不重合时,基于所述至少两个素材所属层的不同优先级进行阈值不同的逐层向上移动的素材位移运动条件,如,优先级越高的层,在合成时其所在的位置为越低,对优先级越高的层的调整阈值越大;优先级越低的层,在合成时其所在的位置为越高,对优先级越低的层的调整阈值越小或者为零(即,对优先级越低的层,可以不进行调节)。当然,不限于是对各级层的位移是按照不同的调整阈值进行调整,还可以是对各级层的透明度或颜色等其他显示效果按照不同的调整阈值进行调整。一个例子是:存在3个素材,分别以素材a1、素材a2、素材a3进行标识。其中,素材a1为优先级最高的层,素材a3为优先级最低的层,那么,在后续合成叠加时各个素材的层级排列顺序为:素材a3、素材a2、素材a1,即:优先级最低的层“素材a3”,在合成时其所在的位置为最高。在调整阈值的选择上,素材a1为最大值,素材a2为次大值,素材a3为最小值或者为零(对素材a3不进行调整),可以选择素材a1的调整阈值为2厘米,素材a2的调整阈值为1厘米,素材a3的调整阈值为0.5厘米或者0厘米,这里只是示意性举例的参考,并不限定具体的数值。按照素材a1-a3对应的不同调整阈值逐层向上移动。
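该例中素材a1-a3按不同调整阈值逐层移动的过程,可以用如下草图表示(调整阈值取文中示意数值,坐标系与单位均为假设,并非实际实现):

```python
# 文中示意:优先级越高的层调整阈值越大(单位:厘米,仅为示例参考)
THRESHOLDS = {"a1": 2.0, "a2": 1.0, "a3": 0.5}

def shift_layers(positions, thresholds, direction):
    """按各层素材对应的调整阈值逐层移动Y坐标。

    positions: {素材标识: 当前Y坐标}; direction为+1(向下,Y增大)或
    -1(向上,Y减小),方向与坐标系的对应关系为本文之外的假设约定。
    """
    return {name: y + direction * thresholds[name]
            for name, y in positions.items()}
```

例如,各层从同一Y坐标出发向上移动后,a1层偏移最大、a3层最小,从而形成逐层不同步的视差效果。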
基于上述实施例,本发明实施例的一种信息处理方法中,针对根据不同场景选定对应的素材位移运动条件及具体合成动态的呈现特效,如图7所示,包括:
步骤401、根据所述第一操作产生的参数,判断所述浏览页面当前产生的位移运动方向。
步骤402、选择与所述位移运动方向匹配的素材位移运动条件时,若所述位移运动方向为向下移动,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合,如果是,则执行步骤403;否则,执行步骤404。这里,第一操作是向下移动,该动态的呈现特效是目标对象向下跑。反之,第一操作是向上移动,该动态的呈现特效是目标对象向上跑。需要 指出的是,除了向上,向下,还包括向左,向右移动或者呈现一定角度的位移,都在本发明的保护范围中,相应的,采取同一个调整阈值或者不同层级的调整阈值进行对应的素材位移运动条件。比如,第一操作是向左移动,当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,将所述至少两个素材按照同一个调整阈值整体向左移动的素材位移运动条件。
步骤403、当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向下移动的第三素材位移运动条件。
步骤404、当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,选择将所述至少两个素材按照不同调整阈值逐层向下移动的第四素材位移运动条件。
采用本发明实施例,终端用户界面的中心(屏幕显示区域)与所述目标对象所在区域的中心重合时,简单来说,是选择将所述至少两个素材直接叠加并整体向下移动的素材位移运动条件。终端用户界面的中心(屏幕显示区域)与所述目标对象所在区域的中心不重合时,基于所述至少两个素材所属层的不同优先级进行阈值不同的逐层向下移动的素材位移运动条件,如,优先级越高的层,在合成时其所在的位置为越低,对优先级越高的层的调整阈值越大;优先级越低的层,在合成时其所在的位置为越高,对优先级越低的层的调整阈值越小或者为零(即,对优先级越低的层,可以不进行调节)。当然,不限于是对各级层的位移是按照不同的调整阈值进行调整,还可以是对各级层的透明度或颜色等其他显示效果按照不同的调整阈值进行调整。一个例子是:存在3个素材,分别以素材b1、素材b2、素材b3进行标识。其中,素材b1为优先级最高的层,素材b3为优先级最低的层,那么,在后续合成叠加时各个素材的层级排列顺序为:素材b3、素材b2、素材b1,即:优先级最低的层“素材b3”,在合成时其所在的位置为最高。在调整阈值的选择上,素材b1为最大值,素材b2为次大值,素材b3为最小值或者为零(对素材b3不进行调整),可以选择素材b1的调整阈值为3厘米,素材b2的调整阈值为1.5厘米,素材b3的调整阈值为1厘米或者0厘米,这里只是示意性举例的参考,并不限定具体的数值,按照素材b1-b3对应的不同调整阈值逐层向下移动。
在本发明实施例一实施方式中,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式,包括:将所述至少两个素材按照素材优先级进行排列,得到排列结果;当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,获取用于标识所述终端用户界面中心的第一坐标值,获取用于标识所述目标对象所在区域中心的第二坐标值;将所述第一坐标值和所述第二坐标值间的差值确定为调整基数,按照所述调整基数对所述排列结果中依优先级排列的至少两个素材进行正向位移或反向的逐层坐标值调整,逐层坐标值的调整所采用的调整阈值根据所述调整基数生成,针对所述至少两个素材所属的不同层采取同一个调整阈值或不同的调整阈值。其中,正向:对位移做Y坐标方向上的加法运算。反向:对位移做Y坐标方向上的减法运算。一个实际应用中,向上的位移,可以带来目标对象向上的动态效果;向上的位移,也可以带来目标对象向下的动态效果。根据不同的动态效果来选用上述不同的运算方法(正向或反向)。
在本发明实施例一实施方式中,所述方法还包括:
1)如果所述第一坐标值大于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏上位置,进行所述正向位移的逐层坐标值调整。2)如果所述第一坐标值小于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏下位置,进行所述反向的逐层坐标值调整。
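上述1)、2)的判断逻辑可以用如下草图表示(返回值约定+1/-1/0仅为示意性约定,并非实际实现):

```python
def adjust_direction(first_coord, second_coord):
    """根据第一坐标值(终端用户界面中心)与第二坐标值(目标对象所在
    区域中心)的大小关系,选择逐层坐标值调整的方向。

    正向:对位移做Y坐标方向上的加法运算;反向:做减法运算。
    """
    if first_coord > second_coord:
        return 1    # 目标对象位于界面偏上位置 -> 正向位移的逐层调整
    if first_coord < second_coord:
        return -1   # 目标对象位于界面偏下位置 -> 反向的逐层调整
    return 0        # 两中心重合 -> 按同一个调整阈值整体移动
```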
在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数从预设策略中匹配对应的素材位移运动条件的具体实现及获取预先分解所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式等各种变化,参见上述实施例中的描述。在本发明实施例一实施方式中,所述方法还包括:获取预设的新增素材(预埋的彩蛋),所述新增素材不同于预先分解所述目标对象得到的至少两个素材;根据所述新增素材、所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式,使所述目标对象跟随所述第一操作在终端用户界面显示或者隐藏所述新增素材。
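预埋彩蛋(新增素材)随第一操作显示或隐藏的逻辑,可以用如下草图理解(其中累计位移量的度量方式与显露阈值均为假设参数):

```python
def egg_visible(scroll_up_offset, reveal_threshold=1.5):
    """判断预埋的新增素材(彩蛋)当前是否显露。

    scroll_up_offset: 第一操作带来的累计向上位移量;位移超过假设的
    reveal_threshold 时显示彩蛋,反向滑动使位移回落则重新隐藏。
    """
    return scroll_up_offset >= reveal_threshold
```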
综上各个实施例,根据第一操作的速度,作用压力值或其变化,第一操作的作用距离等,目标对象会跟随第一操作的上述具体操作方式作出与速度、作用压力及作用距离对应的呈现效果,即:随着第一操作带来的变化呈现不同步的轨迹运动反映效果。因为,第一操作的速度,作用压力值或其变化,第一操作的作用距离等在用户的每一次操作中都有所不同,精准捕捉这种不同,来提高交互精度和交互效果,达到与之对应的不同步的轨迹运动反映效果。
本发明实施例的一种信息处理系统,包括终端41、服务器51和提供原始数据的广告主终端61等,如图8所示,终端41包括:显示单元411,用于在终端用户界面中呈现处于第一状态的目标对象;匹配单元412,用于在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数判断所述浏览页面当前产生的位移运动方向,选择与所述位移运动方向匹配的素材位移运动条件;合成单元413,用于获取由所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式;切换处理单元414,用于在所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所 述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。
这里,第一状态可以是静止状态,作为在触发第一操作之前的一个初始状态存在,第一状态(静态的),即首先用户看到静止的图片。触发第一操作后,用户看到的静止的图片会呈现多层不同步的透视效果,即由第一状态(静态的)变为第二状态(动态的)。也就是说,该最初静止的图片,基于该第一操作,最终得到动态的呈现结果,该呈现效果是以多层次的方式进行视角位移形成视差所形成的。其中,该动态呈现包括:位移的变化,方向的变化,角度的变化等;动态呈现还可以包括:在呈现状态上如颜色透明、颜色半透明、颜色明暗的渐变等;也可以与上述位移的变化,方向的变化,角度的变化等进行结合呈现。
这里,第一操作包括:手势滑动操作或鼠标滚动操作。一个示例为:浏览该目标对象所在浏览页面时触发第一操作,第一操作会导致浏览页面(或称目标对象)向上移动或者向下移动。当然,也可以是如上述步骤中提及的除了位移之外的变化,如方向的变化,角度的变化,在呈现状态上如颜色透明、颜色半透明、颜色明暗的渐变等。其中,位移变化不限于向上移动或者向下移动,还包括向左或向右移动。比如,第一操作引起了位移运动,则根据位移运动来针对性的选择预设策略中符合当前操作场景的素材位移运动条件,以便后续根据该原始数据和素材位移运动条件得到最终能呈现在终端用户界面上的动态的目标对象。
终端设备在互动的第一操作触发后,根据第一操作引起的位移运动来针对性的选择预设策略中符合当前操作场景的素材位移运动条件后,以便后续根据该原始数据和素材位移运动条件得到最终能呈现在终端用户界面上的动态的目标对象,以替代之前在出发第一操作前处于静态的目标对象。首先需要根据该原始数据和素材位移运动条件需生成目标对象的动态呈现 样式,之后基于该动态呈现样式最终得到呈现在终端用户界面上的动态的目标对象。当然,生成目标对象的动态呈现样式可以是终端侧的处理,也可以是在服务器侧预处理后,直接提供给终端设备使用的。其中,原始数据可以为预先分解所述目标对象得到的多个素材,多个素材是最终能形成多层次不同步呈现方式的前提条件。
采用本发明实施例,采用多个素材,动态的机制,得到多层不同步的动态呈现效果,具体的,可以将多个素材按照动态结合的策略或称算法进行合成。首先,用户看到静止的图片,即目标对象为第一状态(静态的)。触发第一操作后,用户看到的静止的图片会呈现多层不同步的透视效果,即目标对象由第一状态(静态的)变为第二状态(动态的),具体的,由于本申请的状态变化,由静态到动态,是基于互动的第一操作引起的位移运动导致的,因此,第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。该动态呈现是多层不同步的透视效果。在浏览页面中最初是静止的图片,基于该第一操作(如手势滑动或滚动鼠标等,只要能造成页面上的目标对象出现位移的操作都可行),最终在浏览页面中得到动态的图片呈现结果,该呈现效果是以多层次的方式进行视角位移形成视差所形成的。
其中,一个示例为:该合成的先决条件是根据屏幕中心和广告区域中心(或称目标对象所在区域的中心)的比对,来进行合成。首先终端设备感应作用于该设备上的第一操作,再基于上述合成的先决条件和具体策略,为用户呈现对应生成的动态图片,达到图片的多层不同步的透视效果。即:1)感知用户操作;2)根据用户操作来生成和最终呈现动态图片;其中,生成动态图片是根据多个素材和策略所生成的。
有多个素材和对应的素材位移运动条件,可以得到最终呈现在终端上目标对象的动态呈现样式,从而,具备了将目标对象根据第一操作由静态变化至动态的基础,之后,通过交互响应,对第一操作进行响应,从而将所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现,基于互动操作的交互得到最终信息动态的呈现形式,促进了信息的分享和传播。
在本发明实施例一实施方式中,所述匹配单元,进一步包括:位移判断子单元,用于根据所述第一操作产生的参数,判断所述浏览页面当前产生的位移运动方向;选择子单元,用于根据所述位移运动方向从预设策略中匹配对应的素材位移运动条件。
在本发明实施例一实施方式中,所述第一操作包括:手势滑动操作或鼠标滚动操作。
在本发明实施例一实施方式中,所述位移判断子单元,进一步用于:所述位移运动方向为向上移动时,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合;所述选择子单元,进一步用于:当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向上移动的第一素材位移运动条件;当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,选择将所述至少两个素材按照不同调整阈值逐层向上移动的第二素材位移运动条件。
在本发明实施例一实施方式中,所述位移判断子单元,进一步用于:所述位移运动方向为向下移动时,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合;所述选择子单元,进一步用于:当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向下移动的第三素材位移运动条件;当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时, 选择将所述至少两个素材按照不同调整阈值逐层向下移动的第四素材位移运动条件。
在本发明实施例一实施方式中,所述合成单元,进一步用于:将所述至少两个素材按照素材优先级进行排列,得到排列结果;当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,获取用于标识所述终端用户界面中心的第一坐标值,获取用于标识所述目标对象所在区域中心的第二坐标值;将所述第一坐标值和所述第二坐标值间的差值确定为调整基数,按照所述调整基数对所述排列结果中依优先级排列的至少两个素材进行正向位移或反向的逐层坐标值调整,逐层坐标值的调整所采用的调整阈值根据所述调整基数生成,针对所述至少两个素材所属的不同层采取同一个调整阈值或不同的调整阈值。
在本发明实施例一实施方式中,所述合成单元,进一步用于:如果所述第一坐标值大于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏上位置,进行所述正向位移的逐层坐标值调整。如果所述第一坐标值小于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏下位置,进行所述反向的逐层坐标值调整。
在本发明实施例一实施方式中,所述终端还包括:新素材获取单元,用于获取预设的新增素材,所述新增素材不同于预先分解所述目标对象得到的至少两个素材;所述合成单元,进一步用于根据所述新增素材、所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式,使所述目标对象跟随所述第一操作在终端用户界面显示或者隐藏所述新增素材。
其中,对于用于数据处理的处理器而言,在执行处理时,可以采用微处理器、中央处理器(CPU,Central Processing Unit)、数字信号处理器(DSP,Digital Signal Processor)或现场可编程门阵列(FPGA,Field-Programmable Gate Array)实现;对于存储介质来说,包含操作指令,该操作指令可以为计算机可执行代码,通过所述操作指令来实现上述本发明实施例信息处理方法流程中的各个步骤。
这里需要指出的是:以上涉及终端和服务器项的描述,与上述方法描述是类似的,同方法的有益效果描述,不做赘述。对于本发明终端和服务器实施例中未披露的技术细节,请参照本发明方法流程描述的实施例所描述内容。
以一个现实应用场景为例对本发明实施例阐述如下:
目前信息交互的过程中,大多信息的呈现形式都只是通过改变信息种类及形式,对于信息的分享、传播和互动性关注不够。信息可以是广告信息,也可以是其他多媒体信息。对应这个场景,采用本发明实施例,可以将信息(广告信息)由原本静止的图片形式切换到以三维立体的呈现形式,切换的触发是基于用户对于页面的互动操作,如滑动移动操作或鼠标移动操作,让原来静止的广告图以多层次的方式进行视角位移形成视差。
采用本发明实施例的一个处理流程包括:1)广告素材以多层次,元素分离的方式输出;2)上传至服务器端,由后台代码做动态结合;3)广告图投放至移动端或电脑端页面展示;4)当用户通过手势上下滑动或滚动鼠标操作来进行页面的浏览;5)同时广告图区域会跟随操作的速度及距离做出不同步的轨迹运动反映;6)用户看到的广告图将呈现出立体视差的变化效果。其中,在终端呈现信息(如广告信息)的展示页面,先载入服务器端分层存储的素材图片,将其加入队列a;获取屏幕显示区域正中的页面Y坐标值赋值为b;获取广告图中心位置所处的页面Y坐标值赋值为c;对比判断b值与c值的大小,如果b大于c则说明广告图处于显示屏的偏上位置,如果b小于c则说明广告图处于显示屏的偏下位置,如果b等于c则说明广告图位于屏幕正中;根据b、c的差值大小,对队列a进行循环取值,拼合组成广告图,拼合方式根据b、c差值的大小作为基数分别对队列a中素材做正向或反向的逐层Y坐标调整,差值越大则逐层位移距离越大。当用户滑动或滚动页面浏览时,b值发生变化,则重复上面的计算方式。
图9-17为上述场景采用本发明实施例得到的最终呈现效果图。具体的,图9中,包括:第一状态的目标对象,如以A所标识的原始的静态图片,该静态图片初始显示于终端用户界面中,呈现给用户。图9中还包括预先分解所述目标对象(以A所标识的原始的静态图片)得到的多个素材(分别以S1、S2、……、Sn进行标识),以便后续根据所述多个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式。另一个预先分解所述目标对象的示意图如图13所示。
需要指出的是,终端设备在互动的第一操作触发后,根据第一操作引起的位移运动来针对性的选择预设策略中符合当前操作场景的素材位移运动条件,以便后续根据所述多个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式,得到最终能呈现在终端用户界面上的动态的目标对象,如图10-11所示,以替代之前在触发第一操作前处于静态的目标对象(以A所标识的原始的静态图片)。首先需要根据该原始数据和素材位移运动条件生成目标对象的动态呈现样式,之后基于该动态呈现样式最终得到呈现在终端用户界面上的动态的目标对象。当然,生成目标对象的动态呈现样式可以是终端侧的处理,也可以是在服务器侧预处理后,直接提供给终端设备使用的。其中,原始数据可以为预先分解所述目标对象得到的多个素材,多个素材是最终能形成多层次不同步呈现方式的前提条件。对于图10来说,第一操作为在浏览页面中手指向上的滑动操作,D>d,触发手指向上的滑动操作,从左边的第一状态切换到右边的第二状态,使得目标对象显示出动态的呈现样式,即:所述目标对象跟随所述手指向上的滑动操作在终端用户界面中以多层次不同步的方式进行向上移动。另一个在触发手指向上的滑动操作时导致的状态切换图如图14所示。
对于图11来说,第一操作为在浏览页面中手指向下的滑动操作,D>d,触发手指向下的滑动操作,从左边的第一状态切换到右边的第二状态,使得目标对象显示出动态的呈现样式,即:所述目标对象跟随所述手指向下的滑动操作在终端用户界面中以多层次不同步的方式进行向下移动。另一个在触发手指向下的滑动操作时导致的状态切换图如图15所示。
图12中,包括:第一状态的目标对象,如以A1所标识的原始的静态图片,该静态图片初始显示于终端用户界面中,呈现给用户。图12中还包括预先分解所述目标对象(以A1所标识的原始的静态图片)得到的多个素材(分别以S1、S2、……、Sn进行标识,S2中还包括预埋的彩蛋“点击领取红包”),以便后续根据所述多个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式。第一操作为在浏览页面中手指向上的滑动操作,图12中还包括第一状态在触发手指向上的滑动操作后切换到第二状态的情形。其中,触发手指向上的滑动操作,从左边的第一状态切换到右边的第二状态,使得目标对象显示出动态的呈现样式,即:所述目标对象跟随所述手指向上的滑动操作在终端用户界面中以多层次不同步的方式进行向上移动,且由于向上移动会显露出原本隐藏的预埋的彩蛋,以B标识的信息“点击领取红包”。此时,触发手指向下的滑动操作,则会将已经显示给用户的彩蛋,以B标识的信息“点击领取红包”进行隐藏。另一个预先分解所述目标对象的示意图如图16所示,预埋的彩蛋为分层素材(2)所显示的“限量赠品免费领”信息。在触发手指向上的滑动操作,会导致状态切换,将原本隐藏的预埋彩蛋“限量赠品免费领”显示出来,如图17所示。
上述动态呈现是多层不同步的透视效果。在浏览页面中最初是静止的图片,基于该第一操作(如手势滑动或滚动鼠标等,只要能造成页面上的 目标对象出现位移的操作都可行),最终在浏览页面中得到动态的图片呈现结果,该呈现效果是以多层次的方式进行视角位移形成视差所形成的。
需要指出的是,对于本文附图中的各种交互控件而言,如图14中的“免费试听课程”,图17中的“限量赠品免费领”,基于用户与终端的交互(如第一操作)触发当前显示的图片由“静”变“动”进行状态切换,以多层次的方式在浏览页面中得到动态图像的呈现效果后,当第一操作执行结束时,动态图像的呈现效果变为静止图像,此时,用户可以点击该交互控件,以进入终端与后台服务器间的交互,得到交互控件所指向的新的浏览页面。当然,用户也可以点击该整个图像,以进入终端与后台服务器间的交互,得到交互控件所指向的新的浏览页面,本文不限制具体的交互方式。其中,得到动态图像的呈现效果后重新变为静止图像,其在屏幕中所处的位置(如位置C2)区别于状态切换前处于静止时的位置(如位置C1),比如,位置C1为屏幕正中央的位置,当用户手指向上滑动导致动态呈现效果,并使得整个图像向上移动,当重新归位静止时,位置C2为屏幕偏向上方移动的位置。
本发明实施例还提供了一种计算机存储介质,例如包括计算机程序的存储器,上述计算机程序可由数据处理装置的处理器执行,以完成前述方法所述步骤。计算机存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、Flash Memory、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备,如移动电话、计算机、平板设备、个人数字助理等。
本发明实施例提供的计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器运行时,执行如下信息处理方法步骤。
一实施例中,该计算机程序被处理器运行时,执行:
在终端用户界面中呈现处于第一状态的目标对象;
在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数判断所述浏览页面当前产生的位移运动方向,选择与所述位移运动方向匹配的素材位移运动条件;
获取由所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式;
在所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。
一实施例中,该计算机程序被处理器运行时,还执行:
所述位移运动方向为向上移动时,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合;
当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向上移动的第一素材位移运动条件;
当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,选择将所述至少两个素材按照不同调整阈值逐层向上移动的第二素材位移运动条件。
一实施例中,该计算机程序被处理器运行时,还执行:
所述位移运动方向为向下移动时,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合;
当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向下移动的第三素材位移运动条件;
    当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,选择将所述至少两个素材按照不同调整阈值逐层向下移动的第四素材位移运动条件。
一实施例中,该计算机程序被处理器运行时,还执行:
将所述至少两个素材按照素材优先级进行排列,得到排列结果;
当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,获取用于标识所述终端用户界面中心的第一坐标值,获取用于标识所述目标对象所在区域中心的第二坐标值;
    将所述第一坐标值和所述第二坐标值间的差值确定为调整基数,按照所述调整基数对所述排列结果中依优先级排列的至少两个素材进行正向位移或反向的逐层坐标值调整,逐层坐标值的调整所采用的调整阈值根据所述调整基数生成,针对所述至少两个素材所属的不同层采取同一个调整阈值或不同的调整阈值。
一实施例中,该计算机程序被处理器运行时,还执行:
如果所述第一坐标值大于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏上位置,进行所述正向位移的逐层坐标值调整。
如果所述第一坐标值小于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏下位置,进行所述反向的逐层坐标值调整。
一实施例中,该计算机程序被处理器运行时,还执行:
获取预设的新增素材,所述新增素材不同于由所述目标对象得到的至少两个素材;
根据所述新增素材、所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式,使所述目标对象跟随所述第一操作在终端用户界面显示或者隐藏所述新增素材。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外 的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本发明各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本发明上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本发明各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光 盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。
工业实用性
采用本发明实施例,在终端用户界面中呈现处于第一状态(静态)的目标对象;在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数判断所述浏览页面当前产生的位移运动方向,选择与所述位移运动方向匹配的素材位移运动条件;获取由所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式,有多个素材和对应的素材位移运动条件,可以得到最终呈现在终端上目标对象的动态呈现样式,从而,具备了将目标对象根据第一操作由静态变化至动态的基础,之后,通过交互响应,对第一操作进行响应,从而将所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现,基于互动操作的交互得到最终信息动态的呈现形式,促进了信息的分享和传播。

Claims (13)

  1. 一种信息处理方法,所述方法包括:
    在终端用户界面中呈现处于第一状态的目标对象;
    在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数判断所述浏览页面当前产生的位移运动方向,选择与所述位移运动方向匹配的素材位移运动条件;
    获取由所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式;
    在所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。
  2. 根据权利要求1所述的方法,其中,选择与所述位移运动方向匹配的素材位移运动条件,包括:
    所述位移运动方向为向上移动时,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合;
    当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向上移动的第一素材位移运动条件;
    当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,选择将所述至少两个素材按照不同调整阈值逐层向上移动的第二素材位移运动条件。
  3. 根据权利要求1所述的方法,其中,选择与所述位移运动方向匹配的素材位移运动条件,包括:
    所述位移运动方向为向下移动时,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合;
    当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向下移动的第三素材位移运动条件;
    当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,选择将所述至少两个素材按照不同调整阈值逐层向下移动的第四素材位移运动条件。
  4. 根据权利要求1所述的方法,其中,所述根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式,包括:
    将所述至少两个素材按照素材优先级进行排列,得到排列结果;
    当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,获取用于标识所述终端用户界面中心的第一坐标值,获取用于标识所述目标对象所在区域中心的第二坐标值;
    将所述第一坐标值和所述第二坐标值间的差值确定为调整基数,按照所述调整基数对所述排列结果中依优先级排列的至少两个素材进行正向位移或反向的逐层坐标值调整,逐层坐标值的调整所采用的调整阈值根据所述调整基数生成,针对所述至少两个素材所属的不同层采取同一个调整阈值或不同的调整阈值。
  5. 根据权利要求4所述的方法,其中,所述方法还包括:
    如果所述第一坐标值大于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏上位置,进行所述正向位移的逐层坐标值调整;
    如果所述第一坐标值小于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏下位置,进行所述反向的逐层坐标值调整。
  6. 根据权利要求1至5任一项所述的方法,其中,所述方法还包括:
    获取预设的新增素材,所述新增素材不同于由所述目标对象得到的至少两个素材;
    根据所述新增素材、所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式,使所述目标对象跟随所述第一操作在终端用户界面显示或者隐藏所述新增素材。
  7. 一种终端,所述终端包括:
    显示单元,用于在终端用户界面中呈现处于第一状态的目标对象;
    匹配单元,用于在所述目标对象所在的浏览页面中触发第一操作,根据所述第一操作产生的参数判断所述浏览页面当前产生的位移运动方向,选择与所述位移运动方向匹配的素材位移运动条件;
    合成单元,用于获取由所述目标对象得到的至少两个素材,根据所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式;
    切换处理单元,用于在所述目标对象按照所述动态呈现样式进行呈现时切换到第二状态,所述第二状态用于表征所述目标对象跟随所述第一操作在终端用户界面中以多层次不同步的方式进行动态呈现。
  8. 根据权利要求7所述的终端,其中,所述匹配单元,进一步用于:
    所述位移运动方向为向上移动时,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合;
    当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向上移动的第一素材位移运动条件;
    当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,选择将所述至少两个素材按照不同调整阈值逐层向上移动的第二素材位移运动条件。
  9. 根据权利要求7所述的终端,其中,所述匹配单元,进一步用于:
    所述位移运动方向为向下移动时,判断所述终端用户界面的中心与所述目标对象所在区域的中心是否重合;
    当所述终端用户界面的中心与所述目标对象所在区域的中心重合时,选择将所述至少两个素材按照同一个调整阈值整体向下移动的第三素材位移运动条件;
    当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,选择将所述至少两个素材按照不同调整阈值逐层向下移动的第四素材位移运动条件。
  10. 根据权利要求7所述的终端,其中,所述合成单元,进一步用于:
    将所述至少两个素材按照素材优先级进行排列,得到排列结果;
    当所述终端用户界面的中心与所述目标对象所在区域的中心不重合时,获取用于标识所述终端用户界面中心的第一坐标值,获取用于标识所述目标对象所在区域中心的第二坐标值;
    将所述第一坐标值和所述第二坐标值间的差值确定为调整基数,按照所述调整基数对所述排列结果中依优先级排列的至少两个素材进行正向位移或反向的逐层坐标值调整,逐层坐标值的调整所采用的调整阈值根据所述调整基数生成,针对所述至少两个素材所属的不同层采取同一个调整阈值或不同的调整阈值。
  11. 根据权利要求10所述的终端,其中,所述合成单元,进一步用于:
    如果所述第一坐标值大于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏上位置,进行所述正向位移的逐层坐标值调整;
    如果所述第一坐标值小于所述第二坐标值,则所述目标对象位于所述终端用户界面的偏下位置,进行所述反向的逐层坐标值调整。
  12. 根据权利要求7至11任一项所述的终端,其中,所述终端还包括:
    新素材获取单元,用于获取预设的新增素材,所述新增素材不同于预先分解所述目标对象得到的至少两个素材;
    所述合成单元,进一步用于根据所述新增素材、所述至少两个素材和所述素材位移运动条件生成所述目标对象的动态呈现样式,使所述目标对象跟随所述第一操作在终端用户界面显示或者隐藏所述新增素材。
  13. 一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,该计算机可执行指令配置为执行权利要求1至6任一项所述的信息处理方法。
PCT/CN2017/103031 2016-09-30 2017-09-22 一种信息处理方法及终端、计算机存储介质 WO2018059332A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17854782.4A EP3511828B1 (en) 2016-09-30 2017-09-22 Information processing method, terminal, and computer storage medium
US16/207,749 US10776562B2 (en) 2016-09-30 2018-12-03 Information processing method, terminal, and computer storage medium for dynamic object rendering based on resource item displacement strategy

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610877820.8A CN106484416B (zh) 2016-09-30 2016-09-30 一种信息处理方法及终端
CN201610877820.8 2016-09-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/207,749 Continuation US10776562B2 (en) 2016-09-30 2018-12-03 Information processing method, terminal, and computer storage medium for dynamic object rendering based on resource item displacement strategy

Publications (1)

Publication Number Publication Date
WO2018059332A1 true WO2018059332A1 (zh) 2018-04-05

Family

ID=58268608

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/103031 WO2018059332A1 (zh) 2016-09-30 2017-09-22 一种信息处理方法及终端、计算机存储介质

Country Status (4)

Country Link
US (1) US10776562B2 (zh)
EP (1) EP3511828B1 (zh)
CN (1) CN106484416B (zh)
WO (1) WO2018059332A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111221598A (zh) * 2018-11-23 2020-06-02 北京金山云网络技术有限公司 动态显示图像的方法、装置和终端设备
CN111432264A (zh) * 2020-03-30 2020-07-17 腾讯科技(深圳)有限公司 基于媒体信息流的内容展示方法、装置、设备及存储介质

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106484416B (zh) * 2016-09-30 2021-02-05 腾讯科技(北京)有限公司 Information processing method and terminal
CN108958579B (zh) * 2018-06-26 2020-09-25 维沃移动通信有限公司 Red packet sending and receiving methods and apparatuses
CN110262742B (zh) * 2019-05-17 2021-04-16 北京奇艺世纪科技有限公司 Display method and terminal for information-feed advertisements
CN110809062B (zh) * 2019-11-14 2022-03-25 思必驰科技股份有限公司 Method and apparatus for controlling invocation of public-cloud speech recognition resources
CN111008057A (zh) * 2019-11-28 2020-04-14 北京小米移动软件有限公司 Page display method, apparatus, and storage medium
CN112995409B (zh) * 2019-12-02 2022-03-11 荣耀终端有限公司 Display method for scenarios in which an intelligent communication policy takes effect, mobile terminal, and computer-readable storage medium
CN111354089B (zh) * 2020-02-21 2023-08-15 上海米哈游天命科技有限公司 Multi-level special-effect ordering method, apparatus, device, and storage medium
CN111263181A (zh) * 2020-02-25 2020-06-09 北京达佳互联信息技术有限公司 Live-streaming interaction method and apparatus, electronic device, server, and storage medium
CN112685677A (zh) * 2021-01-06 2021-04-20 腾讯科技(深圳)有限公司 Page component processing method and apparatus, electronic device, and computer storage medium
CN112906553B (zh) * 2021-02-09 2022-05-17 北京字跳网络技术有限公司 Image processing method, apparatus, device, and medium
CN113204299B (zh) * 2021-05-21 2023-05-05 北京字跳网络技术有限公司 Display method and apparatus, electronic device, and storage medium
CN114092370A (zh) * 2021-11-19 2022-02-25 北京字节跳动网络技术有限公司 Image display method and apparatus, computer device, and storage medium
CN115048008B (zh) * 2022-06-17 2023-08-15 浙江中控技术股份有限公司 Visualization method and device for objects in an HMI screen
CN116991528B (zh) * 2023-08-14 2024-03-19 腾讯科技(深圳)有限公司 Page switching method and apparatus, storage medium, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102479052A (zh) * 2010-11-26 2012-05-30 Lg电子株式会社 Mobile terminal and operation control method thereof
CN104750389A (zh) * 2015-03-31 2015-07-01 努比亚技术有限公司 Method and apparatus for displaying pictures
US9128599B2 (en) * 2011-08-29 2015-09-08 Lg Electronics Inc. Mobile terminal and image converting method thereof
CN105045509A (zh) * 2015-08-03 2015-11-11 努比亚技术有限公司 Apparatus and method for editing pictures
CN106484416A (zh) * 2016-09-30 2017-03-08 腾讯科技(北京)有限公司 Information processing method and terminal

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4325075B2 (ja) * 2000-04-21 2009-09-02 ソニー株式会社 Data object management apparatus
US6990637B2 (en) * 2003-10-23 2006-01-24 Microsoft Corporation Graphical user interface for 3-dimensional view of a data collection based on an attribute of the data
TW200834461A (en) * 2007-02-14 2008-08-16 Brogent Technologies Inc Multi-layer 2D layer dynamic display method and system thereof
US20090007014A1 (en) * 2007-06-27 2009-01-01 Microsoft Corporation Center locked lists
US9092241B2 (en) * 2010-09-29 2015-07-28 Verizon Patent And Licensing Inc. Multi-layer graphics painting for mobile devices
US8694900B2 (en) * 2010-12-13 2014-04-08 Microsoft Corporation Static definition of unknown visual layout positions
US8291349B1 (en) * 2011-01-19 2012-10-16 Google Inc. Gesture-based metadata display
CN103345534B (zh) * 2013-07-26 2016-12-28 浙江中控技术股份有限公司 Dynamic diagram processing method and apparatus
US9224237B2 (en) * 2013-09-27 2015-12-29 Amazon Technologies, Inc. Simulating three-dimensional views using planes of content
CN105450664B (zh) * 2015-12-29 2019-04-12 腾讯科技(深圳)有限公司 Information processing method and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3511828A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111221598A (zh) * 2018-11-23 2020-06-02 北京金山云网络技术有限公司 Method and apparatus for dynamically displaying images, and terminal device
CN111221598B (zh) * 2018-11-23 2023-09-15 北京金山云网络技术有限公司 Method and apparatus for dynamically displaying images, and terminal device
CN111432264A (zh) * 2020-03-30 2020-07-17 腾讯科技(深圳)有限公司 Content display method, apparatus, device, and storage medium based on a media information stream
CN111432264B (zh) * 2020-03-30 2024-02-09 腾讯科技(深圳)有限公司 Content display method, apparatus, device, and storage medium based on a media information stream

Also Published As

Publication number Publication date
US10776562B2 (en) 2020-09-15
CN106484416A (zh) 2017-03-08
EP3511828A1 (en) 2019-07-17
EP3511828A4 (en) 2020-06-24
CN106484416B (zh) 2021-02-05
EP3511828B1 (en) 2022-06-22
US20190102366A1 (en) 2019-04-04

Similar Documents

Publication Publication Date Title
WO2018059332A1 (zh) Information processing method, terminal, and computer storage medium
CN107368238B (zh) Information processing method and terminal
WO2018108049A1 (zh) Information processing method, terminal, and computer storage medium
CN106909274B (zh) Image display method and apparatus
US9569894B2 (en) Glass type portable device and information projecting side searching method thereof
WO2017202271A1 (zh) Information processing method, terminal, and computer storage medium
CN105468158B (zh) Color adjustment method and mobile terminal
CN106713716B (zh) Shooting control method and apparatus for dual cameras
US20150042580A1 (en) Mobile terminal and a method of controlling the mobile terminal
KR20150047032A (ko) Mobile terminal and control method thereof
KR20140133081A (ko) Mobile terminal and method for displaying shared content thereof
CN106851114B (zh) Photo display and photo generation apparatus and method, and terminal
US20160034440A1 (en) Apparatus for controlling mobile terminal and method therefor
CN106341554B (zh) Rapid data content lookup method and apparatus, and mobile terminal
CN107070981B (zh) Cooperative control system and method for multiple terminal devices
CN106791119B (zh) Photo processing method, apparatus, and terminal
CN109168029B (zh) Method, device, and computer storage medium for adjusting resolution
CN107197084B (zh) Method for projection between mobile terminals and first mobile terminal
US20160110094A1 (en) Mobile terminal and control method thereof
CN105227771B (zh) Picture transmission method and apparatus
CN104639428B (zh) Adaptive method for conversation scenes in instant messaging and mobile terminal
CN106990896B (zh) Stereoscopic photo display method and apparatus based on dual cameras, and mobile terminal
CN107220109B (zh) Interface display method and device
CN105912262A (zh) Desktop icon adjustment apparatus, terminal, and desktop icon adjustment method
CN107809448B (zh) Data processing method and terminal

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17854782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017854782

Country of ref document: EP

Effective date: 20190430