CN112927326A - Image processing method and device, electronic equipment and computer readable storage medium - Google Patents

Image processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN112927326A
CN112927326A
Authority
CN
China
Prior art keywords
target
information
image
information flow
flow interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911151350.7A
Other languages
Chinese (zh)
Inventor
罗绮琪
寇敬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911151350.7A priority Critical patent/CN112927326A/en
Publication of CN112927326A publication Critical patent/CN112927326A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0485: Scrolling or panning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241: Advertisements
    • G06Q 30/0276: Advertisement creation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the disclosure provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium, belonging to the field of computer technologies. The method includes: displaying an information flow interface; in response to an operation instruction on the information flow interface, determining position information of a display area of a target animation in the information flow interface, where the target animation includes n frames of images arranged in sequence and n is a positive integer greater than or equal to 3; determining a target image from the n frames of images of the target animation according to the position information; and controlling the target image to be displayed on the information flow interface. The technical solution of the embodiments adds information expression in more dimensions to the information flow interface and can thereby improve content distribution efficiency.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Information stream (feed) advertisements are advertisements placed in social media friend-update streams or in the content streams of news and audiovisual media. Advertisements interspersed in a content stream offer a relatively good user experience and, for advertisers, can be targeted precisely using user tags, so this format has grown explosively since the arrival of the mobile internet era.
However, existing information flow advertisements are displayed on the information flow interface in a single fixed form, which limits the dimensions in which content can be expressed.
Therefore, a new image processing method and apparatus, an electronic device, and a computer-readable storage medium are needed.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure.
Disclosure of Invention
The embodiments of the disclosure provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium, which can add information expression in more dimensions to an information flow interface.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
An embodiment of the present disclosure provides an image processing method, including: displaying an information flow interface; in response to an operation instruction on the information flow interface, determining position information of a display area of a target animation in the information flow interface, where the target animation includes n frames of images arranged in sequence and n is a positive integer greater than or equal to 3; determining a target image from the n frames of images of the target animation according to the position information; and controlling the target image to be displayed on the information flow interface.
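The claimed steps can be sketched as a pure function mapping the animation display area's scroll position to a frame index. All names (`select_frame`, `viewport_height`) and the linear-mapping rule below are illustrative assumptions, not taken from the patent text:

```python
def select_frame(area_top: float, viewport_height: float, n: int) -> int:
    """Map the display area's top-edge position to a frame index in [0, n-1].

    area_top is the y-coordinate of the animation area's top edge:
    viewport_height when the area has just entered from the bottom of the
    screen, 0 when it has scrolled up to the top (hypothetical convention).
    """
    # Normalize scroll progress to [0, 1]: 0 on entry, 1 at the top.
    progress = 1.0 - max(0.0, min(1.0, area_top / viewport_height))
    # Divide progress into n equal intervals, one per frame.
    return min(int(progress * n), n - 1)
```

As the user scrolls the feed, the selected index walks from frame 0 through frame n-1, so each scroll position corresponds to exactly one of the animation's n frames.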
An embodiment of the present disclosure provides an image processing apparatus, including: an interface display module configured to display an information flow interface; the position determining module is configured to respond to an operation instruction of the information flow interface and determine position information of a display area of a target animation in the information flow interface, wherein the target animation comprises n frames of images which are arranged in sequence, and n is a positive integer greater than or equal to 3; a target image determining module configured to determine a target image from n frames of images of the target animation according to the position information; and the target image display module is configured to control the target image to be displayed on the information flow interface.
In some exemplary embodiments of the present disclosure, the apparatus further comprises: a region determination module configured to determine a target region in the information flow interface; and the interval dividing module is configured to divide the target area into n intervals in sequence.
In some exemplary embodiments of the present disclosure, the apparatus further comprises: the video acquisition module is configured to acquire a target video; an image obtaining module configured to obtain n frames of images from the target video for generating the target animation; and the image interval corresponding module is configured to correspond the n frames of images of the target animation to the n intervals of the target area one by one.
In some exemplary embodiments of the present disclosure, the position determination module comprises: a guideline setting unit configured to set a guideline on a display area of the target animation; a position determining unit configured to determine, from the n intervals, the target interval in which the guideline is currently located as the position information.
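The guideline-to-interval lookup described above reduces to one division. This is a minimal sketch, assuming the target region spans [region_top, region_top + region_height] and is split into n equal intervals; the function name and clamping behavior are illustrative:

```python
def guideline_interval(guideline_y: float, region_top: float,
                       region_height: float, n: int) -> int:
    """Return the index (0..n-1) of the interval containing the guideline."""
    # Offset of the guideline inside the region, clamped to the region bounds.
    offset = max(0.0, min(region_height, guideline_y - region_top))
    # Equal-width intervals: integer division picks the containing one.
    return min(int(offset / region_height * n), n - 1)
```

Because the interval index is the position information, the target image is simply the frame with the same index, which keeps the per-scroll computation trivial.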
In some exemplary embodiments of the present disclosure, the target image display module includes: a forward-order playing unit configured to sequentially display each of the n frame images of the target animation if the operation instruction is to slide the information flow interface in a first direction.
In some exemplary embodiments of the present disclosure, the target image display module further includes: and the reverse-order playing unit is configured to display each frame image in the n frame images of the target animation in reverse order from the target image if the operation instruction is to slide the information flow interface in a second direction.
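The forward-order and reverse-order playing units can be sketched together as one function that, given the currently shown frame and the newly determined target frame, yields the frame indices to display in order. The function name and the string direction flag are illustrative, not from the patent:

```python
def frames_to_display(current: int, target: int, direction: str) -> list[int]:
    """Frame indices to show while scrolling toward the target frame.

    direction="forward": play frames in sequence up to the target
    (first sliding direction); direction="reverse": play back in reverse
    order starting from the current frame (second sliding direction).
    """
    if direction == "forward":
        return list(range(current, target + 1))
    # Reverse order: step down from the current frame to the target.
    return list(range(current, target - 1, -1))
```

Sliding the feed back and forth thus scrubs the animation forward and backward, which is the interaction effect the embodiments describe.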
In some exemplary embodiments of the present disclosure, the apparatus further comprises: and the preloading module is configured to load the n frames of images of the target animation when the previous screen of the target animation is displayed on the information flow interface.
In some exemplary embodiments of the present disclosure, the preload module includes: a preview image determination unit configured to determine a still preview image among the n frame images of the target animation; a preview image loading unit configured to load the still preview image; and the other image loading unit is configured to sequentially load other images except the static preview image in the n frames of images.
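The preload order described above (static preview image first, then the remaining frames in sequence) can be sketched as follows; the function and parameter names are assumptions for illustration:

```python
def preload_order(n: int, preview_index: int) -> list[int]:
    """Order in which to load the n frames of the target animation.

    The static preview frame is fetched first so it is available as a
    fallback; the remaining frames follow in their original sequence.
    """
    rest = [i for i in range(n) if i != preview_index]
    return [preview_index] + rest
```

Loading in this order means that even if a later frame has not arrived when the animation scrolls into view, the static preview can be displayed instead, as the fallback embodiment below describes.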
In some exemplary embodiments of the present disclosure, the apparatus further comprises: and the preview image display module is configured to display the static preview image when the target image fails to be loaded.
In some exemplary embodiments of the present disclosure, the apparatus further comprises: the distribution information acquisition module is configured to acquire distribution information of the target animation; a distribution object determination module configured to determine a target distribution object of the target animation according to the distribution information; and the animation sending module is configured to send the target animation to an information flow interface of the target distribution object.
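The distribution modules above leave the matching rule unspecified; a simple tag-intersection rule is one plausible reading, given the background's mention of targeting by user tags. Everything in this sketch (the dict shape, the intersection rule) is a hypothetical illustration:

```python
def match_distribution(users: list[dict], required_tags: list[str]) -> list[str]:
    """Pick accounts whose interest tags overlap the animation's
    distribution information (assumed tag-matching rule)."""
    required = set(required_tags)
    return [u["account"] for u in users if required & set(u["tags"])]
```

The animation sending module would then push the target animation to the information flow interface of each matched account.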
The disclosed embodiments provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the image processing method as described in the above embodiments.
An embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method as described in the above embodiments.
In the technical solutions provided by some embodiments of the present disclosure, when the information flow interface is operated, the position information of the target animation displayed on the interface can be determined, so that one of the animation's n frames can be selected as the current target image according to that position information and displayed on the interface. On the one hand, the information content to be displayed can be configured flexibly rather than being limited to a single switching mode between two pictures, adding information expression in more dimensions and improving content distribution efficiency. On the other hand, determining the target image from position information requires little computation and runs fast, so the target animation plays smoothly and naturally in the information flow interface.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 shows a schematic diagram of an exemplary system architecture to which an image processing method or an image processing apparatus of an embodiment of the present disclosure may be applied;
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use with the electronic device used to implement embodiments of the present disclosure;
FIGS. 3-6 illustrate schematic diagrams of a casual-interaction large picture in the related art;
FIG. 7 schematically shows a flow diagram of an image processing method according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a casual-interaction large picture in accordance with an embodiment of the present disclosure;
FIG. 9 schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure;
FIG. 10 schematically shows a flow chart of an image processing method according to a further embodiment of the present disclosure;
FIG. 11 is a diagram illustrating a processing procedure of step S720 shown in FIG. 7 in one embodiment;
fig. 12 schematically shows a schematic diagram of correspondence of each section to each frame image according to an embodiment of the present disclosure;
FIG. 13 is a diagram illustrating a processing procedure of step S740 shown in FIG. 7 in one embodiment;
FIG. 14 schematically illustrates a schematic diagram of forward and reverse sliding according to an embodiment of the present disclosure;
FIG. 15 schematically illustrates a schematic diagram of a forward sliding display of a first frame according to an embodiment of the present disclosure;
FIG. 16 schematically illustrates a schematic diagram of a forward sliding display of a second frame according to an embodiment of the present disclosure;
FIG. 17 schematically illustrates a schematic diagram of a forward sliding display of a third frame according to an embodiment of the present disclosure;
FIG. 18 schematically shows a schematic diagram of a forward sliding display of a fourth frame according to an embodiment of the present disclosure;
FIG. 19 schematically shows a schematic diagram of a forward sliding display of a fifth frame according to an embodiment of the present disclosure;
FIG. 20 schematically shows a schematic diagram of a forward sliding display of a sixth frame according to an embodiment of the present disclosure;
FIG. 21 schematically illustrates a diagram of a reverse sliding display of a fourth frame according to an embodiment of the disclosure;
FIG. 22 schematically illustrates a schematic diagram of a reverse sliding display of a third frame according to an embodiment of the present disclosure;
FIG. 23 schematically illustrates a diagram of a reverse sliding display of a second frame, according to an embodiment of the present disclosure;
FIG. 24 schematically illustrates a schematic diagram of a reverse sliding display of a first frame, according to an embodiment of the present disclosure;
FIG. 25 schematically shows a flow chart of an image processing method according to a further embodiment of the disclosure;
FIG. 26 is a diagram illustrating a processing procedure of step S2510 shown in FIG. 25 in one embodiment;
FIG. 27 schematically shows a schematic diagram of an image loading sequence according to an embodiment of the present disclosure;
FIG. 28 schematically shows a flow chart of an image processing method according to a further embodiment of the disclosure;
fig. 29 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which an image processing method or an image processing apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a user terminal 101 logged in with a first user account, a promoted account terminal 102, a network 103, and a platform server 104. Network 103 provides the medium for communication links between user terminal 101, promoted account terminal 102, and platform server 104, and may include various connection types, such as wired or wireless communication links or fiber optic cables. The platform server 104 is a server that provides background services to clients (e.g., social clients, information clients, video clients, etc.). The platform server 104 may be at least one server, a server cluster, a distributed server platform, a cloud computing center, or a combination of several server clusters.
It should be understood that the number of user terminals, promotional account terminals, networks, and platform servers in FIG. 1 are merely illustrative. Any number of user terminals, promotion account terminals, network and platform servers may be provided as required.
A user may interact with the platform server 104 over the network 103 using the user terminal 101 or the promoted account terminal 102, to receive or send messages, and so on. The user terminal 101 and the promoted account terminal 102 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablets, laptop computers, desktop computers, wearable devices, smart home devices, and so forth.
It is assumed here that various applications (APPs) with a dynamic information display function can be installed on the user terminal 101, such as a news APP, a social APP, an instant messaging APP, a voice messaging APP, a game APP, a trading platform, a navigation APP, and a video APP. The dynamic information display function allows a user (broadly understood: an individual, or any organization such as a company, enterprise, or institution) to publish its own dynamic information on a dynamic information publishing platform, where other users related to it as friends, group members, subscribers, and/or followers can view that information. On the platform, each user account corresponds to a dynamic information stream that includes the dynamic information published by the account itself and by its related accounts, generally displayed in timeline form. For example, a user may obtain the dynamic information stream corresponding to a first user account from the platform and see a self-portrait published by second user account A, an article published by second user account B, a short video published by second user account C, news published by second user account D, and so on. There may be more than one second user account, each related to the first user account as a friend, group member, subscription, or followed account.
The user account here refers to an account applied by an individual and/or an organization in the platform server 104, and after the account is logged in to the corresponding current client, the account is used by the platform server 104 and other clients to identify the user identity of the current client. The dynamic information is information that a user account publishes on a dynamic information publishing platform at a certain time, and each piece of dynamic information includes but is not limited to: picture information, text information, voice information, video information, and the like.
The promotion account here refers to an account in the platform server 104 that has a promotion requirement. Alternatively, the promotion account is an account applied for in the platform server by a government agency, organization, institution, group, or individual with a promotion requirement. The promotion account may publish dynamic information through the promoted account terminal 102. The dynamic information of the promotion account (which may also be referred to as "promotion information") may include account information and content information, the content information including but not limited to at least one of: picture information, text information, voice information, video information, geographic location information, card information (which may be used to obtain coupons, for example), out-link information (which may, for example, support a "view details" virtual button), and a promotion identifier.
The platform server 104 may also be configured to add the dynamic information of the promoted account to a dynamic information stream corresponding to the first user account. After the adding, the dynamic information stream corresponding to the first user account not only includes the dynamic information of the second user account, but also includes the dynamic information of the promoted account.
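Adding the promoted account's dynamic information to the first user account's stream can be sketched as a list insertion. The patent does not specify where in the timeline the promotion lands, so the slot index here is an illustrative assumption:

```python
def insert_promotion(feed: list, promotion, slot: int = 3) -> list:
    """Return a new feed with the promoted item inserted at a fixed slot
    (slot position is a hypothetical placement rule, not from the patent)."""
    out = list(feed)  # leave the original timeline untouched
    out.insert(min(slot, len(out)), promotion)
    return out
```

After insertion, the stream returned to the first user account interleaves the second user accounts' dynamic information with the promoted account's dynamic information.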
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present disclosure.
It should be noted that the computer system 200 of the electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU)201 that can perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data necessary for system operation are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to bus 204.
The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse, and the like; an output section 207 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) display, a speaker, and the like; a storage section 208 including a hard disk and the like; and a communication section 209 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 209 performs communication processing via a network such as the internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 210 as necessary, so that a computer program read out therefrom is installed into the storage section 208 as necessary.
In particular, the processes described below with reference to the flowcharts may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. The computer program, when executed by a Central Processing Unit (CPU)201, performs various functions defined in the methods and/or apparatus of the present application.
It should be noted that the computer readable storage medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods, apparatus, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules and/or units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described modules and/or units may also be disposed in a processor. Wherein the names of such modules and/or units do not in some way constitute a limitation on the modules and/or units themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer-readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the embodiments below. For example, the electronic device may implement the steps shown in fig. 7, 9, 10, 11, 13, 25, 26, and 28.
In the embodiments of the present disclosure, an information stream refers to a set of information moving in the same direction in space and time, that is, the set of all information transmitted from an information source to an information receiver. From the viewpoint of the research, development, and application of modern information technology, it refers to the flow of information through computer systems and communication networks during information processing. Such flow may occur from person to person, from person to facility, within a facility, and from facility to facility. A mobile-device information stream refers to a set of information moving in the same direction in space and time on a mobile device such as a mobile phone or a tablet.
To help brands better capture users' attention and improve marketing effectiveness, an advertising product called the casual-interaction advertisement has been launched in the related art. The casual-interaction large image is a large-image switching mode within an information flow: in the single-large-image mode, the picture area carries two pictures, which are switched back and forth according to a specified switching animation (e.g., a gradual fade) as the user's finger slides the information flow interface up and down. "Casual interaction" means not only that the advertisement moves, but also that the user interacts with it: throughout the process from viewing the advertisement to clicking and completing the interaction, every link can interact freely with the user, building a stickier connection. With this interactive advertisement, only two picture materials are needed to make the advertisement move. The casual-interaction large image produces an animation effect as the user's finger slides, can highlight the subject of the product, increases the interaction effect, improves the interest and interactivity of the advertisement, and has a low production cost, which is very friendly to advertisers.
The "large image" herein refers to a picture size format that, in an information flow scene, fills the information flow interface from its left side to its right side. Correspondingly, a "small image" refers to a picture size format that, in an information flow scene, does not fill the information flow interface from left to right.
Figs. 3-6 show schematic diagrams of a casual-interaction large image in the related art.
As shown in figs. 3-6, in an information flow product (e.g., a browser, an information APP, etc.), the casual-interaction large-image mode is configured in the background in advance, and two static large images are uploaded. When the user's finger slides the information flow interface up and down, the two large images of the casual-interaction large image are switched back and forth according to a certain animation-switching template. Taking the promotion information of a certain game as an example, when the user slides the information flow interface up, the first large image is covered by the second large image; here it is assumed that the covering switching template is a circular expansion. Once the second large image is completely displayed, there is no further switching effect. Conversely, when the user slides the information flow interface down, the second large image is covered by the first large image.
However, the casual-interaction large image in the related art can show only two images, so the density of the presented content information is limited to a certain extent; certain advertisements, such as the core content of a game, cannot be highlighted, which limits the information the advertisement can convey. Moreover, the switching mode is single and cannot be adapted to more flexible scenes: many kinds of content cannot be expressed with a single switching template, such as lighting up local content within a picture, or having elements of the picture emerge in a staggered manner. Meanwhile, the number of pictures and the switching mode must be specified by the front end, so flexibility is weak. Information flow products in the related art have a single structure and few forms of interaction with users; as users become more and more accustomed to browsing information flows quickly and cursorily, the difficulty of reaching users with information gradually increases, which is unfavorable for distributing value-added information (content that can be presented commercially) to users.
With the development of information flow products, the information presentation style of the information flow needs to be updated, with more expansion and exploration that combines users' usage scenes and habits with the types and structures of the information content.
Fig. 7 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure. The method provided by the embodiment of the present disclosure may be executed by any electronic device with computing processing capability, for example, the user terminal 101, the promoted account terminal 102, and/or the platform server 104 in fig. 1. In the following description, the user terminal 101 is taken as an execution subject for illustration.
As shown in fig. 7, an image processing method provided by an embodiment of the present disclosure may include the following steps.
In step S710, an information flow interface is displayed.
Specifically, the user may view Feeds in an APP or browser installed on the user terminal, for example a mobile phone, where a Feed is an information stream that is continuously updated and presents content to the user; hereinafter it is simply referred to as an information stream. In this embodiment, the text data corresponding to the information stream is cached while the user slides through and views the information stream, so that the page content corresponding to an information stream item is presented quickly when the user clicks it. The user can browse the information stream by sliding on the screen of the user terminal; the screen displays a plurality of information stream items, each of which shows thumbnail content. When the user further clicks a certain information stream item, the interface jumps directly to a page presenting the content details corresponding to the clicked item, where the content details may be text, pictures, audio, video, and the like. For example, the displayed information flow interface may be as described below with reference to fig. 8.
In step S720, in response to an operation instruction for the information flow interface, position information of a display area of a target animation in the information flow interface is determined, where the target animation includes n frames of images arranged in sequence, and n is a positive integer greater than or equal to 3.
The implementation process of determining the location information according to the operation instruction may refer to the following descriptions of the embodiments of fig. 9, 10, and 11.
In the embodiment of the present disclosure, the operation instruction may be any one or more of sliding, clicking, and the like performed on a screen of the user terminal, e.g., a touch-sensitive surface, by a user's finger or another component with a sensing function (e.g., a capacitive stylus). In the following embodiments, the user terminal is a mobile phone having a capacitive touch-sensitive surface, and the operation instruction is illustrated as the sliding of a user's finger on the capacitive touch-sensitive surface; however, the disclosure is not limited thereto, and the operation instruction may be any gesture, action, or behavior preset in the system.
In the embodiment of the disclosure, n frames of images are edited into the target animation and the playing time interval between the frames is set, so that each frame can be played as a single frame to achieve the effect of animation. Each frame of image may be referred to as a sequence frame, and the n sequence frames are integrated in time order into a group of continuous images, referred to as a sequence frame group.
In the embodiment of the present disclosure, the target animation may be any of a Flash animation, a GIF (Graphics Interchange Format) animation, a 3D (three-dimensional) animation, and the like. Considering that in an information flow scene the user's finger usually slides quickly, and users generally have neither the time nor the patience to wait for a video to load, while a GIF animation has the characteristic of being small and easy to place and invoke at any time, the following description takes the GIF animation as an example. A GIF animation generates a dynamic image by switching the pictures of multiple layers over time; when a group of specific static images is switched at a specified frequency, the effect of animation is achieved. Its forms of expression are rich, varied, and highly inclusive: it can mix various forms of expression and artistic styles, and can be applied to the production of commercial advertisements. In the embodiment of the disclosure, a GIF animation, or any animation smaller than 20 megabytes, may be called a "small animation"; when promotion information produced with such a small animation is displayed in the information flow interface, stuttering is unlikely to occur even if the user's finger slides quickly.
It should be noted that, with the development of network communication technology, for example 5G (fifth-generation mobile communication system) technology, videos of any size can be loaded on the user terminal almost seamlessly; in this case, large videos or large animations, such as 3D animations, can also be applied to the information flow scene.
In step S730, a target image is determined from the n-frame images of the target animation according to the position information.
In step S740, the target image is controlled to be displayed on the information flow interface.
The implementation process of displaying the target image on the information flow interface according to the position information can refer to the following description of the embodiments of fig. 12, 13 and 14.
With the image processing method provided by the embodiment of the present disclosure, when the information flow interface is operated, the position information of the target animation displayed on the information flow interface can be determined, so that one of the n frames of the target animation can be determined, according to the position information, as the target image to be displayed currently and displayed on the information flow interface. On the one hand, the information content to be displayed can be flexibly configured in this manner and is not limited to a single switching mode or to pictures, which increases the expressive capacity of information in more dimensions and can further improve the distribution efficiency of the content; on the other hand, the target image is determined based on the position information, so the amount of calculation is small, the running speed is high, and the target animation displayed in the information flow interface is smooth and natural.
FIG. 8 schematically shows a schematic diagram of a casual-interaction large image according to an embodiment of the present disclosure.
As shown in fig. 8, taking as an example a first user (having a first user account) who opens a certain browser installed on a first user terminal, the browser divides information streams into categories such as recommendation (for example, corresponding information streams may be recommended according to the user's historical click behavior, historical browsing behavior, historical search behavior, and the like), video (the format of the information stream is video), funny (determined according to the content of the information stream), e-sports (information streams related to electronic sports), hot spots (current trending information, trending news, and the like), and entertainment (for example, information streams related to stars, singers, and the like).
The recommendation category is displayed by default, and information 1, the casual-interaction large image in the dashed box, and information 2 are displayed in the information flow interface in chronological order. Information 1 and information 2 may be dynamic information published by a second user (having a second user account) who has some relationship with the first user (e.g., a friend relationship on a social client, a group relationship on an instant messaging client, a subscription relationship on a news client, etc.).
A casual-interaction large image for publishing promotion information is inserted between information 1 and information 2. Its display area comprises a text area at the top for displaying promotional text, a large-image area for displaying the target animation, and an area at the bottom for displaying other information such as the name of the promotion information and an external link.
It should be understood that, although the embodiment of the present disclosure illustrates the casual-interaction promotion information as a single large image, because a large image can show clearer content to the user, the present disclosure is not limited thereto: a small-image display form may also be used, and/or multiple images may be displayed simultaneously. Meanwhile, the layout of the display area of the casual-interaction promotion information is not limited to the illustration in fig. 8; the number of sub-areas into which the display area is divided, the shape and area of each sub-area, what content is displayed in each, and so on can all be adjusted according to actual requirements.
Fig. 9 schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure. As shown in fig. 9, the method provided by the embodiment of the present disclosure may further include the following steps, which are different from the above-described embodiment.
In step S910, a target area in the information flow interface is determined.
In the embodiment of the present disclosure, the target region may be a whole region on a screen of the first user terminal for displaying the information flow entry, or may be a partial region for displaying the information flow entry, which may be determined according to a specific application scenario, and the position, the area size, and the shape of the target region are not limited in the embodiment of the present disclosure.
In step S920, the target area is sequentially divided into n sections.
For example, the target area may be sequentially divided from top to bottom, from bottom to top, from left to right, or from right to left, and the size and shape of each of the n intervals may be the same or different, or the size and shape of some intervals may be the same, and the size and shape of some intervals are different, which is not limited in this disclosure.
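The equal-interval, bottom-to-top division described above can be sketched in a few lines. The following is an illustrative Python sketch (not part of the disclosed front end), assuming screen coordinates grow downward so the bottom of the target area has the larger y value:

```python
def divide_target_area(area_top: float, area_bottom: float, n: int):
    """Divide the screen region [area_top, area_bottom) into n equal-height
    intervals, numbered from the bottom up as in the Fig. 12 embodiment."""
    height = (area_bottom - area_top) / n
    # Interval 1 sits at the bottom of the target area, interval n at the top.
    return [(area_bottom - (i + 1) * height, area_bottom - i * height)
            for i in range(n)]
```

Unequal divisions, as the paragraph above notes, are equally possible; this sketch only covers the equal-height case.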
Fig. 10 schematically shows a flowchart of an image processing method according to yet another embodiment of the present disclosure. As shown in fig. 10, the method provided by the embodiment of the present disclosure may further include the following steps, which are different from the above-described embodiment.
In step S1010, a target video is acquired.
In the embodiment of the present disclosure, a promotion account (the owner of the promotion account may be, for example, an advertiser, a game manufacturer, or the like) may log in a promotion account terminal thereof, and upload a target video with promotion information to a platform server, where the number of the target video may be one or more.
In step S1020, n frames of images are obtained from the target video for generating the target animation.
For example, if there is one target video, one frame of image may be captured from the target video at regular intervals until n frames are captured, and the n frames are combined in time order into a group of continuous images to form a target animation comprising n frames. The capture may use equal or unequal time intervals. Equal time intervals means that the time interval between any two adjacent captured frames is equal, for example one frame captured every other frame; unequal time intervals means that the intervals between at least some adjacent captured frames differ, for example the 1st, 6th, 8th, 10th, and 12th frames are captured in turn from the target video. As long as n frames are ultimately captured, the capture interval can be determined according to the number of video frames in the target video and the value of n.
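The equal-interval capture described above amounts to sampling n indices from the video's frame sequence. A minimal Python sketch (0-based indices; names are illustrative, not from the disclosure):

```python
def equal_interval_indices(total_frames: int, n: int):
    """Pick n frame indices (0-based) from a video of total_frames frames at
    roughly equal spacing, e.g. every other frame when total_frames == 2 * n."""
    step = total_frames / n
    return [int(i * step) for i in range(n)]
```

For unequal intervals, the index list would simply be supplied directly (e.g., frames 0, 5, 7, 9, 11) rather than computed from a fixed step.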
For another example, if the target video itself includes n frames of images, the target video may be directly used as the target animation.
For another example, the target video may only include one frame of image, or one frame of image may be extracted from multiple frames of images of the target video, and the frame of image is divided into different regions, where each region is used as one frame of image in the target animation.
When a special display effect is needed, the images in the target video can be processed to generate the target animation. For example, to achieve the effect of lighting up local content within a picture in the target animation, the lit-up picture in the target video may be made into an animation and then displayed frame by frame. For another example, to achieve the effect of picture elements emerging in a staggered manner in the target animation, multiple frames of the target video may be arranged so that any two adjacent images partially overlap, thereby generating one or more frames of the target animation with a staggered-emergence display effect.
In the embodiment of the present disclosure, there may be a plurality of target videos, and n frames of images may be cut from the plurality of target videos to generate the target animation, or images in the plurality of target videos may be processed to generate the target animation.
In the embodiment of the present disclosure, the content of the target animation is not limited: apart from the number of animation frames (e.g., n frames) and the size, any creative content may be carried. The size of the images in the target animation is the size of the container carrying the animation. If the size of the target video uploaded by the promotion account does not meet the specification, a corresponding processing mode can be set: one option is to automatically detect the size of the uploaded target video during upload and, if it does not meet the specification, disallow uploading to the platform server; another is to allow the upload, with the platform server automatically cropping or resizing the images to fit the specified size.
In step S1030, the n frame images of the target animation are in one-to-one correspondence with the n sections of the target region.
In the embodiment of the present disclosure, the number of images in the target animation is not limited, but the specific number of image frames should be the same as the number of divided regions of the target region, and there is a one-to-one correspondence between the two.
Fig. 11 is a schematic diagram illustrating a processing procedure of step S720 illustrated in fig. 7 in an embodiment. As shown in fig. 11, in the embodiment of the present disclosure, the step S720 may further include the following steps.
In step S721, a guideline is set on the display area of the target animation.
In the embodiment of the present disclosure, the guideline refers to a line segment preset at the position of the display area of the animation, which may be used to determine the position information of the display area of the target animation at the current time.
In the embodiment of the present disclosure, the guideline may be disposed at any position of the display area of the target animation, for example, at any one of the top, the bottom, the left side, the right side, the middle, and the like of the display area, and the guideline may be a straight line, but the present disclosure is not limited thereto.
In step S722, a target section in which the guideline is currently located is determined from the n sections as the position information.
In the embodiment of the present disclosure, it is automatically detected in which of the n divided sections the guideline on the display area of the target animation is located at the current time, and that section is then used as the position information.
For example, fig. 12 schematically shows a schematic diagram of correspondence of each section to each frame image according to an embodiment of the present disclosure.
As shown in FIG. 12, the n frame images in the target animation are shown in order as a set of consecutively numbered pictures: picture 1, picture 2, picture 3, picture 4, picture 5, picture 6, picture 7, through picture n. The whole area of the screen used for displaying information flow items is divided, from bottom to top, into n rectangular intervals of equal height, numbered in turn 1, 2, 3, 4, 5, 6, 7, through n. Picture 1 corresponds to interval 1, picture 2 to interval 2, picture 3 to interval 3, and so on, until picture n corresponds to interval n. A guideline is set at the top of the large-image area of the target animation; whichever interval the guideline reaches, the picture corresponding to that interval is displayed.
For example, in fig. 12, if the guideline is in the interval 1, the picture 1 is displayed correspondingly.
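The lookup from guideline position to picture number can be sketched as follows. This is an illustrative Python sketch only; intervals are assumed to be given as (low, high) coordinate pairs like those produced by the division above, listed in order from interval 1 upward:

```python
def frame_for_guideline(guideline_y: float, intervals):
    """Return the 1-based number of the picture whose interval (low, high)
    currently contains the guideline; 0 means the guideline lies outside
    the target area, so no frame switch occurs."""
    for number, (low, high) in enumerate(intervals, start=1):
        if low <= guideline_y < high:
            return number
    return 0
```

Because the answer depends only on the guideline's current coordinate, the per-scroll-event cost is a single linear scan over n intervals (or a constant-time division when the intervals are equal-height), which is consistent with the low computation cost claimed above.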
It should be noted that, when the user's finger slides on the screen, the information stream displayed on the information flow interface moves along with the hand: when the finger slides the information stream up, the information stream moves up, and when the finger slides it down, the information stream moves down. The position of the guideline changes accordingly, so which frame image should currently be displayed is determined by the position of the guideline, not by the position of the user's finger on the screen at that moment.
Fig. 13 is a schematic diagram illustrating a processing procedure of step S740 shown in fig. 7 in an embodiment. As shown in fig. 13, in the embodiment of the present disclosure, the step S740 may further include the following steps.
In step S741, if the operation instruction is to slide the information flow interface in the first direction, the n frames of images of the target animation are sequentially displayed.
For example, the first direction may be the bottom-to-top direction with respect to the mobile phone screen, set as the forward direction; that is, when the user's finger slides the information flow interface forward, the 1st frame image, the 2nd frame image, and so on up to the nth frame image of the n frame images are displayed in sequence. For details, reference may be made to the embodiment of fig. 14 below. However, the present disclosure is not limited thereto; for example, any of the left-to-right, top-to-bottom, and right-to-left directions may also be used as the first direction.
With continued reference to fig. 13, the step S740 may further include a step S742: and if the operation instruction is to slide the information flow interface in a second direction, sequentially displaying each frame image in the n frame images of the target animation in a reverse order from the target image.
For example, the second direction may be the top-to-bottom direction with respect to the mobile phone screen, set as the reverse direction; that is, when the user's finger slides the information flow interface in reverse, the n frame images are displayed in reverse order starting from the currently displayed target image, for example the nth frame, the (n-1)th frame, the (n-2)th frame, down to the 1st frame. For details, reference may be made to the embodiment of fig. 14 below. However, the present disclosure is not limited thereto; for example, any of the left-to-right, bottom-to-top, and right-to-left directions may also be used as the second direction. The first direction and the second direction may be different directions, or opposite directions.
Fig. 14 schematically illustrates a schematic diagram of forward and reverse sliding according to an embodiment of the present disclosure.
As shown in fig. 14, if the user's finger first slides forward, from bottom to top, on the mobile phone screen so that the guideline moves from interval 1 to interval 11, the 1st to 11th frame images of the target animation are displayed in sequence. If the user then slides in reverse, from top to bottom, until the guideline reaches interval 8, the 11th, 10th, 9th, and 8th frame images of the target animation are displayed in sequence.
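The forward and reverse playback of fig. 14 reduces to enumerating frame numbers between the current and target intervals. A minimal Python sketch (1-based frame numbers; an illustration, not the claimed implementation):

```python
def frames_to_display(current: int, target: int):
    """Frame numbers shown, in order, as the guideline moves from interval
    `current` to interval `target`: ascending on a forward slide, descending
    (starting from the currently displayed frame) on a reverse slide."""
    if target >= current:
        return list(range(current, target + 1))
    return list(range(current, target - 1, -1))
```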
This is illustrated below by way of example in fig. 15-24. Here, the promotion information is taken as a certain game as an example.
As shown in fig. 15, when the user's finger slides the information flow interface upward (forward) on the screen and the guideline of the display area of the target animation is located in interval 1, the first frame image of the target animation is displayed. As shown in figs. 16-20, as the user's finger continues to slide forward, the guideline reaches intervals 2, 3, 4, 5, and 6 in turn, and the second, third, fourth, fifth, and sixth frames of the target animation are displayed in turn.
During the sliding of the finger, the content displayed on the information flow interface changes correspondingly with the speed and direction of the slide. If the user's finger stops sliding midway, the target animation has played to some frame A at the current moment, and the image of frame A is presented in the large-image area. Regardless of which frame the user's finger stops on, when the information stream is slid in the reverse direction the content of the target animation is played in reverse order starting from that frame.
As shown in fig. 21, assuming the user's finger stopped on the fourth frame image, the fourth frame image is displayed first when sliding in the reverse direction. As the user's finger continues to slide downward (in reverse), the guideline reaches intervals 3, 2, and 1 in turn, and the third, second, and first frames of the target animation are displayed in reverse order, as shown in figs. 22-24.
In the above example, the text area of the display area of the target animation displays "a healing mini game, release your work pressure", and the other information below, the "advertisement xxxxAPP download" notice and the "download" virtual key, does not change as the user's finger slides. However, the disclosure is not limited thereto; in other embodiments, the content of these areas may also be updated according to actual needs.
It should be noted that in the above embodiment, when the frame images of the target animation are displayed on the information flow interface, they are displayed in the display area of the target animation at a constant size, for example according to a specified large-image size. However, the present disclosure is not limited thereto. In other embodiments, when the user's finger slides forward and/or in reverse on the information flow interface, the displayed frame images may be enlarged frame by frame, and during enlargement the displayed target image may exceed the originally set display area of the target animation, thereby achieving a better dynamic display effect and catching the user's attention. In some embodiments, it may further be configured that when the user's finger slides forward on the information flow interface, the sequentially displayed frame images gradually enlarge, and when the user's finger slides in reverse, the frame images displayed in reverse order gradually shrink, and so on.
The image processing method provided by the embodiment of the present disclosure gives the information stream a large-image display structure and creates an operation mode in which the target animation is played in forward or reverse order as the user's finger slides the information stream up and down, so that the dynamic content in the promotion information has more sufficient display opportunities in the information stream, which can improve the click-through rate of the content.
Fig. 25 schematically shows a flowchart of an image processing method according to still another embodiment of the present disclosure. As shown in fig. 25, the method provided by the embodiment of the present disclosure may further include the following steps, which are different from the above-described embodiment.
In step S2510, when the previous screen of the target animation is displayed on the information flow interface, n frames of images of the target animation are loaded.
In the embodiment of the present disclosure, the previous screen of the target animation refers to the information flow display interface at the point when the target animation is about to be displayed but has not yet been displayed. That is, the front end preloads the target animation while its images are not yet displayed. Because this preloading technique is adopted, all n frame images can be fully loaded in advance, so that when the target animation is displayed on the information flow interface it no longer needs to be loaded frame by frame, and the target animation can be played smoothly, in order and without stuttering, as the user's finger slides up and down on the screen.
FIG. 26 is a diagram illustrating a processing procedure of step S2510 shown in FIG. 25 in one embodiment. As shown in fig. 26, in the embodiment of the present disclosure, the step S2510 may further include the following steps.
In step S2511, a still preview image of the n frame images of the target animation is determined.
In the embodiment of the disclosure, when the promotion account terminal uploads the target video, any one frame may be selected as the static preview image of the target animation.
In step S2512, the still preview image is loaded.
When the user terminal loads the target animation, it may be configured to load the selected still preview image first.
In step S2513, images other than the still preview image in the n-frame images are sequentially loaded.
After the still preview image is loaded, the remaining images are loaded in order. A specific loading sequence is shown in fig. 27 below.
In an exemplary embodiment, the method may further include: displaying the still preview image when the target image fails to be loaded.
In this way, when one or more frames of the target animation fail to load, no dynamic effect is shown; instead of displaying text or a "loading" placeholder to the user, only the selected still preview image is displayed. This improves the user experience while still conveying at least part of the promotion information.
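This fallback behavior can be sketched as a small helper; the names below (and the convention of marking a failed frame with null) are illustrative assumptions, not the patent's implementation.

```typescript
/**
 * Minimal sketch of the fallback described above (names are assumptions):
 * any frame that failed to load (marked null here) is replaced by the
 * still preview image, so the user sees promotion content rather than a
 * "loading" placeholder.
 */
function resolveFrameSources(
  frameUrls: (string | null)[], // null marks a frame whose load failed
  previewUrl: string            // URL of the selected still preview image
): string[] {
  return frameUrls.map((url) => url ?? previewUrl);
}
```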
FIG. 27 schematically shows a schematic diagram of an image loading sequence according to an embodiment of the present disclosure.
As shown in fig. 27, assuming the 25th frame of the target animation is selected as the still preview image, the loading order is: frame 25, frames 1 to 24, then frames 26 to n.
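The loading order in fig. 27 generalizes to: the preview frame first, then the remaining frames in ascending order. A minimal sketch, with assumed names:

```typescript
/**
 * Sketch of the fig. 27 loading order (function name is an assumption):
 * the selected still preview frame is loaded first, then all remaining
 * frames in ascending order. Frame numbers are 1-based, matching the
 * description.
 */
function preloadOrder(n: number, previewFrame: number): number[] {
  const order = [previewFrame];
  for (let frame = 1; frame <= n; frame++) {
    if (frame !== previewFrame) order.push(frame);
  }
  return order;
}
```

With frame 25 selected as the preview, this yields 25, then 1–24, then 26–n, matching the order stated above.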
Fig. 28 schematically shows a flowchart of an image processing method according to still another embodiment of the present disclosure. As shown in fig. 28, the method provided by the embodiment of the present disclosure may further include the following steps, which are different from the above-described embodiment.
In step S2810, distribution information of the target animation is acquired.
In the embodiment of the present disclosure, when the promoted account terminal uploads the target video, it may simultaneously upload distribution information it has set for the target animation. The distribution information may include at least one of the following: whether the distance between the geographic position of the user account and a set geographic position (for example, the location of the merchant corresponding to the promotion information) is smaller than a predetermined threshold; whether the user's gender meets a set condition; whether the user's age meets a set condition; whether the user's occupation meets a set condition; a release time condition of the promotion information; the number of times the promotion information is to be released; and so on.
As further examples, the distribution information may also include: whether the user has already seen the target animation within the current day, whether the user gave negative feedback after seeing the target animation, and so on. The present disclosure does not limit the manner of setting or the content of the distribution information.
In step S2820, a target distribution object of the target animation is determined according to the distribution information.
For example, suppose the distribution information specifies that the distance between the user and the set geographic position "place C" is less than 1000 meters and that the user's gender is female. It is then detected whether the distance between the user's geographic position and place C is less than 1000 meters and whether the user's gender is female. If both conditions are met, the user is determined to be a target distribution object of the target animation; otherwise, the user is not a target distribution object.
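The place-C example can be sketched as a simple predicate. The type and names below are illustrative assumptions for this example, not the patent's data model.

```typescript
// Assumed data model for the worked example above.
interface UserProfile {
  distanceToPlaceMeters: number;         // distance from the user to the set geographic position
  gender: "female" | "male" | "other";
}

/**
 * Sketch of the example above (names are assumptions): a user is a target
 * distribution object only if every condition in the distribution
 * information is met -- here, distance below the threshold and a matching
 * gender.
 */
function isTargetDistributionObject(
  user: UserProfile,
  maxDistanceMeters: number,
  requiredGender: UserProfile["gender"]
): boolean {
  return (
    user.distanceToPlaceMeters < maxDistanceMeters &&
    user.gender === requiredGender
  );
}
```

In practice, the distribution information could carry any subset of the conditions listed above, with the predicate checking each one present.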
In step S2830, the target animation is sent to an information flow interface of the target distribution object.
For example, if a user is detected to be a target distribution object of a random interactive large image, the target animation is sent to the information flow interface corresponding to that target distribution object and presented at a random position in the information flow.
A target distribution object that receives the target animation can interact with the dynamic information of the promoted account through comment or like operations.
According to the image processing method provided by the embodiments of the present disclosure, on the one hand, information can be expressed in more dimensions, user attention is attracted, and content distribution efficiency is improved; at the same time, advertisers are given more choices, expanding the advertising ecosystem in many respects. On the other hand, playing the target animation as the user slides the information stream forms a basic capability: an output party (anyone who outputs animation content) gains more freedom and room to produce highly interactive, creative information-stream content, no longer limited to the style of two static large images. Creative content no longer needs to be custom-built by front-end developers for each release; the output party can flexibly configure the information content without being restricted to a single switching mode or picture presentation. Since front-end developers need not carry out development work for each creative requirement, the development workload is greatly reduced and output efficiency is improved. Meanwhile, by inserting target animations containing promotion information into the information flow, precise delivery can be achieved in combination with big data and artificial intelligence technology, yielding a good brand exposure effect. When the user slides the information flow to the display area of the target animation, the target animation is played dynamically frame by frame, and this strong interaction easily catches the user's eye. In the mobile internet era, holding a user's attention for even one extra second is a competitive advantage for advertisers.
Fig. 29 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 29, an image processing apparatus 2900 provided by the embodiments of the present disclosure may include: an interface display module 2910, a position determination module 2920, a target image determination module 2930, and a target image display module 2940.
The interface display module 2910 may be configured to display an information flow interface. The position determining module 2920 may be configured to determine, in response to an operation instruction on the information flow interface, position information of a display area of a target animation in the information flow interface, where the target animation may include n frames of images arranged in sequence, and n may be a positive integer greater than or equal to 3. The target image determination module 2930 may be configured to determine a target image from the n frames of images of the target animation according to the position information. The target image display module 2940 may be configured to control the target image to be displayed on the information flow interface.
In an exemplary embodiment, the image processing apparatus 2900 may further include: a region determination module that may be configured to determine a target region in the information flow interface; an interval dividing module may be configured to sequentially divide the target area into n intervals.
In an exemplary embodiment, the image processing apparatus 2900 may further include: a video acquisition module that may be configured to acquire a target video; an image obtaining module configured to obtain n frames of images from the target video for generating the target animation; and the image interval corresponding module can be configured to correspond the n frames of images of the target animation to the n intervals of the target area in a one-to-one mode.
In an exemplary embodiment, the position determination module 2920 may include: a guideline setting unit, which may be configured to set a guideline on the display area of the target animation; and a position determining unit, which may be configured to determine, from the n intervals, the target interval in which the guideline is currently located as the position information.
In an exemplary embodiment, the target image display module 2940 may include: a forward-order playing unit, which may be configured to, if the operation instruction is to slide the information flow interface in a first direction, sequentially display each of the n frames of images of the target animation in order.
In an exemplary embodiment, the target image display module 2940 may further include: a reverse-order playing unit, which may be configured to, if the operation instruction is to slide the information flow interface in a second direction, sequentially display, in reverse order starting from the target image, each of the n frames of images of the target animation.
In an exemplary embodiment, the image processing apparatus 2900 may further include: the preloading module can be configured to load the n frames of images of the target animation when the previous screen of the target animation is displayed on the information flow interface.
In an exemplary embodiment, the preload module may include: a preview image determination unit configured to determine a still preview image among the n frame images of the target animation; a preview image loading unit that may be configured to load the still preview image; and the other image loading unit can be configured to sequentially load other images except the static preview image in the n frames of images.
In an exemplary embodiment, the image processing apparatus 2900 may further include: a preview image display unit may be configured to display the still preview image when the target image fails to be loaded.
In an exemplary embodiment, the image processing apparatus 2900 may further include: a distribution information acquisition module that may be configured to acquire distribution information of the target animation; a distribution object determination module configured to determine a target distribution object of the target animation according to the distribution information; and the animation sending module can be configured to send the target animation to an information flow interface of the target distribution object.
The specific implementation of each module and unit in the image processing apparatus provided in the embodiment of the present disclosure may refer to the content in the image processing method, and is not described herein again.
It should be noted that although several modules and units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more of the modules and units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided among a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. An image processing method, comprising:
displaying an information flow interface;
in response to an operation instruction of the information flow interface, determining position information of a display area of a target animation in the information flow interface, wherein the target animation comprises n frames of images which are arranged in sequence, and n is a positive integer greater than or equal to 3;
determining a target image from n frames of images of the target animation according to the position information;
and controlling the target image to be displayed on the information flow interface.
2. The image processing method according to claim 1, further comprising:
determining a target area in the information flow interface;
and dividing the target area into n intervals in sequence.
3. The image processing method according to claim 2, further comprising:
acquiring a target video;
obtaining n frames of images according to the target video for generating the target animation;
and enabling the n frames of images of the target animation to correspond to the n intervals of the target area one by one.
4. The image processing method according to claim 3, wherein determining position information of a display area of a target animation in the information flow interface in response to an operation instruction to the information flow interface comprises:
setting a guideline on the display area of the target animation;
and determining a target interval in which the guideline is currently positioned from the n intervals as the position information.
5. The image processing method of claim 1, wherein controlling the target image to be displayed on the information flow interface comprises:
and if the operation instruction is to slide the information flow interface in a first direction, sequentially displaying each frame image in the n frame images of the target animation on the information flow interface in sequence.
6. The image processing method of claim 5, wherein controlling the target image to be displayed on the information flow interface further comprises:
and if the operation instruction is to slide the information flow interface in a second direction, sequentially displaying each frame image in the n frame images of the target animation on the information flow interface in a reverse order from the target image.
7. The image processing method according to claim 1, further comprising:
and when the previous screen of the target animation is displayed on the information flow interface, loading n frames of images of the target animation.
8. The image processing method according to claim 7, wherein loading n-frame images of the target animation comprises:
determining a static preview image in the n frames of images of the target animation;
loading the static preview image;
and loading other images except the static preview image in the n frames of images in sequence.
9. The image processing method according to claim 8, further comprising:
and when the target image fails to be loaded, displaying the static preview image.
10. The image processing method according to claim 1, further comprising:
acquiring distribution information of the target animation;
determining a target distribution object of the target animation according to the distribution information;
and sending the target animation to an information flow interface of the target distribution object.
11. An image processing apparatus characterized by comprising:
an interface display module configured to display an information flow interface;
the position determining module is configured to respond to an operation instruction of the information flow interface and determine position information of a display area of a target animation in the information flow interface, wherein the target animation comprises n frames of images which are arranged in sequence, and n is a positive integer greater than or equal to 3;
a target image determining module configured to determine a target image from n frames of images of the target animation according to the position information;
and the target image display module is configured to control the target image to be displayed on the information flow interface.
12. An electronic device, comprising:
one or more processors;
a storage device configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of claims 1 to 10.
13. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out an image processing method according to any one of claims 1 to 10.
CN201911151350.7A 2019-11-21 2019-11-21 Image processing method and device, electronic equipment and computer readable storage medium Pending CN112927326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911151350.7A CN112927326A (en) 2019-11-21 2019-11-21 Image processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911151350.7A CN112927326A (en) 2019-11-21 2019-11-21 Image processing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112927326A true CN112927326A (en) 2021-06-08

Family

ID=76160723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911151350.7A Pending CN112927326A (en) 2019-11-21 2019-11-21 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112927326A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553131A (en) * 2021-07-12 2021-10-26 网易传媒科技(北京)有限公司 Information display method, medium, device and computing equipment
CN113722628A (en) * 2021-08-31 2021-11-30 北京百度网讯科技有限公司 Method, apparatus, device and medium for displaying information stream
CN114077371A (en) * 2021-11-12 2022-02-22 北京百度网讯科技有限公司 Information display method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180310043A1 (en) * 2016-08-08 2018-10-25 Tencent Technology (Shenzhen) Company Limited Media information delivery method and system, terminal, server, and storage medium
CN109218819A (en) * 2018-09-29 2019-01-15 维沃移动通信有限公司 A kind of video previewing method and mobile terminal
CN110018763A (en) * 2019-03-19 2019-07-16 阿里巴巴集团控股有限公司 The method and apparatus of page presentation
CN110032700A (en) * 2019-04-04 2019-07-19 网易(杭州)网络有限公司 Information distribution control method, device, storage medium and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180310043A1 (en) * 2016-08-08 2018-10-25 Tencent Technology (Shenzhen) Company Limited Media information delivery method and system, terminal, server, and storage medium
CN109218819A (en) * 2018-09-29 2019-01-15 维沃移动通信有限公司 A kind of video previewing method and mobile terminal
CN110018763A (en) * 2019-03-19 2019-07-16 阿里巴巴集团控股有限公司 The method and apparatus of page presentation
CN110032700A (en) * 2019-04-04 2019-07-19 网易(杭州)网络有限公司 Information distribution control method, device, storage medium and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553131A (en) * 2021-07-12 2021-10-26 网易传媒科技(北京)有限公司 Information display method, medium, device and computing equipment
CN113722628A (en) * 2021-08-31 2021-11-30 北京百度网讯科技有限公司 Method, apparatus, device and medium for displaying information stream
WO2023029443A1 (en) * 2021-08-31 2023-03-09 北京百度网讯科技有限公司 Method and apparatus for displaying information flow, device, and medium
CN114077371A (en) * 2021-11-12 2022-02-22 北京百度网讯科技有限公司 Information display method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
JP7230158B2 (en) Dynamic custom interstitial transition video for video streaming services
US9723335B2 (en) Serving objects to be inserted to videos and tracking usage statistics thereof
US10616727B2 (en) System and method for location-based content delivery and visualization
US20160300594A1 (en) Video creation, editing, and sharing for social media
CN111447489A (en) Video processing method and device, readable medium and electronic equipment
CN112927326A (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2015089100A1 (en) Social messaging system and method
JP2023533457A (en) Method, Apparatus, and Device for Posting and Replying to Multimedia Content
US20170287000A1 (en) Dynamically generating video / animation, in real-time, in a display or electronic advertisement based on user data
US20160035016A1 (en) Method for experiencing multi-dimensional content in a virtual reality environment
US11631114B2 (en) Augmenting web-based video media with online auction functionality
US11991602B2 (en) System and method for location-based content delivery and visualization
KR20240042145A (en) Video publishing methods, devices, electronic equipment and storage media
CN110324676A (en) Data processing method, media content put-on method, device and storage medium
CN115190366B (en) Information display method, device, electronic equipment and computer readable medium
CN114679628B (en) Bullet screen adding method and device, electronic equipment and storage medium
US10453491B2 (en) Video processing architectures which provide looping video
CN115269886A (en) Media content processing method, device, equipment and storage medium
JP6695826B2 (en) Information display program, information display device, information display method, and distribution device
CN114025188B (en) Live advertisement display method, system, device, terminal and readable storage medium
CA3199704A1 (en) Video advertisement augmentation with dynamic web content
CN114500427A (en) Method, apparatus and computer readable medium for transmitting advertisement message in chat group
CN117786159A (en) Text material acquisition method, apparatus, device, medium and program product
CN117061692A (en) Rendering custom video call interfaces during video calls

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40047292

Country of ref document: HK