CN111190526A - Image processing method, image processing apparatus, computer device, and medium - Google Patents

Image processing method, image processing apparatus, computer device, and medium

Info

Publication number
CN111190526A
CN111190526A (application number CN202010020874.9A)
Authority
CN
China
Prior art keywords
image
information
display position
target
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010020874.9A
Other languages
Chinese (zh)
Other versions
CN111190526B (en)
Inventor
林晓文
李烈强
谢天
骆玘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010020874.9A
Publication of CN111190526A
Application granted
Publication of CN111190526B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483 Interaction with page-structured environments, e.g. book metaphor
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0485 Scrolling or panning
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses an image processing method, an image processing apparatus, computer equipment and a medium. The method comprises the following steps: when information is displayed in a user interface, displaying a first image related to the information at a first display position corresponding to the information; if a movement trigger event for the first image is detected, controlling the first image to move from the first display position to a second display position in the user interface; replacing the first image with a second image related to the information during the movement of the first image; and displaying the second image in a floating manner at the second display position. By adopting the embodiment of the invention, the attraction of the information to the user can be effectively improved, and the user stickiness of the information is enhanced.

Description

Image processing method, image processing apparatus, computer device, and medium
Technical Field
The present invention relates to the field of internet technologies, in particular to the field of image processing technologies, and more particularly to an image processing method, an image processing apparatus, a computer device, and a computer storage medium.
Background
With the development of internet technology, more and more information (such as advertisement information) appears in users' daily lives. At present, when information is pushed to a user, it is usually displayed directly in a user interface without any image processing. As a result, existing information has a single display mode and is less attractive to users, so that users feel little participation or interaction and easily become accustomed to ignoring the information. How to enhance the user stickiness of information has therefore become a research focus.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing apparatus, computer equipment and a medium, which can effectively improve the attraction of information to users and enhance the user stickiness of the information.
In one aspect, an embodiment of the present invention provides an image processing method, where the image processing method includes:
when information is displayed in a user interface, displaying a first image related to the information at a first display position corresponding to the information;
if a movement trigger event for the first image is detected, controlling the first image to move from the first display position to a second display position in the user interface;
replacing the first image with a second image related to the information during the movement of the first image;
and displaying the second image in a floating mode at the second display position.
In one aspect, an embodiment of the present invention provides an image processing apparatus, including:
the display unit is used for displaying a first image related to the information at a first display position corresponding to the information when the information is displayed in a user interface;
the processing unit is used for controlling the first image to move from the first display position to a second display position in the user interface if a movement trigger event aiming at the first image is detected;
the processing unit is further configured to replace the first image with a second image related to the information during the movement of the first image;
the display unit is further configured to display the second image in a floating manner at the second display position.
In one aspect, an embodiment of the present invention provides a computer device, where the computer device includes an input interface and an output interface, and the computer device further includes:
a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the steps of:
when information is displayed in a user interface, displaying a first image related to the information at a first display position corresponding to the information;
if a movement trigger event for the first image is detected, controlling the first image to move from the first display position to a second display position in the user interface;
replacing the first image with a second image related to the information during the movement of the first image;
and displaying the second image in a floating mode at the second display position.
In one aspect, an embodiment of the present invention provides a computer storage medium, where one or more instructions are stored, where the one or more instructions are adapted to be loaded by a processor and perform the following steps:
when information is displayed in a user interface, displaying a first image related to the information at a first display position corresponding to the information;
if a movement trigger event for the first image is detected, controlling the first image to move from the first display position to a second display position in the user interface;
replacing the first image with a second image related to the information during the movement of the first image;
and displaying the second image in a floating mode at the second display position.
In the embodiment of the invention, when information is displayed in the user interface, a first image related to the information can be displayed at the first display position corresponding to the information, so as to attract the user's attention to the information. If a movement trigger event for the first image is detected, the first image can be controlled to move from the first display position to a second display position in the user interface; by controlling the movement of the first image, the interest of the information display can be improved, and the attraction of the information to the user is further improved. During the movement of the first image, the first image is replaced with a second image related to the information, so that the second image is displayed in a floating manner at the second display position; through the seamless replacement between the first image and the second image, a sense of surprise can be brought to the user during the display of the information, so that the attraction of the information to the user is further improved and the user stickiness of the information is enhanced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1a is a system architecture diagram of an image processing system according to an embodiment of the present invention;
FIG. 1b is a schematic diagram of an image material according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of a first display position according to an embodiment of the present invention;
FIG. 3b is a schematic diagram illustrating a variation of a first display position according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of replacing a first image with a second image according to an embodiment of the present invention;
FIG. 3d is a schematic diagram of a method for replacing a first image with a second image according to an embodiment of the present invention;
FIG. 3e is a schematic diagram of a position of a suspension layer according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
FIG. 5a is a schematic diagram illustrating an information display according to an embodiment of the present invention;
FIG. 5b is a schematic diagram of replacing a target image with a first image according to an embodiment of the present invention;
FIG. 5c is a schematic diagram of a first image display according to an embodiment of the present invention;
FIG. 5d is a diagram illustrating a second image replacing a first image according to an embodiment of the present invention;
FIG. 5e is a schematic diagram illustrating a display of a first frame image in a second image according to an embodiment of the present invention;
FIG. 5f is a schematic diagram illustrating a second image according to an embodiment of the present invention;
FIG. 5g is a schematic diagram of replacing a second image with a target image according to an embodiment of the present invention;
FIG. 5h is a schematic diagram of a target image display according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
In the embodiment of the invention, information refers to content displayed in a user interface for a user to browse, and may include, but is not limited to: advertisement information, news information, social interaction information, and the like. Advertisement information refers to information transmitted by an advertiser to audience users in order to attract their attention to an advertised object (such as an automobile or a certain brand); an advertisement message may include content in the form of text, images, audio, video, and so on. News information refers to publicly disseminated facts and reports on the latest state of affairs, that is, information with news value that has recently come to the public's attention. Social interaction information refers to information published by a user on a social platform to reflect the user's mood, status, and the like; for example, dynamic information published by a contact in a WeChat friend circle or a QQ space, dynamic information published by a user in the microblog APP (application), and the like.
In order to enhance the user stickiness of information during its display, the embodiment of the invention provides an image processing scheme and a corresponding image processing system. User stickiness reflects the degree of attention a user pays to the information: the higher the user stickiness, the more attention the user pays to the information. Referring to FIG. 1a, the image processing system may include at least: at least one computer device 11 and a server 12. The computer device 11 may include, but is not limited to: terminal devices such as smart phones, tablet computers, laptop computers and desktop computers, or APPs running in the terminal devices, such as content interaction APPs (e.g. the microblog APP), instant messaging APPs (e.g. Tencent QQ, the WeChat APP), browser APPs, and so on. The server 12 is a server capable of providing a number of business services, such as an information service and a material service, for the computer device 11; it includes but is not limited to: data processing servers, application servers and web servers, among others. The information service is a service that provides information to the computer device 11; the material service is a service that provides image material related to the information to the computer device 11. Image material is a file from which an image is constructed, and may be uploaded to the server 12 in advance by a publisher of the information (such as an advertiser). It should be noted that FIG. 1a merely represents an exemplary system architecture of the image processing system and does not limit its specific architecture. For example, the server 12 in the image processing system shown in FIG. 1a is a stand-alone service device; in this case, the business services such as the information service and the material service are provided by a single service device (i.e. the server 12). In practical applications, however, the server 12 may be a cluster formed by a plurality of service devices; in this case, each business service can be provided by a separate service device in the server 12, for example the information service by an information server and the material service by a material server. Moreover, when the server 12 is physically deployed, it may be deployed inside a blockchain network or outside the blockchain network, which is not limited in the embodiment of the present invention. A blockchain network refers to a network formed by a blockchain and a P2P (peer-to-peer) network; the blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains information about a batch of network transactions and is used to verify the validity (anti-counterfeiting) of the information and to generate the next block.
The image processing scheme proposed by the embodiment of the present invention can be executed by any computer device 11 in the above-mentioned image processing system, and its specific principle is as follows: the publisher of the information can upload the information to be promoted and the related image material to the server 12 for storage through an information promotion platform (such as an advertisement delivery platform). Specifically, if the server 12 is deployed outside the blockchain network, the publisher can directly upload the information to be promoted and the related image material to the server 12 for storage through the information promotion platform. If the server 12 is deployed in the blockchain network, the publisher can upload the information to be promoted and the related image material to the blockchain network through the information promotion platform; the information and the image material are verified by consensus among the node devices in the blockchain network, and after the consensus passes, a corresponding block is generated and stored in the server 12. The related image material may comprise one or more of the following: material A, material B and material C, as shown in FIG. 1b. Material A may be a file constituting a static image; the static image formed from material A may be a static image or a static avatar in PNG (Portable Network Graphics) format. An avatar is an image used as a user identification on a website or social platform, such as a QQ avatar or a WeChat avatar. Material B may be a file constituting a dynamic image; the dynamic image formed from material B may be a dynamic image or a dynamic avatar in GIF (Graphics Interchange Format) format. Material C may be a file constituting a floating image (an image displayed on a floating layer), where the floating layer is a static or dynamic layer with a transparent background floating at the topmost layer of the user interface; the floating image formed from material C may be a floating static image in PNG format or a floating dynamic image in GIF format.
In practical applications, the server 12 can send the image material shown in FIG. 1b and the information to the computer device 11. When the computer device 11 displays the information, it can display the corresponding images in sequence according to the materials in the image material. Taking the case in which the image material includes three materials, namely material A, material B and material C, the computer device may display image A corresponding to material A and image B corresponding to material B in sequence, in the order material A → material B, at a position (e.g. position A) within the user interface. Then, a movement effect may be added in which image B jumps out of position A and moves to another position within the user interface (e.g. position B); image B is replaced with image C corresponding to material C during the movement of image B, so that image C is displayed in a floating manner at position B. Through the movement and replacement of the image material, the image processing scheme provided by the embodiment of the invention can therefore effectively increase the interest and surprise of the information during its display, thereby improving the attraction of the information to the user and enhancing the user stickiness of the information.
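By way of a non-limiting illustration of this material sequence, the following TypeScript sketch models the three materials and the A → B → C display flow described above; the interface names, field names and helper callbacks are hypothetical and merely stand in for whatever the information promotion platform and the client actually use.

```typescript
// Minimal sketch of the A -> B -> C material sequence (assumed data model).
interface ImageMaterial {
  kind: "static" | "dynamic" | "floating"; // material A / B / C respectively
  format: "png" | "gif";
  url: string;                             // hypothetical material URL
}

interface PromotedInfo {
  id: string;
  materials: { a: ImageMaterial; b: ImageMaterial; c: ImageMaterial };
}

// Display images A and B in turn at position A, then move B out and
// replace it with floating image C at position B (detailed in later steps).
async function playMaterialSequence(
  info: PromotedInfo,
  showAtPositionA: (m: ImageMaterial) => Promise<void>,
  moveAndReplaceWithC: (b: ImageMaterial, c: ImageMaterial) => Promise<void>,
): Promise<void> {
  await showAtPositionA(info.materials.a);                        // static image A
  await showAtPositionA(info.materials.b);                        // dynamic image B
  await moveAndReplaceWithC(info.materials.b, info.materials.c);  // float image C at position B
}
```

The concrete rendering, movement and replacement steps are deliberately left as callbacks here, since those steps are detailed in the method embodiments below.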
It will be appreciated that the method steps set forth in embodiments of the invention may be performed by a computer device, including but not limited to a terminal or a client.
Based on the above description, an embodiment of the present invention proposes an image processing method that can be executed by the above-mentioned computer apparatus. Referring to fig. 2, the image processing method may include the following steps S201 to S204:
S201, when the information is displayed in the user interface, a first image related to the information is displayed at a first display position corresponding to the information.
As can be seen from the foregoing, the information may include, but is not limited to: advertisement information, news information, social interaction information, and the like. For convenience of illustration, the information mentioned in the following is illustrated by taking advertisement information as an example. The computer device can acquire the information and a first image material of a first image related to the information from the server in advance or in real time and parse the first image material to obtain the first image; the first image may be a static image related to the information or a dynamic image related to the information, which is not limited in the embodiment of the present invention. In particular implementations, the first image may include at least one of: the target object promoted by the information and the person promoting the target object. The target object may include a clothing brand, a vehicle, a skin care product, and the like; the person promoting the target object may be a real person with certain influence (such as an actor or a singer) or a virtual character. When the information needs to be displayed, the computer device can display the information in the user interface; and when the information is displayed in the user interface, a first image related to the information is displayed at the first display position corresponding to the information.
In one embodiment, if the information is published on a social platform, the computer device may display the information in a content sharing interface of the social platform when displaying the information in the user interface. Social platforms here may include, but are not limited to: content sharing platforms in instant messaging APPs (such as the friend circle in the WeChat APP, the public platform (public account for short) in the WeChat APP, the QQ space in the QQ APP, and the like), and content sharing platforms in content interaction APPs (such as the home page in the microblog APP). Accordingly, the computer device can display the information in a dynamic display interface of the friend circle or the QQ space, or in a content detail interface of an article pushed by a public platform, and so on. In this embodiment, the first display position corresponding to the information may be the identification display position of the publisher of the information; the identification display position refers to the display position of the publisher's user identification in the user interface, and the user identification can include a user avatar, a user name, and the like. Taking the user identification as the user avatar as an example, the first display position (i.e. the identification display position) can be seen in FIG. 3a. It should be noted that the first display position is associated with the display position of the information in the user interface and changes dynamically; for example, when the user slides the information upward in the user interface (i.e. toward the top of the user interface), the display position of the information moves upward and the first display position moves upward accordingly, as shown in FIG. 3b. In another embodiment, if the information is published in the browser APP, the computer device may display the information in an information browsing interface of the browser APP when displaying the information in the user interface. In this embodiment, the first display position corresponding to the information may be any position in the display area corresponding to the information, such as the top-left position or the top-right position within the display area. For convenience of explanation, the following description takes the first display position as the identification display position corresponding to the user avatar of the publisher of the information.
S202, if a movement trigger event for the first image is detected, the first image is controlled to move from the first display position to a second display position in the user interface.
The computer device may detect whether there is a movement trigger event for the first image while displaying the first image. A movement trigger event for the first image is an event that triggers the movement of the first image; it may include, but is not limited to: an event in which the display duration of the first image exceeds a first duration threshold, or an event in which the user is detected sliding the information via a gesture or a mouse. Correspondingly, if the computer device detects that the display duration of the first image is greater than the first duration threshold, it may determine that a movement trigger event for the first image is detected; the first duration threshold can be set according to an empirical value or a service requirement, for example to 2 seconds. Alternatively, if the computer device detects a sliding operation in which the user slides the information to the top of the user interface through a gesture or a mouse, it may determine that a movement trigger event for the first image is detected. Optionally, when the first image is a dynamic image, the movement trigger event for the first image may further include: an event in which every frame image in the first image has been displayed; for example, if the first image is a dynamic image composed of 5 frame images, the computer device may determine that a movement trigger event for the first image is detected after the 5 frame images have been displayed in sequence.
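As one possible, non-authoritative way to realize such trigger detection on a web-based client, the sketch below watches both the display duration and (for a dynamic first image) the number of frames already shown; the threshold value, the polling interval and the callback names are illustrative assumptions.

```typescript
// Sketch of movement-trigger detection; frame playback tracking is supplied by the caller.
const FIRST_DURATION_THRESHOLD_MS = 2_000; // e.g. 2 seconds, configurable per service requirement

function watchForMoveTrigger(
  onTrigger: () => void,
  framesTotal?: number,          // defined only when the first image is a dynamic image
  framesShown?: () => number,    // how many frames have been displayed so far
): () => void {
  const shownAt = Date.now();
  const timer = window.setInterval(() => {
    const longEnough = Date.now() - shownAt > FIRST_DURATION_THRESHOLD_MS;
    const allFramesShown =
      framesTotal !== undefined && framesShown !== undefined && framesShown() >= framesTotal;
    if (longEnough || allFramesShown) {
      window.clearInterval(timer);
      onTrigger();               // movement trigger event detected
    }
  }, 100);
  return () => window.clearInterval(timer); // cancel function
}
```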
After detecting a movement trigger event for the first image, the computer device may control the first image to move from the first display position to a second display position in the user interface, so as to attract the user's attention and increase the user's attention to the information. The second display position can be set according to an empirical value or a service requirement; for example, it may be set to the position of the center point of the user interface, or to the position at the upper-left corner of the user interface. For convenience of illustration, the second display position is taken to be the position of the center point of the user interface in the following.
S203, in the moving process of the first image, the second image related to the information is used to replace the first image.
And S204, displaying the second image in a floating mode at the second display position.
In steps S203-S204, the computer device can obtain the second image material of the second image related to the information from the server in advance or in real time and parse the second image material to obtain the second image; the second image may comprise at least one of: the target object promoted by the information and the person promoting the target object. During the movement of the first image, the computer device may replace the first image with the second image related to the information. In a specific implementation, when the first image is replaced with the second image, there are at least two cases. In the first case, the computer device may replace the first image with the second image before the first image reaches the second display position. In this case, after performing the image replacement, the computer device may continue to control the second image to move to the second display position and then display the second image in a floating manner at the second display position, as shown in FIG. 3c. In the second case, the computer device may replace the first image with the second image when the first image reaches the second display position. In this case, the computer device may directly display the second image in a floating manner at the second display position after performing the image replacement, as shown in FIG. 3d.
It should be noted that the floating display mentioned in the embodiment of the present invention refers to display in a floating layer located at the topmost layer of the user interface. For example, if the interface displaying the information is interface 1, the floating layer used to display the second image in a floating manner is located above interface 1, and there is no interface above the floating layer, as shown in FIG. 3e. In order to improve the floating display effect of the second image, the second image can be divided into a background area and a foreground area according to the target object and the person contained in the second image, and the transparency of the background area and the foreground area can be set according to actual requirements, for example setting the transparency of the background area to 100% and the transparency of the foreground area to 0%. The background area is the image area in the second image that contains neither the target object, nor the person, nor animation effects (such as bubbles or snowflakes); the foreground area is the image area in the second image that contains the target object, the person and the animation effects.
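A minimal sketch of such a floating layer follows, assuming a browser-based user interface in which the second image material is a PNG/GIF whose alpha channel already encodes the transparent background area (100% transparency) and the opaque foreground area (0% transparency); element styling and the image URL are illustrative.

```typescript
// Sketch of the topmost floating layer: a transparent full-screen layer above interface 1,
// in which the second image is rendered at the second display position (interface center).
function createFloatingLayer(): HTMLDivElement {
  const layer = document.createElement("div");
  layer.style.position = "fixed";
  layer.style.top = "0";
  layer.style.left = "0";
  layer.style.width = "100vw";
  layer.style.height = "100vh";
  layer.style.zIndex = "2147483647";      // topmost layer of the user interface
  layer.style.background = "transparent"; // transparent bottom, nothing above this layer
  layer.style.pointerEvents = "none";     // underlying interface 1 stays usable
  document.body.appendChild(layer);
  return layer;
}

function showSecondImageFloating(layer: HTMLDivElement, url: string): HTMLImageElement {
  const img = document.createElement("img");
  img.src = url;                          // hypothetical second-image material URL
  img.style.position = "absolute";
  img.style.left = "50%";
  img.style.top = "50%";                  // second display position: center point of the interface
  img.style.transform = "translate(-50%, -50%)";
  img.style.pointerEvents = "auto";       // the image itself remains clickable
  layer.appendChild(img);
  return img;
}
```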
As can be seen from the foregoing, the server according to the embodiment of the present invention may be deployed outside the blockchain network or in the blockchain network. When the server is deployed in the blockchain network, the first image material of the first image and the second image material of the second image described above may be stored in the blockchain network. Accordingly, the computer device may obtain the first image material of the first image and the second image material of the second image from the blockchain network. Specifically, the computer device may send a material acquisition request to a server in the blockchain network to request the server to deliver the first image material of the first image and the second image material of the second image. Correspondingly, the server can respond to the material acquisition request, extract the first image material of the first image and the second image material of the second image from the blocks used for storing image material in the blockchain, and send the first image material and the second image material to the computer device. The computer device can receive the first image material and the second image material sent by the server, parse the first image material to obtain the first image, and parse the second image material to obtain the second image.
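A hedged sketch of the material acquisition request is given below; the endpoint path and the response shape are assumptions, since the embodiment does not prescribe a particular client-server API, and whether the responding server sits inside or outside the blockchain network is transparent to the computer device.

```typescript
// Sketch of the material acquisition request (hypothetical endpoint and response shape).
interface MaterialResponse {
  firstImageMaterial: string;   // e.g. URL or base64 payload of the first (GIF) material
  secondImageMaterial: string;  // e.g. URL or base64 payload of the floating (second) material
}

async function requestMaterials(infoId: string): Promise<MaterialResponse> {
  const resp = await fetch(`/materials?infoId=${encodeURIComponent(infoId)}`); // hypothetical endpoint
  if (!resp.ok) throw new Error(`material request failed: ${resp.status}`);
  return (await resp.json()) as MaterialResponse; // parsed into first and second image material
}
```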
In the embodiment of the invention, when information is displayed in the user interface, a first image related to the information can be displayed at the first display position corresponding to the information, so as to attract the user's attention to the information. If a movement trigger event for the first image is detected, the first image can be controlled to move from the first display position to a second display position in the user interface; by controlling the movement of the first image, the interest of the information display can be improved, and the attraction of the information to the user is further improved. During the movement of the first image, the first image is replaced with a second image related to the information, so that the second image is displayed in a floating manner at the second display position; through the seamless replacement between the first image and the second image, a sense of surprise can be brought to the user during the display of the information, so that the attraction of the information to the user is further improved and the user stickiness of the information is enhanced.
FIG. 4 is a schematic flowchart of another image processing method according to an embodiment of the present invention. The image processing method may be executed by the above-mentioned computer device. In this embodiment of the present invention, the description mainly takes the case where the first image is a dynamic image as an example. Referring to FIG. 4, the image processing method may include the following steps S401 to S408:
S401, when the information is displayed in the user interface, the target image related to the information is displayed at the first display position corresponding to the information.
The computer device can acquire the information from the server in advance or in real time and display the information in the user interface. In addition, during the display of the information, the user can slide the information up and down in the user interface through a gesture or a mouse; when the computer device detects that the user interface includes the first display position corresponding to the information (e.g. the identification display position corresponding to the publisher's user avatar), it can display a target image related to the information at that first display position. The target image related to the information can be obtained by the computer device by parsing target image material, which can be acquired in real time or in advance from the server; the target image may comprise at least one of: the target object promoted by the information and the person promoting the target object. In one embodiment, the target image may be a static image.
S402, if a replacement trigger event for the target image is detected, replacing the target image with a first image related to the information; and displaying the first image at the first display position.
The computer device may detect whether there is a replacement trigger event for the target image during the display of the target image. A replacement trigger event for the target image is an event that triggers the replacement of the target image with the first image (i.e. the dynamic image); it may include, but is not limited to: an event in which the display duration of the target image exceeds a target duration threshold, or an event in which it is detected that the information display area where the information is located is completely displayed in the user interface. Complete display means that the whole of the information display area containing the information lies within the visible range of the user interface. For example, the visible range of the user interface shown in the left diagram of FIG. 5a includes only part of the information display area where the information is located, so the user interface shown in the left diagram of FIG. 5a does not completely display that information display area; the visible range of the user interface shown in the right diagram of FIG. 5a includes the whole of the information display area where the information is located, so the user interface shown in the right diagram of FIG. 5a completely displays that information display area.
Correspondingly, if the computer device detects that the display duration of the target image is greater than the target duration threshold, it may determine that a replacement trigger event for the target image is detected; the target duration threshold here may be set according to an empirical value or a service requirement, for example to 1 second. Alternatively, if the computer device detects that the user interface has completely displayed the information display area in which the information is located, it may determine that a replacement trigger event for the target image is detected. After detecting a replacement trigger event for the target image, the computer device may replace the target image with the first image related to the information, as shown in FIG. 5b. Then, the computer device can display the first image at the first display position corresponding to the information, as shown in FIG. 5c. In one embodiment, the first image and the target image may be independent images; in another embodiment, the target image may also be any one frame image (e.g. the first frame image) of the first image, which is not limited in the embodiment of the present invention. By changing the image at the first display position from a static image (i.e. the target image) to a dynamic image (i.e. the first image), the user's attention can be drawn to and held on the information, thereby enhancing the attraction of the information to the user.
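One possible way to detect such a replacement trigger event in a browser-based client is sketched below, using an IntersectionObserver at a threshold of 1 to detect that the information display area is completely displayed; the duration threshold, the element handling and the function names are illustrative assumptions.

```typescript
// Sketch of replacement-trigger detection for the target image, assuming the
// information is rendered inside a container element (infoArea).
const TARGET_DURATION_THRESHOLD_MS = 1_000; // e.g. 1 second

function watchForReplaceTrigger(infoArea: Element, onTrigger: () => void): () => void {
  let done = false;
  let timer = 0;

  const observer = new IntersectionObserver(
    (entries) => { if (entries.some((e) => e.intersectionRatio >= 1)) fire(); },
    { threshold: 1 },            // fires when the whole information display area is visible
  );
  function fire(): void {
    if (done) return;
    done = true;
    window.clearTimeout(timer);
    observer.disconnect();
    onTrigger();                 // replace the target image with the first image
  }

  observer.observe(infoArea);                                     // complete-display detection
  timer = window.setTimeout(fire, TARGET_DURATION_THRESHOLD_MS);  // display-duration detection
  return () => { done = true; window.clearTimeout(timer); observer.disconnect(); };
}
```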
S403, if a movement trigger event for the first image is detected, controlling the first image to move from the first display position to the second display position in the user interface.
S404, in the moving process of the first image, the second image related to the information is used to replace the first image.
In steps S403-S404, if the computer device detects a movement trigger event for the first image, for example detects that the display duration of the first image is greater than the first duration threshold (e.g. 2 seconds), the computer device may control the first image to move from the first display position to the second display position in the user interface. Specifically, the computer device may first determine a first arc from the first display position to the second display position and may then control the first image to move along the first arc from the first display position to the second display position. During the movement of the first image, the computer device may further replace the first image with the second image related to the information through step S404.
One specific implementation of step S404 may be: during the movement of the first image, the computer device may directly replace the first image with the second image related to the information. Alternatively, another implementation of step S404 may be: during the movement of the first image, the computer device may dynamically reduce the image size of the first image at an image reduction rate, where the image reduction rate can be set according to an empirical value or a service requirement. If it is detected that the image size of the first image has been reduced to a first size threshold, the first image is replaced with the second image related to the information; the first size threshold here may be set according to an empirical value or a service requirement, and may for example be set to 0. Note that when the image size of the first image is reduced to the first size threshold, there may be two cases: in the first case, the image size of the first image may be reduced to the first size threshold before the first image has moved to the second display position; in the second case, the image size of the first image may be reduced to the first size threshold exactly when the first image has moved to the second display position. For convenience of illustration, the first case is taken as an example in the following description.
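The following sketch illustrates one way this movement-plus-shrinking could be animated in a browser-based client; the quadratic curve used as the "first arc", the reduction rate implied by the animation duration, and the callback name are assumptions rather than the prescribed implementation.

```typescript
// Sketch: move the first image from the first to the second display position along an
// arc-like path while shrinking it; swap in the second image at the first size threshold.
interface Point { x: number; y: number; }

function moveShrinkAndReplace(
  img: HTMLImageElement,          // assumed absolutely positioned at the interface origin
  from: Point,
  to: Point,
  durationMs: number,
  firstSizeThresholdPx: number,   // e.g. 0
  replaceWithSecondImage: () => void,
): void {
  const startSize = img.width;
  // A control point above the straight line gives a simple arc between the two positions.
  const ctrl: Point = { x: (from.x + to.x) / 2, y: Math.min(from.y, to.y) - 120 };
  const start = performance.now();

  const step = (now: number) => {
    const t = Math.min((now - start) / durationMs, 1);
    // Quadratic Bezier interpolation along the "first arc".
    const x = (1 - t) ** 2 * from.x + 2 * (1 - t) * t * ctrl.x + t ** 2 * to.x;
    const y = (1 - t) ** 2 * from.y + 2 * (1 - t) * t * ctrl.y + t ** 2 * to.y;
    const size = startSize * (1 - t);            // dynamic reduction at a fixed rate
    img.style.transform = `translate(${x}px, ${y}px)`;
    img.style.width = `${Math.max(size, 0)}px`;

    if (size <= firstSizeThresholdPx) {          // image size reached the first size threshold
      replaceWithSecondImage();                  // replace the first image with the second image
      return;
    }
    if (t < 1) requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}
```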
In the embodiment of the present invention, the second image may be a static image or a dynamic image. Accordingly, the computer device can adopt at least the following two implementations when replacing the first image with the second image. When the second image is a static image, if the computer device detects that the image size of the first image has been reduced to the first size threshold, it can directly replace the first image with the second image; in this case, the image size of the second image may be greater than or equal to the first size threshold. When the second image is a dynamic image composed of a plurality of frame images arranged in sequence, if the computer device detects that the image size of the first image has been reduced to the first size threshold, it can acquire the first frame image of the second image related to the information and replace the first image with that first frame image; in this case, the image size of the first frame image is greater than or equal to the first size threshold. Taking the second image as a dynamic image and the image size of the first frame image as equal to the first size threshold as an example, a schematic diagram of replacing the first image with the second image can be seen in FIG. 5d. By dynamically adjusting the image size to realize the image replacement, the embodiment of the invention can increase the interest of the replacement, attract and hold the user's attention to the information, and further improve the attraction of the information to the user. Moreover, when the first image and the second image contain a person, the image movement and replacement can create for the user the surprise of the person jumping out of the first image, further improving the attraction of the information to the user and encouraging the user to click and view the information, thereby improving the click conversion rate of the information.
It should be noted that, after performing step S401, the computer device may skip steps S402-S404 and instead detect whether there is a movement trigger event for the target image (e.g. an event in which the display duration of the target image is greater than a preset duration threshold). If such an event exists, the target image is controlled to move from the first display position to the second display position, and the target image is replaced with the second image during its movement. The computer device may then directly perform step S405.
S405, displaying the second image in a floating manner at the second display position.
As can be seen from the foregoing, the second image may be a static image or a dynamic image; accordingly, step S405 may include at least the following two embodiments:
when the second image is a static image, the computer device may directly hover display the second image at the second display position. Further, the computer device may dynamically enlarge the image size of the second image according to the image enlargement rate in the process of displaying the second image in a floating manner at the second display position; the image magnification rate here may be set according to an empirical value or a business requirement. If the image size of the second image is enlarged to the second size threshold, the dynamic enlargement operation for the image size of the second image may be stopped, and the second image having the image size of the second size threshold may be displayed at the second display position. The second size threshold herein may be determined according to the size of the user interface, which may be less than or equal to the size of the user interface. It should be understood that the second size threshold may be greater than the first size threshold.
When the second image is a dynamic image, the computer device may first display the first frame image of the second image in a floating manner at the second display position and dynamically enlarge the image size of the first frame image according to the image enlargement rate. Taking the first size threshold as 0 and the second size threshold as equal to the size of the user interface as an example, by dynamically enlarging the image size of the first frame image, the first frame image appears from the center point of the screen (i.e. the second display position) and gradually grows from 0 (the first size threshold) until it fills the whole user interface, as shown in FIG. 5e. If it is detected that the image size of the first frame image has been enlarged to the second size threshold, the frame images of the second image may be displayed in sequence in a floating manner at the second display position, as shown in FIG. 5f.
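A minimal sketch of this floating display of a dynamic second image is given below, assuming the frames are available as separate image URLs; the enlargement duration, frame interval and size threshold values are illustrative.

```typescript
// Sketch: grow the first frame from the first size threshold (assumed 0) to the
// second size threshold, then show the remaining frames of the second image in turn.
function floatAndPlaySecondImage(
  img: HTMLImageElement,
  frameUrls: string[],            // frames of the dynamic second image
  secondSizeThresholdPx: number,  // e.g. the width of the user interface
  growMs = 600,                   // implied image enlargement rate
  frameMs = 200,                  // interval between successive frame images
): void {
  img.src = frameUrls[0];
  const start = performance.now();

  const grow = (now: number) => {
    const t = Math.min((now - start) / growMs, 1);
    img.style.width = `${t * secondSizeThresholdPx}px`;   // dynamic enlargement from 0
    if (t < 1) { requestAnimationFrame(grow); return; }
    // First frame has reached the second size threshold: play the remaining frames.
    let i = 1;
    const timer = window.setInterval(() => {
      if (i >= frameUrls.length) { window.clearInterval(timer); return; }
      img.src = frameUrls[i++];
    }, frameMs);
  };
  requestAnimationFrame(grow);
}
```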
S406, if a closing trigger event for the second image is detected, controlling the second image to move from the second display position to the first display position.
Referring to FIG. 5f, the second image may include a close button. The computer device may detect whether there is a closing trigger event for the second image while displaying the second image. A closing trigger event is an event that triggers the closing of the second image and may include, but is not limited to: an event in which a trigger operation on the close button is detected, or an event in which the display duration of the second image is greater than a second duration threshold. Accordingly, if the computer device detects a trigger operation on the close button, it may determine that a closing trigger event for the second image is detected. Alternatively, if the computer device detects that the display duration of the second image is greater than the second duration threshold, it may determine that a closing trigger event for the second image is detected; the second duration threshold can be set according to an empirical value or a service requirement, for example to 5 seconds. Optionally, when the second image is a dynamic image, the closing trigger event for the second image may further include: an event in which every frame image of the second image has been displayed; for example, if the second image is a dynamic image composed of 8 frame images, the computer device may determine that a closing trigger event for the second image is detected after the 8 frame images have been displayed in sequence.
After detecting a closing trigger event for the second image, the computer device may control the second image to move from the second display position to the first display position, so as to attract the user's attention and increase the user's attention to the information. Specifically, the computer device may first determine a second arc from the second display position to the first display position and may then control the second image to move along the second arc from the second display position to the first display position. It should be noted that when the second image is a dynamic image composed of a plurality of frame images arranged in sequence, controlling the second image to move to the first display position may specifically mean controlling the last frame image of the second image to move from the second display position to the first display position. During the movement of the second image, the computer device may further replace the second image with the target image through step S407; the specific implementation of this step is described below in connection with step S407.
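As an illustration only, the sketch below shows how the closing trigger event could be watched for in a browser-based client; the close-button element, the polling interval and the duration threshold are assumptions.

```typescript
// Sketch of close-trigger detection for the second image: a tap on the close button,
// a display-duration limit, or all frames of a dynamic second image having been shown.
const SECOND_DURATION_THRESHOLD_MS = 5_000; // e.g. 5 seconds

function watchForCloseTrigger(
  closeButton: HTMLElement,
  framesTotal: number,
  framesShown: () => number,
  onClose: () => void,
): () => void {
  let done = false;
  let timer = 0;
  let timeout = 0;

  const cleanup = () => {
    closeButton.removeEventListener("click", fire);
    window.clearInterval(timer);
    window.clearTimeout(timeout);
  };
  function fire(): void {
    if (done) return;
    done = true;
    cleanup();
    onClose();                    // start moving the second image back along the second arc
  }

  closeButton.addEventListener("click", fire);                        // trigger operation on the close button
  timer = window.setInterval(() => {                                  // every frame image displayed
    if (framesShown() >= framesTotal) fire();
  }, 100);
  timeout = window.setTimeout(fire, SECOND_DURATION_THRESHOLD_MS);    // display-duration threshold
  return cleanup;
}
```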
S407, replacing the second image with the target image in the moving process of the second image.
In one specific implementation, step S407 may be implemented as follows: during the movement of the second image, the computer device may directly replace the second image with the target image. It should be understood that when replacing the second image with the target image, there may be at least the following two cases. In the first case, the computer device may replace the second image with the target image before the second image reaches the first display position. In this case, after performing the image replacement, the computer device may continue to control the target image to move to the first display position and then display the target image at the first display position. In the second case, the computer device may replace the second image with the target image when the second image reaches the first display position.
In another specific implementation, step S407 may be implemented as follows: during the movement of the second image, the computer device may dynamically reduce the image size of the second image at an image reduction rate. If it is detected that the image size of the second image has been reduced to the first size threshold, the second image is replaced with the target image. Note that when the image size of the second image is reduced to the first size threshold, there may be two cases: in the first case, the image size of the second image may be reduced to the first size threshold before the second image has moved to the first display position; in the second case, the image size of the second image may be reduced to the first size threshold exactly when the second image has moved to the first display position. For convenience of illustration, the first case is taken as an example; accordingly, a schematic diagram of replacing the second image with the target image can be seen in FIG. 5g.
S408, displaying the target image at the first display position.
In one embodiment, the computer device may directly display the target image at the first display position. In another embodiment, the computer device may display the target image in a fade-in manner at the first display position. Specifically, the computer device may first display the target image at the first display position with a first transparency; the first transparency here may be set according to actual needs or an empirical value, for example to 100% or 60%. Then, starting from the first transparency, the transparency of the target image can be dynamically adjusted according to a transparency reduction rate, so that the dynamically adjusted transparency of the target image is less than or equal to a second transparency; the second transparency is less than the first transparency and can also be set according to actual requirements or an empirical value, for example to 0%. For example, if the first transparency is 60% and the second transparency is 0%, a schematic diagram of the computer device fading the target image in at the first display position can be seen in FIG. 5h.
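A small sketch of this fade-in display is given below, assuming a browser-based client where transparency maps onto CSS opacity (opacity = 1 − transparency); the transparency values and the reduction rate are illustrative.

```typescript
// Sketch: fade the target image in at the first display position by stepping its
// transparency down from the first transparency to the second transparency.
function fadeInTargetImage(
  img: HTMLImageElement,
  firstTransparency = 0.6,    // initial transparency of 60%
  secondTransparency = 0.0,   // final transparency of 0% (fully opaque)
  reductionPerFrame = 0.02,   // transparency reduction rate per animation frame
): void {
  let transparency = firstTransparency;
  img.style.opacity = String(1 - transparency);

  const step = () => {
    transparency = Math.max(transparency - reductionPerFrame, secondTransparency);
    img.style.opacity = String(1 - transparency); // opacity = 1 - transparency
    if (transparency > secondTransparency) requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}
```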
It should be understood that, in other embodiments, in the process of controlling the second image to move from the second display position to the first display position, the computer device may also replace the second image with the first image and display the first image at the first display position, in a manner similar to steps S407-S408, which is not described again here. Further, after replacing the second image with the first image and displaying the first image at the first display position, the computer device may also replace the first image with the target image and display the target image at the first display position.
In the embodiment of the invention, when information is displayed in the user interface, a first image related to the information can be displayed at the first display position corresponding to the information, so as to attract the user's attention to the information. If a movement trigger event for the first image is detected, the first image can be controlled to move from the first display position to a second display position in the user interface; by controlling the movement of the first image, the interest of the information display can be improved, and the attraction of the information to the user is further improved. During the movement of the first image, the first image is replaced with a second image related to the information, so that the second image is displayed in a floating manner at the second display position; through the seamless replacement between the first image and the second image, a sense of surprise can be brought to the user during the display of the information, so that the attraction of the information to the user is further improved and the user stickiness of the information is enhanced.
Based on the description of the above embodiment of the image processing method, the embodiment of the present invention also discloses an image processing apparatus, which may be a computer program (including a program code) running in a computer device; the computer device here may be a terminal device, or may be an APP running in the terminal device. The image processing apparatus may perform the method shown in FIG. 2 or FIG. 4. Referring to FIG. 6, the image processing apparatus may operate the following units:
the display unit 601 is used for displaying a first image related to information at a first display position corresponding to the information when the information is displayed in a user interface;
a processing unit 602, configured to control the first image to move from the first display position to a second display position in the user interface if a movement trigger event for the first image is detected;
the processing unit 602 is further configured to replace the first image with a second image related to the information during the moving process of the first image;
the display unit 601 is further configured to display the second image in a floating manner at the second display position.
In one embodiment, the first image and the second image each comprise at least one of: the target object promoted by the information and the character promoting the target object; the information is published on a social platform, and the first display position is an identification display position of a publisher of the information; the second display position is the position of the center point of the user interface.
In an embodiment, the processing unit 602, when being configured to replace the first image with the second image related to the information during the moving of the first image, is specifically configured to: dynamically reducing the image size of the first image according to an image reduction rate in the moving process of the first image; and if the image size of the first image is detected to be reduced to a first size threshold, replacing the first image with a second image related to the information.
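As an illustrative sketch only, the shrink-then-replace behaviour described above might look like the following; the reduction rate, the size threshold, and the tick interval are assumed values, not part of the disclosure.

```typescript
// Sketch of the shrink-then-replace behaviour: the first image is reduced by an
// image reduction rate each tick; once it reaches the first size threshold it is
// swapped for the second image. Rate, threshold and tick interval are assumed values.
function shrinkAndReplace(
  first: HTMLImageElement,
  second: HTMLImageElement,
  reductionRate = 0.9,      // multiplicative shrink applied per tick
  firstSizeThreshold = 20,  // pixels
  tickMs = 16,
): void {
  let size = first.width;
  const timer = setInterval(() => {
    size *= reductionRate;
    first.width = Math.round(size);
    if (size <= firstSizeThreshold) {
      clearInterval(timer);
      first.replaceWith(second); // replace the first image with the second image
    }
  }, tickMs);
}
```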
In one embodiment, the second image is a moving image composed of a plurality of frames of images arranged in sequence; accordingly, the processing unit 602, when configured to replace the first image with the second image related to the information if it is detected that the image size of the first image is reduced to the first size threshold, may be specifically configured to: if the image size of the first image is detected to be reduced to a first size threshold value, acquiring a first frame image in a second image related to the information; and replacing the first image with the first frame image, wherein the image size of the first frame image is larger than or equal to the first size threshold.
In an embodiment, when the display unit 601 is configured to display the second image in a floating manner at the second display position, it may be specifically configured to: displaying the first frame image of the second image in a floating manner at the second display position, and dynamically enlarging the image size of the first frame image according to an image magnification rate; and if the image size of the first frame image is detected to be enlarged to a second size threshold, sequentially displaying each frame image in the second image in a floating manner at the second display position.
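A possible sketch of this first-frame-then-sequence behaviour follows, again in TypeScript; the frame URLs, the magnification rate, the size thresholds, and the frame timing are illustrative assumptions.

```typescript
// Sketch of displaying a multi-frame second image: float its first frame at the
// second display position, enlarge it up to the second size threshold, then show
// the remaining frames in sequence. Frame URLs, thresholds and timings are assumed.
async function playSecondImage(
  container: HTMLElement,
  frames: string[],            // URLs of the sequentially arranged frame images
  firstSizeThreshold = 20,     // px; the first frame starts at least this large
  secondSizeThreshold = 300,   // px; stop enlarging at this size
  magnificationRate = 1.1,     // multiplicative enlargement per tick
  frameMs = 80,                // delay between successive frames
): Promise<void> {
  const img = document.createElement("img");
  img.src = frames[0];
  img.width = firstSizeThreshold;
  container.appendChild(img);

  // Dynamically enlarge the first frame until it reaches the second size threshold.
  await new Promise<void>(resolve => {
    const timer = setInterval(() => {
      img.width = Math.min(secondSizeThreshold, Math.round(img.width * magnificationRate));
      if (img.width >= secondSizeThreshold) { clearInterval(timer); resolve(); }
    }, 16);
  });

  // Then display each remaining frame of the second image in sequence.
  for (const src of frames.slice(1)) {
    img.src = src;
    await new Promise(r => setTimeout(r, frameMs));
  }
}
```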
In one embodiment, the first image is a dynamic image; correspondingly, when the display unit 601 is configured to display, while information is displayed in the user interface, a first image related to the information at a first display position corresponding to the information, it may be specifically configured to: when information is displayed in a user interface, displaying a target image related to the information at a first display position corresponding to the information, the target image being a static image; if a replacement triggering event for the target image is detected, replacing the target image with a first image related to the information; and displaying the first image at the first display position.
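As a brief illustration of the static-to-dynamic swap just described, the sketch below models the replacement trigger event as a hover; the event choice and the image sources are assumptions only.

```typescript
// Sketch of the static-to-dynamic swap: a static target image is shown first, and a
// replacement trigger event (modelled here as a hover, purely as an assumption)
// swaps it for the dynamic first image at the same display position.
function showWithReplacementTrigger(
  container: HTMLElement,
  staticTargetSrc: string,   // the static target image
  dynamicFirstSrc: string,   // the dynamic first image
): void {
  const img = document.createElement("img");
  img.src = staticTargetSrc;
  container.appendChild(img);
  img.addEventListener("mouseenter", () => {
    img.src = dynamicFirstSrc; // replace the target image with the first image
  }, { once: true });
}
```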
In one embodiment, the processing unit 602 is further operable to: if a closing trigger event for the second image is detected, controlling the second image to move from the second display position to the first display position; replacing the second image with the target image during the movement of the second image; the display unit 601 may also be used to: and displaying the target image at the first display position.
In an embodiment, when the processing unit 602 is configured to replace the second image with the target image in the moving process of the second image, it is specifically configured to: dynamically reducing the image size of the second image according to an image reduction rate in the moving process of the second image; and if the image size of the second image is detected to be reduced to a first size threshold, replacing the second image with the target image.
In an embodiment, the display unit 601, when configured to display the target image at the first display position, may specifically be configured to: displaying the target image at the first display position according to the first image transparency; and dynamically adjusting the transparency of the target image based on the transparency of the first image and according to the transparency reduction rate, so that the transparency of the dynamically adjusted target image is less than or equal to the transparency of a second image, and the transparency of the second image is less than the transparency of the first image.
In one embodiment, a close button is included in the second image; accordingly, the processing unit 602 may be further configured to: and if the triggering operation aiming at the closing button is detected, determining that a closing triggering event aiming at the second image is detected.
In one embodiment, the processing unit 602 is further operable to: acquiring a first image material of the first image and a second image material of the second image from a blockchain network; and parsing the first image material to obtain the first image, and parsing the second image material to obtain the second image.
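A highly simplified sketch of acquiring and parsing image material follows; the HTTP endpoint, the payload shape, and the base64 encoding are illustrative assumptions, since the disclosure does not specify how the blockchain network is accessed.

```typescript
// Hypothetical sketch: fetch image material from a blockchain node over HTTP and
// parse it into a displayable image. Endpoint, payload shape and encoding are assumed.
async function loadImageMaterial(nodeUrl: string, materialId: string): Promise<HTMLImageElement> {
  const resp = await fetch(`${nodeUrl}/materials/${materialId}`); // hypothetical endpoint
  const material: { data: string; mimeType: string } = await resp.json();
  const img = new Image();
  img.src = `data:${material.mimeType};base64,${material.data}`;  // parse the material into an image
  return img;
}
```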
According to an embodiment of the present invention, each step involved in the method shown in fig. 2 or fig. 4 may be performed by a unit of the image processing apparatus shown in fig. 6. For example, steps S201 and S204 shown in fig. 2 may be performed by the display unit 601 shown in fig. 6, and steps S202 to S203 may be performed by the processing unit 602 shown in fig. 6; as another example, steps S401 to S402, S405, and S408 shown in fig. 4 may be performed by the display unit 601 shown in fig. 6, and steps S403 to S404 and steps S406 to S407 may be performed by the processing unit 602 shown in fig. 6. According to another embodiment of the present invention, the units of the image processing apparatus shown in fig. 6 may be combined, individually or entirely, into one or several other units to form the image processing apparatus, or one or more of the units may be further split into multiple functionally smaller units to form the image processing apparatus; this achieves the same operation without affecting the technical effects of the embodiments of the present invention. The above units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present invention, the image processing apparatus may also include other units; in practical applications, these functions may likewise be realized with the assistance of other units, or through the cooperation of multiple units.
According to another embodiment of the present invention, the image processing apparatus shown in fig. 6 may be constructed by running, on a general-purpose computing device such as a computer that includes a processing element such as a Central Processing Unit (CPU), a random access memory (RAM), a read-only memory (ROM), and a storage element, a computer program (including program code) capable of executing the steps involved in the methods shown in fig. 2 or fig. 4, thereby implementing the image processing method according to the embodiments of the present invention. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the above computing device via the computer-readable recording medium.
When information is displayed in the user interface, a first image related to the information can be displayed at the first display position corresponding to the information, which helps the information attract the user. If a movement trigger event for the first image is detected, the first image can be controlled to move from the first display position to a second display position in the user interface; controlling the movement of the first image makes the information display more engaging and further increases the attraction of the information to the user. During the movement of the first image, the first image is replaced with a second image related to the information, so that the second image is displayed in a floating manner at the second display position; this seamless replacement between the first image and the second image can surprise the user during the display of the information, further increasing the attraction of the information to the user and enhancing user stickiness.
Based on the description of the above method embodiments and apparatus embodiments, an embodiment of the present invention further provides a computer device. Referring to fig. 7, the computer device includes at least a processor 701, an input interface 702, an output interface 703, and a computer storage medium 704. The computer storage medium 704 is configured to store a computer program comprising program instructions, and the processor 701 is configured to execute the program instructions stored in the computer storage medium 704. Note that if the computer device is a terminal device, the processor 701 may be a CPU (Central Processing Unit), and the computer storage medium 704 may be stored directly in a memory of the computer device; if the computer device is an APP running in a terminal device, the processor 701 may be a microprocessor, and the computer storage medium 704 may be stored in a memory of the terminal device in which the computer device runs.
The processor 701 is the computing core and control core of the computer device, and is adapted to implement one or more instructions, in particular to load and execute the one or more instructions so as to implement a corresponding method flow or a corresponding function. In one embodiment, the processor 701 according to the embodiment of the present invention may be configured to perform a series of image processing operations, including: when information is displayed in a user interface, displaying a first image related to the information at a first display position corresponding to the information; if a movement trigger event for the first image is detected, controlling the first image to move from the first display position to a second display position in the user interface; replacing the first image with a second image related to the information during the movement of the first image; displaying the second image in a floating manner at the second display position; and so on.
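Tying the above steps together, an end-to-end sketch of the processing series might look like the following; the click trigger, the CSS transition, and the timing are assumptions used only for illustration.

```typescript
// End-to-end sketch of the processing series: show the first image, and on a movement
// trigger event (a click is assumed here) move it toward the second display position,
// replace it at the end of the move, and float the second image there.
function runImageProcessingFlow(
  first: HTMLImageElement,
  second: HTMLImageElement,
  secondPos: { x: number; y: number },
): void {
  first.addEventListener("click", () => {
    first.style.transition = "all 0.4s";                  // control the move of the first image
    first.style.transform = `translate(${secondPos.x}px, ${secondPos.y}px) scale(0.1)`;
    first.addEventListener("transitionend", () => {
      first.replaceWith(second);                          // replace the first image with the second
      second.style.position = "absolute";
      second.style.left = `${secondPos.x}px`;
      second.style.top = `${secondPos.y}px`;
      second.style.zIndex = "1000";                       // floating display at the second position
    }, { once: true });
  }, { once: true });
}
```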
An embodiment of the present invention further provides a computer storage medium (memory), which is a memory device in the computer device and is used to store programs and data. It is understood that the computer storage medium here may include both a built-in storage medium of the computer device and, of course, an extended storage medium supported by the computer device. The computer storage medium may store one or more instructions, which may be one or more computer programs (including program code), suitable for loading and execution by the processor 701. The computer storage medium may be a high-speed RAM or a non-volatile memory, such as at least one disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by processor 701 to perform the corresponding steps of the methods described above in connection with the image processing embodiments; in particular implementations, one or more instructions in the computer storage medium are loaded by processor 701 and perform the following steps:
when information is displayed in a user interface, displaying a first image related to the information at a first display position corresponding to the information;
if a movement trigger event for the first image is detected, controlling the first image to move from the first display position to a second display position in the user interface;
replacing the first image with a second image related to the information during the movement of the first image;
and displaying the second image in a floating manner at the second display position.
In one embodiment, the first image and the second image each comprise at least one of: the target object promoted by the information and the character promoting the target object; the information is published on a social platform, and the first display position is an identification display position of a publisher of the information; the second display position is the position of the center point of the user interface.
In one embodiment, when replacing the first image with a second image related to the information during the movement of the first image, the one or more instructions are loaded and executed by processor 701 to: dynamically reducing the image size of the first image according to an image reduction rate in the moving process of the first image; and if the image size of the first image is detected to be reduced to a first size threshold, replacing the first image with a second image related to the information.
In one embodiment, the second image is a moving image composed of a plurality of frames of images arranged in sequence; accordingly, when the first image is replaced with the second image associated with the information if it is detected that the image size of the first image is reduced to the first size threshold, the one or more instructions are loaded and executed by the processor 701 to: if the image size of the first image is detected to be reduced to a first size threshold value, acquiring a first frame image in a second image related to the information; and replacing the first image with the first frame image, wherein the image size of the first frame image is larger than or equal to the first size threshold.
In one embodiment, when the second image is displayed in a floating manner at the second display position, the one or more instructions are loaded and specifically executed by the processor 701: displaying the first frame image of the second image in a floating manner at the second display position, and dynamically enlarging the image size of the first frame image according to an image magnification rate; and if the image size of the first frame image is detected to be enlarged to a second size threshold, sequentially displaying each frame image in the second image in a floating manner at the second display position.
In one embodiment, the first image is a dynamic image; correspondingly, when information is displayed in the user interface, and a first image related to the information is displayed at a first display position corresponding to the information, the one or more instructions are loaded and specifically executed by the processor 701: when information is displayed in a user interface, displaying a target image related to the information at a first display position corresponding to the information; the target image is a static image; if a replacement triggering event aiming at the target image is detected, replacing the target image by a first image related to the information; and displaying the first image at the first display position.
In one embodiment, the one or more instructions may also be loaded and specifically executed by processor 701 to: if a closing trigger event for the second image is detected, controlling the second image to move from the second display position to the first display position; replacing the second image with the target image during the movement of the second image; and displaying the target image at the first display position.
In one embodiment, when the target image is used to replace the second image during the moving of the second image, the one or more instructions are loaded and specifically executed by the processor 701: dynamically reducing the image size of the second image according to an image reduction rate in the moving process of the second image; and if the image size of the second image is detected to be reduced to a first size threshold, replacing the second image with the target image.
In one embodiment, when the target image is displayed at the first display position, the one or more instructions are loaded and specifically executed by processor 701 to: displaying the target image at the first display position according to the first image transparency; and dynamically adjusting the transparency of the target image based on the transparency of the first image and according to the transparency reduction rate, so that the transparency of the dynamically adjusted target image is less than or equal to the transparency of a second image, and the transparency of the second image is less than the transparency of the first image.
In one embodiment, a close button is included in the second image; accordingly, the one or more instructions may also be loaded and specifically executed by processor 701: and if the triggering operation aiming at the closing button is detected, determining that a closing triggering event aiming at the second image is detected.
In one embodiment, the one or more instructions may also be loaded and specifically executed by processor 701 to: acquiring a first image material of the first image and a second image material of the second image from a blockchain network; and parsing the first image material to obtain the first image, and parsing the second image material to obtain the second image.
When information is displayed in the user interface, a first image related to the information can be displayed at the first display position corresponding to the information, which helps the information attract the user. If a movement trigger event for the first image is detected, the first image can be controlled to move from the first display position to a second display position in the user interface; controlling the movement of the first image makes the information display more engaging and further increases the attraction of the information to the user. During the movement of the first image, the first image is replaced with a second image related to the information, so that the second image is displayed in a floating manner at the second display position; this seamless replacement between the first image and the second image can surprise the user during the display of the information, further increasing the attraction of the information to the user and enhancing user stickiness.
The above disclosure describes only preferred embodiments of the present invention and, of course, cannot be used to limit the scope of rights of the present invention; therefore, equivalent variations made according to the appended claims still fall within the scope of the present invention.

Claims (14)

1. An image processing method, comprising:
when information is displayed in a user interface, displaying a first image related to the information at a first display position corresponding to the information;
if a movement trigger event for the first image is detected, controlling the first image to move from the first display position to a second display position in the user interface;
replacing the first image with a second image related to the information during the movement of the first image;
and displaying the second image in a floating manner at the second display position.
2. The method of claim 1, wherein the first image and the second image each contain at least one of: the target object promoted by the information and the character promoting the target object;
the information is published on a social platform, and the first display position is an identification display position of a publisher of the information; the second display position is the position of the center point of the user interface.
3. The method of claim 1 or 2, wherein replacing the first image with a second image related to the information during the moving of the first image comprises:
dynamically reducing the image size of the first image according to an image reduction rate in the moving process of the first image;
and if the image size of the first image is detected to be reduced to a first size threshold, replacing the first image with a second image related to the information.
4. The method according to claim 3, wherein the second image is a moving image composed of a plurality of frames of sequentially arranged images; if the image size of the first image is detected to be reduced to a first size threshold, replacing the first image with a second image related to the information, including:
if the image size of the first image is detected to be reduced to a first size threshold value, acquiring a first frame image in a second image related to the information;
and replacing the first image with the first frame image, wherein the image size of the first frame image is larger than or equal to the first size threshold.
5. The method of claim 4, wherein said displaying the second image in a floating manner at the second display position comprises:
displaying the first frame image of the second image in a floating manner at the second display position, and dynamically enlarging the image size of the first frame image according to an image magnification rate;
and if the image size of the first frame image is detected to be enlarged to a second size threshold, sequentially displaying each frame image in the second image in a floating manner at the second display position.
6. The method of claim 1 or 2, wherein the first image is a dynamic image; and wherein the displaying, when information is displayed in the user interface, a first image related to the information at a first display position corresponding to the information comprises:
when information is displayed in a user interface, displaying a target image related to the information at a first display position corresponding to the information; the target image is a static image;
if a replacement triggering event aiming at the target image is detected, replacing the target image by a first image related to the information; and displaying the first image at the first display position.
7. The method of claim 6, wherein the method further comprises:
if a closing trigger event for the second image is detected, controlling the second image to move from the second display position to the first display position;
replacing the second image with the target image during the movement of the second image;
and displaying the target image at the first display position.
8. The method of claim 7, wherein said replacing the second image with the target image during the moving of the second image comprises:
dynamically reducing the image size of the second image according to an image reduction rate in the moving process of the second image;
and if the image size of the second image is detected to be reduced to a first size threshold, replacing the second image with the target image.
9. The method of claim 7, wherein said displaying the target image at the first display position comprises:
displaying the target image at the first display position according to the first image transparency;
and dynamically adjusting the transparency of the target image based on the transparency of the first image and according to the transparency reduction rate, so that the transparency of the dynamically adjusted target image is less than or equal to the transparency of a second image, and the transparency of the second image is less than the transparency of the first image.
10. The method of claim 6, wherein a close button is included in the second image; the method further comprises the following steps:
and if the triggering operation aiming at the closing button is detected, determining that a closing triggering event aiming at the second image is detected.
11. The method of claim 1, wherein the method further comprises:
acquiring a first image material of the first image and a second image material of the second image from a blockchain network;
and parsing the first image material to obtain the first image, and parsing the second image material to obtain the second image.
12. An image processing apparatus characterized by comprising:
the display unit is used for displaying a first image related to the information at a first display position corresponding to the information when the information is displayed in a user interface;
the processing unit is used for controlling the first image to move from the first display position to a second display position in the user interface if a movement trigger event aiming at the first image is detected;
the processing unit is further used for replacing the first image with a second image related to the information in the moving process of the first image;
the display unit is further configured to display the second image in a floating manner at the second display position.
13. A computer device comprising an input interface and an output interface, further comprising:
a processor adapted to implement one or more instructions; and
a computer storage medium having stored thereon one or more instructions adapted to be loaded by the processor and to perform the image processing method according to any of claims 1-11.
14. A computer storage medium having stored thereon one or more instructions adapted to be loaded by a processor and to perform the image processing method according to any of claims 1-11.
CN202010020874.9A 2020-01-08 2020-01-08 Image processing method, image processing apparatus, computer device, and medium Active CN111190526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010020874.9A CN111190526B (en) 2020-01-08 2020-01-08 Image processing method, image processing apparatus, computer device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010020874.9A CN111190526B (en) 2020-01-08 2020-01-08 Image processing method, image processing apparatus, computer device, and medium

Publications (2)

Publication Number Publication Date
CN111190526A true CN111190526A (en) 2020-05-22
CN111190526B CN111190526B (en) 2021-08-24

Family

ID=70708532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010020874.9A Active CN111190526B (en) 2020-01-08 2020-01-08 Image processing method, image processing apparatus, computer device, and medium

Country Status (1)

Country Link
CN (1) CN111190526B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102668588A (en) * 2009-11-17 2012-09-12 LG Electronics Inc. Advertising method using network television
WO2012040596A1 (en) * 2010-09-24 2012-03-29 Rovi Technologies Corporation Systems and methods for touch-based media guidance
US20180027274A1 (en) * 2015-11-11 2018-01-25 Tencent Technology (Shenzhen) Company Limited Information processing method, terminal, and computer storage medium
CN109920065A (en) * 2019-03-18 2019-06-21 腾讯科技(深圳)有限公司 Methods of exhibiting, device, equipment and the storage medium of information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CUI, Xiaozhou: "Research on the Application of Visual Dynamic Design in New Media Advertising", Art and Design (Theory) *

Also Published As

Publication number Publication date
CN111190526B (en) 2021-08-24

Similar Documents

Publication Publication Date Title
US11164278B2 (en) Screen capture method, terminal, and storage medium employing both parent application program and sub-application program
CN105610954B (en) Media information processing method and system
KR101832912B1 (en) Providing Social Endorsements with Online Advertising
CN105917369B (en) Modifying advertisement resizing for presentation in a digital magazine
CA2787816C (en) Share box for endorsements
US20160034437A1 (en) Mobile social content-creation application and integrated website
CN108574618B (en) Pushed information display method and device based on social relation chain
CN100444163C (en) Configuration method for webpage display
US20120254769A1 (en) Caching multiple views corresponding to multiple aspect ratios
US20110276400A1 (en) Online Advertisement Storage and Active Management
US20120259712A1 (en) Advertising in a virtual environment
US10838608B2 (en) Smooth scrolling of a structured document presented in a graphical user interface with bounded memory consumption
CN106445997B (en) Information processing method and server
EP4022575A1 (en) Image replacement inpainting
CN106649518B (en) Method and device for processing dynamic information data
EP4068120A1 (en) Message management system and method for communication application, and presentation terminal
KR101342122B1 (en) System and method for providing a multimidea business card using a smart phone application
US11094100B1 (en) Compound animation in content items
CN111190526B (en) Image processing method, image processing apparatus, computer device, and medium
KR20140086972A (en) Bridge pages for mobile advertising
US9912622B2 (en) Electronic messaging system involving adaptive content
US11537273B1 (en) Compound animation showing user interactions
US20160019593A1 (en) Computer-implemented method and system for ephemeral advertising
JP6695826B2 (en) Information display program, information display device, information display method, and distribution device
US20120259709A1 (en) Systems and Methods for Associating Attribution Data with Digital Content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant