CN111932346A - Image display method, device, equipment and computer readable storage medium


Publication number
CN111932346A
CN111932346A (application CN202010875251.XA)
Authority
CN
China
Prior art keywords: combined, articles, article, image, display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010875251.XA
Other languages
Chinese (zh)
Inventor
林洁娴
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010875251.XA
Publication of CN111932346A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0641 Shopping interfaces
    • G06Q 30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text

Abstract

The application provides an image display method, apparatus, device, and computer-readable storage medium. The method includes: presenting a graphical interface comprising at least two items; in response to an item selection operation triggered on the graphical interface, displaying at least two items to be combined selected by that operation; receiving a combined display instruction for the at least two items to be combined; and, in response to the combined display instruction, displaying a target image in which the at least two items to be combined are shown together, the target image being obtained by combining the display image of each item to be combined, where a display image is the image shown on the item detail page of the corresponding item. With the method and apparatus, a target image combining the selected items can be generated automatically, so that a user can quickly and conveniently see the effect of combining multiple items.

Description

Image display method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for displaying an image.
Background
With the rapid rise of e-commerce, online shopping has become a very common way of shopping. By browsing images of various items, and other detailed information about them, on the network, users can freely select the items they like.
Often, a user purchases several items that go together, such as an upper-body garment and a clothing accessory, at the same time, but it is difficult for the user to know in advance what the items will look like combined.
Disclosure of Invention
The embodiments of the application provide an image display method, apparatus, and device, and a computer-readable storage medium, which can automatically combine the display images of selected articles into a target image, so that a user can quickly and conveniently see the effect of combining multiple articles to be combined.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an image display method, which comprises the following steps:
presenting a graphical interface comprising at least two items;
in response to an article selection operation triggered based on the graphical interface, displaying at least two articles to be combined selected by the article selection operation;
receiving a combined display instruction for the at least two articles to be combined;
responding to the combined display instruction, displaying a target image combined with the at least two articles to be combined, wherein the target image is obtained based on the display image combination of each article to be combined;
the display image is an image displayed in an article detail page of the corresponding article to be combined.
An embodiment of the application provides an image display apparatus, including:
a presentation module for presenting a graphical interface comprising at least two items;
the first display module is used for responding to an article selection operation triggered based on the graphical interface and displaying at least two articles to be combined selected by the article selection operation;
the receiving module is used for receiving a combined display instruction aiming at the at least two articles to be combined;
the second display module is used for responding to the combined display instruction and displaying a target image combined with the at least two articles to be combined, wherein the target image is obtained by combining the display images of the articles to be combined;
the display image is an image displayed in an article detail page of the corresponding article to be combined.
In the above scheme, the first display module is further configured to present a combined function item in the graphical interface;
presenting a selection function item corresponding to each of the items in response to a trigger operation for the combined function item;
and receiving an item selection operation triggered based on the selection function item.
In the above scheme, the first display module is further configured to receive a dragging operation for a target article of the at least two articles;
synchronously moving the target object along with the dragging operation;
triggering the item selection operation when the relative position between the target item and the other of the at least two items satisfies a relative position condition in response to the release of the drag operation.
In the above scheme, the first display module is further configured to present a voice input function item in the graphical interface;
presenting a voice input state in response to a trigger operation for the voice input function item, and receiving input voice information;
and when the voice information comprises the voice contents matched with at least two articles, taking the at least two articles matched with the voice contents as articles to be combined.
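As an illustrative sketch only (not part of the claimed method): once the input voice information has been transcribed to text, the matching of voice content against article names might look like the following, where the function name, the catalog structure, and the case-insensitive substring rule are all assumptions introduced for illustration.

```python
def match_items_to_combine(transcript: str, catalog: list[str]) -> list[str]:
    """Return catalog items whose names appear in the transcribed voice content."""
    matched = [name for name in catalog if name.lower() in transcript.lower()]
    # Only treat the matches as articles to be combined when at least two match,
    # mirroring the "at least two articles" condition described above.
    return matched if len(matched) >= 2 else []
```

A real system would likely use a speech-recognition service and fuzzier matching; the threshold of two matches follows directly from the described condition.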
In the scheme, at least two articles are displayed by adopting information flow cards, and the information flow cards correspond to the articles one by one; the first display module is further configured to receive a dragging operation for an information flow card corresponding to a target article in the at least two articles;
synchronously moving the information flow card corresponding to the target object along with the dragging operation;
and in response to the release of the dragging operation, triggering the article selection operation when the position of the information flow card corresponding to the target article is overlapped with the positions of the information flow cards of other articles in the at least two articles.
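The overlap test applied when the drag operation is released can be sketched as follows. This is a hedged illustration: the embodiment does not fix a data structure, so the (left, top, width, height) tuples and the strict-inequality intersection rule for axis-aligned rectangles are assumptions.

```python
def cards_overlap(a, b):
    """Return True if two information flow cards, given as (left, top, width,
    height) rectangles, overlap on screen."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Axis-aligned rectangles overlap iff their projections intersect on both axes.
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

On release of the drag, the client would run this test between the dragged card and each other card, and trigger the article selection operation when it returns True.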
In the above scheme, the first display module is further configured to respectively display at least two articles to be combined selected by the article selection operation by using the display image corresponding to the article to be combined.
In the above scheme, the second display module is further configured to present an adding function item corresponding to the at least two articles to be combined, where the adding function item is configured to add the at least two articles to be combined to a target position when a trigger operation is received;
responding to the trigger operation aiming at the added function item, and presenting prompt information of successful adding;
wherein the prompt information is used for indicating that the at least two articles to be combined have been added to the target position.
In the above scheme, the second display module is further configured to present a purchase function item;
and the purchasing function item is used for jumping to a payment interface corresponding to the at least two items to be combined when a trigger operation is received.
In the above scheme, the second display module is further configured to obtain display images of at least two articles to be combined;
selecting a target image as a background image from the display images of the at least two articles to be combined;
cutting display images except the target image in the display images of the at least two articles to be combined to obtain image materials corresponding to the corresponding articles to be combined;
and adding the obtained image material into the background image to obtain a target image combined with the at least two articles to be combined.
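The three steps above (choose a background, cut the remaining display images into image materials, add the materials to the background) can be sketched with Pillow. This is a minimal sketch under stated assumptions: the choice of background, the crop boxes, and the paste positions are placeholders, whereas a real implementation would derive them from the image content.

```python
from PIL import Image

def combine_display_images(images: list[Image.Image]) -> Image.Image:
    """Combine display images into one target image (illustrative only)."""
    background = images[0].copy()  # take one display image as the background image
    x = 0
    for img in images[1:]:
        # Cut a region of the display image as the "image material"
        # (placeholder box: the top-left quarter of the image).
        material = img.crop((0, 0, img.width // 2, img.height // 2))
        background.paste(material, (x, 0))  # add the material onto the background
        x += material.width
    return background
```

In practice the cutting step would segment the article from its background (e.g. with a matting model) rather than take a fixed crop box.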
In the above scheme, the second display module is further configured to obtain a display image of each article to be combined from an article detail page of each article to be combined;
and when the number of the display images of the article to be combined is at least two, selecting the display image for completely displaying the article to be combined from the at least two display images as the display image for combining the target image.
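The selection among several display images described above can be sketched as follows. The "coverage" score here is a placeholder a real system might obtain from an object detector; the pair structure and function name are assumptions for illustration.

```python
def pick_complete_image(images_with_coverage):
    """Given (image_id, fraction_of_item_visible) pairs, pick the display
    image that shows the article to be combined most completely."""
    return max(images_with_coverage, key=lambda pair: pair[1])[0]
```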
In the above scheme, the second display module is further configured to obtain display images of at least two articles to be combined;
determining a layout template corresponding to the category according to the category of the at least two articles to be combined;
and combining the display images of the at least two articles to be combined based on the layout template to obtain a target image combined with the at least two articles to be combined.
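Choosing a layout template from the articles' category, as described above, might be sketched like this. The template names and slot coordinates are invented for illustration; the embodiment only specifies that the template is selected according to category.

```python
# Hypothetical category-to-template table: each template is a list of
# (x, y) slots at which display images are placed.
LAYOUT_TEMPLATES = {
    "apparel": [(0, 0), (200, 0)],            # side-by-side slots
    "home":    [(0, 0), (0, 150), (0, 300)],  # stacked slots
}

def pick_template(category: str, n_items: int):
    """Return one placement slot per article to be combined."""
    slots = LAYOUT_TEMPLATES.get(category, [(0, 0)])
    # Fall back to repeating the last slot if there are more items than slots.
    while len(slots) < n_items:
        slots = slots + [slots[-1]]
    return slots[:n_items]
```

Combining then amounts to pasting each article's display image at its slot, as in the background-composition step described earlier.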
In the above scheme, the second display module is further configured to perform three-dimensional modeling on the display images of the at least two articles to be combined to obtain three-dimensional models corresponding to the at least two articles to be combined;
and embedding the three-dimensional models of the at least two articles to be combined into the same image scene to obtain a target image combined with the three-dimensional forms of the at least two articles to be combined.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the image display method provided by the embodiment of the application when the processor executes the executable instructions stored in the memory.
An embodiment of the application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to implement the image display method provided by the embodiments of the application.
The embodiments of the application have the following beneficial effects: a graphical interface comprising at least two items is presented; in response to an item selection operation triggered on the graphical interface, at least two items to be combined selected by that operation are displayed; a combined display instruction for the at least two items to be combined is received; and, in response to the combined display instruction, a target image combining the at least two items to be combined is displayed, the target image being obtained by combining the display image of each item to be combined, where a display image is the image shown on the item detail page of the corresponding item. In this way, a target image combining the selected items is obtained automatically and displayed efficiently, so that a user can conveniently and quickly see the effect of combining the items.
Drawings
FIG. 1 is a schematic diagram of a target image acquisition process provided by the related art;
fig. 2 is a schematic diagram of an architecture of a system 100 for displaying images provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for displaying an image according to an embodiment of the present disclosure;
FIG. 5 is an interface diagram of a favorite page provided by an embodiment of the application;
FIG. 6 is an interface diagram of a live interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an interface for displaying articles to be combined according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an interface for displaying articles to be combined according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an interface for displaying articles to be combined according to an embodiment of the present application;
FIG. 10 is a schematic view of an item selection interface provided by an embodiment of the present application;
FIG. 11 is an interface diagram of a drag process provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of an interface for presenting combined presentation function items provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of an item addition process provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of an item addition process provided by an embodiment of the present application;
FIG. 15 is a schematic interface diagram of page jump provided in the embodiment of the present application;
fig. 16 is a flowchart illustrating a method for displaying an image according to an embodiment of the present application;
FIG. 17 is a schematic diagram of an interface for selecting an item to be combined according to an embodiment of the present application;
FIG. 18 is a schematic diagram of a target image provided by an embodiment of the present application;
FIG. 19 is a flowchart illustrating a method for displaying an image according to an embodiment of the present disclosure;
fig. 20 is a schematic structural diagram of a display device for displaying an image according to an embodiment of the present invention.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and "third" are used only to distinguish similar objects and do not denote a particular order; it should be understood that "first", "second", and "third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running in a terminal to provide various services, such as a video client or an e-commerce client.
2) "In response to": indicates the condition or event on which a performed operation depends. When the condition or event is satisfied, one or more of the operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
In the related art, in order to see how two articles look together, the user can only take screenshots of the detail pages containing the display images of the two articles while browsing article information, and then stitch the screenshots together in a jigsaw (collage) mode using image editing software to obtain a target image combining the two articles.
For example, fig. 1 is a schematic diagram of the target image acquisition process provided in the related art. Referring to fig. 1, when a user browses a first article, the user screenshots the detail page 101 of the first article and stores the screenshot; then, when the user browses a second article to be combined with the first, the user screenshots the detail page 102 of the second article and stores that screenshot, where each detail page includes a display image of the article. The user then needs to open image editing software, which presents its homepage containing the puzzle function item 103; through the puzzle function item 103, the user can stitch the two screenshots together to obtain a target image in which the two articles are combined.
In the process of implementing the application, it is found that the above-mentioned manner of obtaining the target image requires the user to operate in multiple application programs, and the operation is very cumbersome.
Based on this, embodiments of the present application provide an image display method and apparatus to at least solve the above problems in the related art; they are described below.
Referring to fig. 2, fig. 2 is a schematic diagram of an architecture of the image presentation system 100 provided in the embodiment of the present application, in order to support an exemplary application, a terminal 400 (exemplary terminals 400-1 and 400-2 are shown) is connected to the server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two.
A terminal 400 for presenting a graphical interface comprising at least two items; in response to an article selection operation triggered based on the graphical interface, displaying at least two articles to be combined selected by the article selection operation; receiving a combined display instruction for the at least two articles to be combined; responding to the combined display instruction, sending an acquisition request of a target image to the server 200;
the server 200 is used for acquiring images displayed in the article detail pages of the articles to be combined to obtain display images of at least two articles to be combined; combining the display images of at least two articles to be combined to obtain a target image combined with the at least two articles to be combined;
the terminal 400 is further configured to display the target image combined with the at least two articles to be combined.
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited.
An exemplary application of the electronic device provided in the embodiments of the present application is described below, and the electronic device provided in the embodiments of the present application may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, and a portable game device), and may also be implemented as a server.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, where the electronic device shown in fig. 3 includes: at least one processor 410, memory 450, at least one network interface 420, and a user interface 430. The various components in the electronic device are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable communications among the components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 440 in FIG. 3.
The processor 410 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor, any conventional processor, or the like.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 450 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for communicating with other computing devices via one or more (wired or wireless) network interfaces 420; exemplary network interfaces 420 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the image display apparatus provided in the embodiments of the present application may be implemented in software. Fig. 3 illustrates an image display apparatus 455 stored in the memory 450, which may be software in the form of programs and plug-ins and includes the following software modules: a presentation module 4551, a first display module 4552, a receiving module 4553, and a second display module 4554. These modules are logical, and may therefore be combined arbitrarily or further divided according to the functions implemented.
The functions of the respective modules will be explained below.
In other embodiments, the image display apparatus provided in the embodiments of the present application may be implemented in hardware. For example, it may be a processor in the form of a hardware decoding processor that is programmed to perform the image display method provided in the embodiments of the present application; such a processor may be one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
The method for displaying the image provided by the embodiment of the present application will be described in conjunction with exemplary applications and implementations of the terminal provided by the embodiment of the present application.
Referring to fig. 4, fig. 4 is a schematic flowchart of an image displaying method provided in an embodiment of the present application, and will be described with reference to the steps shown in fig. 4.
Step 401: the terminal presents a graphical interface comprising at least two items.
In some embodiments, a client is installed on the terminal, such as an e-commerce client, a live-streaming client, or a microblog client; through the client, the user can browse images of various items and other detailed information about them, and freely select the items they like.
Here, a graphical interface containing at least two items is presented through the client. In some embodiments, the graphical interface may be a recommendation page, in which at least two items recommended for the user are presented; it may be a favorites page, in which at least two items collected by the user are presented; it may also be a shopping cart page, in which at least two items the user has added to the virtual shopping cart are presented. It should be noted that the graphical interface may also be any other interface in which at least two items are presented.
In practical implementation, the items in the graphical interface can be presented in various ways, such as in the form of information flow cards, in the form of pictures, in the form of bubbles, or in other ways.
Exemplarily, when the client is an e-commerce client, the user can browse various items through it, and when the user comes across an item they like, they can add it to the favorites; then, on tapping the view button of the favorites page, the terminal presents the favorites page, in which at least two collected items are shown. The user can select the items to be combined through the favorites page.
For example, fig. 5 is an interface schematic diagram of a collection page provided in an embodiment of the present application, and referring to fig. 5, in the collection page, an information flow card is used to display a plurality of items, each item corresponds to one information flow card, and basic information of the item, such as an image and description information, is presented in the information flow card. Wherein 501 is article A, and 502 is article B.
Illustratively, when the client is a live-streaming client, the user can watch item recommendations through it, and at least two items recommended by the streamer are presented in the live interface; the user can select the items to be combined through the live interface.
For example, fig. 6 is an interface schematic diagram of a live interface provided in an embodiment of the present application, and referring to fig. 6, in the live interface, an information stream card is used to display a plurality of articles, each article corresponds to one information stream card, and basic information of the article, such as an image and description information, is presented in the information stream card.
Step 402: and in response to the item selection operation triggered based on the graphical interface, displaying at least two items to be combined selected by the item selection operation.
In actual implementation, after receiving an article selection operation triggered based on a graphical interface, the terminal determines a plurality of articles to be combined selected by the article selection operation, and then displays the articles to be combined.
In practical application, the at least two articles to be combined belong to the same category. For example, home furnishing may include a sofa, a coffee table, a desk, and a chair; apparel may include shirts, pants, suits, ties, leather shoes, and the like.
In some embodiments, the at least two articles to be combined selected by the article selection operation may be displayed in a floating window manner, so that the display interfaces of the at least two articles to be combined are floating on the graphical interface.
For example, fig. 7 is a schematic interface diagram for displaying an item to be combined according to an embodiment of the present application, and referring to fig. 7, after an item selection operation triggered based on a graphical interface is received, a floating window 701 is presented, the floating window 701 floats above a favorite page 702, and two items to be combined 703 are displayed in the floating window 701.
In some embodiments, the at least two articles to be combined selected by the article selection operation may be displayed in a split-screen display manner, that is, a display page of the articles to be combined is presented independently from the graphical interface, and the at least two articles to be combined are displayed in the display page of the articles to be combined.
For example, fig. 8 is a schematic interface diagram for displaying an article to be combined according to an embodiment of the present application, and referring to fig. 8, after an article selection operation triggered based on a graphical interface is received, a collection page 801 is displayed in an upper half area of a screen, a display page 802 of the article to be combined is displayed in a lower half area of the screen, and at least two articles 803 to be combined are displayed in the display page of the article to be combined.
In some embodiments, the at least two articles to be combined selected by the article selection operation may be displayed in a page jump manner, that is, after the article selection operation triggered based on the graphical interface is received, a page jump is performed to a display page of the at least two articles to be combined, and the at least two articles to be combined are displayed in that display page.
For example, fig. 9 is a schematic interface diagram for displaying an item to be combined according to an embodiment of the present application, and referring to fig. 9, after an item selection operation triggered based on a graphical interface is received, a page jump is performed, a collection page 901 jumps to a display page 902, and at least two items to be combined 903 are displayed in the display page.
In some embodiments, the terminal may also present a presentation interface that partially or completely covers the graphical interface, and present at least two items to be combined in the presentation interface.
It should be noted that the display mode of at least two articles to be combined is not limited to the above display mode.
In some embodiments, the terminal may trigger the item selection operation by: presenting the combined function item in a graphical interface; presenting selection function items corresponding to the items in response to the trigger operation aiming at the combined function item; an item selection operation triggered based on the selection function item is received.
Here, the combined function item may be presented in the form of a button, an icon, or the like, and the trigger operation for the combined function item may be a click operation, a slide operation, a long press operation, or the like, and the presentation form of the combined function item and the trigger form of the trigger operation are not limited here.
In actual implementation, after receiving the trigger operation for the combined function item, the terminal presents the selection function item corresponding to each article, and the user can select the article to be combined by clicking the selection function item of the corresponding article.
Here, a confirmation selection function item may also be presented, and after the user selects an article to be combined by selecting the function item, a click operation for the confirmation selection function item is performed to trigger an article selection operation.
For example, fig. 10 is an interface schematic diagram of item selection provided in the embodiment of the present application, and referring to fig. 10, a combined function item 1001 is presented in a favorite page, and when a click operation for the combined function item 1001 is received, a selection function item 1002 corresponding to each item is presented, and a selection confirmation function item 1003 is presented; the user may select the item to be combined by clicking on the select function item 1002 and then clicking on the confirm select function item 1003 to trigger the item selection operation.
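The selection flow described above (a selection function item per article plus a confirm-selection function item) can be sketched as follows. This is an illustrative model only, not the patent's implementation; all class and variable names are invented for the sketch.

```python
# Hypothetical sketch: each click on a selection function item toggles an
# article in or out of a pending set, and the confirm-selection function item
# triggers the article selection operation only when at least two articles
# are chosen.

class ArticleSelector:
    def __init__(self, articles):
        self.articles = list(articles)   # articles presented in the interface
        self.selected = []               # articles chosen so far, in order

    def toggle(self, article):
        """Click on the selection function item of an article."""
        if article in self.selected:
            self.selected.remove(article)
        else:
            self.selected.append(article)

    def confirm(self):
        """Click on the confirm-selection function item."""
        if len(self.selected) < 2:
            raise ValueError("at least two articles to be combined are required")
        return list(self.selected)       # payload of the article selection operation

selector = ArticleSelector(["article A", "article B", "article C"])
selector.toggle("article A")
selector.toggle("article B")
print(selector.confirm())  # ['article A', 'article B']
```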
In some embodiments, the article selection operation may be triggered by a long press operation, and the article targeted by the long press operation is taken as the article to be combined.
In some embodiments, the terminal may trigger the article selection operation by: before displaying the at least two articles to be combined selected by the article selection operation, receiving a drag operation for a target article among at least two articles; moving the target article synchronously with the drag operation; and in response to the release of the drag operation, triggering the article selection operation when the relative position between the target article and another of the at least two articles satisfies a position condition.
Here, the relative position between the target article and the other articles of the at least two articles satisfying the position condition means that the presentation position of the target article and the presentation positions of the other articles satisfy a preset condition; the condition may be that the presentation position of the target article overlaps the presentation position of one of the at least two articles, that the presentation position of the target article is adjacent to the presentation position of at least one of the at least two articles, or another position condition, which is not limited here.
In practical implementation, the article selection operation may be triggered by a drag operation: for the at least two articles to be combined that the user wants to select, the user performs a drag operation on one of them so that its presentation position and the presentation position of another article to be combined satisfy the position condition, and then releases the drag operation to trigger the article selection operation on the at least two articles to be combined.
For example, fig. 11 is an interface schematic diagram of a dragging process provided in an embodiment of the present application, and referring to fig. 11, fig. 11 illustrates a process in which an item a is dragged over an item B, where 1101 is the item a and 1102 is the item B; the article A moves upwards synchronously along with the dragging operation until the article A moves above the article B, namely the presenting position of the article A is overlapped with the presenting position of the article B; at this time, the drag operation is released, and the article a and the article B are regarded as articles to be combined.
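A minimal sketch of the overlap variant of the position condition described above: on release of the drag, the presentation rectangle of the dragged target article is tested against the rectangles of the other articles. The coordinates and function names are illustrative assumptions, not values from the patent.

```python
# Each rectangle is (left, top, right, bottom) in screen coordinates.
def rects_overlap(a, b):
    """True when the two presentation rectangles overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def on_drag_release(dragged_rect, other_rects):
    """On release of the drag operation, return the indices of articles whose
    presentation position overlaps the dragged article (position condition met)."""
    return [i for i, r in enumerate(other_rects) if rects_overlap(dragged_rect, r)]

# Article A dragged over article B (index 0); article C (index 1) is elsewhere:
print(on_drag_release((10, 10, 60, 60), [(40, 40, 100, 100), (200, 0, 250, 50)]))  # [0]
```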
In some embodiments, at least two articles are displayed using information flow cards, the information flow cards corresponding to the articles one-to-one; the terminal may trigger the item selection operation by: receiving a dragging operation of an information flow card corresponding to a target article in at least two articles; synchronously moving the information flow card corresponding to the target object along with the dragging operation; and in response to the release of the dragging operation, triggering the article selection operation when the information flow card corresponding to the target article is positioned to overlap with the information flow cards of other articles in the at least two articles.
In practical implementation, when at least two articles are displayed using information flow cards, the article selection operation may be triggered by a drag operation on the information flow card corresponding to an article: for two articles to be combined that the user wants to select, the user performs a drag operation on the information flow card corresponding to one of them and drags it to the position of the information flow card corresponding to the other article, so that the positions of the two information flow cards overlap; releasing the drag operation then triggers the article selection operation for the two articles.
In some embodiments, the terminal presents the voice input function item in a graphical interface; responding to the trigger operation aiming at the voice input function item, presenting a voice input state and receiving input voice information; and when the voice information comprises the voice contents matched with the at least two articles, taking the at least two articles matched with the voice contents as articles to be combined.
Here, the voice content matched with an article may be the name of the article, a keyword in the description information of the article, the presentation order of the article in the page, and the like; any information that can identify the article may serve as the voice content matched with the article.
In actual implementation, after the voice information is received, voice recognition is performed on the voice information, and the recognized content is matched with the information of each article presented in the graphical interface, so that whether the voice content matched with at least two articles is included in the voice information or not is judged.
It should be noted that the voice information may be input multiple times, that is, a piece of voice information may be input first to select one article from the at least two articles as an article to be combined, and then another piece of voice information may be input to select another article as an article to be combined; alternatively, a single piece of voice information may be input to select a plurality of articles as the articles to be combined.
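The matching step above can be sketched as follows: recognized speech text is compared against each article's name and description keywords, and articles with a hit become articles to be combined. The speech recognizer itself is out of scope, so the sketch takes already-recognized text; the catalog data is invented for illustration.

```python
def match_articles(recognized_text, articles):
    """Return the names of articles whose name or keywords appear in the
    recognized speech text. `articles` is a list of dicts with 'name' and
    optional 'keywords'."""
    matched = []
    text = recognized_text.lower()
    for article in articles:
        terms = [article["name"]] + article.get("keywords", [])
        if any(term.lower() in text for term in terms):
            matched.append(article["name"])
    return matched

catalog = [
    {"name": "sofa", "keywords": ["three-seat"]},
    {"name": "tea table", "keywords": ["glass"]},
    {"name": "desk", "keywords": []},
]
print(match_articles("combine the sofa with the glass tea table", catalog))
# ['sofa', 'tea table']
```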
In some embodiments, the terminal may present the at least two items to be combined selected by the item selection operation by: displaying at least two articles to be combined selected by article selection operation by adopting an information flow card; wherein, the information flow cards correspond to the articles to be combined one by one.
In practical implementation, each article to be combined corresponds to one information flow card, and at least one of the following information of the article to be combined is presented through the information flow card: a display image, a price, and description information.
For example, referring to fig. 7, an information flow card is used to display two articles to be combined 703, and a display image, price and description information of the articles to be combined are presented in the information flow card.
In some embodiments, the terminal may present the at least two items to be combined selected by the item selection operation by: and respectively displaying at least two articles to be combined selected by the article selection operation by adopting the display images corresponding to the articles to be combined.
In practical implementation, the articles to be combined can be displayed in the form of images, where the images are displayed images of the articles to be combined, that is, images displayed in article detail pages of the articles to be combined. Here, when there are a plurality of display images of the articles to be combined, one or more display images may be selected from the plurality of display images to be displayed.
For example, referring to fig. 8, at least two articles to be combined 803 are displayed by using the display images corresponding to the articles to be combined, and each article to be combined corresponds to one display image.
Step 403: a combined display instruction for at least two items to be combined is received.
In practical implementation, the combined display instruction may be triggered by a combined display function item, that is, the combined display function item is presented while the at least two articles to be combined are displayed, and the combined display instruction is triggered when a trigger operation for the combined display function item is received. The combined display instruction may also be triggered by voice, that is, generated under the control of voice information. The combined display instruction may also be triggered automatically, for example, when the display duration of the at least two articles to be combined reaches a certain duration threshold. Here, the combined display instruction may also be triggered by other means.
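The automatic trigger mode mentioned above (triggering once the articles have been displayed for a duration threshold) reduces to a simple elapsed-time check. The threshold value below is an assumption for illustration; the patent does not specify one.

```python
def should_auto_trigger(display_start, now, threshold_seconds=5.0):
    """Return True once the display duration of the articles to be combined
    reaches the duration threshold, at which point the combined display
    instruction would be triggered automatically."""
    return (now - display_start) >= threshold_seconds

print(should_auto_trigger(100.0, 106.0))  # True
print(should_auto_trigger(100.0, 102.0))  # False
```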
For example, fig. 12 is an interface schematic diagram for presenting a combined presentation function item provided in an embodiment of the present application, and referring to fig. 12, while presenting an article to be combined, a combined presentation function item 1201 is presented, and when a user clicks the combined presentation function item 1201, a combined presentation instruction is triggered.
Step 404: and responding to the combined display instruction, displaying and combining target images of at least two articles to be combined, wherein the target images are obtained based on the display image combination of the articles to be combined.
The display image is an image displayed in an article detail page of the corresponding article to be combined.
Here, after receiving the combined display instruction, the target image combined with at least two articles to be combined is displayed, and since the target image is obtained by combining based on the display images of the articles to be combined, the target image at least includes all the selected articles to be combined.
In some embodiments, the terminal may further present an adding function item corresponding to the at least two articles to be combined, where the adding function item is used to add the at least two articles to be combined to a target position at the same time when a trigger operation is received; in response to the trigger operation for the adding function item, prompt information of successful adding is presented, where the prompt information is used to indicate that the at least two articles to be combined have been added to the target position.
In actual implementation, the target location may be a virtual shopping cart or a favorite.
Illustratively, when the target position is the virtual shopping cart, upon receiving the trigger operation for the adding function item, the at least two articles to be combined are added to the virtual shopping cart at the same time, and prompt information is presented to inform the user that the at least two articles to be combined have been added to the virtual shopping cart; when the user clicks the viewing function item of the virtual shopping cart, the page of the virtual shopping cart is presented, and the at least two articles to be combined are presented on that page.
For example, fig. 13 is a schematic diagram of an item adding process provided in the embodiment of the present application, and referring to fig. 13, a target image 1301 and an adding function item 1302 are presented, and when a click operation for the adding function item 1302 is received, an item to be combined corresponding to the target image 1301 is added to a virtual shopping cart; when entering the page of the virtual shopping cart, it can be seen that the object to be combined 1303 corresponding to the target image 1301 is presented in the page of the virtual shopping cart.
Illustratively, when the target position is the favorites, upon receiving the trigger operation for the adding function item, the at least two articles to be combined are added to the favorites at the same time, and prompt information is presented to inform the user that the at least two articles to be combined have been added to the favorites; when the user clicks the viewing function item of the favorites, the favorites page is presented, and the at least two articles to be combined are presented in the favorites page.
For example, fig. 14 is a schematic diagram of an item adding process provided in an embodiment of the present application, and referring to fig. 14, a target image 1401 and an adding function item 1402 are presented, and when a click operation for the adding function item 1402 is received, an item to be combined corresponding to the target image 1401 is added to a favorite; when the collection page is entered, the item to be combined 1403 corresponding to the target image 1401 is presented in the collection page.
In some embodiments, the terminal may also present purchase function items; and the purchasing function item is used for jumping to a payment interface corresponding to at least two articles to be combined when receiving the triggering operation.
In practical implementation, the terminal can also present a purchase function item, and when a trigger operation for the purchase function item is received, a page jump is performed to a payment interface corresponding to at least two items to be combined, so that the at least two items to be combined are purchased.
For example, fig. 15 is an interface schematic diagram of a page jump provided in an embodiment of the present application. Referring to fig. 15, a target image 1501 and a purchase function item 1502 are displayed; when the user clicks the purchase function item 1502, a jump is made to a payment page, in which an article to be combined 1503 and a payment function item 1504 are displayed; when the user clicks the payment function item, payment can be made to purchase the two articles to be combined at the same time. The articles to be combined displayed in the payment page are the same as the articles to be combined in the target image.
In some embodiments, before displaying and combining the target images of at least two articles to be combined, the terminal may further obtain the display image of each article to be combined from the article detail page of each article to be combined, respectively; when the number of the display images of the articles to be combined is at least two, the display image for completely displaying the articles to be combined is selected from the at least two display images to be used as the display image for combining the target images.
In practical implementation, for each article to be combined, all display images in the article detail page of the article to be combined are obtained, and when only one display image of the article to be combined exists, the image is directly used as the display image for combining the target images; when there are a plurality of display images of the articles to be combined, one display image needs to be selected from the plurality of display images as a display image for combining the target images.
In practical application, the content in each display image is identified through an image identification technology, so that the display image which can most completely display the article to be combined is selected from at least two display images according to an identification result and is used as the display image for combining the target images.
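The selection rule above can be sketched as: run an (assumed) image recognizer on each candidate display image and keep the image whose detection covers the article most completely. The recognizer is stubbed here as a callable returning a completeness score in [0, 1]; a real implementation would use an object-detection or segmentation model. All image names and scores are invented.

```python
def pick_display_image(images, completeness_score):
    """Return the display image that shows the article to be combined most
    completely, according to the given scoring callable."""
    return max(images, key=completeness_score)

# Illustrative completeness scores keyed by image name:
scores = {"front.jpg": 0.95, "detail_crop.jpg": 0.40, "angled.jpg": 0.70}
chosen = pick_display_image(list(scores), scores.get)
print(chosen)  # front.jpg
```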
In some embodiments, the target image may be acquired by: acquiring display images of at least two articles to be combined; selecting a target image as a background image from display images of at least two articles to be combined; cutting display images except the target image in the display images of at least two articles to be combined to obtain image materials corresponding to the corresponding articles to be combined; and adding the obtained image material into the background image to obtain a target image combined with at least two articles to be combined.
In practical implementation, one display image is selected from the display images of the at least two articles to be combined to serve as the background image, and the display images of the other articles to be combined are used to generate image materials. Here, the part of a display image corresponding to an article to be combined is cut out as image material by an image segmentation technique.
It should be noted that, if the display image of the article to be combined is not provided with a background, the image can be directly used as an image material.
For example, the article to be combined includes an article a and an article B, the display image of the article a may be used as a background image, then the display image of the article B is cut, and a portion corresponding to the article B is cut from the display image of the article B to obtain an image material of the article B; then, the image material of the article B is added to the background image.
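The example above (using article A's display image as the background and pasting the cut-out material of article B onto it) can be sketched in pure Python, modelling images as 2D grids with `None` marking transparent pixels left by background removal. This only illustrates the pasting logic; a production version would use an image library and a real segmentation model.

```python
def paste(background, material, top, left):
    """Paste non-transparent material pixels onto a copy of the background,
    with the material's top-left corner at (top, left)."""
    out = [row[:] for row in background]
    for dy, row in enumerate(material):
        for dx, px in enumerate(row):
            if px is not None:                   # skip transparent pixels
                out[top + dy][left + dx] = px
    return out

bg = [["."] * 4 for _ in range(3)]               # background image (article A's scene)
item_b = [[None, "B"], ["B", "B"]]               # cut-out image material of article B
target = paste(bg, item_b, 1, 1)                 # target image combining both articles
for row in target:
    print("".join(row))
```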
In practical application, which article's display image is used as the background image may be preset; for example, the display image of the first selected article to be combined may be used as the background image. The background image may also be determined according to the article category; for example, when the articles to be combined include a jacket and a hat, the display image of the jacket is used as the background image. It is also possible to use a display image containing specific content as the background image, for example, a display image containing a human body. The selection method of the background image is not limited here.
In some embodiments, when the at least two articles to be combined are clothing articles, whether the background image includes a person subject may be determined through image recognition. If the background image includes a person subject, the position of the image material in the background image is determined according to the body part corresponding to the article in the image material, and the image material is added to the determined position in the background image. For example, if the article corresponding to the material image is an earring, the corresponding body part is the ear; the position of the ear in the background image is determined, and the image material is added at that position.
If the background image does not include a person subject, the image material is added to the background image at a default position. Here, the default position may be determined according to the articles to be combined; for example, when the article to be combined in the background image is a jacket and the article to be combined in the image material is pants, the default position of the image material is below the article in the background image; when the article to be combined in the background image is a jacket and the article to be combined in the image material is an earring, the default position of the image material is above the article in the background image.
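The default-position rule above can be sketched as a small lookup keyed by the pair (article in the background image, article in the image material). The category pairs and relations mirror the examples in the text; anything beyond them is an invented fallback.

```python
# Illustrative rule table: (background article, material article) -> relation.
DEFAULT_POSITION = {
    ("jacket", "pants"): "below",
    ("jacket", "earring"): "above",
    ("jacket", "hat"): "above",
}

def default_position(background_item, material_item):
    """Relation of the image material to the article in the background image
    when no person subject is detected. 'beside' is an assumed fallback."""
    return DEFAULT_POSITION.get((background_item, material_item), "beside")

print(default_position("jacket", "pants"))    # below
print(default_position("jacket", "earring"))  # above
```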
In some embodiments, before displaying and combining the target images of the at least two articles to be combined, the terminal may further obtain display images of the at least two articles to be combined; determining a layout template corresponding to the category according to the categories of at least two articles to be combined; and combining the display images of at least two articles to be combined based on the layout template to obtain a target image combined with the at least two articles to be combined.
In actual implementation, layout templates corresponding to each category may be preset, for example, one layout template is set for clothing, one layout template is set for home decoration, and positions of objects in the category are set in the layout templates, so that positions of the objects to be combined can be determined, and display images of at least two objects to be combined are combined according to the determined positions.
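A hypothetical sketch of such a per-category layout template: each category maps article types to normalized (x, y) anchor positions in the target image, and combining reduces to looking up a slot for every article to be combined. The slot coordinates are invented for illustration.

```python
# One layout template per category, as described above; coordinates are
# normalized to the target image (0..1) and purely illustrative.
LAYOUT_TEMPLATES = {
    "apparel": {"hat": (0.5, 0.1), "shirt": (0.5, 0.4), "pants": (0.5, 0.75)},
    "home decoration": {"lamp": (0.6, 0.2), "table": (0.6, 0.6), "chair": (0.3, 0.65)},
}

def layout(category, items):
    """Return the anchor position of each article to be combined according to
    the layout template of its category."""
    template = LAYOUT_TEMPLATES[category]
    return {item: template[item] for item in items}

print(layout("apparel", ["shirt", "pants"]))
# {'shirt': (0.5, 0.4), 'pants': (0.5, 0.75)}
```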
In practical application, if the display images of the articles to be combined have no background, the display images of the articles to be combined can be combined directly; if a display image of an article to be combined includes a background part, the display image needs to be cut to remove the background part and retain the part corresponding to the article to be combined, and the cut display images are then combined.
In some embodiments, the position relationship between the articles to be combined can be determined according to the association relationship between the at least two articles to be combined, and then the display images of the at least two articles to be combined are combined according to the determined position relationship.
In some embodiments, before the terminal displays and combines the target images of at least two articles to be combined, the terminal can also perform three-dimensional modeling on the display images of the at least two articles to be combined to obtain three-dimensional models corresponding to the at least two articles to be combined; embedding the three-dimensional models of at least two articles to be combined into the same image scene to obtain a three-dimensional target image combined with the at least two articles to be combined.
In practical implementation, one display image can be selected from the display images of at least two articles to be combined as a background image, and Three-Dimensional (3D) modeling is performed on the background image to obtain a corresponding Three-Dimensional model, wherein the Three-Dimensional model comprises an image scene and the corresponding articles to be combined; cutting the display images of other articles to be combined, and removing the background part to obtain image materials of the corresponding article part; and carrying out three-dimensional modeling on the obtained image material to obtain three-dimensional models of the articles to be combined, and embedding the three-dimensional models into the three-dimensional model containing the image scene to obtain a three-dimensional target image combined with at least two articles to be combined.
Or, the display image of each article to be combined can be subjected to image cutting, and the background part is removed, so that the image material of the corresponding article part is obtained; then, 3D modeling is carried out on the image materials to obtain a 3D model corresponding to each article to be combined; then, the 3D models are embedded into an existing 3D image scene, and a three-dimensional target image combined with at least two articles to be combined is obtained.
For example, when the articles to be combined are article A and article B, the part corresponding to article B is cut out from the display image of article B to obtain material image B corresponding to article B, and 3D modeling is then performed on material image B to obtain the 3D model of article B; 3D modeling is performed on image A (the display image of article A) to obtain the corresponding 3D model, which includes the scene and article A; the 3D model of article B is then embedded into the 3D model corresponding to image A to obtain a target image in 3D form.
It should be noted that if a certain display image itself does not include a background portion, it does not need to be cropped.
In some embodiments, the 3D model of the article may also be uploaded by the merchant, so that the 3D model uploaded by the merchant may be directly obtained, and the three-dimensional models corresponding to the at least two articles to be combined are embedded in the same image scene, so as to obtain the target image in the three-dimensional form combined with the at least two articles to be combined.
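The embedding step described in the preceding paragraphs can be reduced to simple bookkeeping: each article's 3D model (here just a named placeholder) is inserted into a shared scene at a chosen position. Real 3D reconstruction from display images is far outside this sketch; only the "embed into the same image scene" structure is shown, with all names and coordinates invented.

```python
class Scene3D:
    """Placeholder for the shared image scene that the articles' 3D models
    are embedded into."""
    def __init__(self, name):
        self.name = name
        self.models = []                       # (model_name, (x, y, z)) entries

    def embed(self, model_name, position):
        self.models.append((model_name, position))
        return self

scene = Scene3D("living room")
scene.embed("article A (sofa)", (0.0, 0.0, 0.0))
scene.embed("article B (lamp)", (1.2, 0.0, 0.9))
print([name for name, _ in scene.models])  # ['article A (sofa)', 'article B (lamp)']
```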
In the embodiment of the present application, a graphical interface including at least two articles is presented; in response to an article selection operation triggered based on the graphical interface, at least two articles to be combined selected by the article selection operation are displayed; a combined display instruction for the at least two articles to be combined is received; and in response to the combined display instruction, a target image combining the at least two articles to be combined is displayed, where the target image is obtained by combining the display images of the articles to be combined, each display image being an image displayed in the article detail page of the corresponding article to be combined. In this way, the target image combining the at least two articles to be combined is displayed efficiently, so that the user can conveniently and quickly learn the effect of combining the articles.
The following continues to describe the image display method provided by the embodiment of the present application, and in actual implementation, the image display method is implemented by a terminal and a server in a cooperative manner. Fig. 16 is a schematic flowchart of a method for displaying an image according to an embodiment of the present application, and referring to fig. 16, the method for displaying an image according to the embodiment of the present application includes:
step 1601: the terminal presents a graphical interface containing a plurality of items and presents the combined function item in the graphical interface.
Here, the plurality of articles includes article a, article B, and article C.
Step 1602: and the terminal responds to the trigger operation aiming at the combined function item and presents the selection function item corresponding to each article.
Step 1603: the terminal receives click operation of the selection function items aiming at the article A, the article B and the article C, and determines the article A, the article B and the article C as articles to be combined.
Step 1604: in response to the click operation for the confirm-selection function item, the terminal performs a page jump to a display page of the articles to be combined, and displays article A, article B, article C, and a combined display function item in the display page.
For example, fig. 17 is an interface schematic diagram of selecting articles to be combined according to an embodiment of the present application. Referring to fig. 17, a plurality of articles and a combined function item 1701 are presented in the graphical interface, where article A is 1702, article B is 1703, and article C is 1704. Upon receiving a click operation for the combined function item 1701, a selection function item 1705 corresponding to each article and a confirm-selection function item 1706 are presented, each article corresponding to one selection function item. After click operations for the selection function items corresponding to article A, article B, and article C are received, the presentation style of the corresponding selection function items is updated. When a click operation for the confirm-selection function item 1706 is received, the terminal performs a page jump to a display page 1707 of the articles to be combined, and displays article A, article B, article C, and a combined display function item 1708 in the display page.
Step 1605: and sending a combination request to the server in response to the click operation aiming at the combination display function item.
Step 1606: the server calls up the display images of the article A, the article B and the article C.
Step 1607: and selecting images capable of completely displaying the corresponding articles from the display images of the articles respectively, and marking the images as a graph A, a graph B and a graph C respectively.
Step 1608: from each of the drawings a, B, and C, portions corresponding to the article a, the article B, and the article C are cut out to obtain a material drawing a, a material drawing B, and a material drawing C.
Step 1609: and determining the position relation of the article A, the article B and the article C in the target image according to the association relation among the article A, the article B and the article C.
Step 1610: and combining the material graph A, the material graph B and the material graph C according to the position relation of the article A, the article B and the article C in the target image to obtain the target image.
For example, if article A is a chair, article B is a table, and article C is a lamp, the lamp should be located directly above the table and the chair beside it. Fig. 18 is a schematic view of a target image provided by an embodiment of the present application; referring to fig. 18, the chair is located to the right of the table and the lamp above it.
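The arrangement in steps 1609 and 1610 can be sketched in Python as follows. This is a minimal illustration under stated assumptions: the relation table `RELATIONS`, the anchor item, the canvas size, and the per-item pixel sizes are all hypothetical example data, not values from this application.

```python
# Hypothetical pairwise relations: (anchor_item, placed_item) -> relation.
RELATIONS = {
    ("table", "lamp"): "above",
    ("table", "chair"): "beside",
}

def layout(sizes, anchor="table", canvas=(800, 600)):
    """sizes: dict name -> (w, h) of each material cut-out.
    Returns dict name -> (x, y) top-left positions on the canvas:
    the anchor item is centred, 'above' items sit on top of it,
    'beside' items sit to its right, bottoms aligned."""
    cw, ch = canvas
    aw, ah = sizes[anchor]
    ax, ay = (cw - aw) // 2, (ch - ah) // 2
    pos = {anchor: (ax, ay)}
    for (a, b), rel in RELATIONS.items():
        if a != anchor or b not in sizes:
            continue
        bw, bh = sizes[b]
        if rel == "above":
            # centred horizontally over the anchor, bottom touching its top
            pos[b] = (ax + (aw - bw) // 2, ay - bh)
        elif rel == "beside":
            # to the right of the anchor, bottom edges aligned
            pos[b] = (ax + aw, ay + ah - bh)
    return pos
```

A compositor would then paste each material image at its computed position onto the background to produce the target image of step 1610.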
Step 1611: the server returns the target image to the terminal.
Step 1612: the terminal displays the target image and an add function item.
Step 1613: in response to a trigger operation on the add function item, article A, article B, and article C are added to the virtual shopping cart, and prompt information indicating successful addition is presented.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described. Fig. 19 is a schematic flow diagram of an image display method provided by an embodiment of the present application; referring to fig. 19, the method includes:
Step 1901: the terminal presents a favorites page containing article A and article B.
In actual implementation, the user may add selected articles to favorites and then choose, on the favorites page, the articles to be combined.
For example, referring to fig. 5, the favorites page displays a plurality of articles using information flow cards, one card per article; each card presents the article's basic information, such as an image and description information. Here, 501 is article A and 502 is article B.
Step 1902: the terminal receives a drag operation that drags article A over article B.
Here, the user may perform a drag operation on article A, with which article A moves synchronously; when article A has been dragged over article B, the drag operation is released.
For example, referring to fig. 11, fig. 11 shows the process of article A being dragged over article B, where 1101 is article A and 1102 is article B; article A moves upward synchronously with the drag operation until it is above article B, that is, until its presentation position overlaps that of article B.
Step 1903: display a combination pop-up for article A and article B, and present a combined function item in the pop-up.
Taking article A and article B as the articles to be combined, a combination pop-up is displayed in which article A and article B are presented, so that the user can confirm from the presented information whether these are the articles to be combined; a combined function item is also presented in the pop-up. When the user confirms that the two articles are to be combined, the combined function item may be clicked.
For example, referring to fig. 12, a floating window 1201 is presented, in which article A, article B, and a combined display function item are presented.
It should be noted that the manner of selecting the articles to be combined is not limited to the drag operation. The articles to be combined may also be selected by click operations, for example by clicking article A and article B so that they are taken as the articles to be combined; or by voice, for example when input voice information is determined, through voice recognition, to include content indicating that article A and article B are selected, article A and article B are taken as the articles to be combined.
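The voice-based selection described above can be sketched as follows. The simple substring match against a hypothetical catalog stands in for the voice recognition step, which is outside the scope of this sketch.

```python
def items_from_speech(transcript, catalog):
    """Return the catalog items whose names occur in the recognized
    transcript; they become the articles to be combined only when at
    least two of them match, as the flow above requires."""
    text = transcript.lower()
    matched = [name for name in catalog if name.lower() in text]
    return matched if len(matched) >= 2 else []
```

For instance, a transcript mentioning both "table" and "lamp" against the catalog `["table", "lamp", "chair"]` would yield those two articles as the articles to be combined.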
Step 1904: the terminal receives a click operation on the combined function item.
Step 1905: the server retrieves the basic data and display images of article A and article B.
Here, the basic data includes description information such as the article name and article type; a display image is an image presented in the article detail page, typically uploaded by the seller.
Step 1906: through an image recognition technique, select an image that completely displays article A from the display images of article A, denoted image A.
Here, the plurality of display images of article A are recognized by the image recognition technique to screen out an image that completely displays article A.
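The screening in step 1906 can be sketched as follows, assuming an upstream detector has already produced one bounding box per candidate image (hypothetical data). An image is taken to "completely display" the article when the box keeps clear of the image border.

```python
def fully_visible(box, img_size, margin=2):
    """True when the article's bounding box lies entirely inside the
    frame with a small margin, i.e. nothing is cropped off."""
    x0, y0, x1, y1 = box
    w, h = img_size
    return x0 >= margin and y0 >= margin and x1 <= w - margin and y1 <= h - margin

def pick_complete_image(candidates):
    """candidates: list of (image_id, box, img_size) tuples from a
    detector. Return the first image whose article is fully inside the
    frame, or None when no candidate qualifies."""
    for image_id, box, size in candidates:
        if fully_visible(box, size):
            return image_id
    return None
```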
Step 1907: determine whether the display images of article B include an image without a background; if so, execute step 1908; otherwise, execute step 1909.
Step 1908: take the background-free image among the display images of article B as material image B.
Step 1909: through an image recognition technique, select an image that completely displays article B from the display images of article B, denoted image B.
Here, the plurality of display images of article B are recognized by the image recognition technique to screen out an image that completely displays article B.
Step 1910: cut the portion corresponding to article B out of image B to obtain material image B.
Here, an image segmentation technique is used to segment the portion of image B corresponding to article B, yielding a background-free material image B.
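The segmentation in step 1910 can be sketched as follows. A real system would use a learned segmentation model; this sketch assumes, purely for illustration, a near-white studio background and marks those pixels transparent.

```python
def cut_out(pixels, threshold=240):
    """pixels: 2D list of (r, g, b) tuples. Returns a 2D list of
    (r, g, b, a) tuples where near-white background pixels receive
    alpha 0, producing a background-free material image."""
    out = []
    for row in pixels:
        out.append([
            (r, g, b, 0 if min(r, g, b) >= threshold else 255)
            for (r, g, b) in row
        ])
    return out
```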
Step 1911: determine whether image A contains a human subject; if so, execute step 1912; otherwise, execute step 1914.
Step 1912: determine the category of article B according to the basic information of article B.
Step 1913: determine, according to the category of article B, the position on the human subject that corresponds to article B, and mark that position.
In practical implementation, when a human subject is present in image A, the position on the subject that corresponds to article B needs to be determined. This position can be derived from the category of article B: when article B is an earring, the corresponding position is the person's ear; when article B is a jacket, it is the person's upper body.
Step 1914: mark the default position corresponding to article B.
In some embodiments, the default position of article B may be determined from the categories of article A and article B. For example, when article A is a jacket and article B is a pair of pants, the default position of article B is below article A in image A; when article A is a jacket and article B is an earring, the default position of article B is above article A in image A.
In other embodiments, a fixed position may be set for article B; that is, the default position of article B is the same regardless of the categories of article A and article B.
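The category-driven default position can be sketched as a lookup table. The category pairs and position labels below are the examples from the text; the fallback value for unlisted pairs is an assumption standing in for the fixed-position embodiment.

```python
# (category_of_article_A, category_of_article_B) -> relative position.
DEFAULT_POSITION = {
    ("jacket", "pants"): "below",
    ("jacket", "earring"): "above",
}

def default_position(category_a, category_b, fallback="center"):
    """Default position of article B relative to article A in image A;
    the fixed fallback covers category pairs not in the table."""
    return DEFAULT_POSITION.get((category_a, category_b), fallback)
```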
Step 1915: using an image synthesis technique, add material image B at the marked anchor point in image A to obtain the target image.
Here, the background image is taken from the display images of one article and the material image from those of the other: the background may come from article A and the material from article B, or the reverse. For example, one image is selected from the display images of article B as the background image, denoted image B, and one image is selected from the display images of article A, denoted image A; the portion corresponding to article A is cut out of image A to obtain material image A, and material image A is added at the marked anchor point in image B to obtain the target image.
In actual implementation, which article's display image serves as the background image may be preset, determined by the categories of the articles, or determined by the content of the display images; for example, a display image containing a human subject is used as the background image.
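The content-based choice of background image can be sketched as follows; the `has_person` flags stand in for the output of a real human-subject detector, which is an assumption of this sketch.

```python
def choose_background(images):
    """images: list of (article, image_id, has_person) tuples.
    Prefer a display image containing a human subject as the
    background; otherwise fall back to the first candidate."""
    for article, image_id, has_person in images:
        if has_person:
            return article, image_id
    return images[0][:2]
```

The article returned here supplies the background image; the other article's display image is then cut into a material image and pasted at the marked anchor point.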
In some embodiments, the target image may also be displayed in 3D form; that is, a 3D model of one article to be combined, with a transparent background, is embedded into the 3D scene of another article to form a 3D target image. The 3D model of an article to be combined may be uploaded by the merchant or obtained by performing 3D modeling on its display images.
For example, after the portion corresponding to article B is cut out of image B to obtain material image B, 3D modeling is performed on material image B to obtain a 3D model of article B; 3D modeling is performed on image A to obtain a 3D model corresponding to image A, which comprises the scene and article A; the 3D model of article B is then embedded into the 3D model corresponding to image A to obtain a 3D target image.
Step 1916: the server sends the target image to the terminal.
Step 1917: the terminal displays the target image and an add function item.
Here, the add function item is used to add article A and article B to the virtual shopping cart simultaneously when a trigger operation is received.
In actual implementation, the target image and the add function item are displayed in a pop-up: for example, referring to fig. 13, a floating window is presented in the favorites interface, and the target image 1301 and the add function item 1302 are displayed in the floating window.
Step 1918: receive a click operation on the add function item.
Step 1919: add article A and article B to the virtual shopping cart, and present prompt information indicating successful addition.
The embodiment of the application has the following beneficial effects:
Through image recognition, image segmentation, and image processing techniques, a user can obtain a target image combining at least two articles through a simple operation (such as a drag operation), so that the combined image the user pictures in their mind is materialized. This improves the efficiency of obtaining the target image, raises the user's purchase efficiency, and optimizes the online shopping experience.
Continuing with an exemplary structure of the image display apparatus 455 provided by an embodiment of the present application, implemented as software modules: fig. 20 is a schematic structural diagram of the image display apparatus provided by an embodiment of the present application. As shown in fig. 20, the apparatus includes:
a presentation module 4551 configured to present a graphical interface including at least two items;
a first displaying module 4552, configured to display, in response to an item selection operation triggered based on the graphical interface, at least two items to be combined selected by the item selection operation;
a receiving module 4553, configured to receive a combined display instruction for the at least two articles to be combined;
a second display module 4554, configured to display, in response to the combined display instruction, a target image in which the at least two articles to be combined are combined, where the target image is obtained based on a combination of display images of the articles to be combined;
the display image is an image displayed in an article detail page of the corresponding article to be combined.
In some embodiments, the first display module 4552 is further configured to present a combined function item in the graphical interface;
presenting a selection function item corresponding to each of the items in response to a trigger operation for the combined function item;
and receiving an item selection operation triggered based on the selection function item.
In some embodiments, the first display module 4552 is further configured to receive a drag operation for a target item of the at least two items;
synchronously moving the target item along with the dragging operation;
triggering the item selection operation when the relative position of the target item and the other items of the at least two items satisfies a relative position condition in response to the release of the dragging operation.
In some embodiments, the first display module 4552 is further configured to present a voice input function item in the graphical interface;
presenting a voice input state in response to a trigger operation for the voice input function item, and receiving input voice information;
and when the voice information comprises the voice contents matched with at least two articles, taking the at least two articles matched with the voice contents as articles to be combined.
In some embodiments, at least two articles are displayed using information flow cards, the information flow cards corresponding one-to-one with the articles; the first display module 4552 is further configured to receive a dragging operation on an information flow card corresponding to a target article in the at least two articles;
synchronously moving the information flow card corresponding to the target article along with the dragging operation;
and in response to the release of the dragging operation, triggering the article selection operation when the position of the information flow card corresponding to the target article is overlapped with the positions of the information flow cards of other articles in the at least two articles.
In some embodiments, the first display module 4552 is further configured to respectively display at least two articles to be combined selected by the article selection operation using the display image corresponding to the article to be combined.
In some embodiments, the second display module 4554 is further configured to present an addition function item corresponding to the at least two items to be combined, where the addition function item is configured to add the at least two items to be combined to a target location at the same time when a trigger operation is received;
responding to the trigger operation aiming at the added function item, and presenting prompt information of successful adding;
wherein the prompt information is used for indicating that the at least two articles to be combined have been added to the target position.
In some embodiments, the second display module 4554 is further configured to present a purchase function item;
and the purchasing function item is used for jumping to a payment interface corresponding to the at least two items to be combined when a trigger operation is received.
In some embodiments, the second display module 4554 is further configured to acquire display images of at least two articles to be combined;
selecting a target image as a background image from the display images of the at least two articles to be combined;
cutting display images except the target image in the display images of the at least two articles to be combined to obtain image materials corresponding to the corresponding articles to be combined;
and adding the obtained image material into the background image to obtain a target image combined with the at least two articles to be combined.
In some embodiments, the second display module 4554 is further configured to obtain a display image of each article to be combined from an article detail page of each article to be combined;
and when the number of the display images of the article to be combined is at least two, selecting the display image for completely displaying the article to be combined from the at least two display images as the display image for combining the target image.
In some embodiments, the second display module 4554 is further configured to acquire display images of at least two articles to be combined;
determining a layout template corresponding to the category according to the category of the at least two articles to be combined;
and combining the display images of the at least two articles to be combined based on the layout template to obtain a target image combined with the at least two articles to be combined.
In some embodiments, the second display module 4554 is further configured to perform three-dimensional modeling on the display images of the at least two articles to be combined, so as to obtain a three-dimensional model corresponding to the at least two articles to be combined;
and embedding the three-dimensional models of the at least two articles to be combined into the same image scene to obtain a target image combined with the three-dimensional forms of the at least two articles to be combined.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for displaying the image according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform a method provided by embodiments of the present application, for example, the method as illustrated in fig. 4.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method for displaying an image, the method comprising:
presenting a graphical interface comprising at least two items;
in response to an article selection operation triggered based on the graphical interface, displaying at least two articles to be combined selected by the article selection operation;
receiving a combined display instruction for the at least two articles to be combined;
responding to the combined display instruction, displaying a target image combined with the at least two articles to be combined, wherein the target image is obtained by combining the display images of the articles to be combined;
the display image is an image displayed in an article detail page of the corresponding article to be combined.
2. The method of claim 1, wherein prior to said presenting the at least two items to be combined selected by the item selection operation, the method further comprises:
presenting, in the graphical interface, a combined function item;
presenting a selection function item corresponding to each of the items in response to a trigger operation for the combined function item;
and receiving an item selection operation triggered based on the selection function item.
3. The method of claim 1, wherein prior to said presenting the at least two items to be combined selected by the item selection operation, the method further comprises:
receiving a dragging operation aiming at a target article in the at least two articles;
synchronously moving the target article along with the dragging operation;
in response to the release of the dragging operation, triggering the item selection operation when a relative positional relationship between the target item and other items of the at least two items satisfies a relative positional condition.
4. The method of claim 1, wherein prior to said presenting the at least two items to be combined selected by the item selection operation, the method further comprises:
presenting a voice input function item in the graphical interface;
presenting a voice input state in response to a trigger operation for the voice input function item, and receiving input voice information;
and when the voice information comprises the voice contents matched with at least two articles, taking the at least two articles matched with the voice contents as articles to be combined.
5. The method of claim 1, wherein the at least two items are displayed using information flow cards, the information flow cards corresponding one-to-one with the items;
before the presenting of the at least two articles to be combined selected by the article selection operation, the method further comprises:
receiving a dragging operation aiming at an information flow card corresponding to a target article in the at least two articles;
synchronously moving the information flow card corresponding to the target article along with the dragging operation;
and in response to the release of the dragging operation, triggering the article selection operation when the position of the information flow card corresponding to the target article is overlapped with the positions of the information flow cards of other articles in the at least two articles.
6. The method of claim 1, wherein said presenting the at least two items to be combined selected by the item selection operation comprises:
and respectively displaying at least two articles to be combined selected by the article selection operation by adopting the display images corresponding to the articles to be combined.
7. The method of claim 1, wherein the method further comprises:
presenting an adding function item corresponding to the at least two articles to be combined, wherein the adding function item is used for adding the at least two articles to be combined to a target position when a trigger operation is received;
responding to the trigger operation aiming at the added function item, and presenting prompt information of successful adding;
wherein the prompt information is used for indicating that the at least two articles to be combined have been added to the target position.
8. The method of claim 1, wherein the method further comprises:
presenting the purchase function item;
and the purchasing function item is used for jumping to a payment interface corresponding to the at least two items to be combined when a trigger operation is received.
9. The method of claim 1, wherein prior to said displaying the target image combined with the at least two items to be combined, the method further comprises:
acquiring display images of at least two articles to be combined;
selecting a target image as a background image from the display images of the at least two articles to be combined;
cutting display images except the target image in the display images of the at least two articles to be combined to obtain image materials corresponding to the corresponding articles to be combined;
and adding the obtained image material into the background image to obtain a target image combined with the at least two articles to be combined.
10. The method of claim 1, wherein prior to said displaying the target image combined with the at least two items to be combined, the method further comprises:
respectively obtaining a display image of each article to be combined from the article detail page of each article to be combined;
and when the number of the display images of the article to be combined is at least two, selecting the display image for completely displaying the article to be combined from the at least two display images as the display image for combining the target image.
11. The method of claim 1, wherein prior to said displaying the target image combined with the at least two items to be combined, the method further comprises:
acquiring display images of at least two articles to be combined;
determining a layout template corresponding to the category according to the category of the at least two articles to be combined;
and combining the display images of the at least two articles to be combined based on the layout template to obtain a target image combined with the at least two articles to be combined.
12. The method of claim 1, wherein the displaying the target image combined with the at least two items to be combined comprises:
carrying out three-dimensional modeling on the display images of the at least two articles to be combined to obtain three-dimensional models corresponding to the at least two articles to be combined;
and embedding the three-dimensional models of the at least two articles to be combined into the same image scene to obtain a target image combined with the three-dimensional forms of the at least two articles to be combined.
13. An apparatus for displaying an image, comprising:
a presentation module for presenting a graphical interface comprising at least two items;
the first display module is used for responding to an article selection operation triggered based on the graphical interface and displaying at least two articles to be combined selected by the article selection operation;
the receiving module is used for receiving a combined display instruction aiming at the at least two articles to be combined;
the display module is used for responding to the combined display instruction and displaying a target image combined with the at least two articles to be combined, wherein the target image is obtained by combining the display images of the articles to be combined;
the display image is an image displayed in an article detail page of the corresponding article to be combined.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of presenting an image according to any one of claims 1 to 12 when executing the executable instructions stored in the memory.
15. A computer-readable storage medium storing executable instructions for implementing the method of displaying an image according to any one of claims 1 to 12 when executed by a processor.
CN202010875251.XA 2020-08-27 2020-08-27 Image display method, device, equipment and computer readable storage medium Pending CN111932346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010875251.XA CN111932346A (en) 2020-08-27 2020-08-27 Image display method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111932346A true CN111932346A (en) 2020-11-13

Family

ID=73308254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010875251.XA Pending CN111932346A (en) 2020-08-27 2020-08-27 Image display method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111932346A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561632A (en) * 2020-12-08 2021-03-26 北京达佳互联信息技术有限公司 Information display method, device, terminal and storage medium
CN114286122A (en) * 2021-12-22 2022-04-05 北京字跳网络技术有限公司 Page display method, device, equipment and storage medium for live broadcast room
CN114286122B (en) * 2021-12-22 2024-05-28 北京字跳网络技术有限公司 Page display method, device, equipment, storage medium and program product of live broadcast room

Similar Documents

Publication Publication Date Title
US11205023B2 (en) Computer aided systems and methods for creating custom products
US10706637B2 (en) Computer aided systems and methods for creating custom products
US10943403B2 (en) Object preview in a mixed reality environment
US10867081B2 (en) Computer aided systems and methods for creating custom products
US9665960B1 (en) Image-based item location identification
US20120311509A1 (en) Reader with enhanced user functionality
US11853730B2 (en) Mini program data binding method and apparatus, device, and storage medium
US10089680B2 (en) Automatically fitting a wearable object
EP3345384B1 (en) Display apparatus and control method thereof
KR20190088974A (en) Machine-based object recognition of video content
WO2015135354A1 (en) Search method, system, and device.
US20130145286A1 (en) Electronic device, social tile displaying method, and tile connection method
KR102139664B1 (en) System and method for sharing profile image card
TW201407500A (en) Intelligent presentation of documents
CN108984707B (en) Method, device, terminal equipment and storage medium for sharing personal information
US9589296B1 (en) Managing information for items referenced in media content
CN108920515A (en) Information recommendation method, device, equipment and the storage medium of web displaying process
US20190325497A1 (en) Server apparatus, terminal apparatus, and information processing method
US20160117754A1 (en) Electronic wish list system
CN111932346A (en) Image display method, device, equipment and computer readable storage medium
CN110309412A (en) The method, apparatus and electronic equipment of clothing matching information are provided
KR102063268B1 (en) Method for creating augmented reality contents, method for using the contents and apparatus using the same
WO2019134501A1 (en) Method and device for simulating fit of garment on user, storage medium, and mobile terminal
US20210279786A1 (en) Methods and Systems for Managing Digital Looks
CN114785949A (en) Video object processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination