CN111242712B - Commodity display method and device


Info

Publication number
CN111242712B
CN111242712B (Application CN201811443985.XA)
Authority
CN
China
Prior art keywords
commodity
new
image
attribute information
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811443985.XA
Other languages
Chinese (zh)
Other versions
CN111242712A (en)
Inventor
张俊
蒋梁斌
齐晓宁
王怡春
朱敏
杨昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority claimed from CN201811443985.XA
Publication of CN111242712A
Application granted
Publication of CN111242712B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/06 - Buying, selling or leasing transactions
    • G06Q 30/0601 - Electronic shopping [e-shopping]
    • G06Q 30/0641 - Shopping interfaces
    • G06Q 30/0643 - Graphical representation of items or shoppers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Finance (AREA)
  • Multimedia (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a commodity display method and device. The method comprises: acquiring a commodity image related to a commodity; acquiring, by using the commodity image, new content on the commodity corresponding to a new mode of the commodity; and displaying the new content on the commodity according to the new mode. With the method and device, a new commodity listing can be completed from commodity images alone.

Description

Commodity display method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying commodities.
Background
Many online merchants today offer goods on a website for customers to browse. A survey of more than half a million merchants found that, regardless of merchant size and type, there is a common pain point in listing new merchandise (commonly referred to in the industry as "up-to-new", i.e., putting a new item on sale).
Specifically, a merchant may post goods on a shopping website. As shown in fig. 1, the merchant inputs commodity information (e.g., pictures, category, brand, etc.) related to the commodity using an electronic terminal; the electronic terminal then transmits the commodity information to a server corresponding to the shopping website, and the server provides the commodity information to each client (e.g., a mobile terminal) to complete the new listing. It can be seen that whenever a new commodity needs to be put on sale, the merchant has to manually input the commodity information, which requires considerable effort.
Disclosure of Invention
One of the main objectives of the present application is to provide a commodity display method and device aimed at solving the above technical problem, namely completing the new listing of a commodity automatically.
Exemplary embodiments of the present application provide a merchandise display method, the method comprising: acquiring a commodity image related to a commodity; acquiring, by using the commodity image, new content on the commodity corresponding to a new mode of the commodity; and displaying the new content on the commodity according to the new mode.
An exemplary embodiment of the present application provides a commodity information processing method, including: acquiring a preset trigger event on a display interface of an application program; acquiring, based on the preset trigger event, a commodity image of a commodity to be newly listed; and determining new content on the commodity by performing image recognition on the commodity image.
An exemplary embodiment of the present application provides a commodity information processing method, including: sensing, while a user interface is displayed, a user input on a control for launching a new listing; in response to the user input, acquiring a commodity image of the commodity; and acquiring, by using the commodity image, new content on the commodity corresponding to a new mode of the commodity.
Another exemplary embodiment of the present application provides a computer readable storage medium having stored thereon computer instructions, characterized in that the instructions when executed implement the above-described method.
Another exemplary embodiment of the present application provides a merchandise display device, the device comprising a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to: acquire a commodity image related to a commodity; acquire, by using the commodity image, new content on the commodity corresponding to a new mode of the commodity; and display the new content on the commodity according to the new mode.
At least one of the technical solutions adopted by the exemplary embodiments of the present application can achieve the following beneficial effects:
the commodity display method of the exemplary embodiments of the present application can complete the new listing of a commodity from the commodity image, so that the commodity can be put on display quickly and the labor cost of the merchant is reduced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a diagram of an existing commodity new-listing scenario;
FIG. 2 is a diagram of a commodity new-listing scenario provided by the present application;
FIG. 3 is a flow chart of a merchandise display method according to an exemplary embodiment of the present application;
FIG. 4 is a scenario diagram of generating new images from a commodity image according to an exemplary embodiment of the present application;
FIG. 5 is a scenario diagram of a commodity new listing according to another embodiment of the present application;
FIG. 6 is a schematic diagram of classification of preset trigger events provided herein;
FIG. 7 is a scenario diagram of a commodity new listing according to another embodiment of the present application;
FIG. 8 is a block diagram of a merchandise display device according to an exemplary embodiment of the present application;
fig. 9 shows a block diagram of a mobile terminal according to an exemplary embodiment of the present disclosure.
Detailed Description
For the purposes, technical solutions and advantages of the present application, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
An electronic terminal according to the present application (hereinafter, a device that may perform the merchandise display method) is a device including a display unit, and may include, but is not limited to, any of the following: personal computers (PCs), mobile devices such as cellular telephones, personal digital assistants (PDAs), digital cameras, portable game consoles, MP3 players, portable/personal multimedia players (PMPs), handheld electronic books, tablet PCs, portable laptop PCs, Global Positioning System (GPS) navigators, smart TVs, and the like.
Further, it is understood that a display unit of a mobile terminal according to the present application may include a touch screen and a touch screen controller, wherein the touch screen may provide a user interface (UI) corresponding to various services (e.g., displaying goods) to a user and transmit an analog signal corresponding to at least one touch on the UI to the touch screen controller. In the description of the present application, "touch" may include contact touches and contactless touches, where a contact touch refers to a touch input received through contact between the touch screen and a body part of the user (e.g., a finger) or a touch input tool (e.g., a stylus or an electronic pen). The touch screen may also receive touch input signals corresponding to the continuous movement of one or more touches; for example, a contact touch may include a single tap, a double tap, a drag, a drop, and the like. The touch screen may transmit an analog signal corresponding to the continuous movement of an input touch to the touch screen controller.
A contactless touch is also referred to as a hover touch; in particular, a contactless touch does not require contact between the touch screen and a body part of the user or a touch input tool. The touch screen may detect hovering at different distances depending on the performance or configuration of the mobile terminal. Further, the touch screen may be implemented as a resistive type, a capacitive type, an infrared type, an acoustic-wave type, and the like.
The touch screen controller converts analog signals received from the touch screen into digital signals (e.g., X and Y coordinates). The controller may control the touch screen using the digital signals received from the touch screen controller. For example, in response to a user's selection of a shortcut icon or button displayed on the touch screen, a mobile terminal according to the present application may display a user interface corresponding to the shortcut icon; for instance, when the user taps the icon of a shopping application (e.g., "Taobao"), the mobile terminal displays the user interface of that application.
Further, the electronic terminal may include sensors for sensing various user inputs; for example, it may include a vibration sensor so that shaking by a user can be sensed, and an audio sensor so that voice input from the user can be sensed.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 2 is a diagram of a commodity new-listing scenario provided by the present application. As shown in fig. 2, the electronic terminal may acquire a commodity image related to the commodity. Specifically, the electronic terminal may acquire the commodity image using an image acquisition module. It should be noted that the image acquisition module may be a module inside the electronic terminal, for example a camera, or the terminal may use an image acquired by an external device; for example, an external video camera may capture the commodity and transmit the commodity image to the electronic terminal.
Although not shown in fig. 2, a commodity image according to the exemplary embodiments of the present application refers to any image related to the commodity, including, but not limited to, images of the commodity's appearance, parts, tags (hang tags), and product specifications photographed from various angles. The electronic terminal may then use these images to complete the new listing of the commodity; the specific listing operation will be described in detail below with reference to fig. 3 and 4 and is not repeated here.
As shown in fig. 2, after the commodity has been newly listed, the user may view commodity information of the commodity on the display interface, for example, a commodity detail page. It should be noted that the new-listing operation performed with the commodity image may be executed in the electronic terminal, or may be processed by a computing device that communicates with the electronic terminal (e.g., a web server that presents the newly listed commodity).
In order to better explain the commodity new-listing flow, a detailed description is given below with reference to fig. 3 to 7. Fig. 3 shows a flow chart of a merchandise display method according to an exemplary embodiment of the present application.
In step S310, a commodity image associated with the commodity is acquired. The commodity image may be acquired by an external or internal image acquisition device and includes, but is not limited to, an appearance image of the commodity, a hang-tag image, and a product specification image.
In step S320, the new content on the commodity corresponding to the new mode of the commodity is acquired by using the commodity image. Here, the new mode refers to the manner in which the newly listed commodity is displayed, for example, a plain-text mode, a mixed text-and-picture mode, a video mode, or another mode; the new mode may be a default mode or a mode selected by the user, and may be determined before the listing is executed. Alternatively, different new modes may be defined according to the new content they present; for example, a basic mode, an intermediate mode, and an advanced mode may be provided to the user (e.g., a merchant), so that the user can select one of them (e.g., the advanced mode) as the new mode of the commodity. In addition, the user may also define a new mode suited to the user's own commodity according to the user's own needs.
In a specific implementation, the new mode may correspond to a specific new-listing template. The template may be one manually configured by the user (merchant) according to the user's requirements, or one selected by the user from predetermined templates. A template may include a title copy module, a commodity detail module, a commodity 3D effect module, a commodity poster module, and other modules; different new modes may correspond to different templates, and each template may include different new content, content combinations, and content display forms, as shown in the sketch below.
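To make the template notion concrete, the short sketch below models a hypothetical new-listing template as plain data in Python. The module names follow the modules listed above, while the class names, field names, and example mode levels are illustrative assumptions, not part of the claimed method.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TemplateModule:
    name: str                      # e.g. "title_copy", "detail", "3d_effect", "poster"
    required_content: List[str]    # kinds of new content the module needs ("text", "image", "video")

@dataclass
class ListingTemplate:
    mode: str                                        # "basic" / "intermediate" / "advanced"
    modules: List[TemplateModule] = field(default_factory=list)

# Illustrative templates for the three listing modes mentioned above.
BASIC = ListingTemplate("basic", [
    TemplateModule("title_copy", ["text"]),
])
INTERMEDIATE = ListingTemplate("intermediate", [
    TemplateModule("title_copy", ["text"]),
    TemplateModule("detail", ["text", "image"]),
])
ADVANCED = ListingTemplate("advanced", [
    TemplateModule("title_copy", ["text"]),
    TemplateModule("detail", ["text", "image"]),
    TemplateModule("poster", ["image"]),
    TemplateModule("3d_effect", ["video"]),
])
```

A merchant-defined mode would simply be another `ListingTemplate` instance with its own module list.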
The new content on the commodity may include one or more of text, images, and video, depending on what the new mode includes. For example, when the new mode includes displaying the commodity with a video, the new content includes a new-listing video of the commodity; when the new mode includes a commodity poster module, the new content includes a new image of the commodity and text corresponding to that image.
After the new mode of the commodity is determined, the new content to be acquired can be determined according to that mode; for example, the mode may require local (close-up) images of the commodity and attribute information such as the commodity's brand and category. It should be noted that attribute information, which may also be called characteristic information, is information describing an attribute of the commodity; it includes the attribute itself and the commodity's value for that attribute, which may be called an attribute value when expressed numerically. For example, the attribute information may indicate that the category is dairy, the capacity is 250 milliliters, and the brand is "Bright".
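As a small illustration of attribute information in the sense used above, the snippet below stores the example attributes as attribute-value pairs; the key names are illustrative only.

```python
# Attribute information: each entry pairs an attribute of the commodity
# with the commodity's value for that attribute (the attribute value).
attribute_info = {
    "category": "dairy product",
    "capacity": "250 ml",
    "brand": "Bright",
}
```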
As shown in fig. 2, when the commodity is a sweater, after the new mode of the commodity is determined it may be necessary to acquire local images of the neckline, cuffs, and the like, together with attribute information of the sweater such as its brand, material, and price.
Specifically, when the new content corresponding to the new mode includes text, determining the new content comprises: performing character recognition on the commodity image and converting characters in the commodity image into text information; extracting, from the text information, the attribute information required by the new mode; and generating the new text corresponding to the new mode based on the attribute information. For example, characters in the image (e.g., characters in the hang-tag image) may be converted into text information using optical character recognition (OCR) technology; attribute information such as the brand, category, price, and date of manufacture of the commodity is then extracted from the text information, and the new text is generated using that attribute information. For instance, it may be extracted from the text information that the category of the commodity is dairy and its capacity is 250 milliliters.
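The following is a minimal sketch of the OCR-based extraction described above, assuming the Tesseract engine via pytesseract and Pillow; the regular expressions, attribute names, and language setting are illustrative assumptions rather than details taken from the patent.

```python
import re
import pytesseract
from PIL import Image

def extract_attributes_from_tag(image_path: str) -> dict:
    """Run OCR on a hang-tag image and pull out simple attribute values."""
    text = pytesseract.image_to_string(Image.open(image_path), lang="chi_sim+eng")
    attributes = {}
    capacity = re.search(r"(\d+)\s*(ml|毫升)", text, re.IGNORECASE)
    if capacity:
        attributes["capacity"] = f"{capacity.group(1)} ml"
    brand = re.search(r"(?:brand|品牌)[::]?\s*(\S+)", text, re.IGNORECASE)
    if brand:
        attributes["brand"] = brand.group(1)
    price = re.search(r"(?:price|价格)[::]?\s*([\d.]+)", text, re.IGNORECASE)
    if price:
        attributes["price"] = float(price.group(1))
    return attributes

def compose_listing_text(attributes: dict) -> str:
    """Turn extracted attributes into a simple line of listing copy."""
    return " | ".join(f"{k}: {v}" for k, v in attributes.items())
```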
Further, according to an exemplary embodiment of the present application, the new text may also be generated by applying machine learning to the commodity image. Specifically, the commodity image is input to a machine learning model component to obtain attribute information of the commodity, where the machine learning model component has performed machine learning on the correspondence between each of a plurality of commodity images and its attribute information; the new text is then determined using the attribute information. The machine learning model component may include a convolutional neural network (CNN) component, a deep neural network (DNN) component, or a recurrent neural network (RNN) component.
According to an exemplary embodiment of the present application, the machine learning model component is trained as follows: acquiring the plurality of commodity images and the attribute information of each commodity image (for example, the attribute values and the category of each of the plurality of commodity images); constructing a machine learning model component in which training parameters are set; and training the machine learning model component using the correspondence between each commodity image and its attribute information, adjusting the training parameters until the machine learning model component meets a preset requirement, for example, an accuracy rate above 90%.
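A rough sketch of how such a machine learning model component could be built and trained is given below, using PyTorch and treating each attribute value (e.g., a collar type) as a class label; the architecture and hyper-parameters are assumptions made for illustration, and only the "stop once accuracy exceeds 90%" idea comes from the text above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class AttributeCNN(nn.Module):
    """Toy CNN that maps a 224x224 commodity image to an attribute-value class."""
    def __init__(self, num_attribute_values: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_attribute_values)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

def train(model: AttributeCNN, loader: DataLoader, epochs: int = 10, target_acc: float = 0.9):
    """Adjust the training parameters until the preset accuracy requirement is met."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        correct, total = 0, 0
        for images, labels in loader:            # (commodity image, attribute label) pairs
            optimizer.zero_grad()
            logits = model(images)
            loss = loss_fn(logits, labels)
            loss.backward()
            optimizer.step()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
        if total and correct / total >= target_acc:   # e.g. accuracy above 90%
            break
    return model
```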
As shown in fig. 2, a commodity image of a sweater may be input into the machine learning model component, which may determine that the sweater has a V-shaped collar and is a cropped style; the attribute information of the sweater obtained in this way can then be used to generate the new text.
In addition, the method also involves a new image of the commodity. When the new content on the commodity includes a new image, the new image is obtained by performing various operations on the commodity image according to the new mode, including dividing the commodity image into new images corresponding to the attribute information, or synthesizing a new image from the commodity image.
As shown in fig. 4, after the attribute information is acquired using the commodity image 400, the commodity image 400 may be divided into new images 410 to 440 corresponding to the attribute information, where the new image 410 corresponds to the attribute information "loose fit", the new image 420 corresponds to "V-shaped collar", the new image 430 corresponds to "ribbing", and the new image 440 corresponds to "long sleeve". In addition, a plurality of commodity images may also be rendered into a 3D image using 3D rendering technology, according to the new mode.
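A minimal sketch of dividing a commodity image into new images that each correspond to one piece of attribute information, as in FIG. 4, might look as follows; the bounding boxes are assumed to come from some upstream detection step, and the coordinates shown are invented purely for illustration.

```python
from PIL import Image

def split_into_new_images(image_path: str, attribute_regions: dict) -> dict:
    """attribute_regions maps attribute text (e.g. 'V-neck') to a (left, top, right, bottom) box."""
    source = Image.open(image_path)
    return {attr: source.crop(box) for attr, box in attribute_regions.items()}

# Example: crops corresponding to the attributes shown in FIG. 4 (coordinates invented).
crops = split_into_new_images("sweater.jpg", {
    "loose fit": (0, 0, 400, 300),
    "V-neck": (120, 0, 280, 120),
    "ribbed cuff": (0, 300, 160, 420),
    "long sleeve": (240, 100, 400, 420),
})
```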
Finally, in step S330, the new content on the commodity is displayed according to the new mode. In other words, the commodity is displayed on the shopping website in the display style desired by the user. Specifically, based on the new-listing template, a commodity detail page including the new content is generated; the detail page may include the new text, the new image, and the new video. For example, the title copy module of the detail page may be completed using the new text, and the commodity detail module may be completed using the new image. Finally, the commodity detail page is displayed.
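The sketch below illustrates one way step S330 could fill a template with the generated content and render a simple commodity detail page, reusing the hypothetical ListingTemplate structure sketched earlier; the HTML layout is an assumption, not a format defined by the patent.

```python
def render_detail_page(template: "ListingTemplate", content: dict) -> str:
    """content maps content kinds ('text', 'image', 'video') to lists of generated items."""
    sections = []
    for module in template.modules:
        body = "".join(
            f'<img src="{item}"/>' if kind == "image" else f"<p>{item}</p>"
            for kind in module.required_content
            for item in content.get(kind, [])
        )
        sections.append(f'<section class="{module.name}">{body}</section>')
    return "<html><body>" + "".join(sections) + "</body></html>"

# Usage with the ADVANCED template sketched earlier and hypothetical content.
page_html = render_detail_page(ADVANCED, {
    "text": ["Loose-fit V-neck sweater with ribbed cuffs"],
    "image": ["new_image_410.jpg", "new_image_420.jpg"],
})
```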
In summary, the commodity display method according to the exemplary embodiments of the present application can complete the new listing of a commodity from the commodity image, so that the merchant can list new commodities efficiently. Further, in the listing operation, the various kinds of listing information contained in the commodity image can be recognized automatically, so that attribute information related to the commodity can be mined out. Furthermore, different commodity detail pages can be displayed according to different new modes, so that the needs of different merchants can be met. Further, when the new content of the commodity is determined, the commodity image can be processed by the machine learning model component, so that the image is processed in a way that simulates human vision and multi-dimensional processing of the commodity image is achieved.
A commodity new-listing scenario according to another exemplary embodiment of the present application is described below with reference to fig. 5. As shown in fig. 5, the page displayed in step 1 by the mobile terminal client is a display interface of an application program, where the display interface may be any of various interfaces within the application. For example, in one embodiment, the display interface may be the primary display interface of the application, such as the main interface of an XX shopping APP. In other embodiments, the display interface may be a secondary or lower-level interface of the application, such as a commodity list page or a commodity detail page of the XX shopping APP.
In another embodiment of the present application, the display interface may also be a display interface of the client operating system, such as the main interface (home screen) of the mobile terminal. In this case, the operating system of the mobile terminal serves as the application program: a commodity listing module is coupled to the client operating system, and a system-level commodity listing function is implemented through that module. In this embodiment, the commodity listing module may acquire a preset trigger event on the display interface of the application program, which is described in detail below with reference to fig. 6.
Fig. 6 is a schematic diagram illustrating the classification of preset trigger events provided in the present application. As shown in fig. 6, the first type of input event may include an input event captured with a vibration sensor; for example, a listing operation may be initiated by the user shaking the electronic device. The second type of input event may include a preset gesture performed by the user on the display interface, which may be any of the gesture inputs implemented with the touch screen as described above. The third type of input event may include receipt of a voice signal, a sound attribute value (e.g., volume) greater than a preset value, and the like. It should be noted that the ways of setting the preset trigger event are not limited to the above examples; those skilled in the art may make other changes in light of the technical spirit of the present application, and as long as the functions and effects achieved are the same as or similar to those of the present application, they fall within the protection scope of the present application.
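For illustration, the sketch below checks a sensed event against the three trigger classes shown in FIG. 6 and decides whether to start the listing flow; the event fields and thresholds are invented for the example and are not specified by the patent.

```python
# Hypothetical trigger-event check: any of the three event classes above
# may start the commodity listing flow.
SHAKE_THRESHOLD = 12.0      # m/s^2, illustrative
VOLUME_THRESHOLD = 0.6      # normalized loudness, illustrative

def is_listing_trigger(event: dict) -> bool:
    """Return True if a sensed event should start the new-listing flow."""
    kind = event.get("kind")
    if kind == "shake":                      # first type: vibration sensor event
        return event.get("acceleration", 0.0) >= SHAKE_THRESHOLD
    if kind == "gesture":                    # second type: preset touch gesture
        return event.get("name") in {"long_press", "swipe_up"}
    if kind == "voice":                      # third type: voice signal above a preset value
        return event.get("volume", 0.0) >= VOLUME_THRESHOLD
    return False
```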
The commodity listing module then executes a commodity listing operation based on the preset trigger event. In an alternative embodiment, the commodity listing operation refers to performing the commodity display method according to the exemplary embodiments of the present application.
In summary, with the commodity information processing method according to the exemplary embodiments of the present application, the listing operation can be implemented by the commodity listing module without affecting the application program. The commodity listing module is therefore flexible: it can be used together with various application programs without being embedded into them.
According to an exemplary embodiment of the present application, a commodity information processing method may include: acquiring a preset trigger event on a display interface of an application program; acquiring, based on the preset trigger event, a commodity image of a commodity to be newly listed; and determining new content on the commodity by performing image recognition on the commodity image.
Optionally, after the new content on the commodity is determined by performing image recognition on the commodity image, the method further comprises: displaying the new content on the commodity according to a preset mode.
Optionally, determining the new content on the commodity by performing image recognition on the commodity image includes: determining attribute information of the commodity by performing image recognition on the commodity image; converting the attribute information into new text according to a preset mode; converting the commodity image into new images corresponding to the attribute information according to the preset mode; and determining the new content on the commodity using the new text and the new images.
Optionally, determining the attribute information of the commodity by performing image recognition on the commodity image includes: performing character recognition processing on the commodity image, and converting characters in the commodity image into text information; and extracting attribute information of the commodity from the text information.
Optionally, determining the attribute information of the commodity by performing image recognition on the commodity image includes: inputting the commodity images into a machine learning model component to acquire attribute information of the commodity, wherein the machine learning model component performs machine learning according to the corresponding relation between each commodity image in the commodity images and the attribute information.
To further describe the present application, a commodity new-listing scenario according to another exemplary embodiment is described below with reference to fig. 7. As shown in fig. 7, a user may start an application on the display interface 710 of the electronic terminal by touch, voice input, or the like. In one implementation, while the user interface 720 is displayed, a listing launch operation by the user for starting the new listing of a commodity is sensed, where the launch operation refers to an operation for starting the listing of the commodity.
As depicted in fig. 7, the user may perform the listing launch operation by touching a control on the user interface 720. It should be noted that the "control" may be any type of operable control, such as a button, a slider, or a drag bar, and the control may be triggered or operated by, for example but not limited to, a click, a cursor dwelling on it, or a slide. For ease of illustration, the following examples use "buttons" and "clicks" as the example control and operation, although the invention is not limited in this regard.
In particular, the listing launch operation may include a user's operation (e.g., touch, voice input, etc.) on one or more controls (e.g., buttons, menus, icons, etc.) displayed on the display unit of the mobile terminal. The one or more controls may be displayed at any position on the display unit according to user settings and may be presented in different ways according to user needs; optionally, the user may change the position of a control on the display unit during operation, for example by dragging it. In addition, the user operations described here are only examples, and any user operation that can be sensed by the mobile terminal is applicable to the present application.
In response to the launch operation, the commodity listing processing is executed, and finally the newly listed commodity may be displayed on the display interface 730. The listing processing includes: acquiring a commodity image associated with the commodity; determining attribute information of the commodity by recognizing the commodity image; and listing the commodity according to a preset mode based on the attribute information.
In order to more clearly understand the inventive concept of the exemplary embodiments of the present application, a block diagram of the merchandise display device is described below with reference to fig. 8. Those of ordinary skill in the art will appreciate that fig. 8 shows only the components related to the present exemplary embodiment, and that general-purpose components other than those shown in fig. 8 may also be included in the device.
Fig. 8 shows a block diagram of a merchandise display device of an exemplary embodiment of the present application. Referring to fig. 8, at the hardware level the device includes a processor, an internal bus, and a computer-readable storage medium, where the computer-readable storage medium includes volatile memory and non-volatile memory. The processor reads the corresponding computer program from the non-volatile memory and then runs it. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or a logic device.
Specifically, the processor performs the following operations: acquiring a commodity image related to a commodity; acquiring, by using the commodity image, new content on the commodity corresponding to a new mode of the commodity; and displaying the new content on the commodity according to the new mode.
Optionally, when the processor acquires the new content on the commodity corresponding to the new mode by using the commodity image, the operation includes: determining the new mode of the commodity; and determining, based on the commodity image, the new content on the commodity corresponding to the new mode.
Optionally, the new content on the merchandise includes one or more of a new text, a new image, and a new video.
Optionally, the determining, by the processor, the new content on the commodity corresponding to the new mode if the new content on the commodity is text includes: performing character recognition processing on the commodity image, and converting characters in the commodity image into text information; extracting attribute information required by the new mode from the text information; and generating a new text corresponding to the new mode based on the attribute information.
Optionally, when the new content on the commodity is new text, determining the new content on the commodity corresponding to the new mode comprises: inputting the commodity image into a machine learning model component to acquire attribute information of the commodity, where the machine learning model component performs machine learning according to the correspondence between each of a plurality of commodity images and its attribute information; and determining the new text using the attribute information.
Optionally, the machine learning model component is trained as follows: acquiring the plurality of commodity images and attribute information in each commodity image of the plurality of commodity images; constructing a machine learning model component, wherein training parameters are arranged in the machine learning model component; and training the machine learning model component by utilizing the corresponding relation between each commodity image in the commodity images and the attribute information, and adjusting the training parameters until the machine learning model component reaches the preset requirement.
Optionally, the determining, by the processor, the new content on the commodity corresponding to the new mode if the new content on the commodity is a new image comprises: and dividing the commodity image into a plurality of new images corresponding to the attribute information according to the new mode.
Optionally, determining the new content on the commodity corresponding to the new mode in the case that the new content on the commodity is the new video comprises: and synthesizing the commodity image into a new video corresponding to the new mode.
Optionally, the processor in the implementation step displays new content on the commodity according to the new mode, including: determining a new template corresponding to the new mode; generating an commodity detail page comprising new contents on commodities based on the new template; and displaying the commodity detail page.
In summary, the commodity display device according to the exemplary embodiments of the present application can complete the new listing of a commodity from the commodity image, so that the merchant can list new commodities efficiently. Further, in the listing operation, the various kinds of listing information contained in the commodity image can be recognized automatically, so that attribute information related to the commodity can be mined out. Furthermore, different commodity detail pages can be displayed according to different new modes, so that the needs of different merchants can be met. Further, when the new content of the commodity is determined, the commodity image can be processed by the machine learning model component, so that the image is processed in a way that simulates human vision and multi-dimensional processing of the commodity image is achieved.
Fig. 9 shows a block diagram of a mobile terminal according to an exemplary embodiment of the present disclosure. In the present application, the mobile terminal may perform the merchandise display method described above using its internal units and/or components, thereby completing the new listing of a commodity; in addition, the mobile terminal may perform the commodity information processing method described above. That is, the mobile terminal may be one of the subjects that performs the merchandise display/information processing methods of the present application.
Referring to fig. 9, the mobile terminal 900 may include a memory 110, a processor unit 120, an audio processing unit 130, an input and output control unit 140, a touch screen unit 150, and an input unit 160. Here, the memory 110 may be a plurality of memories.
The memory 110 may include a program storage unit 111 that stores a program for controlling the operation of the mobile terminal 900, and a data storage unit 112 that stores data generated while the program is executed. The program storage unit 111 stores a GUI program 113 and at least one application program 114. The programs stored in the program storage unit 111 are sets of instructions and may therefore be referred to as instruction sets. The data storage unit 112 may store all data related to the commodity listing.
In the program storage unit 111, the GUI program 113 may include at least one program for implementing the commodity listing. For example, when a user's launch operation for starting a new listing is sensed, the GUI program 113 displays the commodity using the data stored in the data storage unit 112.
The processor unit 120 may include a memory interface 121, at least one processor 122, and a peripheral device interface 123; the memory interface 121, the at least one processor 122, and the peripheral device interface 123 may be implemented in at least one integrated circuit (IC) or as separate components. The memory interface 121 controls access to the memory by components such as the processor 122 or the peripheral device interface 123. The peripheral device interface 123 connects peripheral devices of the mobile terminal 900 to the processor 122 and the memory interface 121. The processor unit 120 may be any suitable hardware element, such as a microprocessor, an IC, an application-specific IC (ASIC), an erasable programmable read-only memory (EPROM), a controller, or any other similar and/or suitable hardware element.
The processor 122 controls the mobile terminal 900 by using at least one software program such that the mobile terminal 900 provides various applications. In this case, the processor 122 may execute at least one program stored in the memory 110 to provide services according to the corresponding program. For example, the processor 122 causes the mobile terminal 900 to provide shopping services using a shopping application stored in the application 114.
The audio processing unit 130 provides an audio interface between the user and the mobile terminal 900 using the speaker 131 and the microphone 132. The input and output control unit 140 provides an interface between input and output units (such as the touch screen 150 and the input unit 160) and the peripheral device interface 123. According to an exemplary embodiment, voice may be input to the mobile terminal through the microphone 132; for example, the user may start an application based on the voice input, and may start the commodity listing process based on the voice input.
The touch screen 150 is an input and output unit that performs information input and information output, and may include a touch input unit 151 and a display unit 152. The touch input unit 151 supplies touch information sensed through the touch panel to the processor unit 120 through the input and output control unit 140. In this case, the touch input unit 151 provides touch information generated by an electronic pen, a finger, an external keyboard, or any other similar and/or suitable input device to the processor unit 120 through the input and output control unit 140.
The display unit 152 displays a GUI corresponding to an application, and displays the changed GUI according to inputs of the touch input unit 151, the input unit 160, and the audio processing unit 130. For example, the display unit 152 displays display data supplied from the GUI program 113. For example, the display unit 152 displays the current GUI through the GUI program 113. As another example, the display unit 152 displays a GUI layout of a specified version through the GUI program 113 after receiving control information for displaying the GUI layout. The display unit 152 may be any suitable display device, such as an Organic Light Emitting Diode (OLED) display, a Liquid Crystal Display (LCD), a Thin Film Transistor (TFT) display, an Active Matrix OLED (AMOLED) display, or any other similar and/or suitable display device. In addition, the display unit 152 and the touch input unit 151 may be formed as one unit and/or one hardware element, or as separate units or hardware elements.
The input unit 160 supplies input data generated by user selection to the processor unit 120 through the input and output control unit 140. For example, the input unit 160 may include control buttons for controlling the mobile terminal 900. As another example, the input unit 160 may include a peripheral input accessory for controlling the mobile terminal 900. According to an exemplary embodiment, the user may input the listing launch operation for starting the new listing of a commodity through the input unit 160 or the touch input unit 151 while the user interface is displayed.
In addition, the mobile terminal 900 may further include a communication unit (not shown) for performing or connecting to a communication network for voice or data communication. In this case, the communication unit may be divided into a plurality of communication sub-modules supporting different communication networks. For example, the communication network may include, but is not limited to: global system for mobile communications (GSM) networks, enhanced data rates for GSM evolution (EDGE) networks, code Division Multiple Access (CDMA) networks, wideband CDMA (W-CDMA) networks, long Term Evolution (LTE) networks, orthogonal Frequency Division Multiple Access (OFDMA) networks, wireless Local Area Networks (LANs), bluetooth networks, and Near Field Communication (NFC) networks, or any other similar and/or suitable network type. According to an exemplary embodiment, the mobile terminal 900 may receive commodity information about a commodity transmitted from an application server through a communication unit.
The steps of the methods provided in the foregoing embodiments may be executed by the same device or by different devices. For example, the execution subject of steps 21 and 22 may be device 1 and the execution subject of step 23 may be device 2; alternatively, the execution subject of step 21 may be device 1 and the execution subject of steps 22 and 23 may be device 2; and so on.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (21)

1. A merchandise display method comprising:
acquiring commodity images related to commodities;
acquiring new content on the commodity corresponding to a new mode of the commodity by using the commodity image, which comprises: acquiring attribute information of the commodity by using the commodity image; and, when the new content on the commodity comprises a new image, dividing the commodity image into a plurality of new images corresponding to the attribute information according to the new mode;
and displaying the new content on the commodity according to the new mode.
2. The method of claim 1, wherein acquiring new content on the commodity corresponding to the new mode of the commodity using the commodity image comprises:
determining a new mode of the commodity;
and determining new content on the commodity corresponding to the new mode based on the commodity image.
3. The method of claim 1, wherein the new content on the commodity comprises one or more of new text, a new image, and a new video.
4. The method of claim 3, wherein, in the case where the new content on the commodity is text, determining the new content on the commodity corresponding to the new mode comprises:
performing character recognition processing on the commodity image, and converting characters in the commodity image into text information;
extracting attribute information required by the new mode from the text information;
and generating a new text corresponding to the new mode based on the attribute information.
5. The method of claim 4, wherein determining the new content on the commodity corresponding to the new mode in the case where the new content on the commodity is new text comprises:
inputting the commodity image into a machine learning model component to acquire attribute information of the commodity, wherein the machine learning model component performs machine learning according to the correspondence between each of a plurality of commodity images and its attribute information;
and determining the new text by using the attribute information.
6. The method of claim 5, wherein the machine learning model component is trained as follows:
acquiring the plurality of commodity images and attribute information in each commodity image of the plurality of commodity images;
constructing a machine learning model component, wherein training parameters are arranged in the machine learning model component;
and training the machine learning model component by utilizing the corresponding relation between each commodity image in the commodity images and the attribute information, and adjusting the training parameters until the machine learning model component reaches the preset requirement.
7. The method of claim 3, wherein determining the new content on the commodity corresponding to the new mode in the case where the new content on the commodity is a new video comprises:
and synthesizing the commodity image into a new video corresponding to the new mode.
8. The method of claim 1, wherein presenting new content on the merchandise in the new mode comprises:
determining a new template corresponding to the new mode;
generating an article detail page comprising new contents on the article based on the new template;
and displaying the commodity detail page.
9. A commodity information processing method, characterized by comprising:
acquiring a preset trigger event on a display interface of an application program;
acquiring commodity images of new commodities to be uploaded based on the preset trigger event;
determining new content on the commodity by performing image recognition on the commodity image, which comprises: determining attribute information of the commodity by performing image recognition on the commodity image; and, when the new content on the commodity comprises a new image, dividing the commodity image into a plurality of new images corresponding to the attribute information according to a new mode.
10. The method of claim 9, wherein after the new content on the commodity is determined by performing image recognition on the commodity image, the method further comprises:
displaying the new content on the commodity according to a preset mode.
11. The method of claim 9, wherein determining the new content on the commodity by performing image recognition on the commodity image comprises:
converting the attribute information into new text according to a preset manner;
and determining the new content on the commodity by utilizing the new text and the new images.
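For illustration of the attribute-to-text conversion, a minimal sketch follows; the preset sentence pattern and the attribute keys are assumptions made for the example.

def attributes_to_new_text(attributes, preset_pattern="{brand} {category} is now in store for {price}"):
    # Convert the recognized attribute information into new-arrival text using a preset pattern.
    return preset_pattern.format(
        brand=attributes.get("brand", "A new"),
        category=attributes.get("category", "item"),
        price=attributes.get("price", "a limited-time price"),
    )

def build_new_content(attributes, new_images):
    # Combine the generated text and the divided images into the new content on the commodity.
    return {"text": attributes_to_new_text(attributes), "images": new_images}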
12. The method of claim 9, wherein determining attribute information of the commodity by performing image recognition on the commodity image comprises:
performing character recognition on the commodity image, and converting the characters in the commodity image into text information;
and extracting the attribute information of the commodity from the text information.
13. The method of claim 9, wherein determining attribute information of the commodity by performing image recognition on the commodity image comprises:
inputting the commodity image into a machine learning model component to acquire the attribute information of the commodity, wherein the machine learning model component is obtained by performing machine learning on the correspondence between each commodity image in a plurality of commodity images and its attribute information.
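A minimal inference-side sketch of that step, reusing the kind of PyTorch model sketched after claim 6; the attribute vocabulary, the top-k selection, and the input preprocessing are assumptions for the example only.

import torch

# Hypothetical attribute vocabulary matching the model's output indices.
ATTRIBUTE_VOCAB = ["red", "blue", "cotton", "leather", "summer", "winter", "men", "women", "kids", "sport"]

def recognize_attributes(model, image_tensor, top_k=3):
    # image_tensor: a (3, H, W) tensor produced by the caller's preprocessing.
    model.eval()
    with torch.no_grad():
        scores = model(image_tensor.unsqueeze(0)).squeeze(0)
    top = torch.topk(scores, k=top_k).indices.tolist()
    return [ATTRIBUTE_VOCAB[i] for i in top]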
14. A commodity information processing method, characterized by comprising:
when a user interface is displayed, sensing a new starting operation of a user for starting a new listing of the commodity;
acquiring a commodity image of the commodity in response to the new starting operation;
and acquiring, by utilizing the commodity image, the new content on the commodity corresponding to the new mode of the commodity, which comprises: acquiring the attribute information of the commodity by utilizing the commodity image; and, when the new content on the commodity comprises new images, dividing the commodity image into a plurality of new images corresponding to the attribute information according to the new mode.
15. The method of claim 14, wherein sensing the new starting operation of the user for starting the new listing of the commodity comprises:
sensing the new starting operation performed by the user operating one or more controls hovering over the user interface.
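A minimal UI sketch of a hovering control that triggers the flow, assuming Python's standard tkinter; the window layout, the control placement, and the callback body are placeholders invented for the example.

import tkinter as tk

def on_new_start():
    # Placeholder for the new-listing flow: acquire the commodity image from camera or gallery.
    print("new starting operation sensed: acquiring commodity image...")

root = tk.Tk()
root.geometry("360x640")
tk.Label(root, text="Shop management interface").pack(pady=20)

# A control "hovering" over the interface: an absolutely placed button on top of the page content.
hover_button = tk.Button(root, text="New arrival", command=on_new_start)
hover_button.place(relx=0.8, rely=0.85)

root.mainloop()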
16. The method of claim 14, wherein acquiring new content on the commodity corresponding to the new mode of the commodity using the commodity image comprises:
determining a new mode of the commodity;
and determining new content on the commodity corresponding to the new mode based on the commodity image.
17. The method of claim 14, wherein, after the new content on the commodity corresponding to the new mode is acquired, the method further comprises:
displaying the new content on the commodity according to the new mode.
18. A computer readable storage medium having stored thereon computer instructions, which when executed, implement the method of any of claims 1 to 17.
19. A merchandise display device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquiring commodity images related to commodities;
acquiring, by utilizing the commodity image, the new content on the commodity corresponding to the new mode of the commodity, which comprises: acquiring the attribute information of the commodity by utilizing the commodity image; and, when the new content on the commodity comprises new images, dividing the commodity image into a plurality of new images corresponding to the attribute information according to the new mode;
and displaying the new content on the commodity according to the new mode.
20. A commodity information processing apparatus, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquiring a preset trigger event on a display interface of an application program;
acquiring, based on the preset trigger event, a commodity image of a commodity to be newly listed;
and determining the new content on the commodity by performing image recognition on the commodity image, which comprises: determining the attribute information of the commodity by performing image recognition on the commodity image; and, when the new content on the commodity comprises new images, dividing the commodity image into a plurality of new images corresponding to the attribute information according to the new mode.
21. A commodity information processing apparatus, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
when a user interface is displayed, sensing a new starting operation of a user for starting a new listing of the commodity;
acquiring a commodity image of the commodity in response to the new starting operation;
and acquiring, by utilizing the commodity image, the new content on the commodity corresponding to the new mode of the commodity, which comprises: acquiring the attribute information of the commodity by utilizing the commodity image; and, when the new content on the commodity comprises new images, dividing the commodity image into a plurality of new images corresponding to the attribute information according to the new mode.
CN201811443985.XA 2018-11-29 2018-11-29 Commodity display method and device Active CN111242712B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811443985.XA CN111242712B (en) 2018-11-29 2018-11-29 Commodity display method and device

Publications (2)

Publication Number Publication Date
CN111242712A CN111242712A (en) 2020-06-05
CN111242712B true CN111242712B (en) 2023-04-28

Family

ID=70871056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811443985.XA Active CN111242712B (en) 2018-11-29 2018-11-29 Commodity display method and device

Country Status (1)

Country Link
CN (1) CN111242712B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781140A (en) * 2020-10-30 2021-12-10 北京沃东天骏信息技术有限公司 Video generation method and device, electronic equipment and computer readable medium
CN112686220B (en) * 2021-03-10 2021-06-22 浙江口碑网络技术有限公司 Commodity identification method and device, computing equipment and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013210971A (en) * 2012-03-30 2013-10-10 Toshiba Tec Corp Information processing apparatus and program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105373938A (en) * 2014-08-27 2016-03-02 阿里巴巴集团控股有限公司 Method for identifying commodity in video image and displaying information, device and system
CN107454964A (en) * 2016-12-23 2017-12-08 深圳前海达闼云端智能科技有限公司 A kind of commodity recognition method and device
CN107705066A (en) * 2017-09-15 2018-02-16 广州唯品会研究院有限公司 Information input method and electronic equipment during a kind of commodity storage
CN107861972A (en) * 2017-09-15 2018-03-30 广州唯品会研究院有限公司 The method and apparatus of the full result of display of commodity after a kind of user's typing merchandise news
CN108364209A (en) * 2018-02-01 2018-08-03 北京京东金融科技控股有限公司 Methods of exhibiting, device, medium and the electronic equipment of merchandise news

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sirion Vittayakorn et al. Automatic Attribute Discovery with Neural Activations. European Conference on Computer Vision. 2016, 252-268. *
Wang Qiaoxia (王巧侠). Research on the Application of Commodity Display Technology in E-commerce Websites. Software Guide (软件导刊). 2013, (07), full text. *

Also Published As

Publication number Publication date
CN111242712A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN108416825B (en) Dynamic graph generation device, method and computer readable storage medium
CN107153541B (en) Browsing interaction processing method and device
RU2654145C2 (en) Information search method and device and computer readable recording medium thereof
KR102285699B1 (en) User terminal for displaying image and image display method thereof
CN107678644B (en) Image processing method and mobile terminal
US10423303B1 (en) Progressive information panels in a graphical user interface
US20150040031A1 (en) Method and electronic device for sharing image card
EP3872599A1 (en) Foldable device and method of controlling the same
US20170277499A1 (en) Method for providing remark information related to image, and terminal therefor
US20120280898A1 (en) Method, apparatus and computer program product for controlling information detail in a multi-device environment
US20170061609A1 (en) Display apparatus and control method thereof
US20160224591A1 (en) Method and Device for Searching for Image
US20130211923A1 (en) Sensor-based interactive advertisement
CN104090761A (en) Screenshot application device and method
KR102343361B1 (en) Electronic Device and Method of Displaying Web Page Using the same
WO2018113064A1 (en) Information display method, apparatus and terminal device
KR20160023412A (en) Method for display screen in electronic device and the device thereof
KR102652362B1 (en) Electronic apparatus and controlling method thereof
CN103797481A (en) Gesture based search
US10152496B2 (en) User interface device, search method, and program
US9619519B1 (en) Determining user interest from non-explicit cues
CN111242712B (en) Commodity display method and device
US20140181709A1 (en) Apparatus and method for using interaction history to manipulate content
US20130055114A1 (en) Enhanced and Extended Browsing Via Companion Mobile Device
CN112416486A (en) Information guiding method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40031325; Country of ref document: HK)
GR01 Patent grant