CN113298602A - Commodity object information interaction method and device and electronic equipment


Info

Publication number
CN113298602A
Authority
CN
China
Prior art keywords
interactive
interaction
anchor point
target
commodity object
Prior art date
Legal status
Pending
Application number
CN202011357203.8A
Other languages
Chinese (zh)
Inventor
邵腾 (Shao Teng)
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority application: CN202011357203.8A
Publication: CN113298602A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0641 Shopping interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Finance (AREA)
  • Economics (AREA)
  • Geometry (AREA)
  • Development Economics (AREA)
  • Human Computer Interaction (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose a commodity object information interaction method and apparatus, and an electronic device. The method comprises: determining a target commodity object; displaying image content associated with the target commodity object and at least one interactive anchor point associated with that image content; after a target interactive anchor point is triggered, displaying a corresponding operation control according to the interaction type associated with that anchor point; and processing the interactive content associated with the anchor point based on input operations on the operation control. These embodiments improve the richness of the commodity object's interaction capability and the user's sense of immersion.

Description

Commodity object information interaction method and device and electronic equipment
Technical Field
The present application relates to the field of information interaction technologies, and in particular, to a method and an apparatus for interacting commodity object information, and an electronic device.
Background
In a commodity object information system, the detailed information of a commodity object is important content: through a dedicated detail page, a user can learn about a commodity object's details, performance, reviews, and other information in order to make a purchase decision. In conventional schemes, the detail page mainly describes a commodity object through pictures, videos, text, and the like, but for commodity objects in certain special categories, their characteristics may be difficult to convey through such information alone. Therefore, some commodity object information systems provide users with a 3D experience function: a pre-generated 3D model gives the user a more three-dimensional sensory experience, and the user can also interact with the model through operations such as rotation, scaling, and displacement, so as to understand the commodity object's characteristics from multiple angles and directions. Alternatively, an anchor point may be set on the 3D model, and clicking the anchor point displays text information such as the name and price of the associated commodity object.
In the above prior art, a 3D model can provide the user with an immersive experience, but there remains room for improvement in the richness of the commodity object's interaction capability and in the degree of immersion.
Disclosure of Invention
The present application provides a commodity object information interaction method and apparatus, and an electronic device, which can improve the richness of commodity object interaction capability and the user's sense of immersion.
The application provides the following scheme:
a commodity object information interaction method comprises the following steps:
determining a target commodity object;
displaying the image content associated with the target commodity object and at least one interactive anchor point associated with the image content;
after the target interaction anchor point is triggered, displaying a corresponding operation control according to the interaction type associated with the target interaction anchor point;
and processing the interactive content associated with the target interactive anchor point based on the input operation of the operation control.
A method of generating merchandise object interaction information, comprising:
receiving image content imported for a specified commodity object;
displaying the image content, providing various types of interactive anchor points and operation options for adding the interactive anchor points in the image content;
after receiving the interactive anchor adding operation through the operation options, determining the type of the added interactive anchor and the information of the associated interactive content;
and associating a control component with the interactive anchor point, the control component being configured to provide a corresponding operation control according to the type of the interactive anchor point when the anchor point is triggered, and to execute a processing flow for processing the interactive content associated with the target interactive anchor point.
An information interaction method, comprising:
displaying a three-dimensional model associated with the target interaction scene and at least one interaction anchor point associated with the three-dimensional model;
after the target interaction anchor point is triggered, displaying a corresponding operation control according to the interaction type associated with the target interaction anchor point;
and processing the interactive content associated with the target interactive anchor point based on the input operation of the operation control.
A merchandise object information interaction device, comprising:
a commodity object determination unit for determining a target commodity object;
the image content display unit is used for displaying the image content associated with the target commodity object and at least one interactive anchor point associated with the image content;
the operation control display unit is used for displaying the corresponding operation control according to the interaction type associated with the target interaction anchor point after the target interaction anchor point is triggered;
and the interactive processing unit is used for processing the interactive content associated with the target interactive anchor point based on the input operation of the operation control.
An apparatus for generating interaction information for a commodity object, comprising:
an image content receiving unit configured to receive image content imported for a specified commodity object;
the operation option providing unit is used for displaying the image content, providing various types of interactive anchor points and adding operation options of the interactive anchor points in the image content;
the anchor point adding unit is used for determining the type of the added interactive anchor point and the information of the associated interactive content after receiving the interactive anchor point adding operation through the operation options;
and a control component, configured to provide a corresponding operation control according to the type of the interactive anchor point when the anchor point is triggered, and to execute a processing flow for processing the interactive content associated with the target interactive anchor point.
An information interaction device, comprising:
the three-dimensional model display unit is used for displaying a three-dimensional model associated with the target interaction scene and at least one interaction anchor point associated with the three-dimensional model;
the operation control display unit is used for displaying the corresponding operation control according to the interaction type associated with the target interaction anchor point after the target interaction anchor point is triggered;
and the interactive processing unit is used for processing the interactive content associated with the target interactive anchor point based on the input operation of the operation control.
According to the specific embodiments provided herein, the present application discloses the following technical effects:
Through the embodiments of the present application, interactive anchor points can be mounted on the image content of a specific commodity object. These anchor points may be of various types, and once an anchor point is triggered, an operation control of the corresponding type can be provided, so that the user can control the interactive content associated with the anchor point through that control. This improves the richness of the interaction capability, effectively simulates the actual control process, and displays the detailed characteristics of the commodity object more intuitively. From the user's perspective, the interactive content associated with a specific anchor point conveys richer information about the commodity object, and operating the control gives the user an experience close to actually controlling the commodity object, further enhancing immersion.
Of course, it is not necessary for any product to achieve all of the above-described advantages at the same time for the practice of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a flow chart of a first method provided by an embodiment of the present application;
FIGS. 3-1 to 3-3 are schematic diagrams of interaction anchor points provided in embodiments of the present application;
FIGS. 4-1 to 4-8 are schematic diagrams of operation controls under various interaction types provided by embodiments of the present application;
FIG. 5 is a flow chart of a second method provided by embodiments of the present application;
FIG. 6 is a flow chart of a third method provided by embodiments of the present application;
FIG. 7 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a third apparatus provided by an embodiment of the present application;
fig. 10 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
In the embodiments of the present application, a corresponding solution is provided to further improve the richness and immersion of the commodity object interaction capability. In this solution, a merchant or seller user may create a 3D (three-dimensional) model for a commodity object and then, through the related functions provided in the embodiments, add interactive anchor points to the model (specifically, operation options such as clickable buttons mounted on the commodity 3D model). A given interactive anchor point may be of one of several types, for example: an anchor point that triggers automatic playback of a model animation, an anchor point that triggers an acousto-optic-electric special effect, an anchor point that triggers part replacement, an anchor point that triggers playback of a video associated with the commodity object, and so on. In addition, each type of interactive anchor point can be associated with a corresponding control component, which can be divided into an interface display part on the front end and control logic in the background. On the front end, a corresponding operation control can be displayed according to the interaction type associated with the target anchor point. For example, for an anchor point that triggers automatic playback of a model animation, a slider-type operation control can be displayed, so that while a specific commodity object is being presented through the animation, control operations such as start, accelerate, and stop can be executed through the control; in this way, operating the on-screen control simulates a real operation on the actual physical control component.
For another example, for an anchor point that triggers acousto-optic-electric special effects, operation controls such as an audio progress bar and a light brightness control bar can be displayed, so that the sound and light effects can be adjusted, and so on. In this way, while browsing the 3D model of a specific commodity object, the user can carry out various types of interaction through the anchor points, which increases the richness of the interaction; moreover, by operating the displayed controls, the user participates more directly in the control process and gains an experience closer to operating the actual commodity object.
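The anchor-type-to-control mapping described above can be sketched as follows. This is a hypothetical illustration only; all names (AnchorType, ControlComponent, CONTROL_REGISTRY) are assumptions and do not come from the patent itself.

```python
# Illustrative sketch of the mapping between interactive anchor point types and
# their pre-built control components; names are hypothetical, not from the patent.
from dataclasses import dataclass
from enum import Enum, auto


class AnchorType(Enum):
    ANIMATION = auto()      # triggers automatic playback of a model animation
    AV_EFFECT = auto()      # triggers an acousto-optic-electric special effect
    PART_SWAP = auto()      # triggers replacement of parts
    VIDEO = auto()          # triggers playback of an associated video


@dataclass
class ControlComponent:
    """Front-end widget plus the control operations it exposes to the user."""
    widget: str
    actions: tuple = ()


# One pre-implemented control component per anchor type, as the text describes.
CONTROL_REGISTRY = {
    AnchorType.ANIMATION: ControlComponent("slider", ("start", "accelerate", "stop")),
    AnchorType.AV_EFFECT: ControlComponent("progress_bar+brightness_bar", ("play", "pause", "adjust")),
    AnchorType.PART_SWAP: ControlComponent("option_list", ("select",)),
    AnchorType.VIDEO: ControlComponent("video_controls", ("play", "pause", "seek")),
}


def control_for(anchor_type: AnchorType) -> ControlComponent:
    """Return the operation control to display when an anchor of this type is triggered."""
    return CONTROL_REGISTRY[anchor_type]
```

The registry form reflects the text's point that control components are implemented once per anchor type and then reused for every anchor of that type.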
Of course, in practical applications, some merchant or seller users may not produce a 3D model for every commodity object because of cost considerations, yet still need to provide users with richer and more immersive information. Therefore, the embodiments of the present application also support adding the above types of interactive anchor points on top of 2D pictures, videos, and the like, and, after an anchor point is triggered, displaying the corresponding operation control, presenting the interactive content, and performing other related functions.
Specifically, from a system architecture perspective, as shown in fig. 1, the embodiments of the present application may involve a client and a server of a commodity object information system, where the client can be divided into a first client facing consumer/buyer users and a second client facing merchant/seller users. The first client usually exists as an independent mobile application or a Web page and is used by buyers to browse commodity object information and make subsequent purchases. The second client mainly exists as a web page and can be used to publish commodity object information, including, in the embodiments of the present application, importing a 3D model associated with a commodity object, mounting interactive anchor points, setting interactive content, and so on. The server mainly provides background data and supporting service capabilities. For example, control components corresponding to the various anchor point types can be implemented in advance; when an anchor point of a certain type is mounted on a 3D model, the corresponding control component can be associated with it automatically, and after the anchor point is triggered, the front-end operation control is displayed and the corresponding background control logic is executed automatically. Therefore, a merchant or seller user only needs to prepare a 3D model in advance (or directly use a 2D picture or video), import it into the configuration system provided by the embodiments of the present application, and select an anchor point type to mount at a specific position in the model; the subsequent interaction capability is realized automatically by the capabilities provided by the server.
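The server-side mounting flow just described, in which a typed anchor point is placed on image content and the matching control component is associated automatically, could look roughly like the following. All names (AnchorMount, mount_anchor, the type keys) are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of server-side anchor mounting with automatic
# control-component association; names are illustrative, not from the patent.
from dataclasses import dataclass

# Control components implemented in advance, keyed by anchor type
# ("control components ... can be realized in advance", per the text).
PREBUILT_CONTROLS = {
    "animation": "slider_control",
    "av_effect": "audio_light_control",
    "part_swap": "option_list_control",
    "video": "video_player_control",
}


@dataclass
class AnchorMount:
    anchor_type: str          # one of the keys above
    position: tuple           # (x, y, z) on a 3D model, or (x, y) on a 2D picture
    content_ref: str          # reference to the configured interactive content
    control: str = ""         # filled in automatically by the server


def mount_anchor(anchor_type: str, position: tuple, content_ref: str) -> AnchorMount:
    """Mount an anchor and auto-associate the matching pre-built control component."""
    if anchor_type not in PREBUILT_CONTROLS:
        raise ValueError(f"unknown anchor type: {anchor_type}")
    return AnchorMount(anchor_type, position, content_ref,
                       control=PREBUILT_CONTROLS[anchor_type])
```

This captures the division of labor the text emphasizes: the merchant only chooses a type, position, and content, while the control wiring is done by the server.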
The following describes in detail specific implementations provided in embodiments of the present application.
Example one
In the first embodiment, a commodity object information interaction method is provided from the perspective of a first client, and referring to fig. 2, the method may specifically include:
s201: determining a target commodity object;
In specific implementation, the merchant or seller user can decide which commodity objects receive the interaction capability; that is, the 3D modeling, anchor point mounting, and so on for a given commodity object can be initiated by the merchant or seller user. After the interaction capability has been implemented for a specific commodity object, a corresponding interaction tag can be added to the commodity object, and interaction entries can be published at various browsing nodes. For example, in a commodity object list such as "guess you like" on the first client's home page, if a commodity object supports interaction, an entry leading to the specific interaction interface can be provided in that object's resource slot. Alternatively, entries can be placed on search result pages, commodity object detail pages, and activity venue pages; a dedicated page such as a "3D interaction venue" can even be provided to collectively display the commodity objects that support 3D interaction, or a dedicated 3D interaction block can be placed on related industry channel pages or on the shop page of a specific merchant or seller, and so on. In general, the goal is to let the user reach the interaction entry in a variety of ways. After an interaction request is received through the entry associated with a commodity object, that commodity object can be determined as the target commodity object for the interactive operation.
S202: displaying the image content associated with the target commodity object and at least one interactive anchor point associated with the image content;
after the specific target commodity object is determined, the image content of the commodity object may be displayed, where, as described above, the image content associated with the specific target commodity object may be a 3D model of the target commodity object, or may also be a 2D picture or video of the target commodity object. In addition, because the interactive anchor points are mounted in advance based on the specific image content, the interactive anchor points mounted in the image content can be displayed in the process of displaying the specific image content.
In the embodiments of the present application, interactive anchor points can be of various types, for example: anchor points that trigger automatic playback of a model animation, anchor points that trigger an acousto-optic-electric effect, anchor points that trigger part replacement, anchor points that trigger video playback, and so on. In a specific implementation, different types of anchor points can be given different icons so that they can be distinguished. For example, as shown in fig. 3-1, 31 is the 3D model of a commodity object on which four interactive anchor points are mounted: 32 may be an anchor point that triggers automatic playback of a model animation, 33 an anchor point that triggers video playback, 34 an anchor point that triggers an acousto-optic-electric effect, 35 an anchor point that triggers part replacement, and so on.
The type, number, position, and associated interactive content of the anchor points mounted in a specific 3D model can be determined by the merchant or seller according to actual needs. In addition, whether an interactive anchor point is mounted on a 3D model or on a 2D picture or video, it usually has a position attribute; that is, the anchor point is mounted at a specific position of the 3D model or 2D picture and, when that model or picture is displayed, the anchor point appears at the corresponding position. However, a 3D model is displayed through a camera view angle: at a given angle, only the image content facing the camera falls within the visible range, and other content enters the visible range only by switching the view angle, for example by rotating the model. Likewise, when anchor points are mounted on a 2D picture or video, the picture may be a long image of which only a partial region fits within the window at one time, with other regions entering the visible range through operations such as sliding or dragging. Therefore, in a specific implementation, the current display area within the image content associated with the target commodity object may be determined first, and then only the interactive anchor points located in that area are displayed.
That is, the display logic and number of anchor points shown can depend on the display angle of the commodity object's image content and its position on the current screen. For example, when the commodity object is presented as a 3D model, only the anchor points mounted within the area covered by the current view angle may be displayed, according to the camera position: in the first view angle shown in fig. 3-2, the two anchor points at 36 may be shown, while in the second view angle shown in fig. 3-3, the two anchor points at 37 may be shown, and so on. Similarly, when the commodity object is presented as a picture and the view is translated to a different region of the picture, only the anchor points in that region may be displayed.
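For the 2D long-picture case, the visibility filtering described above amounts to a simple viewport test; a minimal sketch follows. The function and anchor names are illustrative assumptions, and the 3D case would use the camera frustum instead of a rectangle.

```python
# Minimal sketch of viewport-based anchor filtering for a long 2D picture:
# only anchors whose mount position falls inside the currently visible
# rectangle are displayed; scrolling changes the visible set.

def visible_anchors(anchors, viewport):
    """anchors: list of (anchor_id, (x, y)); viewport: (left, top, right, bottom)."""
    left, top, right, bottom = viewport
    return [aid for aid, (x, y) in anchors
            if left <= x <= right and top <= y <= bottom]


# Four anchors mounted along a long picture (y grows downward).
anchors = [("a32", (100, 50)), ("a33", (200, 120)),
           ("a34", (150, 900)), ("a35", (80, 1500))]

# Initially the window shows the top of the picture ...
assert visible_anchors(anchors, (0, 0, 400, 300)) == ["a32", "a33"]
# ... and after scrolling down, a different region's anchors appear.
assert visible_anchors(anchors, (0, 800, 400, 1100)) == ["a34"]
```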
Of course, when anchor points are displayed according to the current display area of the image content, some anchor points may not be visible at a given moment. Therefore, in a specific implementation, an interactive anchor point list may be provided in an area of the display interface outside the image content; this list can display all the interactive anchor points associated with the image content and can also be used to trigger a specific anchor point directly. For example, as shown at 38 in fig. 3-3, the anchor point list may be provided at the top of the presentation interface, and so on.
In addition, the image content of a single commodity object may carry many interactive anchor points. Although different anchor point types have different icons and positions, the same object may also mount several anchor points of the same type, so during interaction a user may forget which anchor points have already been triggered and which have not. Therefore, in an optional implementation, whether the anchor points are displayed at their positions on the image content or in a separate list, triggered and untriggered anchor points can be displayed differently. For example, untriggered anchor points may carry a halo animation effect, while triggered anchor points may be displayed in gray. The user can thus see which anchor points remain, continue to click and view the untriggered ones, and avoid re-triggering those already viewed.
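The differentiated display just described reduces to tracking a per-anchor triggered flag and mapping it to a style. A small sketch, with AnchorState and the style names being illustrative assumptions:

```python
# Sketch of triggered/untriggered anchor display: untriggered anchors get a
# halo animation, triggered ones turn gray (style names are assumptions).

class AnchorState:
    def __init__(self, anchor_ids):
        self.triggered = {aid: False for aid in anchor_ids}

    def trigger(self, aid):
        self.triggered[aid] = True          # remember which anchors were viewed

    def style(self, aid):
        return "gray" if self.triggered[aid] else "halo_animation"


state = AnchorState(["a1", "a2", "a3"])
state.trigger("a2")
assert state.style("a1") == "halo_animation"   # not yet viewed
assert state.style("a2") == "gray"             # already viewed
```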
S203: after the target interaction anchor point is triggered, displaying a corresponding operation control according to the interaction type associated with the target interaction anchor point;
After a specific interactive anchor point is displayed, it can be triggered by clicking it or in other ways. For example, the display of the operation control can be triggered by hovering the cursor over the anchor point, in which case the control may be displayed at the cursor's hover position, and so on. In this way, the user can quickly discover the operation control and proceed with subsequent interactive operations.
In the embodiment of the application, after the specific interactive anchor point is triggered, the corresponding operation control can be displayed according to the interactive type information associated with the target interactive anchor point, so as to control the interactive content associated with the target interactive anchor point. That is to say, after a specific anchor point is triggered, not only can corresponding interactive content be provided, but also a corresponding operation control can be provided, so that in the process of displaying or playing the interactive content, the interactive content can be controlled through the operation control. Therefore, the user can obtain the experience of actually controlling the commodity object.
The operation control provided can differ according to the type of the interactive anchor point. For example, as described above, the interaction type associated with the target anchor point may be one for automatically playing an animation of the image content associated with the target commodity object, in which case the interactive content comprises the animation content associated with that image content. In a specific implementation, the operation control can be generated according to the style and control mode of the corresponding actual control component on the target commodity object, so that operating the on-screen control simulates a real operation on the physical component. For example, for some commodity objects with mechanical structures, a control lever governs the object's travel, and the animation of the 3D model corresponds exactly to that travel process; the operation control can then be generated according to the appearance and control mode of that lever. To make this possible, the merchant or seller user can supply information about the actual control component, for example a 3D model of the component, when importing the image content and mounting an animation-type anchor point; the specific operation control can then be generated and displayed from that information.
In addition, the interaction type associated with the target anchor point may be one for controlling the image content associated with the target commodity object to produce an acousto-optic-electric special effect; in this case the interactive content comprises the media content used to produce the effect, with audio and light effects being typical examples. The operation control here may be one for controlling audio playback, including a progress bar for adjusting the playback position and options for start, pause, stop, fast-forward, and similar operations. For example, as shown in fig. 4-1, an audio-type anchor point is shown at 41; when triggered, the operation controls shown at 42, including a progress bar and a play button, may be displayed. Alternatively, as shown in fig. 4-2, a light-effect anchor point is shown at 43; when triggered, it may display the operation control shown at 44, which may include a slider for adjusting the brightness of the light effect.
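The audio operation control just described (progress bar plus start/pause/stop/fast-forward) can be modeled as a tiny state holder; this is a hedged sketch with assumed names, not a real media player:

```python
# Illustrative state model for the audio-type operation control:
# a progress bar (seek) and play/pause/stop/fast-forward operations.

class AudioControl:
    def __init__(self, duration_s):
        self.duration = duration_s
        self.position = 0.0
        self.playing = False

    def play(self):
        self.playing = True

    def pause(self):
        self.playing = False

    def stop(self):
        self.playing = False
        self.position = 0.0

    def seek(self, fraction):
        """Progress-bar drag: fraction in [0, 1] of total duration."""
        self.position = max(0.0, min(1.0, fraction)) * self.duration

    def fast_forward(self, seconds=10.0):
        self.position = min(self.duration, self.position + seconds)


ctl = AudioControl(duration_s=120.0)
ctl.play()
ctl.seek(0.5)           # drag the progress bar to the midpoint
ctl.fast_forward()      # skip ahead 10 seconds
assert ctl.playing and ctl.position == 70.0
```

A light-brightness control bar would be the analogous one-value slider over a brightness range.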
Moreover, the interaction type information associated with the specific target interaction anchor point may further include an interaction type for controlling the image content associated with the target commodity object to replace parts; in this case, the specific interaction content includes the image content of the replaceable parts. For example, a specific interaction scene may involve commodity objects of the assembly-toy class, for which part replacement is common. The 3D models or pictures of the specific replaceable parts may be created in advance by the merchant or seller user and configured as the interactive content corresponding to the specific interactive anchor point. After such an interactive anchor point is triggered, when the operation control is provided, a plurality of operation options may be provided according to the replaceable parts, so that the replaced target part can be selected through the operation options. For example, as shown in fig. 4-3, the pendants in the scene can be replaced, so an interactive anchor point of the part-replacement class is mounted at 45; when it is triggered, the operation control shown at 46 can be displayed, which may include a plurality of options for selecting another pendant. The number of options can be determined according to the number of parts in the interactive content configured for the specific interactive anchor point. As shown in fig. 4-4, the hub color of a toy car may be replaceable, so an interactive anchor point of the part-replacement class may be mounted at the position shown at 47; when it is triggered, the operation control shown at 48 may be displayed, in which the other selectable colors may be shown for the user to select and replace, and so on.
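Deriving the operation options from the configured replaceable parts can be sketched as follows; the part identifiers and file names are hypothetical examples, not from the embodiment:

```python
# Interactive content configured for a part-replacement anchor: the image
# content (model/picture ids) of each replaceable part.
replaceable_parts = {
    "pendant_mario": "model_pendant_mario.glb",
    "pendant_star": "model_pendant_star.glb",
    "pendant_mushroom": "model_pendant_mushroom.glb",
}

def build_options(parts: dict) -> list:
    """One operation option per replaceable part; the number of options
    follows the number of parts configured for the anchor."""
    return sorted(parts.keys())

def select_part(parts: dict, option: str) -> str:
    """Selecting an option returns the image content of the target part
    to be swapped into the displayed model."""
    return parts[option]

options = build_options(replaceable_parts)
chosen = select_part(replaceable_parts, options[0])
```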
In addition, the interaction type information associated with the specific target interaction anchor point may further include an interaction type for playing video and/or image-text content associated with the target commodity object; in this case, the specific interaction content may include that video and/or image-text content. After the interactive anchor point is triggered, operation controls for video control and the like can be provided. For example, as shown in fig. 4-5, a shoe in a scene mounts an interactive anchor point 49 of the video-playing class; when the anchor point is triggered, the operation control shown at 410 can be provided, which can be used to control the video playing process, including progress control, play/pause switching, and the like. Moreover, in a specific implementation, after a target interaction anchor point of the video and/or image-text playing class is triggered, a content floating layer can be created for playing or displaying the video and/or image-text content associated with the target commodity object.
In a specific implementation, the specific interactive anchor point may further be associated with document content, which may be used to prompt the interactive content and/or the interaction manner. In this case, the document content can be displayed together with the operation control. For example, as shown at 46 in fig. 4-3, a prompt such as "replaceable Mario ornament" may be provided; as shown at 48 in fig. 4-4, a prompt such as "color-replaceable hub" may be provided, and so on.
In addition, the interaction type associated with the specific target interaction anchor point may further include a composite type composed of a plurality of different interaction types; in this case, the interaction content includes a plurality of different types of interaction content. Correspondingly, a plurality of operation controls corresponding to the plurality of different interaction types associated with the target interaction anchor point can be displayed, so as to control the plurality of different types of interaction content. That is, a single anchor point may trigger a plurality of different types of interactive content and their respective operation controls.
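Dispatching one operation control per interaction type of a composite anchor can be sketched as a simple lookup; the type names and control identifiers are illustrative assumptions:

```python
# Map from interaction type to the operation control it contributes.
CONTROL_FOR_TYPE = {
    "animation": "animation-trigger-button",
    "audio": "audio-progress-bar",
    "part_replacement": "part-option-list",
    "video": "video-player-controls",
}

def controls_for_anchor(anchor_types: list) -> list:
    """Return the operation controls to display for the (possibly
    composite) interaction types associated with one anchor point."""
    return [CONTROL_FOR_TYPE[t] for t in anchor_types]

# A composite anchor combining model-animation playing and audio playing,
# as in the cassette example of fig. 4-1, yields both controls at once.
controls = controls_for_anchor(["animation", "audio"])
```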
For example, as shown in fig. 4-1, the specific interactive content may be a piece of audio played after the cassette deck is opened and a cassette tape is inserted. The corresponding interactive anchor point may carry two types of interactive content, namely model-animation playing and audio playing: the processes of opening the deck and inserting the tape are simulated through the model animation, and the played music is reproduced through the mounted audio content. After this interactive anchor point is triggered, an operation control for the "insert/eject cassette" operation and an operation control for controlling the audio playing, including a progress bar and the like, may be provided.
It should be noted that, when the above interactive functions are provided based on 2D pictures or videos, anchor points of types such as part replacement may simulate the replacement by partially replacing the picture content. The replacement effect may not be as realistic as that based on a 3D model, but the user can still obtain the corresponding information to some extent.
S204: processing the interactive content associated with the target interactive anchor point based on the input operation on the operation control.
After the specific operation control is displayed, the interactive content associated with the target interactive anchor point can be processed based on the input operation on the operation control. For example, the progress of associated audio or video interactive content can be controlled by dragging a progress-bar-class operation control, the motion state of the commodity object can be simulated by manipulating a slide-bar-class operation control, and so on.
Beyond providing rich interactive content through the various types of interactive anchor points and their corresponding operation controls, it is noted that the same target commodity object often includes a plurality of different stock keeping units (SKUs), for example different colors, and different SKUs correspond to different image content, including different 3D models, different 2D pictures, and the like. In this case, an operation option for switching between and displaying the image content of different SKUs can be provided. For example, the operation option may be called a "switcher"; through it, the specific 3D model or 2D picture/video can be replaced as a whole, and accordingly the batch of interactive anchor points can also be changed. That is, the same commodity object may correspond to different image content (including 3D models or 2D pictures/videos) under different SKUs, and may mount different interaction anchor points, associate different interaction content, and so on, according to the characteristics of each specific SKU.
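The SKU "switcher" described above swaps the whole model and the batch of anchor points in one step, which can be sketched as follows; the catalog layout and identifiers are illustrative assumptions:

```python
# Each SKU of the same commodity object has its own image content and its
# own set of mounted anchor points.
sku_catalog = {
    "red": {"image": "car_red.glb", "anchors": ["hub", "door"]},
    "blue": {"image": "car_blue.glb", "anchors": ["hub", "spoiler"]},
}

class SkuSwitcher:
    def __init__(self, catalog: dict, initial: str):
        self.catalog = catalog
        self.current = initial

    def switch(self, sku: str) -> dict:
        """The 'switcher' option: replace the whole 3D model or 2D
        picture/video and the batch of interaction anchor points for the
        chosen SKU."""
        self.current = sku
        return self.catalog[sku]

switcher = SkuSwitcher(sku_catalog, "red")
view = switcher.switch("blue")
```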
Moreover, for some toy-class commodity object interaction scenes, a certain commodity object is often used in combination with other related commodity objects. Therefore, the embodiment of the present application can also support importing a plurality of different commodity objects into the same scene, where the commodity objects have a combined-use relationship with one another. That is, the target commodity object currently being displayed may be associated with related commodity objects having the attribute of being used in combination with the target commodity object. In this case, an operation option for adding the image content of a related commodity object may further be provided, so as to show the display effect of the target commodity object combined with the other commodity objects. For example, for a certain scene, the current target commodity object displayed separately may appear as shown in fig. 4-6. If a plurality of related commodity objects exist for the target commodity object, their thumbnails and selection options can be shown at the lower part of the interface. After one of the related commodity objects is selected, the combined display effect shown in fig. 4-7 can be presented; after another related commodity object is selected, the combined display effect shown in fig. 4-8 can be presented, and so on. In addition, if there are a plurality of combination modes between the target commodity object and the same related commodity object, an operation option for switching the combined display effect may be provided.
In addition to the operation controls corresponding to the interactive anchor points and the aforementioned switcher, a global function controller may be provided to implement some general functions. For example, as shown at 39 in fig. 3-3, these may include hiding and displaying the interactive anchor points, displaying and hiding a size scale, a full-screen function, landscape/portrait switching, an AR (augmented reality) function, and the like.
It should be noted that, in a specific implementation, to facilitate displaying the various types of information, the specific display interface may be designed hierarchically; for example, it may be divided into a scene layer, a model layer, an interaction layer, and a content floating layer, wherein:
The scene layer is mainly used for displaying backgrounds, atmospheres, and the like, and can be designed by the merchant or seller user according to the characteristics of the specific commodity object, so as to better match the commodity object's model and the display of the interactive content.
The model layer is mainly used for importing the specific 3D model, realizing basic interaction with the model (rotation, scaling, and displacement), and carrying the interactive anchor points, commodity size information, and the like.
The interaction layer may mainly include the basic information of the commodity object (for example, the trade name and LOGO information such as the brand logo), the controller (that is, the display of the operation control triggered by a specific interaction anchor point), the switcher (that is, the overall switching between models corresponding to different SKUs of the commodity object, as described above), the global functions, and the like. The controller is used for controlling the corresponding interaction behavior of the commodity; in particular, it can be standardized in the form of components and applied to the interaction modes of various commodity objects.
The content floating layer can be used for carrying the content (including videos, pictures, and text) of the commodity object bound to an interactive anchor point; it can be called up by clicking the anchor point, expanded for viewing by sliding up, and collapsed by sliding down. That is to say, during the interaction, in addition to the display of content such as animations and part replacement realized on the model itself, for an anchor point used for playing interactive content such as video and/or image-text, the specific video and/or image-text content can be displayed in the content floating layer after the anchor point is triggered.
Through this layered design, decoupling among the different functional layers can be realized, and a modification to one layer does not affect the other functional layers. For example, if the background of a certain scene needs to be modified, only the scene layer needs to be changed; the other functional layers remain untouched.
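The decoupling property of the four-layer design can be sketched as follows; the layer contents are illustrative assumptions, and the point is only that patching one layer leaves the others unchanged:

```python
# The four layers of the display interface, each rendered independently.
layers = {
    "scene": {"background": "forest"},
    "model": {"model_id": "toycar.glb", "anchors": ["hub"]},
    "interaction": {"controls": [], "switcher": True},
    "content_float": {"visible": False},
}

def update_layer(layers: dict, name: str, patch: dict) -> dict:
    """Apply a modification to exactly one layer; decoupling means the
    remaining layers are returned unchanged."""
    updated = dict(layers)
    updated[name] = {**layers[name], **patch}
    return updated

# Modifying the scene background touches only the scene layer.
new_layers = update_layer(layers, "scene", {"background": "beach"})
```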
In a word, through the embodiment of the present application, interactive anchor points of various types can be mounted on the image content of a specific commodity object, and after an interactive anchor point is triggered, an operation control of the corresponding type can be provided, so that the user can control the interactive content associated with the anchor point through the operation control. In this way, the richness of the interaction capability can be improved, the simulation of the actual control process can be realized more effectively, and the detailed characteristics of the commodity object can be displayed more intuitively. From the perspective of the user, richer information about the commodity object can be obtained through the interactive content associated with the specific interactive anchor point, and the experience of actually controlling the commodity object can be obtained by operating the operation control, further promoting the user's sense of immersion.
It should be noted that, in practical application, the scheme provided by the embodiment of the present application may be applied to scenes other than the commodity object display scene. For example, for articles displayed in places such as auto shows and museums, it may also be necessary to provide an interactive experience for online users; in this case, interactive anchor points may likewise be added to the two-dimensional or three-dimensional image of the specific place and/or displayed article using the scheme provided in the embodiment of the present application, and the system may associate the corresponding control component according to the type of the specific interactive anchor point. Thus, during the interaction, after a specific interactive anchor point is triggered, the corresponding operation control can be displayed, so that the user can interactively browse more detailed information about the displayed article through the operation control. In addition, in scenes such as live broadcast, the scheme provided by the embodiment of the present application can also be used to provide interactive content for the user, and so on; these are not listed here one by one.
Embodiment Two
The second embodiment provides a method for generating interaction information of a commodity object, from the perspective of the server, for the process in which a merchant or seller user configures interaction information for a specific commodity object. Referring to fig. 5, the method may specifically include:
S501: receiving image content imported for a specified commodity object;
in a specific implementation, a specific editing interface may be provided for a merchant or a seller user through a Web interface or the like, and specific image content, including 3D model content, or 2D picture or video content, may be imported through the editing interface. Then, the user can perform operations such as mounting of the interactive anchor point on the basis of the image content.
S502: displaying the image content, and providing various types of interactive anchor points and operation options for adding the interactive anchor points to the image content;
Specifically, various types of interactive anchor points can be provided and displayed in the editing interface; in addition, an operation option for adding an interactive anchor point to the image content can be provided in a toolbar of the editing interface or the like. After this operation option is selected and the cursor enters the area where the image content is located, the display style can change to prompt the user that any position of the image content can be clicked in order to add an interactive anchor point at that position. Specifically, after the user clicks a certain position in the image content, multiple selectable interactive anchor point types can be provided, and the user can select one of them to add an interactive anchor point of the corresponding type at that position. Multiple types can also be selected, that is, multiple different interaction types can be associated with the same interactive anchor point, so that in more complex scenes, multiple types of interactive content can be triggered through the same anchor point.
S503: after receiving the interactive anchor adding operation through the operation options, determining the type of the added interactive anchor and the information of the associated interactive content;
S504: associating a corresponding control component with the interactive anchor point according to its type, where the control component is used for providing a corresponding operation control according to the type of the interactive anchor point when the interactive anchor point is triggered, and for executing the processing flow, so as to process the interactive content associated with the target interactive anchor point.
Because the control component corresponding to a specific interaction type can be implemented in advance, it provides the capability support for subsequently displaying the specific operation control and realizing the specific control logic. Therefore, when an interactive anchor point is triggered, the corresponding operation control can be provided according to the type of the anchor point, and the control logic can be executed, so that the interactive content associated with the target interactive anchor point can be controlled.
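The configure-time association of S503/S504 and the trigger-time lookup can be sketched as follows; the registry layout and identifiers are illustrative assumptions, not part of the embodiment:

```python
# Control components implemented in advance, one per interaction type;
# each supplies the operation control for a given piece of interactive content.
CONTROL_COMPONENTS = {
    "animation": lambda content: {"control": "play-button", "content": content},
    "audio": lambda content: {"control": "progress-bar", "content": content},
}

anchors = {}

def add_anchor(anchor_id: str, anchor_type: str, interactive_content: str):
    """Configure time (S503/S504): record the anchor's type and content,
    and associate the pre-implemented control component for that type."""
    anchors[anchor_id] = {
        "type": anchor_type,
        "content": interactive_content,
        "component": CONTROL_COMPONENTS[anchor_type],
    }

def trigger(anchor_id: str) -> dict:
    """Trigger time: the associated component provides the operation
    control for the anchor's interactive content."""
    a = anchors[anchor_id]
    return a["component"](a["content"])

add_anchor("a1", "audio", "song.mp3")
result = trigger("a1")
```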
For the parts of the second embodiment that are not described in detail, reference may be made to the descriptions of the first embodiment, and details are not repeated here.
Embodiment Three
The third embodiment further provides another information interaction method, referring to fig. 6, the method may include:
S601: displaying a three-dimensional model associated with the target interaction scene and at least one interaction anchor point associated with the three-dimensional model;
the specific target interaction scenario may be various, and for example, the specific target interaction scenario may include: and the commodity object interactively shows a scene, wherein the three-dimensional model comprises a three-dimensional model of the commodity object. Alternatively, the target interaction scenario may also include: a scene for interactive exhibition of the exhibited items in a target site, where the three-dimensional model includes a three-dimensional model of the site and/or the exhibited items, and so on.
S602: after the target interaction anchor point is triggered, displaying a corresponding operation control according to the interaction type associated with the target interaction anchor point;
S603: processing the interactive content associated with the target interactive anchor point based on the input operation on the operation control.
For the parts of the third embodiment that are not described in detail, reference may also be made to the descriptions of the first embodiment, and the description thereof is omitted here.
It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the schemes described herein within the scope permitted by the applicable laws and regulations of the relevant country and under conditions that meet their requirements (for example, with the user's explicit consent, after informing the user, etc.).
Corresponding to the first embodiment, an embodiment of the present application further provides a commodity object information interaction apparatus, referring to fig. 7, the apparatus may include:
a commodity object determination unit 701 for determining a target commodity object;
an image content display unit 702, configured to display image content associated with the target commodity object and at least one interaction anchor point associated with the image content;
the operation control display unit 703 is configured to display a corresponding operation control according to the interaction type associated with the target interaction anchor after the target interaction anchor is triggered;
an interaction processing unit 704, configured to process the interaction content associated with the target interaction anchor point based on the input operation on the operation control.
Wherein the image content associated with the target commodity object comprises: a three-dimensional model of the target merchandise object.
Or, the image content associated with the target commodity object includes: a two-dimensional picture or video of the target merchandise object.
Specifically, the interaction type associated with the target interaction anchor point includes: an interaction type for automatically playing an animation of the image content associated with the target commodity object, where the interactive content includes the animation content associated with the image content.
At this time, the operation control display unit may specifically be configured to:
and providing a corresponding operation control according to the style and the control mode of the corresponding actual control component in the target commodity object, so as to simulate the actual operation on the actual control component by operating the operation control.
Or, the interaction type associated with the target interaction anchor point includes: an interaction type for controlling the image content associated with the target commodity object to generate sound, light, or electric special effects, where the interactive content includes media content for generating such special effects.
Or, the interaction type associated with the target interaction anchor point includes: an interaction type for controlling the image content associated with the target commodity object to perform part replacement, where the interactive content includes the image content of the replaceable parts.
At this time, the operation control display unit may specifically be configured to:
and providing a plurality of operation options according to the replaceable parts, so as to be used for carrying out selection operation on the replaced target parts through the operation options.
Or, the interaction type associated with the target interaction anchor point includes: an interaction type for playing video and/or image-text content associated with the target commodity object, where the interactive content includes that video and/or image-text content.
At this time, the apparatus may further include:
and the content floating layer creating unit is used for creating a content floating layer after the target interaction anchor point is triggered so as to play or display the video and/or image-text content associated with the target commodity object.
And different types of interactive anchor points correspond to different icon styles.
In addition, the interactive anchor point may also be associated with document content, which is used to prompt the interactive content and/or the interaction manner; in this case, the apparatus may further include:
and the document content display unit is used for displaying the document content when the operation control is displayed.
In addition, the interaction type associated with the target interaction anchor point comprises a composite type formed by a plurality of different interaction types, and the interaction content comprises a plurality of different types of interaction content;
at this time, the operation control display unit may specifically be configured to:
and displaying a plurality of operation controls respectively corresponding to a plurality of different interaction types associated with the target interaction anchor point so as to control the plurality of different types of interaction contents.
In specific implementation, the interactive anchor point may be associated with position information in the image content; at this time, the apparatus may further include:
the display area determining unit is used for determining the current display area in the image content associated with the target commodity object;
and an anchor point display control unit, configured to display the interactive anchor points located within the current display area of the image content.
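The behavior of the display area determining unit and the anchor point display control unit can be sketched as a visibility filter; the normalized coordinates and anchor names are illustrative assumptions:

```python
# Each anchor point is associated with a position in the image content
# (here, normalized 0..1 coordinates).
anchor_positions = {
    "hub": (0.2, 0.3),
    "door": (0.7, 0.5),
    "spoiler": (0.95, 0.9),
}

def visible_anchors(anchors: dict, area: tuple) -> list:
    """area = (x_min, y_min, x_max, y_max): the part of the image content
    currently on screen; only anchors inside it are displayed."""
    x0, y0, x1, y1 = area
    return sorted(
        name for name, (x, y) in anchors.items()
        if x0 <= x <= x1 and y0 <= y <= y1
    )

# With the view zoomed to the lower-left 80% of the image, the spoiler
# anchor falls outside and is hidden.
shown = visible_anchors(anchor_positions, (0.0, 0.0, 0.8, 0.8))
```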
Furthermore, the apparatus may further include:
the anchor point list display unit is used for providing an interactive anchor point list in an area outside the image content in a display interface, and the interactive anchor point list is used for displaying all interactive anchor points related to the image content;
the operation control display unit can be further used for: and after receiving the trigger operation of the target interactive anchor point in the interactive anchor point list, displaying a corresponding operation control according to the interactive type information associated with the target interactive anchor point so as to control the interactive content associated with the target interactive anchor point.
In addition, the apparatus may further include:
and the distinguishing and displaying unit is used for distinguishing and displaying the triggered interactive anchor points and the interactive anchor points which are not triggered.
The target commodity object comprises a plurality of different minimum Stock Keeping Units (SKUs), and the different SKUs correspond to different image contents; at this time, the apparatus may further include:
and the switching option providing unit is used for providing operation options for switching and displaying the image contents of different SKUs.
In specific implementation, the target commodity object may be associated with a related commodity object, and the related commodity object has an attribute used in combination with the target commodity object; at this time, the apparatus may further include:
and the adding option providing unit is used for providing an operation option for adding the image content of the related commodity object so as to provide a display effect after the target commodity object is combined with the other commodity objects.
In addition, the apparatus may further include:
and a combination mode switching unit, configured to provide operation options for switching the combination mode if there are multiple combination modes between the target commodity object and one of the related commodity objects.
Corresponding to the second embodiment, an embodiment of the present application further provides an apparatus for generating interaction information of a commodity object, referring to fig. 8, where the apparatus may include:
an image content receiving unit 801 for receiving image content imported for a specified commodity object;
an operation option providing unit 802, configured to display the image content, provide multiple types of interactive anchors, and add an operation option of an interactive anchor to the image content;
an anchor adding unit 803, configured to determine the type of the added interactive anchor and information of associated interactive content after receiving the interactive anchor adding operation through the operation option;
a control component association unit 804, configured to associate a corresponding control component with the interactive anchor according to the type of the interactive anchor, where the control component is configured to provide a corresponding operation control according to the type of the interactive anchor when the interactive anchor is triggered, and execute a processing procedure, so as to process the interactive content associated with the target interactive anchor.
Corresponding to the third embodiment, the embodiment of the present application further provides an information interaction apparatus, referring to fig. 9, including:
a three-dimensional model display unit 901, configured to display a three-dimensional model associated with the target interaction scene and at least one interaction anchor point associated with the three-dimensional model;
an operation control display unit 902, configured to display, after a target interaction anchor is triggered, a corresponding operation control according to an interaction type associated with the target interaction anchor;
and an interaction processing unit 903, configured to process the interaction content associated with the target interaction anchor point based on an input operation on the operation control.
Wherein the target interaction scenario comprises: and displaying a scene by commodity object interaction, wherein the three-dimensional model comprises a three-dimensional model of the commodity object.
Or, the target interaction scene includes: and the three-dimensional model comprises a three-dimensional model of the place and/or the exhibited article.
In addition, the present application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method described in any of the preceding method embodiments.
And an electronic device comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform the steps of the method of any of the preceding method embodiments.
Fig. 10 illustrates the architecture of an electronic device. For example, the device 1000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, an aircraft, and so on.
Referring to fig. 10, device 1000 may include one or more of the following components: processing component 1002, memory 1004, power component 1006, multimedia component 1008, audio component 1010, input/output (I/O) interface 1012, sensor component 1014, and communications component 1016.
The processing component 1002 generally controls the overall operation of the device 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 1002 may include one or more processors 1020 to execute instructions to perform all or a portion of the steps of the methods provided by the disclosed subject matter. Further, processing component 1002 may include one or more modules that facilitate interaction between processing component 1002 and other components. For example, the processing component 1002 can include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operation at the device 1000. Examples of such data include instructions for any application or method operating on device 1000, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1004 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1006 provides power to the various components of the device 1000. The power components 1006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1000.
The multimedia component 1008 includes a screen that provides an output interface between the device 1000 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1008 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 1000 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 may include a Microphone (MIC) configured to receive external audio signals when the device 1000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, audio component 1010 also includes a speaker for outputting audio signals.
I/O interface 1012 provides an interface between processing component 1002 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1014 includes one or more sensors for providing status assessment of various aspects of the device 1000. For example, sensor assembly 1014 may detect the open/closed status of device 1000, the relative positioning of components, such as a display and keypad of device 1000, the change in position of device 1000 or a component of device 1000, the presence or absence of user contact with device 1000, the orientation or acceleration/deceleration of device 1000, and the change in temperature of device 1000. The sensor assembly 1014 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
Communications component 1016 is configured to facilitate communications between device 1000 and other devices in a wired or wireless manner. The device 1000 may access a wireless network based on a communication standard, such as WiFi, or a mobile communication network such as 2G, 3G, 4G/LTE, 5G, etc. In an exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1016 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1004 comprising instructions, executable by the processor 1020 of the device 1000 to perform the methods provided by the aspects of the present disclosure is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
From the above description of the embodiments, it will be clear to those skilled in the art that the present application can be implemented by means of software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solutions of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a software product. The software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the present application.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments, being substantially similar to the method embodiments, are described relatively briefly; for related details, reference may be made to the corresponding descriptions of the method embodiments. The systems and system embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of a given embodiment. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The commodity object information interaction method, apparatus, and electronic device provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present application, make changes to the specific implementations and the application scope. In view of the above, the content of this specification should not be construed as limiting the present application.
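As a purely illustrative sketch of the interaction flow summarized in this application (interactive anchors displayed over image content; a triggered anchor shows an operation control chosen by its interaction type; input on the control drives processing of the associated interactive content), the fragment below maps interaction types to operation controls. Every name here (InteractiveAnchor, CONTROL_REGISTRY, the type strings) is a hypothetical stand-in and not the actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class InteractiveAnchor:
    anchor_id: str
    interaction_type: str      # e.g. "animation", "part_replacement", "media_effect"
    interactive_content: dict  # content the operation control acts upon

# Map each interaction type to a factory producing the matching operation control.
CONTROL_REGISTRY: dict[str, Callable[[InteractiveAnchor], str]] = {
    "animation": lambda a: f"play-control:{a.anchor_id}",
    "part_replacement": lambda a: f"option-list:{a.anchor_id}",
    "media_effect": lambda a: f"effect-trigger:{a.anchor_id}",
}

def on_anchor_triggered(anchor: InteractiveAnchor) -> str:
    """Return the operation control matching the anchor's interaction type."""
    factory = CONTROL_REGISTRY.get(anchor.interaction_type)
    if factory is None:
        raise ValueError(f"unknown interaction type: {anchor.interaction_type}")
    return factory(anchor)

def on_control_input(anchor: InteractiveAnchor, user_input: str) -> dict:
    """Process the interactive content based on the input on the operation control."""
    return {"anchor": anchor.anchor_id, "input": user_input,
            "content": anchor.interactive_content}
```

A registry keyed by interaction type keeps the anchor data declarative: adding a new interaction type (as in the composite type of claim 13) only extends the mapping.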

Claims (28)

1. A commodity object information interaction method is characterized by comprising the following steps:
determining a target commodity object;
displaying the image content associated with the target commodity object and at least one interactive anchor point associated with the image content;
after the target interaction anchor point is triggered, displaying a corresponding operation control according to the interaction type associated with the target interaction anchor point;
and processing the interactive content associated with the target interactive anchor point based on the input operation of the operation control.
2. The method of claim 1,
the image content associated with the target commodity object comprises: a three-dimensional model of the target merchandise object.
3. The method of claim 1,
the image content associated with the target commodity object comprises: a two-dimensional picture or video of the target merchandise object.
4. The method of claim 1,
the interaction type associated with the target interaction anchor point comprises a type for which the interactive content comprises animation content related to the image content.
5. The method of claim 4,
the displaying of the corresponding operation control comprises:
providing a corresponding operation control according to the style and the control mode of the corresponding actual control component of the target commodity object, so that an actual operation on the actual control component is simulated by operating the operation control.
6. The method of claim 1,
the interaction type associated with the target interaction anchor point comprises a type for which the interactive content comprises media content for generating acousto-optic-electric special effects.
7. The method of claim 1,
the interaction type associated with the target interaction anchor point comprises: a type for controlling the image content associated with the target commodity object to carry out part replacement, wherein the interactive content comprises image content of the replaceable parts.
8. The method of claim 7,
the displaying of the corresponding operation control comprises:
providing a plurality of operation options according to the replaceable parts, for selecting, through the operation options, the target part to be used as a replacement.
9. The method of claim 1,
the interaction type associated with the target interaction anchor point comprises a type for which the interactive content comprises video and/or image-text content associated with the target commodity object.
10. The method of claim 9, further comprising:
after the target interaction anchor point is triggered, creating a content floating layer for playing or displaying the video and/or image-text content associated with the target commodity object.
11. The method of claim 1,
different types of interactive anchor points correspond to different icon styles.
12. The method of claim 1,
the interactive anchor point is further associated with text content for prompting the interactive content and/or the interaction mode;
the method further comprises the following steps:
displaying the text content when the operation control is displayed.
13. The method of claim 1,
the interaction type associated with the target interaction anchor point comprises a composite type formed by a plurality of different interaction types, and the interaction content comprises a plurality of different types of interaction content;
the displaying of the corresponding operation control comprises:
displaying a plurality of operation controls respectively corresponding to the plurality of different interaction types associated with the target interaction anchor point, so as to control the plurality of different types of interactive content.
14. The method of claim 1,
the interactive anchor point is associated with position information in the image content;
the method further comprises the following steps:
determining a current display area in the image content associated with the target commodity object;
displaying the interactive anchor points located in the current display area of the image content.
15. The method of claim 14, further comprising:
providing an interactive anchor point list in an area outside the image content in a display interface, wherein the interactive anchor point list is used for displaying all interactive anchor points related to the image content;
after receiving a trigger operation on a target interactive anchor point in the interactive anchor point list, displaying a corresponding operation control according to the interaction type information associated with the target interactive anchor point, so as to control the interactive content associated with the target interactive anchor point.
16. The method of claim 1, further comprising:
displaying the interactive anchor points that have been triggered and those that have not been triggered in a differentiated manner.
17. The method of claim 1,
the target commodity object comprises a plurality of different minimum Stock Keeping Units (SKUs), and the different SKUs correspond to different image contents;
the method further comprises the following steps:
providing operation options for switching among and displaying the image contents of the different SKUs.
18. The method of claim 1,
the target commodity object is also associated with a related commodity object, and the related commodity object has an attribute used in combination with the target commodity object;
the method further comprises the following steps:
providing an operation option for adding the image content of the related commodity object, so as to present a display effect of the target commodity object combined with the related commodity object.
19. The method of claim 18,
if there are a plurality of combination modes between the target commodity object and one of the related commodity objects, providing an operation option for switching among the combination modes.
20. A method for generating interaction information of commodity objects is characterized by comprising the following steps:
receiving image content imported for a specified commodity object;
displaying the image content, providing various types of interactive anchor points and operation options for adding the interactive anchor points in the image content;
after receiving an interactive anchor point adding operation through the operation options, determining the type of the added interactive anchor point and the information of the associated interactive content;
wherein the type of the interactive anchor point is used for providing a corresponding operation control when the interactive anchor point is triggered, and for executing a processing flow to process the interactive content associated with the target interactive anchor point.
21. An information interaction method, comprising:
displaying a three-dimensional model associated with the target interaction scene and at least one interaction anchor point associated with the three-dimensional model;
after the target interaction anchor point is triggered, displaying a corresponding operation control according to the interaction type associated with the target interaction anchor point;
and processing the interactive content associated with the target interactive anchor point based on the input operation of the operation control.
22. The method of claim 21,
the target interaction scene comprises: a commodity object interactive display scene, wherein the three-dimensional model comprises a three-dimensional model of the commodity object.
23. The method of claim 21,
the target interaction scene comprises: a place and/or exhibit display scene, wherein the three-dimensional model comprises a three-dimensional model of the place and/or the exhibited articles.
24. A commodity object information interaction device, comprising:
a commodity object determination unit for determining a target commodity object;
the image content display unit is used for displaying the image content associated with the target commodity object and at least one interactive anchor point associated with the image content;
the operation control display unit is used for displaying the corresponding operation control according to the interaction type associated with the target interaction anchor point after the target interaction anchor point is triggered;
and the interactive processing unit is used for processing the interactive content associated with the target interactive anchor point based on the input operation of the operation control.
25. An apparatus for generating interactive information for merchandise objects, comprising:
an image content receiving unit configured to receive image content imported for a specified commodity object;
the operation option providing unit is used for displaying the image content, and providing various types of interactive anchor points and operation options for adding the interactive anchor points in the image content;
the anchor point adding unit is used for determining the type of the added interactive anchor point and the information of the associated interactive content after receiving the interactive anchor point adding operation through the operation options;
and the control component is used for providing a corresponding operation control according to the type of the interactive anchor point when the interactive anchor point is triggered, and for executing a processing flow to process the interactive content associated with the target interactive anchor point.
26. An information interaction device, comprising:
the three-dimensional model display unit is used for displaying a three-dimensional model associated with the target interaction scene and at least one interaction anchor point associated with the three-dimensional model;
the operation control display unit is used for displaying the corresponding operation control according to the interaction type associated with the target interaction anchor point after the target interaction anchor point is triggered;
and the interactive processing unit is used for processing the interactive content associated with the target interactive anchor point based on the input operation of the operation control.
27. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the steps of the method of any one of claims 1 to 23.
28. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform the steps of the method of any of claims 1 to 23.
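The generation flow of claims 20 and 25 (import image content, offer anchor types and an add option, record each added anchor's type and associated interactive content) might be sketched as follows. The class name, the supported type strings, and the dictionary layout are assumptions made for illustration only, not the claimed implementation.

```python
class AnchorAuthoringTool:
    """Hypothetical authoring tool that attaches interactive anchors to image content."""

    SUPPORTED_TYPES = ("animation", "media_effect", "part_replacement", "video_text")

    def __init__(self, image_content: str):
        self.image_content = image_content   # imported image content identifier
        self.anchors: list[dict] = []

    def add_anchor(self, anchor_type: str, position: tuple[float, float],
                   interactive_content: dict) -> dict:
        """Record a newly added anchor's type, position, and associated content."""
        if anchor_type not in self.SUPPORTED_TYPES:
            raise ValueError(f"unsupported anchor type: {anchor_type}")
        anchor = {"type": anchor_type, "position": position,
                  "content": interactive_content}
        self.anchors.append(anchor)
        return anchor
```

The recorded anchor list is exactly the data a viewer would need at display time to provide the operation control matching each anchor's type.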
CN202011357203.8A 2020-11-26 2020-11-26 Commodity object information interaction method and device and electronic equipment Pending CN113298602A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011357203.8A CN113298602A (en) 2020-11-26 2020-11-26 Commodity object information interaction method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN113298602A true CN113298602A (en) 2021-08-24

Family

ID=77318449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011357203.8A Pending CN113298602A (en) 2020-11-26 2020-11-26 Commodity object information interaction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113298602A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837830A (en) * 2021-09-13 2021-12-24 珠海格力电器股份有限公司 Product display method, display device and electronic equipment
CN114398118A (en) * 2021-12-21 2022-04-26 深圳市易图资讯股份有限公司 Intelligent positioning system and method for smart city based on space anchor
CN114398118B (en) * 2021-12-21 2023-03-24 深圳市易图资讯股份有限公司 Intelligent positioning system and method for smart city based on space anchor
CN114564246A (en) * 2022-02-18 2022-05-31 北京炎黄盈动科技发展有限责任公司 Method, device, equipment and medium for drawing graph anchor points on line
WO2023207901A1 (en) * 2022-04-29 2023-11-02 北京有竹居网络技术有限公司 Interaction method and apparatus, device and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734538A (en) * 2017-04-24 2018-11-02 阿里巴巴集团控股有限公司 The method and device of data object information is provided
CN109636548A (en) * 2019-01-22 2019-04-16 广东亿润网络技术有限公司 A kind of more Merchant sales systems of VR scene
CN111475013A (en) * 2019-01-24 2020-07-31 阿里巴巴集团控股有限公司 Commodity object information processing method and device and electronic equipment
CN111626807A (en) * 2019-02-28 2020-09-04 阿里巴巴集团控股有限公司 Commodity object information processing method and device and electronic equipment



Similar Documents

Publication Publication Date Title
CN113298602A (en) Commodity object information interaction method and device and electronic equipment
EP3985593A1 (en) Method for processing live streaming data and electronic device
CN110337023B (en) Animation display method, device, terminal and storage medium
WO2021233245A1 (en) Method and apparatus for providing commodity object information, and electronic device
WO2022247208A1 (en) Live broadcast data processing method and terminal
CN109754298B (en) Interface information providing method and device and electronic equipment
CN111626807A (en) Commodity object information processing method and device and electronic equipment
CN113065021B (en) Video preview method, apparatus, electronic device, storage medium and program product
KR101831802B1 (en) Method and apparatus for producing a virtual reality content for at least one sequence
US20140229823A1 (en) Display apparatus and control method thereof
CN109754275B (en) Data object information providing method and device and electronic equipment
CN109947506B (en) Interface switching method and device and electronic equipment
CN110506247B (en) System and method for interactive elements within a virtual reality environment
CN104156151A (en) Image display method and image display device
KR102121107B1 (en) Method for providing virtual reality tour and record media recorded program for implement thereof
CN110321042B (en) Interface information display method and device and electronic equipment
WO2019095810A1 (en) Interface display method and device
KR20190075596A (en) Method for creating augmented reality contents, method for using the contents and apparatus using the same
CN113420350A (en) Object, commodity, clothes try-on display processing method and device and electronic equipment
CN115499479A (en) Commodity comparison display method and device and electronic equipment
KR101806922B1 (en) Method and apparatus for producing a virtual reality content
CN114222173A (en) Object display method and device, electronic equipment and storage medium
CN115170220A (en) Commodity information display method and electronic equipment
CN110457032B (en) Data object information interface generation and display method and device
CN114363646A (en) Target object display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40057024
Country of ref document: HK