WO2022095733A1 - Information processing method and apparatus - Google Patents

Information processing method and apparatus

Info

Publication number
WO2022095733A1
WO2022095733A1 (PCT/CN2021/125720)
Authority
WO
WIPO (PCT)
Prior art keywords
preset
scene
dimensional
display effect
scene information
Prior art date
Application number
PCT/CN2021/125720
Other languages
English (en)
French (fr)
Inventor
邓晓旭
Original Assignee
北京沃东天骏信息技术有限公司
北京京东世纪贸易有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京沃东天骏信息技术有限公司, 北京京东世纪贸易有限公司 filed Critical 北京沃东天骏信息技术有限公司
Publication of WO2022095733A1 publication Critical patent/WO2022095733A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures

Definitions

  • the embodiments of the present application relate to the field of computer technologies, and in particular, to an information processing method and apparatus.
  • the embodiments of the present application provide an information processing method and apparatus.
  • the embodiments of the present application provide an information processing method, including: determining a three-dimensional display effect for a preset item according to a configuration operation of a target merchant; rendering, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item; and binding the three-dimensional scene information to the preset item in a preset display page.
  • the web-side augmented reality technology adopts a componentized design. Rendering the three-dimensional scene information for the preset item, with the three-dimensional display effect as the target and based on the web-side augmented reality technology, includes: determining, according to the three-dimensional display effect, a target component that realizes the three-dimensional display effect from the components constituting the web-side augmented reality technology; and rendering, through the target component and with the three-dimensional display effect as the target, the three-dimensional scene information for the preset item. The components include: a scene component, representing the scene that accommodates all the items to be rendered; a camera component, representing the position and direction from which the scene is captured; and a renderer component, used to render the three-dimensional scene information for the preset item according to the scene captured by the camera component.
  • the components further include: a light source component, which represents the light rendering effect in the scene; a texture component, which represents the texture of the object to be rendered; and a geometry component, which represents the geometry in the scene.
  • determining the three-dimensional display effect for the preset item according to the configuration operation of the target merchant includes: determining a scene including the preset item according to an editing operation of the target merchant; and determining configuration data of the scene according to a data configuration operation of the target merchant, to obtain the three-dimensional display effect.
  • the above method further includes: in response to determining that the first preset operation of the target user on the preset display page is received, displaying the three-dimensional scene information bound to the preset item to the target user.
  • the above method further includes: in response to determining that a second preset operation of the target user on the preset display page is received, acquiring a real image through a camera; rendering the three-dimensional scene information bound to the preset item into the real image, and displaying to the target user the real image after the three-dimensional scene information has been rendered.
  • rendering the 3D scene information bound to the preset item into the real image includes: detecting whether a preset identification image exists in the real image; and, in response to determining that the preset identification image exists in the real image, rendering the 3D scene information bound to the preset item to the position indicated by the preset identification image in the real image.
  • an embodiment of the present application provides an information processing apparatus, including: a determining unit configured to determine a three-dimensional display effect for a preset item according to a configuration operation of a target merchant; a rendering unit configured to render, with the three-dimensional display effect as the target and based on web-side augmented reality technology, the 3D scene information for the preset item; and a binding unit configured to bind the 3D scene information to the preset item in the preset display page.
  • the web-side augmented reality technology adopts a componentized design; the rendering unit is further configured to: determine, according to the three-dimensional display effect, a target component that realizes the three-dimensional display effect from the components constituting the web-side augmented reality technology; and render, through the target component and with the three-dimensional display effect as the target, the 3D scene information for the preset item; where the components include: a scene component, representing the scene that accommodates all the items to be rendered; a camera component, representing the position and direction from which the scene is captured; and a renderer component, used to render the 3D scene information for the preset item according to the scene captured by the camera component.
  • the components further include: a light source component, which represents the light rendering effect in the scene; a texture component, which represents the texture of the object to be rendered; and a geometry component, which represents the geometry in the scene.
  • the determining unit is further configured to: determine the scene including the preset item according to the editing operation of the target merchant; determine the configuration data of the scene according to the data configuration operation of the target merchant to obtain a three-dimensional display effect.
  • the above-mentioned apparatus further includes: a first display unit, configured to, in response to determining that the first preset operation of the target user on the preset display page is received, display to the target user the 3D scene information bound to the preset item.
  • the above-mentioned apparatus further includes: a second display unit, configured to, in response to determining that the second preset operation of the target user on the preset display page is received, acquire a real image through a camera, render the 3D scene information bound to the preset item into the real image, and display to the target user the real image after the 3D scene information has been rendered.
  • the second display unit is further configured to: detect whether a preset identification image exists in the real image; and, in response to determining that the preset identification image exists in the real image, render the 3D scene information bound to the preset item to the position indicated by the preset identification image in the real image.
  • an embodiment of the present application provides a computer-readable medium on which a computer program is stored, wherein the method described in any implementation manner of the first aspect is implemented when the program is executed by a processor.
  • an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
  • the information processing method and apparatus determine a three-dimensional display effect for a preset item according to a configuration operation of a target merchant; render, with the three-dimensional display effect as the target and based on web-side augmented reality technology, the 3D scene information for the preset item; and bind the 3D scene information to the preset item in the preset display page, thereby providing an information processing method that generates the 3D scene information of the preset item and enriches the ways in which the preset item can be displayed.
  • FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
  • Fig. 2 is a flowchart according to an embodiment of the information processing method of the present application
  • FIG. 3 is a schematic diagram of an application scenario of the information processing method according to the present embodiment.
  • FIG. 4 is a flow chart of yet another embodiment of an information processing method according to the present application.
  • FIG. 5 is a structural diagram of an embodiment of an information processing apparatus according to the present application.
  • FIG. 6 is a schematic structural diagram of a computer system suitable for implementing the embodiments of the present application.
  • FIG. 1 shows an exemplary architecture 100 to which the information processing method and apparatus of the present application may be applied.
  • the system architecture 100 may include terminal devices 101 , 102 , and 103 , a network 104 and a server 105 .
  • the communication connections among the terminal devices 101 , 102 , and 103 constitute a topology network, and the network 104 is used to provide a medium for communication links between the terminal devices 101 , 102 , and 103 and the server 105 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the terminal devices 101, 102, and 103 may be hardware devices or software that support network connections for data interaction and data processing.
  • when the terminal devices 101, 102 and 103 are hardware, they can be various electronic devices that support network connection, information interaction, display, processing and other functions, including but not limited to smart phones, tablet computers, e-book readers, laptop computers, desktop computers and the like.
  • when the terminal devices 101, 102 and 103 are software, they can be installed in the electronic devices listed above. They can be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
  • the server 105 may be a server that provides various services, such as a background processing server that determines the three-dimensional display effect of preset items according to the configuration operation of the target merchant on the terminal devices 101 , 102 , and 103 .
  • the background processing server determines the 3D display effect for the preset item according to the configuration operation of the target merchant; renders, with the 3D display effect as the target and based on web-side augmented reality technology, the 3D scene information for the preset item; and binds the 3D scene information to the preset item in the preset display page.
  • the background processing server may display the three-dimensional scene information of the preset item to the target user.
  • the server 105 may be a cloud server.
  • the server may be hardware or software.
  • the server can be implemented as a distributed server cluster composed of multiple servers, or can be implemented as a single server.
  • the server is software, it can be implemented as a plurality of software or software modules (for example, software or software modules for providing distributed services), or can be implemented as a single software or software module. There is no specific limitation here.
  • each part (e.g., each unit, sub-unit, module or sub-module) included in the information processing apparatus may be provided entirely in the server, entirely in the terminal device, or distributed between the server and the terminal device.
  • the numbers of terminal devices, networks and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks and servers according to implementation needs.
  • the system architecture may only include the electronic device (such as a server or terminal device) on which the information processing method runs.
  • a flow 200 of an embodiment of an information processing method is shown, including the following steps:
  • Step 201 according to the configuration operation of the target merchant, determine the three-dimensional display effect for the preset item.
  • the execution body of the information processing method may determine the three-dimensional display effect for the preset item according to the configuration operation of the target merchant.
  • the target merchant can be any merchant in the e-commerce platform, and has control authority over the preset items.
  • the preset item may be an item sold by the target merchant on the e-commerce platform.
  • the target merchant can perform a configuration operation on the preset item to determine its three-dimensional display effect. It can be understood that, compared with pictures, videos and the like, a display based on three-dimensional information makes it easier for the target user to connect with the preset item.
  • as an example, various scenes are pre-stored in the above-mentioned execution body or in an electronic device communicatively connected with it; the configuration operation may be an operation in which the target merchant, according to the three-dimensional model information of the preset item and a scene selected by the target merchant, places the three-dimensional model information into the selected scene.
  • the configuration operation may also be an operation for the target merchant to configure the scene according to the material selected from the material library.
  • the target merchant can configure the scene name, material location, background information and other information through the configuration operation.
  • the material library provides various information for scene configuration, including but not limited to background information, texture information and the like.
  • the foregoing execution subject may perform the foregoing step 201 in the following manner:
  • first, according to the editing operation of the target merchant, the scene including the preset item is determined.
  • as an example, the target merchant determines, through the editing operation, all the information included in the scene, including but not limited to the three-dimensional model information of the preset item, background information, texture information and the like.
  • then, according to the data configuration operation of the target merchant, the configuration data of the scene is determined to obtain the three-dimensional display effect.
  • as an example, the target merchant configures the relative positions of, and the relationships among, all the information in the determined scene, and the above-mentioned execution subject obtains the three-dimensional display effect according to this data configuration operation (a sketch of such configuration data follows below).
  • the target merchant can set multiple three-dimensional display effects for the same preset item through multiple configuration operations.
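  • purely as an illustration (not part of the application), the configuration data produced by such editing and data-configuration operations could be captured in a structure like the following minimal TypeScript sketch; every field name here is an assumption rather than something the application defines.

```typescript
// Hypothetical sketch of the configuration data a merchant's editing and
// data-configuration operations might produce; all names are assumptions.
interface MaterialRef {
  materialId: string;                    // entry selected from the material library
  position: [number, number, number];    // relative placement in the scene
}

interface DisplayEffectConfig {
  sceneName: string;                     // e.g. "rotating display"
  itemModelUrl: string;                  // 3D model information of the preset item
  background?: string;                   // background image or colour
  materials: MaterialRef[];              // textures, props and other scene content
  rotationDegrees: number;               // e.g. 180 or 360 for a rotating display
}

// Example: one of possibly several display effects configured for the same item.
const spinEffect: DisplayEffectConfig = {
  sceneName: '360-degree spin',
  itemModelUrl: 'models/item.glb',
  background: 'textures/studio.jpg',
  materials: [{ materialId: 'wood-floor', position: [0, -0.5, 0] }],
  rotationDegrees: 360,
};
```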
  • Step 202 taking the three-dimensional display effect as the goal, based on the web-end augmented reality technology, rendering the three-dimensional scene information for the preset item.
  • the above-mentioned execution subject takes the 3D display effect obtained in step 201 as the target, and renders the 3D scene information for the preset item based on the augmented reality technology on the web page.
  • Web-side augmented reality technology refers to Web AR (Augmented Reality).
  • AR technology makes extensive use of multimedia, 3D modeling, real-time tracking, intelligent interaction, sensing and other technical means to simulate computer-generated virtual information such as text, images, 3D models, music and videos and apply it to the real world, so that the virtual information is skillfully integrated with the real world.
  • Web AR implements AR functions on the web.
  • the three-dimensional display effect of the preset item that the target merchant wishes to display is realized through Web AR, and the three-dimensional scene information of the preset item is obtained.
  • the 3D scene information is the digitized 3D display effect obtained after rendering through Web AR.
  • the augmented reality technology on the web page adopts a componentized design manner.
  • componentization means that Web AR is encapsulated in units of the individual functions it includes, which improves development efficiency and the reusability of the components.
  • the components in Web AR include: a scene component, a camera component, and a renderer component.
  • the scene component represents the scene after accommodating all the items to be rendered, and the item to be rendered is any item included in the scene, such as the 3D model of the preset item;
  • the camera component represents the position and direction of the acquired scene;
  • the renderer component is used to render the 3D scene information for the preset item according to the scene captured by the camera component (these components are sketched below).
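  • the application does not name a concrete rendering library, but the scene/camera/renderer split described above maps directly onto common WebGL scene-graph libraries. The following sketch uses three.js purely as an assumed stand-in for such componentized web-side rendering; the cube is a placeholder for the preset item's 3D model.

```typescript
import * as THREE from 'three';

// Scene component: holds every item to be rendered.
const scene = new THREE.Scene();
scene.background = new THREE.Color(0xf5f5f5);

// Camera component: the position and direction from which the scene is captured.
const camera = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 100);
camera.position.set(0, 1.5, 3);
camera.lookAt(0, 1, 0);

// Light source, geometry and texture components stand in for the item to be rendered.
scene.add(new THREE.DirectionalLight(0xffffff, 1));
const item = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x3366ff })
);
item.position.set(0, 1, 0);
scene.add(item);

// Renderer component: renders the scene as captured by the camera.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(1280, 720);
document.body.appendChild(renderer.domElement);
renderer.render(scene, camera);
```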
  • the above-mentioned execution subject may perform the above-mentioned step 202 in the following manner:
  • the target components for realizing the three-dimensional display effect are determined from the components constituting the augmented reality technology on the web page.
  • the three-dimensional display effect needs to be achieved through the cooperation of scene components, camera components, and renderer components.
  • then, through the target component and with the three-dimensional display effect as the target, the three-dimensional scene information for the preset item is obtained by rendering.
  • as an example, the three-dimensional display effect represents displaying the front and sides of the preset item through a 180° rotation, and the three-dimensional display effect includes a background picture corresponding to the preset item. In that case, the scene including the 3D model of the preset item and the background picture is realized through the scene component, the 180° rotating display is realized through the camera component, and finally the renderer component renders the scene captured by the camera component to obtain the 3D scene information for the preset item (see the sketch below).
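  • a minimal sketch of how that 180° sweep could be driven, again assuming a three.js-style scene, camera and renderer; the function name and parameters are illustrative only and not taken from the application.

```typescript
import * as THREE from 'three';

// Sweep the camera through 180 degrees around the item and render each frame,
// which yields the rotating front-and-side display described above.
function play180Sweep(
  scene: THREE.Scene,
  camera: THREE.PerspectiveCamera,
  renderer: THREE.WebGLRenderer,
  radius = 3,
  durationMs = 4000
): void {
  const start = performance.now();
  const frame = (now: number): void => {
    const t = Math.min((now - start) / durationMs, 1);  // progress 0 -> 1
    const angle = Math.PI * t;                          // 0 -> 180 degrees
    camera.position.set(radius * Math.sin(angle), 1.5, radius * Math.cos(angle));
    camera.lookAt(0, 1, 0);                             // keep the item centred
    renderer.render(scene, camera);
    if (t < 1) requestAnimationFrame(frame);
  };
  requestAnimationFrame(frame);
}
```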
  • the components in the Web AR further include: a light source component, which represents the light presentation effect in the scene; a texture component, which represents the texture of the item to be rendered; and a geometry component, which represents the geometry in the scene .
  • Step 203 Bind the 3D scene information with the preset items in the preset display page.
  • the above-mentioned execution body binds the three-dimensional scene information obtained in step 202 with the preset items in the preset display page.
  • the preset display page is a page on the e-commerce platform that displays the preset item. Binding the 3D scene information to the preset item in the preset display page associates the two, so that the target user can display the 3D scene information by operating on the preset item in the preset display page (a possible binding structure is sketched below).
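  • one way to picture the binding step, offered only as an assumption about a possible storage shape (the application does not prescribe one): keep a record that maps each item on the display page to its rendered scene information, and look it up when the target user operates on that item.

```typescript
// Illustrative sketch of binding rendered 3D scene information to a preset item
// on a display page; identifiers and the storage shape are assumptions.
interface SceneInfoRecord {
  sceneId: string;     // id of the rendered 3D scene information
  sceneUrl: string;    // where the serialized scene (model + configuration) is stored
  label?: string;      // shown to the user when several effects are bound
}

const bindings = new Map<string, SceneInfoRecord[]>();  // itemId -> bound scene info

function bindSceneToItem(itemId: string, record: SceneInfoRecord): void {
  const list = bindings.get(itemId) ?? [];
  list.push(record);
  bindings.set(itemId, list);
}

// On the display page, an operation on the item (the "first preset operation")
// looks up the bound scene information so it can be displayed.
function onItemOperated(itemId: string): SceneInfoRecord[] {
  return bindings.get(itemId) ?? [];
}
```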
  • FIG. 3 is a schematic diagram 300 of an application scenario of the information processing method according to this embodiment.
  • the target merchant 301, in order to enrich the display effect of the clothes in its online store, configures a three-dimensional display effect for the clothes it sells through the terminal device 302.
  • the target merchant sets a 360° rotating display effect for each piece of clothing.
  • the server 303 first determines the three-dimensional display effect for the preset item according to the configuration operation of the target merchant; then, with the 3D display effect as the target and based on web-side augmented reality technology, renders the 3D scene information for the preset item; and finally binds the 3D scene information to the preset item in the preset display page.
  • the method provided by the above-mentioned embodiments of the present disclosure determines a three-dimensional display effect for the preset item according to the configuration operation of the target merchant; renders, with the three-dimensional display effect as the target and based on web-side augmented reality technology, the 3D scene information for the preset item; and binds the 3D scene information to the preset item in the preset display page, thereby providing an information processing method that generates the 3D scene information of the preset item and enriches the ways in which the preset item can be displayed.
  • the above-mentioned execution body may further, in response to determining that the first preset operation of the target user on the preset display page is received, display to the target user the 3D scene information bound to the preset item.
  • when the preset item is bound to multiple pieces of three-dimensional scene information, the first preset operation may be an operation of selecting one of them.
  • the above-mentioned execution body may also receive an animation control operation and/or an interactive operation of the target user.
  • Target users can select animation display effects through animation control operations, including but not limited to zooming in, zooming out, and rotating.
  • the above-mentioned execution body is provided with an animation effect list, and after the target user selects the target animation from the animation effect list, the above-mentioned execution body displays the three-dimensional scene information based on the target animation.
  • the interactive operation controls the interaction between the target user and the 3D model of the preset item.
  • this embodiment can provide two interaction modes. The first is a camera controller mode, which displays the three-dimensional scene information from different angles by setting attributes such as different camera angles. The second is a touch controller mode, which controls the three-dimensional scene information by touching the screen of the device on which it is displayed (both modes are sketched below).
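  • a rough sketch of the two interaction modes just described, assuming a three.js-style camera, renderer and model; the function names and event wiring are illustrative assumptions, not something the application specifies.

```typescript
import * as THREE from 'three';

// Camera-controller mode: show the 3D scene information from different angles
// by setting camera attributes such as the viewing angle.
function setViewAngle(
  scene: THREE.Scene,
  camera: THREE.PerspectiveCamera,
  renderer: THREE.WebGLRenderer,
  angleDeg: number,
  radius = 3
): void {
  const a = (angleDeg * Math.PI) / 180;
  camera.position.set(radius * Math.sin(a), 1.5, radius * Math.cos(a));
  camera.lookAt(0, 1, 0);
  renderer.render(scene, camera);
}

// Touch-controller mode: control the 3D scene information by dragging on the
// screen of the device on which it is displayed.
function attachTouchController(
  canvas: HTMLCanvasElement,
  model: THREE.Object3D,
  render: () => void
): void {
  let lastX: number | null = null;
  canvas.addEventListener('pointerdown', (e) => { lastX = e.clientX; });
  canvas.addEventListener('pointerup', () => { lastX = null; });
  canvas.addEventListener('pointermove', (e) => {
    if (lastX === null) return;
    model.rotation.y += (e.clientX - lastX) * 0.01;  // horizontal drag spins the model
    lastX = e.clientX;
    render();
  });
}
```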
  • the above-mentioned execution body may further, in response to determining that the second preset operation of the target user on the preset display page is received, acquire a real image through a camera, render the 3D scene information bound to the preset item into the real image, and display to the target user the real image after the 3D scene information has been rendered.
  • as an example, the three-dimensional scene information represents only the 3D model of a preset item (e.g., clothes), and the real image includes a target person (e.g., a person interested in the preset item).
  • the target user can, in combination with the above-mentioned interactive operations, match the 3D model of the preset item with the target person in the real image, so as to show the target user the effect of the preset item being worn by the target person.
  • the position of the three-dimensional scene information in the real image can be controlled by using a preset identification image.
  • the above-mentioned execution body first detects whether a preset identification image exists in the real image; in response to determining that the preset identification image exists in the real image, the 3D scene information bound to the preset item is rendered to the position indicated by the preset identification image in the real image.
  • the preset identification image may be any image. In order to improve its recognizability, a preset identification image with a black border may be used (a sketch of this marker-driven flow follows below).
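  • the sketch below illustrates, under stated assumptions, how such a marker-driven flow could look in a browser: the camera feed is captured with getUserMedia, and detectMarker() is a hypothetical stand-in for whatever marker-recognition step is used (it is not defined by the application); a real system would estimate the marker's pose rather than use the simplified screen-space placement shown here.

```typescript
import * as THREE from 'three';

// Assumed helper: recognizes the preset identification image in a video frame and
// returns its normalized screen position. This is a placeholder, not a real API.
declare function detectMarker(
  video: HTMLVideoElement
): { found: boolean; x: number; y: number } | null;

async function startArPreview(
  model: THREE.Object3D,
  scene: THREE.Scene,
  camera: THREE.PerspectiveCamera,
  renderer: THREE.WebGLRenderer
): Promise<void> {
  // Acquire the real image through the device camera.
  const video = document.createElement('video');
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const frame = (): void => {
    const marker = detectMarker(video);
    if (marker?.found) {
      // Place the bound 3D model at the position indicated by the marker
      // (simplified: normalized screen coordinates mapped to the scene).
      model.position.set((marker.x - 0.5) * 2, (0.5 - marker.y) * 2, 0);
      model.visible = true;
    } else {
      model.visible = false;
    }
    renderer.render(scene, camera);
    requestAnimationFrame(frame);
  };
  requestAnimationFrame(frame);
}
```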
  • FIG. 4 a schematic flow 400 of another embodiment of the information processing method according to the present application is shown, including the following steps:
  • Step 401 according to the configuration operation of the target merchant, determine the three-dimensional display effect for the preset item.
  • Step 402 taking the three-dimensional display effect as the goal, based on the augmented reality technology on the web page, rendering the three-dimensional scene information for the preset item.
  • Step 403 Bind the 3D scene information with the preset items in the preset display page.
  • Step 404 in response to determining that the second preset operation of the target user in the preset display page is received, acquire a real image through the camera.
  • Step 405 Render the 3D scene information bound to the preset item into the real image, and display the real image after rendering the 3D scene information to the target user.
  • the flow 400 of the information processing method in this embodiment specifically describes the implementation process of the augmented reality effect for preset items.
  • the present embodiment further enriches the display manner of the preset items.
  • the present disclosure provides an embodiment of an information processing apparatus.
  • the apparatus embodiment corresponds to the method embodiment shown in FIG. 2 .
  • the apparatus may be specifically applied to various electronic devices.
  • the information processing apparatus includes: a determining unit 501 configured to determine a three-dimensional display effect for a preset item according to a configuration operation of a target merchant; a rendering unit 502 configured to render, with the three-dimensional display effect as the target and based on web-side augmented reality technology, the 3D scene information for the preset item; and a binding unit 503 configured to bind the 3D scene information to the preset item in the preset display page.
  • the web-side augmented reality technology adopts a componentized design; the rendering unit 502 is further configured to: determine, according to the three-dimensional display effect, a target component that realizes the three-dimensional display effect from the components constituting the web-side augmented reality technology; and render, through the target component and with the three-dimensional display effect as the target, the 3D scene information for the preset item; where the components include: a scene component, representing the scene that accommodates all the items to be rendered; a camera component, representing the position and direction from which the scene is captured; and a renderer component, used to render the 3D scene information for the preset item according to the scene captured by the camera component.
  • the components further include: a light source component, representing a light display effect in the scene; a texture component, representing the texture of the object to be rendered; and a geometry component, representing the geometry in the scene.
  • the determining unit 501 is further configured to: determine the scene including the preset items according to the editing operation of the target merchant; determine the configuration data of the scene according to the data configuration operation of the target merchant , to get a three-dimensional display effect.
  • the above-mentioned apparatus further includes: a first display unit (not shown in the figure), configured to, in response to determining that the first preset operation of the target user on the preset display page is received, display to the target user the 3D scene information bound to the preset item.
  • the above-mentioned apparatus further includes: a second display unit (not shown in the figure), configured to, in response to determining that the second preset operation of the target user on the preset display page is received, acquire a real image through a camera, render the 3D scene information bound to the preset item into the real image, and display to the target user the real image after the 3D scene information has been rendered.
  • the second display unit (not shown in the figure) is further configured to: detect whether a preset identification image exists in the real image; and, in response to determining that the preset identification image exists in the real image, render the 3D scene information bound to the preset item to the position indicated by the preset identification image in the real image.
  • the determining unit in the information processing apparatus determines a three-dimensional display effect for the preset item according to the configuration operation of the target merchant; the rendering unit renders, with the three-dimensional display effect as the target and based on web-side augmented reality technology, the 3D scene information for the preset item; and the binding unit binds the 3D scene information to the preset item in the preset display page, thereby providing an information processing method that generates the 3D scene information of the preset item and enriches the ways in which the preset item can be displayed.
  • FIG. 6 it shows a schematic structural diagram of a computer system 600 suitable for implementing the devices of the embodiments of the present application (eg, devices 101 , 102 , 103 , and 105 shown in FIG. 1 ).
  • the device shown in FIG. 6 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
  • a computer system 600 includes a processor (e.g., a CPU, central processing unit) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603.
  • the RAM 603 also stores various programs and data necessary for the operation of the system 600.
  • the processor 601 , the ROM 602 and the RAM 603 are connected to each other through a bus 604 .
  • An input/output (I/O) interface 605 is also connected to bus 604 .
  • the following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, etc.; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage section 608 including a hard disk, etc. ; and a communication section 609 including a network interface card such as a LAN card, a modem, and the like. The communication section 609 performs communication processing via a network such as the Internet.
  • a drive 610 is also connected to the I/O interface 605 as needed.
  • a removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is mounted on the drive 610 as needed so that a computer program read therefrom is installed into the storage section 608 as needed.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication portion 609 and/or installed from the removable medium 611 .
  • the computer program is executed by the processor 601, the above-mentioned functions defined in the method of the present application are performed.
  • the computer-readable medium of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the client computer, partly on the client computer, as a stand-alone software package, partly on the client computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the client computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented in dedicated hardware-based systems that perform the specified functions or operations , or can be implemented in a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present application may be implemented in a software manner, and may also be implemented in a hardware manner.
  • the described unit may also be provided in the processor, for example, it may be described as: a processor including a determination unit, a rendering unit and a binding unit.
  • the name of these units does not constitute a limitation of the unit itself under certain circumstances.
  • for example, the rendering unit can also be described as "a unit that renders, with the three-dimensional display effect as the target and based on web-side augmented reality technology, the three-dimensional scene information for the preset item".
  • the present application also provides a computer-readable medium.
  • the computer-readable medium may be included in the device described in the above embodiments, or may exist alone without being assembled into the device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the device, the computer device is caused to: determine a three-dimensional display effect for a preset item according to the configuration operation of the target merchant; render, with the three-dimensional display effect as the target and based on web-side augmented reality technology, the 3D scene information for the preset item; and bind the 3D scene information to the preset item in the preset display page.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An information processing method and apparatus. The method includes: determining a three-dimensional display effect for a preset item according to a configuration operation of a target merchant (201); rendering, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item (202); and binding the three-dimensional scene information to the preset item in a preset display page (203). The information processing method generates three-dimensional scene information for the preset item and enriches the ways in which the preset item can be displayed.

Description

Information processing method and apparatus
This patent application claims priority to Chinese Patent Application No. 202011232150.7, filed on November 6, 2020 and entitled "信息处理方法及装置" (Information Processing Method and Apparatus), the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of computer technologies, and in particular to an information processing method and apparatus.
Background
With the rapid development of Internet technologies, various e-commerce platforms have emerged, and people are gradually becoming accustomed to purchasing items through them. On current e-commerce platforms, items are generally displayed in the form of pictures, videos and the like.
Summary of the Invention
The embodiments of the present application provide an information processing method and apparatus.
In a first aspect, the embodiments of the present application provide an information processing method, including: determining a three-dimensional display effect for a preset item according to a configuration operation of a target merchant; rendering, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item; and binding the three-dimensional scene information to the preset item in a preset display page.
In some embodiments, the web-side augmented reality technology adopts a componentized design. Rendering the three-dimensional scene information for the preset item, with the three-dimensional display effect as the target and based on the web-side augmented reality technology, includes: determining, according to the three-dimensional display effect, a target component that realizes the three-dimensional display effect from the components constituting the web-side augmented reality technology; and rendering, through the target component and with the three-dimensional display effect as the target, the three-dimensional scene information for the preset item. The components include: a scene component, representing the scene that accommodates all the items to be rendered; a camera component, representing the position and direction from which the scene is captured; and a renderer component, used to render the three-dimensional scene information for the preset item according to the scene captured by the camera component.
In some embodiments, the components further include: a light source component, representing the lighting effect in the scene; a texture component, representing the texture of the items to be rendered; and a geometry component, representing the geometry in the scene.
In some embodiments, determining the three-dimensional display effect for the preset item according to the configuration operation of the target merchant includes: determining a scene including the preset item according to an editing operation of the target merchant; and determining configuration data of the scene according to a data configuration operation of the target merchant to obtain the three-dimensional display effect.
In some embodiments, the method further includes: in response to determining that a first preset operation of a target user on the preset display page is received, displaying to the target user the three-dimensional scene information bound to the preset item.
In some embodiments, the method further includes: in response to determining that a second preset operation of the target user on the preset display page is received, acquiring a real image through a camera; rendering the three-dimensional scene information bound to the preset item into the real image, and displaying to the target user the real image after the three-dimensional scene information has been rendered.
In some embodiments, rendering the three-dimensional scene information bound to the preset item into the real image includes: detecting whether a preset identification image exists in the real image; and, in response to determining that the preset identification image exists in the real image, rendering the three-dimensional scene information bound to the preset item to the position indicated by the preset identification image in the real image.
In a second aspect, the embodiments of the present application provide an information processing apparatus, including: a determining unit configured to determine a three-dimensional display effect for a preset item according to a configuration operation of a target merchant; a rendering unit configured to render, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item; and a binding unit configured to bind the three-dimensional scene information to the preset item in a preset display page.
In some embodiments, the web-side augmented reality technology adopts a componentized design. The rendering unit is further configured to: determine, according to the three-dimensional display effect, a target component that realizes the three-dimensional display effect from the components constituting the web-side augmented reality technology; and render, through the target component and with the three-dimensional display effect as the target, the three-dimensional scene information for the preset item. The components include: a scene component, representing the scene that accommodates all the items to be rendered; a camera component, representing the position and direction from which the scene is captured; and a renderer component, used to render the three-dimensional scene information for the preset item according to the scene captured by the camera component.
In some embodiments, the components further include: a light source component, representing the lighting effect in the scene; a texture component, representing the texture of the items to be rendered; and a geometry component, representing the geometry in the scene.
In some embodiments, the determining unit is further configured to: determine a scene including the preset item according to an editing operation of the target merchant; and determine configuration data of the scene according to a data configuration operation of the target merchant to obtain the three-dimensional display effect.
In some embodiments, the apparatus further includes: a first display unit configured to, in response to determining that a first preset operation of a target user on the preset display page is received, display to the target user the three-dimensional scene information bound to the preset item.
In some embodiments, the apparatus further includes: a second display unit configured to, in response to determining that a second preset operation of the target user on the preset display page is received, acquire a real image through a camera, render the three-dimensional scene information bound to the preset item into the real image, and display to the target user the real image after the three-dimensional scene information has been rendered.
In some embodiments, the second display unit is further configured to: detect whether a preset identification image exists in the real image; and, in response to determining that the preset identification image exists in the real image, render the three-dimensional scene information bound to the preset item to the position indicated by the preset identification image in the real image.
In a third aspect, the embodiments of the present application provide a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any implementation of the first aspect.
In a fourth aspect, the embodiments of the present application provide an electronic device, including: one or more processors; and a storage device on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
According to the information processing method and apparatus provided by the embodiments of the present application, a three-dimensional display effect for a preset item is determined according to a configuration operation of a target merchant; three-dimensional scene information for the preset item is rendered with the three-dimensional display effect as the target and based on web-side augmented reality technology; and the three-dimensional scene information is bound to the preset item in a preset display page. This provides an information processing method that generates three-dimensional scene information for the preset item and enriches the ways in which the preset item can be displayed.
Brief Description of the Drawings
Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
FIG. 2 is a flowchart of an embodiment of the information processing method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of the information processing method according to this embodiment;
FIG. 4 is a flowchart of another embodiment of the information processing method according to the present application;
FIG. 5 is a structural diagram of an embodiment of the information processing apparatus according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing the embodiments of the present application.
Detailed Description
The present application will be further described in detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the relevant invention rather than to limit it. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings and in combination with the embodiments.
FIG. 1 shows an exemplary architecture 100 to which the information processing method and apparatus of the present application may be applied.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The communication connections among the terminal devices 101, 102 and 103 constitute a topology network, and the network 104 serves as the medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The terminal devices 101, 102 and 103 may be hardware devices or software that support network connection for data interaction and data processing. When the terminal devices 101, 102 and 103 are hardware, they may be various electronic devices supporting network connection, information interaction, display, processing and other functions, including but not limited to smart phones, tablet computers, e-book readers, laptop computers, desktop computers and the like. When the terminal devices 101, 102 and 103 are software, they may be installed in the electronic devices listed above, and may be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example, a background processing server that determines a three-dimensional display effect for a preset item in response to a configuration operation performed by a target merchant on the terminal devices 101, 102 and 103. The background processing server determines the three-dimensional display effect for the preset item according to the configuration operation of the target merchant; renders, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item; and binds the three-dimensional scene information to the preset item in a preset display page. Optionally, the background processing server may display the three-dimensional scene information of the preset item to a target user. As an example, the server 105 may be a cloud server.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should also be noted that the information processing method provided by the embodiments of the present disclosure may be executed by the server, by the terminal device, or by the server and the terminal device in cooperation with each other. Accordingly, the parts (for example, units, sub-units, modules and sub-modules) included in the information processing apparatus may all be provided in the server, may all be provided in the terminal device, or may be provided in the server and the terminal device respectively.
It should be understood that the numbers of terminal devices, networks and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs. When the electronic device on which the information processing method runs does not need to transmit data to other electronic devices, the system architecture may include only the electronic device (for example, a server or a terminal device) on which the information processing method runs.
Continuing to refer to FIG. 2, a flow 200 of an embodiment of the information processing method is shown, including the following steps:
Step 201: determine a three-dimensional display effect for a preset item according to a configuration operation of a target merchant.
In this embodiment, the execution body of the information processing method (for example, the server in FIG. 1) may determine the three-dimensional display effect for the preset item according to the configuration operation of the target merchant.
The target merchant may be any merchant on an e-commerce platform that has control authority over the preset item. As an example, the preset item may be an item sold by the target merchant on the e-commerce platform.
In order to enrich the ways in which the preset item is displayed on the e-commerce platform, the target merchant may perform a configuration operation on the preset item to determine the three-dimensional display effect of the preset item. It can be understood that, compared with pictures, videos and the like, a display based on three-dimensional information makes it easier for a target user to connect with the preset item.
As an example, various scenes are pre-stored in the above execution body or in an electronic device communicatively connected with it, and the configuration operation may be an operation in which the target merchant, according to the three-dimensional model information of the preset item and a scene selected by the target merchant, places the three-dimensional model information into the selected scene.
As another example, the configuration operation may also be an operation in which the target merchant configures a scene using material selected from a material library. Specifically, the target merchant may configure information such as the scene name, material positions and background information through the configuration operation. The material library provides various information for scene configuration, including but not limited to background information, texture information and the like.
In some optional implementations of this embodiment, the above execution body may perform step 201 as follows:
First, a scene including the preset item is determined according to an editing operation of the target merchant. As an example, the target merchant determines, through the editing operation, all the information included in the scene, including but not limited to the three-dimensional model information of the preset item, background information, texture information and the like.
Then, configuration data of the scene is determined according to a data configuration operation of the target merchant, to obtain the three-dimensional display effect. As an example, the target merchant configures the relative positions of, and the relationships among, all the information in the determined scene, and the above execution body obtains the three-dimensional display effect according to the data configuration operation.
It can be understood that the target merchant may set multiple three-dimensional display effects for the same preset item through multiple configuration operations.
Step 202: with the three-dimensional display effect as the target, render three-dimensional scene information for the preset item based on web-side augmented reality technology.
In this embodiment, the above execution body takes the three-dimensional display effect obtained in step 201 as the target and renders the three-dimensional scene information for the preset item based on web-side augmented reality technology.
Web-side augmented reality technology refers to Web AR (Augmented Reality). AR technology makes extensive use of multimedia, three-dimensional modeling, real-time tracking, intelligent interaction, sensing and other technical means to simulate computer-generated virtual information such as text, images, three-dimensional models, music and videos and apply it to the real world, so that the virtual information and the real world are skillfully integrated. Web AR implements AR functions on the web.
In this embodiment, the three-dimensional display effect that the target merchant wishes to present for the preset item is realized through Web AR, and the three-dimensional scene information of the preset item is obtained. The three-dimensional scene information is the digitized three-dimensional display effect obtained after rendering through Web AR.
In some optional implementations of this embodiment, the web-side augmented reality technology adopts a componentized design. Componentization means that Web AR is encapsulated in units of the individual functions it includes, which improves development efficiency and the reusability of the components.
In this implementation, the components in Web AR include: a scene component, a camera component and a renderer component. The scene component represents the scene that accommodates all the items to be rendered, where an item to be rendered is any item included in the scene, such as the three-dimensional model of the preset item; the camera component represents the position and direction from which the scene is captured; and the renderer component is used to render the three-dimensional scene information for the preset item according to the scene captured by the camera component.
In this implementation, the above execution body may perform step 202 as follows:
First, according to the three-dimensional display effect, a target component that realizes the three-dimensional display effect is determined from the components constituting the web-side augmented reality technology.
It can be understood that there are generally multiple target components for realizing a three-dimensional display effect. In general, the scene component, the camera component and the renderer component need to work together to realize the three-dimensional display effect.
Then, through the target component and with the three-dimensional display effect as the target, the three-dimensional scene information for the preset item is obtained by rendering.
As an example, the three-dimensional display effect represents displaying the front and sides of the preset item through a 180° rotation, and the three-dimensional display effect includes a background picture corresponding to the preset item. In this case, the scene including the three-dimensional model of the preset item and the background picture is realized through the scene component, the 180° rotating display is realized through the camera component, and finally the renderer component renders the scene captured by the camera component to obtain the three-dimensional scene information for the preset item.
In some optional implementations of this embodiment, the components in Web AR further include: a light source component, representing the lighting effect in the scene; a texture component, representing the texture of the items to be rendered; and a geometry component, representing the geometry in the scene. Through these components, richer and more colorful three-dimensional display effects can be achieved.
Step 203: bind the three-dimensional scene information to the preset item in a preset display page.
In this embodiment, the above execution body binds the three-dimensional scene information obtained in step 202 to the preset item in the preset display page.
The preset display page is a page on the e-commerce platform that displays the preset item. Binding the three-dimensional scene information to the preset item in the preset display page associates the two, so that the target user can display the three-dimensional scene information by operating on the preset item in the preset display page.
It can be understood that the same preset item may be bound to multiple pieces of three-dimensional scene information.
Continuing to refer to FIG. 3, FIG. 3 is a schematic diagram 300 of an application scenario of the information processing method according to this embodiment. In the application scenario of FIG. 3, the target merchant 301, in order to enrich the display effect of the clothes in its online store, configures a three-dimensional display effect for the clothes it sells through the terminal device 302. Specifically, the target merchant sets a 360° rotating display effect for each piece of clothing. The server 303 first determines the three-dimensional display effect for the preset item according to the configuration operation of the target merchant; then, with the three-dimensional display effect as the target and based on web-side augmented reality technology, renders the three-dimensional scene information for the preset item; and finally binds the three-dimensional scene information to the preset item in the preset display page.
According to the method provided by the above embodiments of the present disclosure, a three-dimensional display effect for a preset item is determined according to a configuration operation of a target merchant; three-dimensional scene information for the preset item is rendered with the three-dimensional display effect as the target and based on web-side augmented reality technology; and the three-dimensional scene information is bound to the preset item in a preset display page. This provides an information processing method that generates three-dimensional scene information for the preset item and enriches the ways in which the preset item can be displayed.
In some optional implementations of this embodiment, the above execution body may further, in response to determining that a first preset operation of a target user on the preset display page is received, display to the target user the three-dimensional scene information bound to the preset item.
As an example, when the preset item has multiple pieces of three-dimensional scene information, the first preset operation may be an operation of selecting one of the multiple pieces of three-dimensional scene information.
During the display of the three-dimensional scene information, the above execution body may also receive an animation control operation and/or an interactive operation of the target user. Through the animation control operation, the target user can select animation display effects, including but not limited to zooming in, zooming out and rotating. As an example, the above execution body is provided with an animation effect list; after the target user selects a target animation from the animation effect list, the above execution body displays the three-dimensional scene information based on the target animation.
The interactive operation controls the interaction between the target user and the three-dimensional model of the preset item. This embodiment can provide two interaction modes. The first is a camera controller mode, which displays the three-dimensional scene information from different angles by setting attributes such as different camera angles. The second is a touch controller mode, which controls the three-dimensional scene information by touching the screen of the device on which the three-dimensional scene information is displayed.
In some optional implementations of this embodiment, the above execution body may further, in response to determining that a second preset operation of the target user on the preset display page is received, acquire a real image through a camera, render the three-dimensional scene information bound to the preset item into the real image, and display to the target user the real image after the three-dimensional scene information has been rendered.
As an example, the three-dimensional scene information represents only the three-dimensional model of the preset item (for example, clothes), and the real image includes a target person (for example, a person interested in the preset item). The target user can match the three-dimensional model of the preset item with the target person in the real image in combination with the above interactive operations, so as to show the target user the effect of the preset item being worn by the target person.
In some optional implementations of this embodiment, a preset identification image may be used to control the position of the three-dimensional scene information in the real image. Specifically, the above execution body first detects whether a preset identification image exists in the real image; and, in response to determining that the preset identification image exists in the real image, renders the three-dimensional scene information bound to the preset item to the position indicated by the preset identification image in the real image.
The preset identification image may be any image. In order to improve the recognizability of the preset identification image, a preset identification image with a black border may be used.
Continuing to refer to FIG. 4, a schematic flow 400 of another embodiment of the information processing method according to the present application is shown, including the following steps:
Step 401: determine a three-dimensional display effect for a preset item according to a configuration operation of a target merchant.
Step 402: with the three-dimensional display effect as the target, render three-dimensional scene information for the preset item based on web-side augmented reality technology.
Step 403: bind the three-dimensional scene information to the preset item in a preset display page.
Step 404: in response to determining that a second preset operation of a target user on the preset display page is received, acquire a real image through a camera.
Step 405: render the three-dimensional scene information bound to the preset item into the real image, and display to the target user the real image after the three-dimensional scene information has been rendered.
It can be seen from this embodiment that, compared with the embodiment corresponding to FIG. 2, the flow 400 of the information processing method in this embodiment specifically describes how the augmented reality effect for the preset item is realized. In this way, this embodiment further enriches the ways in which the preset item can be displayed.
Continuing to refer to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an information processing apparatus. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in FIG. 5, the information processing apparatus includes: a determining unit 501 configured to determine a three-dimensional display effect for a preset item according to a configuration operation of a target merchant; a rendering unit 502 configured to render, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item; and a binding unit 503 configured to bind the three-dimensional scene information to the preset item in a preset display page.
In some optional implementations of this embodiment, the web-side augmented reality technology adopts a componentized design. The rendering unit 502 is further configured to: determine, according to the three-dimensional display effect, a target component that realizes the three-dimensional display effect from the components constituting the web-side augmented reality technology; and render, through the target component and with the three-dimensional display effect as the target, the three-dimensional scene information for the preset item. The components include: a scene component, representing the scene that accommodates all the items to be rendered; a camera component, representing the position and direction from which the scene is captured; and a renderer component, used to render the three-dimensional scene information for the preset item according to the scene captured by the camera component.
In some optional implementations of this embodiment, the components further include: a light source component, representing the lighting effect in the scene; a texture component, representing the texture of the items to be rendered; and a geometry component, representing the geometry in the scene.
In some optional implementations of this embodiment, the determining unit 501 is further configured to: determine a scene including the preset item according to an editing operation of the target merchant; and determine configuration data of the scene according to a data configuration operation of the target merchant to obtain the three-dimensional display effect.
In some optional implementations of this embodiment, the apparatus further includes: a first display unit (not shown in the figure) configured to, in response to determining that a first preset operation of a target user on the preset display page is received, display to the target user the three-dimensional scene information bound to the preset item.
In some optional implementations of this embodiment, the apparatus further includes: a second display unit (not shown in the figure) configured to, in response to determining that a second preset operation of the target user on the preset display page is received, acquire a real image through a camera, render the three-dimensional scene information bound to the preset item into the real image, and display to the target user the real image after the three-dimensional scene information has been rendered.
In some optional implementations of this embodiment, the second display unit (not shown in the figure) is further configured to: detect whether a preset identification image exists in the real image; and, in response to determining that the preset identification image exists in the real image, render the three-dimensional scene information bound to the preset item to the position indicated by the preset identification image in the real image.
In this embodiment, the determining unit of the information processing apparatus determines a three-dimensional display effect for a preset item according to a configuration operation of a target merchant; the rendering unit renders, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item; and the binding unit binds the three-dimensional scene information to the preset item in a preset display page. This provides an information processing method that generates three-dimensional scene information for the preset item and enriches the ways in which the preset item can be displayed.
Referring now to FIG. 6, it shows a schematic structural diagram of a computer system 600 suitable for implementing the devices of the embodiments of the present application (for example, the devices 101, 102, 103 and 105 shown in FIG. 1). The device shown in FIG. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in FIG. 6, the computer system 600 includes a processor (for example, a CPU, central processing unit) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the system 600. The processor 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD) and the like as well as a speaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such embodiments, the computer program may be downloaded and installed from the network through the communication section 609 and/or installed from the removable medium 611. When the computer program is executed by the processor 601, the above functions defined in the method of the present application are executed.
It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, the computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, apparatus or device. In the present application, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, which can send, propagate or transmit a program for use by or in combination with the instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF and the like, or any suitable combination of the above.
Computer program code for executing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the client computer, partly on the client computer, as a stand-alone software package, partly on the client computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the client computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the apparatuses, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two successively represented blocks may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a determining unit, a rendering unit and a binding unit. The names of these units do not constitute a limitation on the units themselves in some cases; for example, the rendering unit may also be described as "a unit that renders, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item".
As another aspect, the present application further provides a computer-readable medium. The computer-readable medium may be included in the device described in the above embodiments, or may exist alone without being assembled into the device. The computer-readable medium carries one or more programs which, when executed by the device, cause the computer device to: determine a three-dimensional display effect for a preset item according to a configuration operation of a target merchant; render, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item; and bind the three-dimensional scene information to the preset item in a preset display page.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (16)

  1. An information processing method, comprising:
    determining a three-dimensional display effect for a preset item according to a configuration operation of a target merchant;
    rendering, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item;
    binding the three-dimensional scene information to the preset item in a preset display page.
  2. The method according to claim 1, wherein the web-side augmented reality technology adopts a componentized design;
    the rendering, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item comprises:
    determining, according to the three-dimensional display effect, a target component that realizes the three-dimensional display effect from components constituting the web-side augmented reality technology;
    rendering, through the target component and with the three-dimensional display effect as the target, the three-dimensional scene information for the preset item;
    wherein the components comprise:
    a scene component, representing the scene that accommodates all items to be rendered;
    a camera component, representing the position and direction from which the scene is captured;
    a renderer component, configured to render the three-dimensional scene information for the preset item according to the scene captured by the camera component.
  3. The method according to claim 2, wherein the components further comprise:
    a light source component, representing the lighting effect in the scene;
    a texture component, representing the texture of the items to be rendered;
    a geometry component, representing the geometry in the scene.
  4. The method according to any one of claims 1-3, wherein the determining a three-dimensional display effect for a preset item according to a configuration operation of a target merchant comprises:
    determining a scene comprising the preset item according to an editing operation of the target merchant;
    determining configuration data of the scene according to a data configuration operation of the target merchant, to obtain the three-dimensional display effect.
  5. The method according to any one of claims 1-4, wherein the method further comprises:
    in response to determining that a first preset operation of a target user on the preset display page is received, displaying to the target user the three-dimensional scene information bound to the preset item.
  6. The method according to any one of claims 1-5, wherein the method further comprises:
    in response to determining that a second preset operation of a target user on the preset display page is received, acquiring a real image through a camera;
    rendering the three-dimensional scene information bound to the preset item into the real image, and displaying to the target user the real image after the three-dimensional scene information has been rendered.
  7. The method according to claim 6, wherein the rendering the three-dimensional scene information bound to the preset item into the real image comprises:
    detecting whether a preset identification image exists in the real image;
    in response to determining that the preset identification image exists in the real image, rendering the three-dimensional scene information bound to the preset item to the position indicated by the preset identification image in the real image.
  8. An information processing apparatus, comprising:
    a determining unit, configured to determine a three-dimensional display effect for a preset item according to a configuration operation of a target merchant;
    a rendering unit, configured to render, with the three-dimensional display effect as the target and based on web-side augmented reality technology, three-dimensional scene information for the preset item;
    a binding unit, configured to bind the three-dimensional scene information to the preset item in a preset display page.
  9. The apparatus according to claim 8, wherein the web-side augmented reality technology adopts a componentized design;
    the rendering unit is further configured to:
    determine, according to the three-dimensional display effect, a target component that realizes the three-dimensional display effect from components constituting the web-side augmented reality technology; and render, through the target component and with the three-dimensional display effect as the target, the three-dimensional scene information for the preset item;
    wherein the components comprise:
    a scene component, representing the scene that accommodates all items to be rendered; a camera component, representing the position and direction from which the scene is captured; and a renderer component, configured to render the three-dimensional scene information for the preset item according to the scene captured by the camera component.
  10. The apparatus according to claim 9, wherein the components further comprise:
    a light source component, representing the lighting effect in the scene; a texture component, representing the texture of the items to be rendered; and a geometry component, representing the geometry in the scene.
  11. The apparatus according to any one of claims 8-10, wherein the determining unit is further configured to:
    determine a scene comprising the preset item according to an editing operation of the target merchant; and determine configuration data of the scene according to a data configuration operation of the target merchant, to obtain the three-dimensional display effect.
  12. The apparatus according to any one of claims 8-11, further comprising:
    a first display unit, configured to, in response to determining that a first preset operation of a target user on the preset display page is received, display to the target user the three-dimensional scene information bound to the preset item.
  13. The apparatus according to any one of claims 8-12, further comprising:
    a second display unit, configured to, in response to determining that a second preset operation of a target user on the preset display page is received, acquire a real image through a camera; render the three-dimensional scene information bound to the preset item into the real image, and display to the target user the real image after the three-dimensional scene information has been rendered.
  14. The apparatus according to claim 13, wherein the second display unit is further configured to:
    detect whether a preset identification image exists in the real image; and, in response to determining that the preset identification image exists in the real image, render the three-dimensional scene information bound to the preset item to the position indicated by the preset identification image in the real image.
  15. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
  16. An electronic device, comprising:
    one or more processors;
    a storage device on which one or more programs are stored,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
PCT/CN2021/125720 2020-11-06 2021-10-22 Information processing method and apparatus WO2022095733A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011232150.7 2020-11-06
CN202011232150.7A CN113763090B (zh) Information processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2022095733A1 true WO2022095733A1 (zh) 2022-05-12

Family

ID=78786000

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125720 WO2022095733A1 (zh) Information processing method and apparatus

Country Status (2)

Country Link
CN (1) CN113763090B (zh)
WO (1) WO2022095733A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590342A (zh) * 2014-11-14 2016-05-18 上海上大海润信息系统有限公司 A system for constructing a three-dimensional exhibit display scene
US20160253745A1 (en) * 2015-02-26 2016-09-01 Staging Design Inc. Virtual shopping system and method utilizing virtual reality and augmented reality technology
CN111724231A (zh) * 2020-05-19 2020-09-29 五八有限公司 Method and device for displaying commodity information
CN111767456A (zh) * 2019-05-20 2020-10-13 北京京东尚科信息技术有限公司 Method and device for pushing information

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8451266B2 (en) * 2009-12-07 2013-05-28 International Business Machines Corporation Interactive three-dimensional augmented realities from item markers for on-demand item visualization
CN109840948B (zh) * 2017-11-29 2023-08-15 深圳市掌网科技股份有限公司 Augmented reality-based method and device for placing a target object
CN108038760B (zh) * 2017-12-08 2021-11-12 大连知行天下网络科技有限公司 Commodity display control system based on AR technology
CN109993594A (zh) * 2017-12-29 2019-07-09 苏宁云商集团股份有限公司 Method and device for switching between 3D and AR display of a commodity model
US10937240B2 (en) * 2018-01-04 2021-03-02 Intel Corporation Augmented reality bindings of physical objects and virtual objects
CN108305316A (zh) * 2018-03-08 2018-07-20 网易(杭州)网络有限公司 AR-scene-based rendering method, device, medium and computing device
CN108597035A (zh) * 2018-05-02 2018-09-28 福建中锐海沃科技有限公司 Augmented reality-based three-dimensional object display method, storage medium and computer
CN110662015A (zh) * 2018-06-29 2020-01-07 北京京东尚科信息技术有限公司 Method and device for displaying images
US11494987B2 (en) * 2018-09-06 2022-11-08 8th Wall Inc. Providing augmented reality in a web browser
CN111325824B (zh) * 2019-07-03 2023-10-10 杭州海康威视系统技术有限公司 Image data display method and device, electronic device and storage medium
CN111598996B (zh) * 2020-05-08 2024-02-09 上海实迅网络科技有限公司 AR-technology-based method and system for displaying a 3D model of an item
CN111833423A (zh) * 2020-06-30 2020-10-27 北京市商汤科技开发有限公司 Display method, apparatus, device and computer-readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590342A (zh) * 2014-11-14 2016-05-18 上海上大海润信息系统有限公司 A system for constructing a three-dimensional exhibit display scene
US20160253745A1 (en) * 2015-02-26 2016-09-01 Staging Design Inc. Virtual shopping system and method utilizing virtual reality and augmented reality technology
CN111767456A (zh) * 2019-05-20 2020-10-13 北京京东尚科信息技术有限公司 Method and device for pushing information
CN111724231A (zh) * 2020-05-19 2020-09-29 五八有限公司 Method and device for displaying commodity information

Also Published As

Publication number Publication date
CN113763090B (zh) 2024-05-21
CN113763090A (zh) 2021-12-07

Similar Documents

Publication Publication Date Title
JP7461941B2 (ja) Optimization of virtual data views using voice commands and defined perspectives
Wheeler et al. Virtual interaction and visualisation of 3D medical imaging data with VTK and Unity
Linowes et al. Augmented reality for developers: Build practical augmented reality applications with unity, ARCore, ARKit, and Vuforia
CN106846497B (zh) Method and device, applied to a terminal, for presenting a three-dimensional map
US20140002443A1 (en) Augmented reality interface
KR20220035380A (ko) 증강 현실 장면들을 위한 시스템 및 방법
US11989845B2 (en) Implementation and display of augmented reality
US11587280B2 (en) Augmented reality-based display method and device, and storage medium
WO2023179346A1 (zh) Special effect image processing method and apparatus, electronic device and storage medium
CN112672185B (zh) Augmented reality-based display method, apparatus, device and storage medium
WO2022007565A1 (zh) Augmented reality image processing method and apparatus, electronic device and storage medium
CN109656363B (zh) Method and device for setting augmented interactive content
US20130181975A1 (en) Systems and methods for objects associated with a three-dimensional model
WO2020216310A1 (zh) Method for generating an application, terminal device and computer-readable medium
US10282904B1 (en) Providing augmented reality view of objects
TW201928451A (zh) 呈現一擴增實境界面
CN114461064A (zh) Virtual reality interaction method, apparatus, device and storage medium
CN110990106B (zh) Data display method and apparatus, computer device and storage medium
CN110673886A (zh) Method and device for generating a heat map
WO2022095733A1 (zh) Information processing method and apparatus
WO2023114530A1 (en) Augmented reality (ar) visual display to save
Verma et al. Digital assistant with augmented reality
US10290146B2 (en) Displaying depth effects in digital artwork based on movement of a display
CN110620805B (zh) Method and device for generating information
WO2023169089A1 (zh) Video playback method and apparatus, electronic device, medium and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21888430

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21888430

Country of ref document: EP

Kind code of ref document: A1