WO2020173222A1 - Item virtualization processing method and apparatus, electronic device, and storage medium - Google Patents

Item virtualization processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2020173222A1
WO2020173222A1 (PCT application PCT/CN2020/070203)
Authority
WO
WIPO (PCT)
Prior art keywords
model
virtual
scene
entity model
virtual entity
Prior art date
Application number
PCT/CN2020/070203
Other languages
English (en)
French (fr)
Inventor
何进萍
Original Assignee
北京京东尚科信息技术有限公司
北京京东世纪贸易有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京京东尚科信息技术有限公司 and 北京京东世纪贸易有限公司
Priority to US17/430,853 (US11978111B2)
Publication of WO2020173222A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0621Item configuration or customization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2021Shape modification

Definitions

  • This application relates to the field of Internet technology, in particular to a method, device, electronic equipment, and storage medium for virtualizing items.
  • E-commerce platforms generally display only photos of products, which falls far short of the expectations of manufacturers and consumers.
  • in the prior art, the manufacturer designs a product and places its photos on the e-commerce platform for display, and the consumer decides whether to buy based on those photos.
  • with virtual model technology, real goods can already be turned into virtual entity models that can be displayed from multiple angles, so that goods are shown more comprehensively and display efficiency is improved.
  • the embodiment of the present application provides a method for virtualizing an item, which is executed by a terminal device, and the method includes:
  • the embodiment of the present application also provides an item virtualization processing device, including:
  • the model acquisition module is used to acquire and display the virtual entity model with editable external features constructed from real objects;
  • a scene construction module for constructing a virtual scene corresponding to the real scene according to the real scene
  • a model projection module configured to project the virtual entity model into the virtual scene
  • the model editing module is used to receive an editing operation on the editable external features of the virtual entity model, adjust the editable external features according to the editing operation to obtain an adjusted virtual entity model, and display the adjusted virtual entity model in the scene.
  • An embodiment of the present application also provides an electronic device, including:
  • a memory connected to the processor; the memory stores machine-readable instructions; the machine-readable instructions can be executed by the processor to complete the above-mentioned item virtualization processing method.
  • the embodiment of the present application provides a non-volatile computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the item virtualization processing method are realized.
  • the item virtualization method of the present application can construct a real item as a virtual entity model with editable external features and project the virtual entity model into a virtual scene constructed from the real scene.
  • based on the projection effect of the virtual entity model in the virtual scene, the editable external features of the virtual entity model can be edited to obtain a customized virtual entity model, that is, an adjusted virtual entity model.
  • in this way, items can be displayed more comprehensively and display efficiency is improved.
  • the actual product can then be manufactured according to the customized virtual entity model.
  • FIG. 1 is a schematic flowchart of a method for virtualizing an item in some embodiments of this application;
  • FIG. 2 is a schematic diagram of another process of the virtual processing method of an item in some embodiments of the application.
  • FIG. 3 is an example diagram of a virtual entity model constructed based on real objects in some embodiments of the application.
  • FIG. 4 is an example diagram of dividing a virtual entity model into multiple model regions in some embodiments of the application.
  • FIG. 5 is a schematic diagram of another process of a method for virtualizing an item in some embodiments of the application.
  • FIG. 6 is a schematic diagram of another flow of the virtual processing method of an item in some embodiments of the application.
  • FIG. 7 is an example diagram of a model projection effect of a virtual processing method of an item in some embodiments of the application.
  • FIG. 8 is a schematic diagram of another flow of a method for virtualizing an item in some embodiments of the application.
  • FIG. 9 is an example diagram of editing operations on editable external features of a virtual entity model in some embodiments of the application.
  • FIG. 10 is an example diagram of an adjusted virtual entity model in some embodiments of this application.
  • Fig. 11 is a schematic diagram of another process of a method for virtualizing an item in some embodiments of the application.
  • FIG. 12 is a schematic diagram of an application interface of a method for virtualizing an item in some embodiments of this application.
  • FIG. 13 is a schematic structural diagram of a device for virtualizing an item in some embodiments of the application.
  • FIG. 14 is a schematic diagram of another structure of an apparatus for virtualizing an item in some embodiments of the application.
  • FIG. 15 is a schematic diagram of another structure of an apparatus for virtualizing an item in some embodiments of the application.
  • FIG. 16 is a schematic diagram of another structure of an apparatus for virtualizing an item in some embodiments of the application.
  • FIG. 17 is a schematic diagram of another structure of an apparatus for virtualizing an item in some embodiments of the application.
  • FIG. 18 is a schematic diagram of the structure of an electronic device in some embodiments of the application.
  • the item virtualization processing method of the present application can construct a real item as a virtual entity model with editable, customizable features, and, after comparing the model's effect in a virtual scene constructed from the real scene, perform virtual customization to obtain a customized virtual entity model.
  • the actual product can then be manufactured according to the customized virtual entity model, which solves the technical problem that items cannot be customized according to their effect in a scene.
  • FIG. 1 is a schematic flowchart of a method for virtualizing an item in some embodiments of the application. This method can be executed by any computer device with data processing capabilities, for example, a terminal device.
  • the terminal device can be a personal computer (PC), a notebook computer, or other smart terminal device, or a smart mobile phone, tablet computer, etc.
  • As shown in FIG. 1, in some embodiments, a method for virtualizing an item is disclosed, and the method includes:
  • S101 Acquire and display a virtual entity model with editable external features constructed based on real objects
  • the construction of the virtual entity model may be completed by other terminal devices or servers and stored in a server.
  • the terminal device may, after receiving an instruction to display the virtual entity model, obtain the virtual entity model from the server and display it.
  • the virtual entity model in this step is a two-dimensional (2D) or three-dimensional (3D) entity model of the constructed object.
  • Common modeling methods include solid modeling from parametric dimensions, or building the model by solid scanning through reverse engineering.
  • the editable external features of the virtual entity model may be features such as the structure size, surface color, and pattern of the virtual entity model, and may also be referred to as editable customized features or customized features.
  • for example, the structural size of the virtual entity model can be used as a customized feature, in which case the specific size parameters of the structure should be editable. It should also be pointed out that, in addition to the structure, other features can be made customizable, such as the surface color or pattern of the article.
  • the virtual entity model can be rendered in white to facilitate color filling during later customization.
  • the structure, colors, and patterns are just examples to better illustrate this step, and are not specific restrictions on customized features.
  • the virtual entity model is divided into multiple model regions, and each model region has an independent editable external feature, that is, the external feature of each model region can be independently edited.
  • the virtual entity model can be divided into multiple model areas according to its structure, or a grid can be laid on the surface of the virtual entity model and the model divided into multiple areas according to the grid; for example, each grid cell serves as a model area.
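The region-based editing described above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation; the region names and feature fields (color, size, pattern) are hypothetical.

```python
# Hypothetical sketch: a virtual entity model divided into named model regions,
# each carrying its own independently editable external features.
from dataclasses import dataclass, field

@dataclass
class ModelRegion:
    name: str
    color: str = "white"   # models start white to ease later color filling
    size_mm: float = 0.0   # a structural size parameter for this region
    pattern: str = ""      # optional surface pattern

@dataclass
class VirtualEntityModel:
    regions: dict = field(default_factory=dict)

    def edit_region(self, name, **features):
        """Apply an editing operation to one region without touching the others."""
        region = self.regions[name]
        for key, value in features.items():
            setattr(region, key, value)

# Divide a sofa model into regions and edit only the armrest.
sofa = VirtualEntityModel({
    "armrest": ModelRegion("armrest", size_mm=550.0),
    "backrest": ModelRegion("backrest", size_mm=820.0),
})
sofa.edit_region("armrest", color="navy", size_mm=600.0)
print(sofa.regions["armrest"].color)   # navy
print(sofa.regions["backrest"].color)  # white (unaffected)
```

Keeping each region's features independent is what lets a user recolor the armrest without disturbing the backrest.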
  • S102 Construct a virtual scene corresponding to the real scene according to the real scene acquired by the terminal device;
  • in this step, a static image of the real scene captured by the terminal device's camera can be used as the virtual scene, or a dynamic video of the real scene can be recorded and used as the virtual scene.
  • the above-mentioned photographing and video recording methods for acquiring the real scene are just examples.
  • the real scene can also be acquired in other ways to construct the virtual scene.
  • S103 Project the virtual entity model into the virtual scene;
  • this step mainly establishes a connection between the virtual entity model and the virtual scene to improve the user experience.
  • the virtual scene has a visual impact on the virtual entity model in terms of space, color, layout, and many other aspects.
  • S104 Receive an editing operation on the editable external features of the virtual entity model, adjust the virtual entity model according to the editing operation to obtain an adjusted virtual entity model, and display the adjusted virtual entity model in the virtual scene.
  • the effect in this step can be understood as the effect of space, color, layout, etc.
  • for example, suppose the real object is a sofa.
  • the virtual entity model constructed from the sofa can be projected into a virtual scene constructed from reality, and the virtual scene may be the consumer's home environment.
  • in this way, the realistic effect of the virtual entity model in the virtual scene can be displayed; consumers can edit the customized features according to this effect, and after editing can repeatedly compare the effect of the virtual entity model in the virtual scene.
  • the merchant models their actual real items (commodities) to generate a virtual entity model.
  • the modeling method can be parametric-dimension modeling or solid-scanning modeling.
  • the virtual entity model is displayed on the e-commerce platform, and consumers can capture real scenes and construct virtual scenes by taking photos or videos. After the virtual entity model is projected into the virtual scene, the customized features are edited by comparing effects according to the spatial, color, and layout relationships of the virtual entity model in the virtual scene. After repeated comparisons, the expected customized virtual entity model is obtained. It should be pointed out that the above effect can in fact be a quantitative relationship of parameters, such as the layout of the sofa in the space and whether the sofa will interfere with the space.
  • FIG. 2 is a schematic diagram of another flow chart of a method for virtualizing an item in some embodiments of the application. As shown in FIG. 2, in some embodiments, before step S101, the method further includes: constructing a virtual entity model with editable external characteristics according to the real object, which specifically includes:
  • a virtual entity model is first constructed based on real objects.
  • this step is to provide a carrier for setting custom features later.
  • for example, if the customized feature is a model size parameter, the physical model features of the virtual entity model are the carrier on which the customized feature is constructed.
  • in other words, the customized features depend on the virtual entity model.
  • the division method can include any of the following:
  • S2021 According to the structure of the virtual entity model, divide the virtual entity model into multiple model regions;
  • the virtual entity model of the sofa can be divided into areas such as the armrest of the sofa and the backrest of the sofa.
  • S2022 Laying a grid on the surface of the virtual solid model, and dividing the virtual solid model into multiple model regions according to the grid;
  • when the virtual entity model is meshed, its color can be edited more precisely; the mesh is laid on the surface of the virtual entity model.
  • a triangular mesh can be used as the mesh.
  • as for the way the mesh is laid: in the virtual entity model, a triangular mesh can form a triangle from the (x, y, z) coordinates of three points in space or the (x, y) coordinates of three points in the plane.
  • the purpose of triangular meshing is to reduce the amount of computation during rendering.
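As a rough illustration of the triangular-mesh representation described above — shared (x, y, z) vertices, faces as index triples, each cell independently editable — here is a minimal Python sketch. The square-face data is made up for the example:

```python
# Shared vertex list; each face is a triple of indices into it.
vertices = [
    (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0),
]
# Two triangles covering one square patch of the model surface.
faces = [(0, 1, 2), (0, 2, 3)]

# Each triangle (grid cell) can act as an independently editable model region.
face_color = {0: "white", 1: "white"}
face_color[1] = "black"  # paint a single grid cell

def triangle_area(face):
    """Area via the cross product of two edge vectors."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    cp = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    return 0.5 * sum(c * c for c in cp) ** 0.5

print(sum(triangle_area(f) for f in faces))  # 1.0: the square patch's total area
```

Because every face is just three vertex indices, renderers can process the whole surface as a flat list of triangles, which is the computational saving the step above refers to.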
  • S203 Create independent editable external features for each model area according to the divided model areas.
  • the user can independently edit the size and color of the sofa's armrest, or independently edit the size and color of the sofa's backrest.
  • the above features, such as size and color, are the customized features (that is, editable external features) created for each independent model area.
  • FIG. 3 is an example diagram of the effect of a virtual entity model constructed based on real objects in some embodiments of the application.
  • FIG. 4 is an example diagram of dividing a virtual entity model into multiple model regions in an embodiment of the application.
  • the virtual entity model is a hand model, and grids are laid on its surface.
  • Each grid can have a separate editable external feature, that is, each grid can be individually edited.
  • the external features of a grid cell can be edited; for example, the model can be filled with color for customized editing.
  • the hand model in the figure can also be simply divided into independent model areas such as fingers, palms, and wrists.
  • the fingers can be further divided according to the finger joints: the thumb can be divided into two segments, and the other fingers can be divided into three segments each.
  • the first segment of the thumb in FIG. 4 has been painted black. From the above example it can be seen that by dividing the virtual entity model into model areas and then creating customized features for each independently divided area, specific customized editing is realized more conveniently.
  • FIG. 5 is a schematic diagram of another flow chart of a method for virtualizing an item in some embodiments of the application.
  • the projecting the virtual physical model into the virtual scene includes:
  • a positioning origin should be selected in the virtual scene.
  • the location of the positioning origin can be selected according to specific preset options.
  • the preset may be to select the center point of the virtual scene, or the origin may be selected according to the particular circumstances of the virtual scene, for example by first identifying the floor area in the virtual scene and then selecting the center point of the floor area.
  • the above-mentioned preset method may be any default method, may be set manually, or may even be random.
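Assuming the floor area has already been identified as a set of scene points, the preset "center of the floor area" choice could be sketched as a simple centroid computation (illustrative only; the points are made up):

```python
def centroid(points):
    """Arithmetic mean of a set of points, used as a default positioning origin."""
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

# Hypothetical 2D footprint of the floor area detected in the virtual scene.
floor_points = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
print(centroid(floor_points))  # (2.0, 1.0)
```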
  • a point needs to be selected on the virtual entity model, namely the model origin.
  • the selection method can be a random method, the geometric center of the virtual solid model, or other methods.
  • the model origin is made to coincide with the positioning origin.
  • projecting the virtual entity model by point coincidence allows the virtual entity model to rotate freely around the positioning origin.
  • the free rotation of the virtual solid model can facilitate the editing of the customized features of the virtual solid model.
  • apart from the coincidence of the selected points, the positioning of the virtual entity model projected in the virtual scene can adopt a default or random method in the other spatial degrees of freedom, which will not be repeated here.
  • this embodiment actually provides a positioning method for a virtual entity model in a virtual scene. Applying the method of this embodiment allows the virtual entity model to rotate freely, which facilitates editing of its customized features.
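The point-coincidence projection and free rotation about the positioning origin might be sketched as follows; the vertices and origins below are illustrative assumptions, not data from the patent:

```python
import math

def project_by_point_coincidence(model_vertices, model_origin, positioning_origin):
    """Shift every vertex by (positioning_origin - model_origin)."""
    dx, dy, dz = (p - m for p, m in zip(positioning_origin, model_origin))
    return [(x + dx, y + dy, z + dz) for x, y, z in model_vertices]

def rotate_about_origin_z(vertices, origin, angle_rad):
    """Rotate vertices about a vertical axis through the positioning origin."""
    ox, oy, oz = origin
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    out = []
    for x, y, z in vertices:
        rx, ry = x - ox, y - oy
        out.append((ox + c * rx - s * ry, oy + s * rx + c * ry, z))
    return out

verts = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
placed = project_by_point_coincidence(verts, model_origin=(1.0, 0.0, 0.0),
                                      positioning_origin=(5.0, 5.0, 0.0))
print(placed[0])  # (5.0, 5.0, 0.0): model origin and positioning origin coincide
spun = rotate_about_origin_z(placed, (5.0, 5.0, 0.0), math.pi)
```

Because only the translation is fixed by the coincident points, the rotation angle stays a free degree of freedom, matching the "free rotation around the positioning origin" described above.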
  • FIG. 6 is a schematic diagram of another flow of the method for virtualizing an item in an embodiment of the application.
  • the selecting the positioning origin for placing the article model in the virtual scene includes:
  • the first reference plane is selected, which can be a horizontal plane in the virtual scene or another plane.
  • the virtual entity model can be placed on the first datum plane.
  • the reason for establishing the first datum plane is that, during projection, the virtual entity model can be placed directly on the first datum plane of the virtual scene to establish a sense of space consistent with everyday experience.
  • the spatial relationship of the first reference plane in the real scene can be determined by the placement angle of the device (such as a mobile phone) during shooting or the image in the virtual scene.
  • the device can determine the spatial position relationship between the captured real scene and the virtual scene according to its built-in gyroscope when shooting, so as to find the first reference plane in the virtual scene; alternatively, the first reference plane can be obtained from relevant information in the image of the virtual scene, such as shadows in the image and the occlusion relationships between objects.
  • the selection method explained above is just an example for a better understanding of this step.
  • a second datum plane in the virtual entity model is selected, and the selection method can adopt the default method, or it can be selected artificially.
  • S403 Project the virtual entity model into the virtual scene in a manner of defining a spatial position relationship between the first reference surface and the second reference surface.
  • defining the first datum plane and the second datum plane serves to place the virtual entity model in the virtual scene in a spatial position relationship that matches people's expectations. For example, to place a computer on a desktop, the desktop is found as the first reference surface and the bottom surface of the computer as the second reference surface; when the two reference surfaces coincide and the virtual entity model of the computer is set above the desktop, the effect of a computer placed on the desktop is achieved.
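One way to realize the reference-plane alignment described above is to rotate the model so that the normal of its second reference plane matches the normal of the scene's first reference plane. The sketch below uses Rodrigues' rotation formula in plain Python; the normals are illustrative assumptions, and the degenerate case of exactly opposite normals is not handled:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotation_aligning(n_from, n_to):
    """3x3 matrix rotating unit vector n_from onto n_to (Rodrigues' formula)."""
    v = cross(n_from, n_to)
    c = dot(n_from, n_to)
    vx = [[0.0, -v[2], v[1]],
          [v[2], 0.0, -v[0]],
          [-v[1], v[0], 0.0]]
    vx2 = [[sum(vx[i][m] * vx[m][j] for m in range(3)) for j in range(3)]
           for i in range(3)]
    k = 1.0 / (1.0 + c)  # assumes n_from is not exactly opposite to n_to
    # R = I + [v]_x + [v]_x^2 / (1 + c)
    return [[(1.0 if i == j else 0.0) + vx[i][j] + k * vx2[i][j]
             for j in range(3)] for i in range(3)]

def apply(R, p):
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

# The model's bottom-face normal points along +x; the desktop's normal is +z.
n_model = normalize((1.0, 0.0, 0.0))
n_scene = normalize((0.0, 0.0, 1.0))
R = rotation_aligning(n_model, n_scene)
print(apply(R, n_model))  # rotated onto (0.0, 0.0, 1.0)
```

After the rotation, a translation moving a point of the second plane onto the first plane completes the "computer on the desktop" placement.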
  • FIG. 7 is an example diagram of a model projection effect of a virtual processing method of an item in some embodiments of the application.
  • the trees in the figure are the virtual entity model, and the desktop belongs to the virtual scene.
  • first, select the first datum plane in the virtual scene. It can be seen from the figure that the table surface is horizontal in the real scene; however, due to the shooting angle, it is not an orthographically projected horizontal plane in the picture.
  • the image of the desktop is perspective-distorted (near objects large, far objects small), so the real situation must be restored from the angle, focal length, and other shooting parameters, after which the plane where the desktop lies is selected as the first reference plane.
  • the second reference plane can select a reference plane that places the virtual solid model in a reasonable state. Specifically, referring to Fig. 7, it can be the ground where trees grow.
  • in this way, the virtual entity model appears in the desired, correct position in the virtual scene. Taking FIG. 7 as an example, the final effect is that the trees grow from the tabletop.
  • the definition of the spatial relationship between the first reference surface and the second reference surface can be performed in a preset and automatic manner, or it can be manually defined later.
  • FIG. 8 is a schematic diagram of another flow chart of a method for virtualizing an item in some embodiments of the application.
  • the editing operation on the editable external features of the virtual entity model may be drawing on the screen with a brush of any color, or any other operation that edits external features of the virtual entity model such as structure size, color, and pattern.
  • below, the painting operation is described as an example.
  • S501 Receive a painting operation on the screen, and convert screen coordinates corresponding to the painting operation into 3D coordinates.
  • S502 Determine a position corresponding to the painting operation in the virtual physical model according to the 3D coordinates.
  • select the grid cell to be colored or textured, then select the specific color or texture to apply.
  • the color or surface image of the grid cell then presents the selected color or texture; the above steps can also be reversed (first select the color or texture to present, then the cell), which achieves the same purpose.
  • the selected mode can be indicated by a brush, and letting the user edit with the brush makes the interaction more vivid.
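Steps S501 and S502 can be sketched as a screen-to-ray conversion under a simple pinhole-camera assumption, followed by a Möller–Trumbore ray/triangle test to decide which mesh cell, if any, the painting operation hits. The camera parameters and the triangle are illustrative assumptions:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def screen_to_ray(sx, sy, width, height, focal):
    """Pinhole camera at the origin looking down -z; returns a ray direction."""
    return (sx - width / 2.0, height / 2.0 - sy, -focal)

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Möller-Trumbore: True if the ray intersects triangle tri = (a, b, c)."""
    a, b, c = tri
    e1, e2 = sub(b, a), sub(c, a)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return False              # ray parallel to the triangle's plane
    inv = 1.0 / det
    tvec = sub(origin, a)
    u = dot(tvec, pvec) * inv
    if not 0.0 <= u <= 1.0:
        return False
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return dot(e2, qvec) * inv > eps  # hit must be in front of the camera

# A tap in the middle of a 400x400 screen, against a cell facing the camera.
ray = screen_to_ray(200, 200, 400, 400, focal=300.0)
tri = ((-1.0, -1.0, -5.0), (1.0, -1.0, -5.0), (0.0, 1.0, -5.0))
print(ray_hits_triangle((0.0, 0.0, 0.0), ray, tri))  # True: paint this cell
```

Looping this test over all mesh cells (or a spatial index of them) yields the model position corresponding to the painting operation; taps whose rays hit no cell fall outside the model and are ignored.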
  • FIG. 9 is an example diagram of editing the editable external features of the virtual entity model in some embodiments of the application, and FIG. 10 is an example diagram of the adjusted virtual entity model.
  • take coloring the hand model in the figure as an example: the user can color directly within a single grid area laid on the virtual entity model. The top of the thumb of the hand model in FIG. 10 shows the effect of coloring multiple grid cells.
  • the above-mentioned method can also be used to texture-map the virtual entity model, that is, each single grid area is mapped (a UV-mapping mode can be used), finally completing the map-editing work for the entire hand model.
  • FIG. 11 is a schematic flowchart of a method for virtualizing an item in some embodiments of the application. As shown in FIG. 11, in some embodiments, when the virtual entity model is displayed in step S101, the method further includes:
  • a control corresponding to the virtual entity model is created.
  • when the consumer sees a virtual entity model they wish to purchase, the consumer can be led into the item virtualization customization flow.
  • the control is an entrance that triggers this function.
  • S602 In response to an operation on the control, obtain the real scene through a terminal device, and trigger a step of constructing a virtual scene corresponding to the real scene according to the real scene acquired by the terminal device.
  • the subsequent steps are triggered by clicking the control, such as turning on the camera function of the mobile phone, collecting real scenes and constructing virtual scenes.
  • this embodiment is only an example for illustration, and other related operations may also be triggered.
  • FIG. 12 is a schematic diagram of an application interface of a method for virtualizing an item in some embodiments.
  • the "3D display" button 1201 in the middle of the sports shoe is the corresponding control in this embodiment. Clicking on this button will trigger the camera of the mobile phone to take a picture or take a video to create a virtual scene.
  • FIG. 13 is a schematic structural diagram of an apparatus for virtualizing an item in some embodiments of the application. As shown in FIG. 13, in some embodiments, an apparatus 1300 for virtualizing an item is provided, including:
  • the model acquisition module 101 is used to acquire and display a virtual entity model with editable and customized features constructed according to real objects;
  • the scene construction module 102 is configured to construct a virtual scene corresponding to the real scene according to the real scene;
  • the model projection module 103 is configured to project the virtual entity model into the virtual scene
  • the model editing module 104 is configured to receive an editing operation on the editable external features of the virtual entity model, adjust the editable external features according to the editing operation to obtain an adjusted virtual entity model, and display the adjusted virtual entity model in the virtual scene.
  • the virtual entity model is divided into multiple model regions
  • Each model area has independent editable external features.
  • Fig. 14 is a schematic diagram of another structure of a device for virtualizing an item in some embodiments of the application.
  • the model editing module 104 further includes:
  • the coordinate conversion unit 1041 is configured to receive a painting operation on the screen of the terminal device, and convert the screen coordinates corresponding to the painting operation into 3D coordinates;
  • the position determining unit 1042 is configured to determine the corresponding position of the painting operation in the virtual physical model according to the 3D coordinates;
  • the feature editing unit 1043 is configured to ignore the painting operation when the position determining unit 1042 determines that the position of the painting operation is outside the virtual entity model; when the position determining unit 1042 determines that the position of the painting operation is within the virtual entity model, the model area corresponding to the painting operation is determined, and the editable external features of that model area are modified according to the painting operation.
  • FIG. 15 is another schematic diagram of a device for virtualizing an item in some embodiments of the application.
  • the model projection module 103 further includes:
  • the scene point selection unit 301 is configured to select a positioning origin where the virtual entity model is placed in the virtual scene;
  • the model point selection unit 302 is configured to select the model origin on the virtual entity model
  • the first model projection unit 303 is configured to project the virtual entity model into the virtual scene in a manner that the positioning origin coincides with the model origin.
  • FIG. 16 is a schematic structural diagram of an apparatus for virtualizing an item in some embodiments of the application. As shown in FIG. 16, in some embodiments, the model projection module 103 further includes:
  • a scene plane selection unit 401 configured to select a first reference plane in the virtual scene
  • the model plane selection unit 402 is configured to select a second reference plane in the virtual solid model
  • the second model projection unit 403 is configured to project the virtual entity model into the virtual scene in a manner of defining the spatial position relationship between the first reference surface and the second reference surface.
  • Fig. 17 is a schematic diagram of another structure of the device for virtualizing an item in some embodiments of the application. As shown in FIG. 17, in some embodiments, the apparatus 1700 further includes:
  • the control display module 601 is configured to display the control corresponding to the virtual entity model when the model display module displays the virtual entity model;
  • the control triggering module 602 is configured to trigger the scene construction module 102 to acquire the real scene in response to the operation of the control, and execute the operation of constructing a virtual scene corresponding to the real scene according to the real scene.
  • an embodiment of the present application also provides an electronic device.
  • the electronic device 1800 includes a memory 1806, a processor 1802, a communication module 1804, a user interface 1810, and a communication bus 1808 for interconnecting these components.
  • the memory 1806 may be a high-speed random access memory, such as DRAM, SRAM, DDR RAM, or another random-access solid-state storage device; or a non-volatile memory, such as one or more magnetic disk storage devices, optical disc storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the user interface 1810 may include one or more output devices 1812, and one or more input devices 1814.
  • the memory 1806 stores an instruction set executable by the processor 1802, including a program for implementing the processing procedures in the foregoing embodiments; when the processor 1802 executes the program, the steps of the method for virtualizing an item are implemented.
  • a non-volatile computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method for virtualizing an item.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method, apparatus, electronic device, and storage medium for virtualizing an item. The method includes: acquiring and displaying a virtual entity model with editable external features constructed from a real item (S101); constructing, from a real scene acquired by a terminal device, a virtual scene corresponding to the real scene (S102); projecting the virtual entity model into the virtual scene (S103); and receiving an editing operation on the editable external features of the virtual entity model, adjusting the virtual entity model according to the editing operation to obtain an adjusted virtual entity model, and displaying the adjusted virtual entity model in the virtual scene (S104).

Description

Method, apparatus, electronic device, and storage medium for virtualizing an item
This application claims priority to Chinese Patent Application No. 201910150192.7, entitled "Method and apparatus for virtualized customization of an item, and storage medium thereof", filed with the China National Intellectual Property Administration on February 28, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of Internet technologies, and in particular to a method, apparatus, electronic device, and storage medium for virtualizing an item.
Background
As e-commerce continues to grow on the Internet, providing customers with a better shopping experience has become especially important for e-commerce platforms. E-commerce platforms generally just display photos of goods, which falls far short of the expectations of both manufacturers and consumers. In the prior art, a manufacturer designs a product and posts its photos on an e-commerce platform, and consumers decide whether to buy based on those photos.
On some e-commerce platforms, real goods can already be turned into virtual entity models through virtual modeling technology, and the virtual entity models can be displayed from multiple angles, which allows goods to be displayed more comprehensively and improves display efficiency.
Summary
Embodiments of this application provide a method for virtualizing an item, performed by a terminal device. The method includes:
acquiring and displaying a virtual entity model with editable external features constructed from a real item;
constructing, from a real scene acquired by the terminal device, a virtual scene corresponding to the real scene;
projecting the virtual entity model into the virtual scene; and
receiving an editing operation on the editable external features of the virtual entity model, adjusting the editable external features according to the editing operation to obtain an adjusted virtual entity model, and displaying the adjusted virtual entity model in the virtual scene.
Embodiments of this application further provide an apparatus for virtualizing an item, including:
a model acquisition module configured to acquire and display a virtual entity model with editable external features constructed from a real item;
a scene construction module configured to construct, from a real scene, a virtual scene corresponding to the real scene;
a model projection module configured to project the virtual entity model into the virtual scene; and
a model editing module configured to receive an editing operation on the editable external features of the virtual entity model, adjust the editable external features according to the editing operation to obtain an adjusted virtual entity model, and display the adjusted virtual entity model in the virtual scene.
Embodiments of this application further provide an electronic device, including:
a processor; and
a memory connected to the processor, the memory storing machine-readable instructions executable by the processor to perform the above method for virtualizing an item.
Embodiments of this application provide a non-volatile computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for virtualizing an item.
As can be seen from the above, the method for virtualizing an item of this application can construct a real item into a virtual entity model with editable external features, project the virtual entity model into a virtual scene constructed from a real scene, and edit the editable external features of the virtual entity model according to its projected effect in the real scene, obtaining a customized virtual entity model, i.e., the adjusted virtual entity model. Items can thereby be displayed more effectively, improving display efficiency. Finally, an actual product can be manufactured from the customized virtual entity model.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for virtualizing an item in some embodiments of this application;
FIG. 2 is another schematic flowchart of the method for virtualizing an item in some embodiments of this application;
FIG. 3 is an example diagram of a virtual entity model constructed from a real item in some embodiments of this application;
FIG. 4 is an example diagram of dividing a virtual entity model into multiple model regions in some embodiments of this application;
FIG. 5 is another schematic flowchart of the method for virtualizing an item in some embodiments of this application;
FIG. 6 is yet another schematic flowchart of the method for virtualizing an item in some embodiments of this application;
FIG. 7 is an example diagram of a model projection effect of the method for virtualizing an item in some embodiments of this application;
FIG. 8 is another schematic flowchart of the method for virtualizing an item in some embodiments of this application;
FIG. 9 is an example diagram of performing an editing operation on the editable external features of a virtual entity model in some embodiments of this application;
FIG. 10 is an example diagram of an adjusted virtual entity model in some embodiments of this application;
FIG. 11 is another schematic flowchart of the method for virtualizing an item in some embodiments of this application;
FIG. 12 is a schematic diagram of an application interface of the method for virtualizing an item in some embodiments of this application;
FIG. 13 is a schematic structural diagram of an apparatus for virtualizing an item in some embodiments of this application;
FIG. 14 is another schematic structural diagram of the apparatus for virtualizing an item in some embodiments of this application;
FIG. 15 is another schematic structural diagram of the apparatus for virtualizing an item in some embodiments of this application;
FIG. 16 is another schematic structural diagram of the apparatus for virtualizing an item in some embodiments of this application;
FIG. 17 is another schematic structural diagram of the apparatus for virtualizing an item in some embodiments of this application;
FIG. 18 is a schematic structural diagram of an electronic device in some embodiments of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments.
On some e-commerce platforms, real items can already be turned into virtual entity models through virtual modeling technology, and the virtual entity models can be displayed from multiple angles. At present, however, the virtual entity model remains at the display stage only: users cannot make any changes to it according to their own needs.
The method for virtualizing an item of this application can construct a real item into a virtual entity model with editable customization features, and perform virtualized customization after comparing the effect in a virtual scene constructed from a real scene, obtaining a customized virtual entity model. Finally, an actual product can be manufactured from the customized virtual entity model, which solves the technical problem that items cannot be customized according to their effect in a scene.
FIG. 1 is a schematic flowchart of a method for virtualizing an item in some embodiments of this application. The method may be performed by any computer device with data processing capability, for example a terminal device; the terminal device may be an intelligent terminal device such as a personal computer (PC) or a laptop, or an intelligent mobile terminal device such as a smartphone or a tablet. As shown in FIG. 1, in some embodiments, a method for virtualizing an item is disclosed, including:
S101: acquiring and displaying a virtual entity model with editable external features constructed from a real item.
In some embodiments, the construction of the virtual entity model may be completed by another terminal device or a server and stored on the server; after receiving an instruction to display the virtual entity model, the terminal device may acquire the virtual entity model from the server and display it.
In this step, the real item (product) is first modeled. It should be noted that the real item in this step is not necessarily a concrete physical object; it may also be a designer's conception of the item. The virtual entity model here is a two-dimensional (2D) or three-dimensional (3D) entity model of the item. Common modeling methods include parametric solid modeling based on dimension parameters, and reverse-engineering modeling based on scanning a physical object.
In some embodiments, the editable external features of the virtual entity model may be features such as the structural dimensions, surface colors, or patterns of the virtual entity model, and may also be called editable customization features, or customization features. The structural dimensions of the virtual entity model can serve as a customization feature; if they do, the specific dimension parameters should be editable. It should also be noted that, besides structure, other features to be customized may be constructed, such as the surface colors or patterns of the item. To facilitate later editing, the virtual entity model may be designed in white so that it is easy to paint during subsequent customization. Structure, color, and texture mapping are only examples given to better illustrate this step and are not specific limitations on the customization features.
In some embodiments, the virtual entity model is divided into multiple model regions, each model region having independent editable external features, that is, the external features of each model region can be edited independently.
For example, the virtual entity model may be divided into multiple model regions according to its structure; alternatively, a mesh may be laid over the surface of the virtual entity model and the virtual entity model divided into multiple model regions according to the mesh, for example with each mesh cell serving as one model region.
S102: constructing, from a real scene acquired by the terminal device, a virtual scene corresponding to the real scene.
In this step, a static image captured from the real scene by the camera of the terminal device may be used as the virtual scene, or a dynamic video recorded from the real scene may be used as the virtual scene. The above photographing and video-recording methods of acquiring the real scene are only examples; the real scene may of course also be acquired in other ways to construct the virtual scene.
S103: projecting the virtual entity model into the virtual scene.
This step mainly establishes a connection between the virtual entity model and the virtual scene, improving the user experience. It should also be noted that the virtual scene visually affects the virtual entity model in many respects, such as space, color, and layout.
S104: receiving an editing operation on the editable external features of the virtual entity model, adjusting the virtual entity model according to the editing operation to obtain an adjusted virtual entity model, and displaying the adjusted virtual entity model in the virtual scene.
The effect in this step can be understood as the effect in terms of space, color, layout, and so on. For example, if the real item is a sofa, the virtual entity model constructed from the sofa can be projected into a virtual scene constructed from reality, where the virtual scene may be the consumer's home environment. Once the virtual entity model is projected into the virtual scene, the true effect of the virtual entity model in the realistic virtual scene is displayed. The consumer can edit the customization features according to this true effect and, after editing, can repeatedly compare the effect of the virtual entity model in the virtual scene.
In this embodiment, a merchant models its actual real item (product) to generate a virtual entity model; the modeling may use dimension parameters or physical scanning. The virtual entity model is displayed on the e-commerce platform, and the consumer can acquire a real scene by photographing or video recording to construct a virtual scene. After the virtual entity model is projected into the virtual scene, the customization features are edited according to the comparison effect obtained from the spatial, color, and layout relationships of the virtual entity model in the virtual scene; after repeated comparison, the desired customized virtual entity model is obtained. It should be noted that the above effect may in fact be a quantitative, parametric relationship, for example the layout of the sofa placed in the space and whether the sofa would interfere with other objects in the space. It may also be a perceptual relationship, for example which color is better suited to the virtual scene. Even where neither relationship exists, customizing the virtual entity model while it is projected into the virtual scene can still improve the user experience. After the finally edited virtual entity model is obtained, it can be manufactured into a corresponding product, realizing not only virtualized customization but also customization of the actual product.
FIG. 2 is another schematic flowchart of the method for virtualizing an item in some embodiments of this application. As shown in FIG. 2, in some embodiments, before step S101 the method further includes constructing, from a real item, a virtual entity model with editable external features, which specifically includes:
S201: constructing a virtual entity model from the real item.
In this step, a virtual entity model is first constructed from the real item. This step actually provides the carrier on which the subsequent customization features are set. Specifically, if a customization feature is, for example, a model dimension parameter, then the entity model features of the virtual entity model are the carrier on which that customization feature is constructed. In other words, customization features exist attached to the virtual entity model.
S202: dividing the virtual entity model into multiple model regions.
In this step, the division may be performed in either of the following ways:
S2021: dividing the virtual entity model into multiple model regions according to its structure.
Dividing the virtual entity model into different entity model regions provides the model basis for subsequently creating customization features per region. Taking a sofa as an example, the virtual entity model of the sofa can be divided into regions such as the armrests and the backrest.
S2022: laying a mesh over the surface of the virtual entity model, and dividing the virtual entity model into multiple model regions according to the mesh.
Meshing the virtual entity model in this step allows the colors of the model to be edited more conveniently. The mesh is laid over the surface of the virtual entity model. In some embodiments, a triangular mesh may be used, because in a virtual entity model a triangular mesh cell can form a triangular face from the (x, y, z) coordinates of three points in space, or the (x, y) coordinates of three points on a plane; the greatest advantage of triangular meshing is that it reduces the amount of computation during rendering.
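The triangular meshing of S2022 can be sketched as follows. This is a minimal illustration only (the class and method names are hypothetical, not taken from the application): each triangle of the surface mesh is one model region carrying its own independently editable color, initialized white as suggested above.

```python
class MeshRegion:
    def __init__(self, vertices):
        # vertices: three (x, y, z) points forming one triangular face
        self.vertices = vertices
        self.color = (255, 255, 255)  # start white to ease later painting

class MeshedModel:
    def __init__(self, triangles):
        # each triangle of the laid mesh becomes one independent model region
        self.regions = [MeshRegion(t) for t in triangles]

    def paint_region(self, index, color):
        # edit the external feature (color) of a single model region
        self.regions[index].color = color

# two triangles approximating a tiny patch of the model surface
model = MeshedModel([
    [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    [(1, 0, 0), (1, 1, 0), (0, 1, 0)],
])
model.paint_region(0, (0, 0, 0))  # paint the first cell black
print(model.regions[0].color)     # the painted cell
print(model.regions[1].color)     # the neighboring cell is untouched
```

Because each region holds its own state, editing one cell never affects its neighbors, which is exactly the independence of editable external features described above.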
S203: creating independent editable external features for each of the divided model regions.
Taking the sofa again as an example, the user can independently edit the size and color of the armrests, and can likewise independently edit the size and color of the backrest; features such as these sizes and colors are the customization features (i.e., the editable external features) created for the independent model regions.
To better illustrate this embodiment, FIG. 3 is an example diagram of a virtual entity model constructed from a real item in some embodiments of this application, and FIG. 4 is an example diagram of dividing the virtual entity model into multiple model regions in some embodiments of this application. As shown in FIG. 3 and FIG. 4, the virtual entity model is a hand model with a mesh laid over its surface. Each mesh cell can have its own editable external features; that is, the external features of each cell can be edited individually, for example the color to be painted on the model during customization. In some embodiments, the hand model in the figures can also simply be divided into independent model regions such as fingers, palm, and wrist; the fingers can further be divided by knuckle, the thumb into two segments and the other fingers into three. In FIG. 4, the thumb above the first knuckle has been painted black. From these examples it can be seen that dividing the virtual entity model into model regions and then creating customization features for the independently divided regions makes editing the customization features more convenient.
FIG. 5 is another schematic flowchart of the method for virtualizing an item in some embodiments of this application. As shown in FIG. 5, in some embodiments, projecting the virtual entity model into the virtual scene includes:
S301: selecting, in the virtual scene, a positioning origin at which the virtual entity model is placed.
In projecting the virtual entity model in this step, a positioning origin should be selected in the virtual scene. The positioning origin may be selected according to a specific preset: the preset may be to select the center point of the virtual scene, or to select according to particular conditions of the virtual scene, for example first identifying the floor region in the virtual scene and then selecting the center point of the floor region. Of course, the above preset may be any default method, may be set manually, or may even be random.
S302: selecting a model origin on the virtual entity model.
In this step, a point, i.e., the model origin, also needs to be selected on the virtual entity model. It may be selected randomly, as the geometric center of the virtual entity model, or in other ways.
S303: projecting the virtual entity model into the virtual scene such that the positioning origin coincides with the model origin.
In this step, the model origin coincides with the positioning origin. Projecting the virtual entity model by point coincidence allows the model to rotate freely around the positioning origin, and this free rotation makes it convenient to edit the customization features of the virtual entity model. When the virtual entity model is projected into the virtual scene using this positioning method, apart from the coincidence of the selected points, the other spatial degrees of freedom may be projected in a default or random manner, which will not be described again here.
This embodiment actually provides a way of positioning the virtual entity model in the virtual scene. With this method, the virtual entity model can rotate freely, making it convenient to edit its customization features.
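Steps S301 to S303 can be sketched as two small transforms; this is an illustrative reduction (function names are hypothetical, and the application does not prescribe any particular math): a translation that makes the model origin coincide with the positioning origin, and a rotation about the vertical axis through that origin, which is the "free rotation around the positioning origin" described above.

```python
import math

def project_by_origin(model_vertices, model_origin, anchor):
    # S301-S303: translate every vertex so that the model origin
    # coincides with the positioning origin (anchor) in the scene
    dx = anchor[0] - model_origin[0]
    dy = anchor[1] - model_origin[1]
    dz = anchor[2] - model_origin[2]
    return [(x + dx, y + dy, z + dz) for (x, y, z) in model_vertices]

def rotate_about_anchor(vertices, anchor, angle):
    # free rotation around the vertical axis through the anchor point,
    # which makes the placed model easy to inspect and edit
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for (x, y, z) in vertices:
        rx, rz = x - anchor[0], z - anchor[2]
        out.append((anchor[0] + c * rx + s * rz, y, anchor[2] - s * rx + c * rz))
    return out

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
anchor = (5.0, 0.0, 5.0)
placed = project_by_origin(verts, model_origin=(0.0, 0.0, 0.0), anchor=anchor)
print(placed[0])  # the model origin now sits exactly on the anchor
spun = rotate_about_anchor(placed, anchor, math.pi)  # half-turn in place
```

Note that the anchor point itself is a fixed point of the rotation, so however the user spins the model, it stays attached to the selected positioning origin.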
FIG. 6 is another schematic flowchart of the method for virtualizing an item in embodiments of this application. As shown in FIG. 6, in some embodiments, projecting the virtual entity model into the virtual scene includes:
S401: selecting a first reference plane in the virtual scene.
The first reference plane selected in this step may be a horizontal plane in the virtual scene, or another plane. The virtual entity model can be placed on the first reference plane. The reason for establishing the first reference plane is that, during projection, the virtual entity model can be placed directly on the first reference plane of the virtual scene, establishing a default projection relationship that matches people's everyday sense of space. The spatial relationship of the first reference plane in the real scene may be determined from the orientation at which the device (for example a mobile phone) was held during capture, or from the images of the virtual scene. Specifically, during capture the device can determine, via its built-in gyroscope, the spatial positional relationship between the captured real scene and the virtual scene, thereby finding the first reference plane in the virtual scene; alternatively, relevant information can be extracted from the images of the virtual scene to derive the first reference plane, such information being, for example, shadows in the image or occlusion relationships between objects. The selection methods explained above are only examples given to better understand this step.
S402: selecting a second reference plane on the virtual entity model.
In this step, a second reference plane is selected on the virtual entity model, either by default or manually.
S403: projecting the virtual entity model into the virtual scene by defining the spatial positional relationship between the first reference plane and the second reference plane.
In this step, the first and second reference planes are defined in order to construct a spatial positional relationship of the virtual entity model in the virtual scene that matches people's ordinary sense of space. For example, to place a computer on a desk, the desktop is taken as the first reference plane and the bottom face of the computer as the second reference plane; when the two planes coincide and the virtual entity model of the computer is placed above the desktop, the effect of the computer sitting on the desk is achieved.
This embodiment actually provides a way of positioning the virtual entity model in the virtual scene in a reasonable spatial manner. To better explain this embodiment, FIG. 7 is an example diagram of a model projection effect of the method for virtualizing an item in some embodiments of this application. As shown in FIG. 7, the tree in the figure, i.e., the virtual entity model, is placed on a desktop. First, the first reference plane is selected in the virtual scene. As can be seen from the figure, the tabletop should be a horizontal surface in the real scene, but due to the shooting angle it is not an orthographically projected horizontal plane in the picture, and because near objects appear larger than far ones, the desktop image is distorted; therefore the true situation of the image must be restored according to parameters such as the shooting angle and focal length, and the plane of the desktop is then selected as the first reference plane. The second reference plane can be one that places the virtual entity model in a reasonable state; specifically, in FIG. 7, it may be the ground from which the tree grows. Finally, once the spatial relationship between the first and second reference planes is defined, the required correct position of the virtual entity model in the virtual scene can be seen. Taking FIG. 7 as an example, the final effect is the tree growing out of the desktop. It should also be noted that defining the spatial relationship between the first and second reference planes may be performed automatically by preset, or manually afterwards.
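The plane-alignment of S401 to S403 can be sketched in its simplest form. This is an illustrative reduction under a stated assumption: both reference planes are horizontal, so aligning them reduces to a vertical shift (the function name and coordinate convention are hypothetical, not from the application).

```python
def align_to_reference_plane(model_vertices, model_plane_y, scene_plane_y):
    # S401-S403, simplified to horizontal planes: shift the model so that
    # its second reference plane (e.g. the bottom face, at model_plane_y)
    # lands exactly on the scene's first reference plane (scene_plane_y)
    dy = scene_plane_y - model_plane_y
    return [(x, y + dy, z) for (x, y, z) in model_vertices]

# a "computer" whose bottom face is at y = -0.1 in model space,
# placed on a desktop detected at y = 0.75 in the virtual scene
computer = [(0.0, -0.1, 0.0), (0.3, 0.2, 0.2)]
placed = align_to_reference_plane(computer, model_plane_y=-0.1, scene_plane_y=0.75)
print(placed[0][1])  # the bottom face now rests on the desktop plane
```

In the general case of FIG. 7, where the captured desktop is tilted by perspective, the same idea applies with a full rigid transform (rotation plus translation) that maps the second reference plane onto the first.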
FIG. 8 is another schematic flowchart of the method for virtualizing an item in some embodiments of this application. As shown in FIG. 8, in some embodiments, the editing operation on the editable external features of the virtual entity model may be an operation of painting on the screen with a brush of any color, or another operation that edits external features such as the structural dimensions, colors, or patterns of the virtual entity model. The following description takes the painting operation as an example. Receiving an editing operation on the editable external features of the virtual entity model and adjusting the virtual entity model according to the editing operation includes:
S501: receiving a painting operation on the screen, and converting the screen coordinates corresponding to the painting operation into 3D coordinates.
S502: determining, according to the 3D coordinates, the position in the virtual entity model corresponding to the painting operation.
S503: if the position is outside the virtual entity model, ignoring the painting operation.
S504: if the position is inside the virtual entity model, determining the model region corresponding to the painting operation, and modifying the editable features of that model region according to the painting operation.
In this embodiment, a mesh cell to be colored or textured is selected, and then the specific color or texture to apply is chosen; the color or surface image of the cell then shows the selected color or texture. The above steps may also be swapped, first choosing the color or texture to be shown, which likewise achieves the purpose of this step. The selection can be indicated with a brush, and editing with a brush is more intuitive for the user.
To further illustrate this embodiment, FIG. 9 is an example diagram of performing an editing operation on the editable external features of a virtual entity model in some embodiments of this application, and FIG. 10 is an example diagram of the adjusted virtual entity model in some embodiments of this application. As shown in FIG. 9 and FIG. 10, taking coloring of the hand model as an example, the user can color directly within an individual mesh cell laid over the virtual entity model; the top of the thumb in FIG. 10 shows the effect of coloring multiple cells.
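Steps S501 to S504 can be sketched as a hit test: the screen touch, once converted to a 3D ray, either pierces a mesh region (which is then recolored) or misses the model (and is ignored). The application does not specify an intersection algorithm; the standard Möller–Trumbore ray-triangle test is used here purely for illustration, and all names are hypothetical.

```python
def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    # Moller-Trumbore intersection test; returns True when the ray
    # cast from the screen touch passes through the triangle cell
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    e1 = (bx - ax, by - ay, bz - az)
    e2 = (cx - ax, cy - ay, cz - az)
    def cross(u, v):
        return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])
    def dot(u, v):
        return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return False
    t = (origin[0]-ax, origin[1]-ay, origin[2]-az)
    u = dot(t, p) / det
    if u < 0 or u > 1:
        return False
    q = cross(t, e1)
    v = dot(direction, q) / det
    if v < 0 or u + v > 1:      # hit point outside the triangle
        return False
    return dot(e2, q) / det >= eps  # intersection in front of the viewer

def paint_touch(regions, origin, direction, color):
    # S502-S504: find the mesh region the painting operation lands in;
    # modify its color if hit, otherwise ignore the operation (S503)
    for region in regions:
        if ray_hits_triangle(origin, direction, region["tri"]):
            region["color"] = color
            return region
    return None  # touch was outside the virtual entity model: ignored

regions = [{"tri": [(0, 0, 5), (1, 0, 5), (0, 1, 5)], "color": "white"}]
hit = paint_touch(regions, (0.2, 0.2, 0.0), (0.0, 0.0, 1.0), "black")
print(hit is not None, regions[0]["color"])
miss = paint_touch(regions, (5.0, 5.0, 0.0), (0.0, 0.0, 1.0), "red")
print(miss)
```

The conversion from screen coordinates to the ray's origin and direction (S501) would come from the renderer's unprojection of the touch point through the camera, which is omitted here.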
The mesh spatial coordinates and texture-space coordinates of the hand model in the method for virtualizing an item in some embodiments of this application are listed below. In this embodiment, the above method can also be used to texture the virtual entity model, that is, applying a texture per individual mesh cell (a UV-mapping mode may be used), finally completing the texture editing of the entire hand model.
(The tables of the hand model's mesh spatial coordinates and texture-space coordinates are provided as images in the original publication: Figures PCTCN2020070203-appb-000001 through PCTCN2020070203-appb-000003.)
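The coordinate tables above pair each mesh vertex's spatial (x, y, z) coordinates with (u, v) texture coordinates. A minimal sketch of how such a pairing drives per-cell texturing follows; the lookup values, texture, and sampling function are illustrative assumptions, not data from the published tables.

```python
# vertex -> UV lookup, in the spirit of the published coordinate tables
uv_of_vertex = {
    (0.0, 0.0, 0.0): (0.0, 0.0),
    (1.0, 0.0, 0.0): (1.0, 0.0),
    (0.0, 1.0, 0.0): (0.0, 1.0),
}

def sample_texture(texture, u, v):
    # nearest-neighbour sample of a tiny texture, (u, v) in [0, 1]
    h, w = len(texture), len(texture[0])
    col = min(int(u * w), w - 1)
    row = min(int(v * h), h - 1)
    return texture[row][col]

texture = [["red", "green"],
           ["blue", "white"]]

# texture one triangular cell by sampling at its vertices' mean UV
cell = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
us, vs = zip(*(uv_of_vertex[p] for p in cell))
u_c, v_c = sum(us) / 3, sum(vs) / 3  # UV at the cell centroid
print(sample_texture(texture, u_c, v_c))
```

A full renderer would interpolate UVs across each triangle rather than sample once per cell, but the per-cell mapping shown here matches the "texture per mesh region" editing described in this embodiment.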
FIG. 11 is a schematic flowchart of the method for virtualizing an item in some embodiments of this application. As shown in FIG. 11, in some embodiments, when the virtual entity model is displayed in step S101, the method further includes:
S601: displaying a control corresponding to the virtual entity model.
In this step, a control corresponding to the virtual entity model is created. When a consumer sees the virtual entity model of an item they wish to purchase, the control can lead the consumer into the virtualized item customization flow; the identifier is an entry point that triggers this function.
S602: in response to an operation on the control, acquiring the real scene through the terminal device, and triggering the step of constructing, from the real scene acquired by the terminal device, a virtual scene corresponding to the real scene.
In this step, clicking the control triggers the subsequent steps, for example turning on the camera function of a mobile phone and capturing the real scene to construct the virtual scene. This embodiment is only illustrative; other related operations may also be triggered.
To better explain this embodiment, a concrete example of a sneaker is used. FIG. 12 is a schematic diagram of an application interface of the method for virtualizing an item in some embodiments. As shown in FIG. 12, the "3D display" button 1201 in the middle of the sneaker is the corresponding control in this embodiment; clicking it triggers the phone's camera to take a photo or video for building the virtual scene.
FIG. 13 is a schematic structural diagram of the apparatus for virtualizing an item in some embodiments of this application. As shown in FIG. 13, in some embodiments, an apparatus 1300 for virtualizing an item is provided, including:
a model acquisition module 101 configured to acquire and display a virtual entity model with editable customization features constructed from a real item;
a scene construction module 102 configured to construct, from a real scene, a virtual scene corresponding to the real scene;
a model projection module 103 configured to project the virtual entity model into the virtual scene; and
a model editing module 104 configured to receive an editing operation on the editable external features of the virtual entity model, adjust the editable external features according to the editing operation to obtain an adjusted virtual entity model, and display the adjusted virtual entity model in the virtual scene.
In some embodiments, the virtual entity model is divided into multiple model regions;
each model region has independent editable external features.
FIG. 14 is another schematic structural diagram of the apparatus for virtualizing an item in some embodiments of this application. As shown in FIG. 14, the model editing module 104 further includes:
a coordinate conversion unit 1041 configured to receive a painting operation on the screen of the terminal device and convert the screen coordinates corresponding to the painting operation into 3D coordinates;
a position determination unit 1042 configured to determine, according to the 3D coordinates, the position in the virtual entity model corresponding to the painting operation; and
a feature editing unit 1043 configured to ignore the painting operation when the position determination unit 1042 determines that the position of the painting operation is outside the virtual entity model, and, when the position determination unit 1042 determines that the position of the painting operation is inside the virtual entity model, to determine the model region corresponding to the painting operation and modify the editable external features of that model region according to the painting operation.
FIG. 15 is another schematic diagram of the apparatus for virtualizing an item in some embodiments of this application. As shown in FIG. 15, in some embodiments, the model projection module 103 further includes:
a scene point selection unit 301 configured to select, in the virtual scene, a positioning origin at which the virtual entity model is placed;
a model point selection unit 302 configured to select a model origin on the virtual entity model; and
a first model projection unit 303 configured to project the virtual entity model into the virtual scene such that the positioning origin coincides with the model origin.
FIG. 16 is a schematic structural diagram of the apparatus for virtualizing an item in some embodiments of this application. As shown in FIG. 16, in some embodiments, the model projection module 103 further includes:
a scene plane selection unit 401 configured to select a first reference plane in the virtual scene;
a model plane selection unit 402 configured to select a second reference plane on the virtual entity model; and
a second model projection unit 403 configured to project the virtual entity model into the virtual scene by defining the spatial positional relationship between the first reference plane and the second reference plane.
FIG. 17 is yet another schematic structural diagram of the apparatus for virtualizing an item in some embodiments of this application. As shown in FIG. 17, in some embodiments, the apparatus 1700 further includes:
a control display module 601 configured to display, when the model display module displays the virtual entity model, a control corresponding to the virtual entity model; and
a control triggering module 602 configured to, in response to an operation on the control, trigger the scene construction module 102 to acquire the real scene and perform the operation of constructing, from the real scene, a virtual scene corresponding to the real scene.
In addition, an embodiment of this application further provides an electronic device. As shown in FIG. 18, the electronic device 1800 includes a memory 1806, a processor 1802, a communication module 1804, a user interface 1810, and a communication bus 1808 for interconnecting these components.
The memory 1806 may be a high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state storage devices; or a non-volatile memory, such as one or more magnetic disk storage devices, optical disc storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The user interface 1810 may include one or more output devices 1812 and one or more input devices 1814.
The memory 1806 stores an instruction set executable by the processor 1802, including a program for implementing the processing procedures in the foregoing embodiments; when the processor 1802 executes the program, the steps of the method for virtualizing an item are implemented.
In some embodiments, a non-volatile computer-readable storage medium is disclosed, storing a computer program which, when executed by a processor, implements the steps of the method for virtualizing an item.
The above are only preferred embodiments of this application and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall fall within the scope of protection of this application.

Claims (14)

  1. A method for virtualizing an item, performed by a terminal device, the method comprising:
    acquiring and displaying a virtual entity model with editable external features constructed from a real item;
    constructing, from a real scene acquired by the terminal device, a virtual scene corresponding to the real scene;
    projecting the virtual entity model into the virtual scene; and
    receiving an editing operation on the editable external features of the virtual entity model, adjusting the editable external features according to the editing operation to obtain an adjusted virtual entity model, and displaying the adjusted virtual entity model in the virtual scene.
  2. The method for virtualizing an item according to claim 1, wherein:
    the virtual entity model is divided into multiple model regions; and
    each model region has independent editable external features.
  3. The method for virtualizing an item according to claim 2, wherein the receiving an editing operation on the editable external features of the virtual entity model and adjusting the virtual entity model according to the editing operation comprises:
    receiving a painting operation on a screen of the terminal device, and converting screen coordinates corresponding to the painting operation into 3D coordinates;
    determining, according to the 3D coordinates, a position in the virtual entity model corresponding to the painting operation;
    if the position is outside the virtual entity model, ignoring the painting operation; and
    if the position is inside the virtual entity model, determining a model region corresponding to the painting operation, and modifying the editable external features of the model region according to the painting operation.
  4. The method for virtualizing an item according to claim 1, wherein the projecting the virtual entity model into the virtual scene comprises:
    selecting, in the virtual scene, a positioning origin at which the virtual entity model is placed;
    selecting a model origin on the virtual entity model; and
    projecting the virtual entity model into the virtual scene such that the positioning origin coincides with the model origin.
  5. The method for virtualizing an item according to claim 1, wherein the projecting the virtual entity model into the virtual scene comprises:
    selecting a first reference plane in the virtual scene;
    selecting a second reference plane on the virtual entity model; and
    projecting the virtual entity model into the virtual scene by defining the spatial positional relationship between the first reference plane and the second reference plane.
  6. The method for virtualizing an item according to claim 1, further comprising:
    when displaying the virtual entity model, displaying a control corresponding to the virtual entity model; and
    in response to an operation on the control, acquiring the real scene through the terminal device, and triggering the step of constructing, from the real scene acquired by the terminal device, the virtual scene corresponding to the real scene.
  7. An apparatus for virtualizing an item, comprising:
    a model acquisition module configured to acquire and display a virtual entity model with editable external features constructed from a real item;
    a scene construction module configured to construct, from a real scene, a virtual scene corresponding to the real scene;
    a model projection module configured to project the virtual entity model into the virtual scene; and
    a model editing module configured to receive an editing operation on the editable external features of the virtual entity model, adjust the editable external features according to the editing operation to obtain an adjusted virtual entity model, and display the adjusted virtual entity model in the virtual scene.
  8. The apparatus for virtualizing an item according to claim 7, wherein: the virtual entity model is divided into multiple model regions; and
    each model region has independent editable external features.
  9. The apparatus for virtualizing an item according to claim 8, wherein the model editing module further comprises:
    a coordinate conversion unit configured to receive a painting operation on a screen of the terminal device and convert screen coordinates corresponding to the painting operation into 3D coordinates;
    a position determination unit configured to determine, according to the 3D coordinates, a position in the virtual entity model corresponding to the painting operation; and
    a feature editing unit configured to ignore the painting operation when the position determination unit determines that the position of the painting operation is outside the virtual entity model, and, when the position determination unit determines that the position of the painting operation is inside the virtual entity model, to determine a model region corresponding to the painting operation and modify the editable external features of the model region according to the painting operation.
  10. The apparatus for virtualizing an item according to claim 7, wherein the model projection module further comprises:
    a scene point selection module configured to select, in the virtual scene, a positioning origin at which the virtual entity model is placed;
    a model point selection module configured to select a model origin on the virtual entity model; and
    a projection module configured to project the virtual entity model into the virtual scene such that the positioning origin coincides with the model origin.
  11. The apparatus for virtualizing an item according to claim 7, wherein the model projection module further comprises:
    a scene plane selection module configured to select a first reference plane in the virtual scene;
    a model plane selection module configured to select a second reference plane on the virtual entity model; and
    a projection module configured to project the virtual entity model into the virtual scene by defining the spatial positional relationship between the first reference plane and the second reference plane.
  12. The apparatus for virtualizing an item according to claim 7, further comprising:
    a control display module configured to display, when the model display module displays the virtual entity model, a control corresponding to the virtual entity model; and
    a control triggering module configured to, in response to an operation on the control, trigger the scene construction module to acquire the real scene and perform the operation of constructing, from the real scene, a virtual scene corresponding to the real scene.
  13. An electronic device, comprising:
    a processor; and
    a memory connected to the processor, the memory storing machine-readable instructions executable by the processor to perform the method according to any one of claims 1 to 7.
  14. A non-volatile computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for virtualizing an item according to any one of claims 1 to 6.
PCT/CN2020/070203 2019-02-28 2020-01-03 物品虚拟化处理方法、装置、电子设备及存储介质 WO2020173222A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/430,853 US11978111B2 (en) 2019-02-28 2020-01-03 Object virtualization processing method and device, electronic device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910150192.7 2019-02-28
CN201910150192.7A CN111626803A (zh) 2019-02-28 2019-02-28 一种物品虚拟化定制方法、装置以及其存储介质

Publications (1)

Publication Number Publication Date
WO2020173222A1 true WO2020173222A1 (zh) 2020-09-03

Family

ID=72239050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/070203 WO2020173222A1 (zh) 2019-02-28 2020-01-03 物品虚拟化处理方法、装置、电子设备及存储介质

Country Status (3)

Country Link
US (1) US11978111B2 (zh)
CN (1) CN111626803A (zh)
WO (1) WO2020173222A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113274734A (zh) * 2021-06-08 2021-08-20 网易(杭州)网络有限公司 虚拟场景的生成方法、装置和终端设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1801218A (zh) * 2004-12-31 2006-07-12 英业达股份有限公司 三维对象渲染系统及方法
CN106683193A (zh) * 2016-12-07 2017-05-17 歌尔科技有限公司 一种三维模型的设计方法和设计装置
US20180101986A1 (en) * 2016-10-10 2018-04-12 Aaron Mackay Burns Drawing in a 3d virtual reality environment
CN108269307A (zh) * 2018-01-15 2018-07-10 歌尔科技有限公司 一种增强现实交互方法及设备
CN108510597A (zh) * 2018-03-09 2018-09-07 北京小米移动软件有限公司 虚拟场景的编辑方法、装置及非临时性计算机可读存储介质
CN109087369A (zh) * 2018-06-22 2018-12-25 腾讯科技(深圳)有限公司 虚拟对象显示方法、装置、电子装置及存储介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6958752B2 (en) * 2001-01-08 2005-10-25 Sensable Technologies, Inc. Systems and methods for three-dimensional modeling
CA2496773A1 (en) * 2001-09-12 2003-03-20 Volume Interactions Pte Ltd Interaction with a three-dimensional computer model
US20100013833A1 (en) * 2008-04-14 2010-01-21 Mallikarjuna Gandikota System and method for modifying features in a solid model
CN102800121B (zh) * 2012-06-18 2014-08-06 浙江大学 一种交互编辑虚拟人群场景中虚拟个体的方法
US9058693B2 (en) * 2012-12-21 2015-06-16 Dassault Systemes Americas Corp. Location correction of virtual objects
US20150062123A1 (en) * 2013-08-30 2015-03-05 Ngrain (Canada) Corporation Augmented reality (ar) annotation computer system and computer-readable medium and method for creating an annotated 3d graphics model
CN107016704A (zh) * 2017-03-09 2017-08-04 杭州电子科技大学 一种基于增强现实的虚拟现实实现方法
US10706636B2 (en) * 2017-06-26 2020-07-07 v Personalize Inc. System and method for creating editable configurations of 3D model
CN108805679B (zh) * 2018-06-14 2021-07-16 珠海必要工业科技股份有限公司 一种通过移动终端加入自主设计的生产方法
CN109191590B (zh) * 2018-09-26 2023-11-07 浙江优创信息技术有限公司 一种用于制作虚拟现实应用的处理系统及处理方法
US10955257B2 (en) * 2018-12-28 2021-03-23 Beijing Didi Infinity Technology And Development Co., Ltd. Interactive 3D point cloud matching

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1801218A (zh) * 2004-12-31 2006-07-12 英业达股份有限公司 三维对象渲染系统及方法
US20180101986A1 (en) * 2016-10-10 2018-04-12 Aaron Mackay Burns Drawing in a 3d virtual reality environment
CN106683193A (zh) * 2016-12-07 2017-05-17 歌尔科技有限公司 一种三维模型的设计方法和设计装置
CN108269307A (zh) * 2018-01-15 2018-07-10 歌尔科技有限公司 一种增强现实交互方法及设备
CN108510597A (zh) * 2018-03-09 2018-09-07 北京小米移动软件有限公司 虚拟场景的编辑方法、装置及非临时性计算机可读存储介质
CN109087369A (zh) * 2018-06-22 2018-12-25 腾讯科技(深圳)有限公司 虚拟对象显示方法、装置、电子装置及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG, ZHONG; YANG, HAO; DU, HUA: "A Method of Point Positioning in a Virtual 3D Scene", COMPUTER KNOWLEDGE AND TECHNOLOGY, vol. 13, no. 9, 31 March 2017 (2017-03-31), pages 187 - 189, XP009522846, DOI: 10.14004/j.cnki.ckt.2017.0751 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113274734A (zh) * 2021-06-08 2021-08-20 网易(杭州)网络有限公司 虚拟场景的生成方法、装置和终端设备
CN113274734B (zh) * 2021-06-08 2024-02-02 网易(杭州)网络有限公司 虚拟场景的生成方法、装置和终端设备

Also Published As

Publication number Publication date
CN111626803A (zh) 2020-09-04
US20220164863A1 (en) 2022-05-26
US11978111B2 (en) 2024-05-07

Similar Documents

Publication Publication Date Title
US11367250B2 (en) Virtual interaction with three-dimensional indoor room imagery
JP7176012B2 (ja) オブジェクト・モデリング動作方法及び装置並びにデバイス
US9420253B2 (en) Presenting realistic designs of spaces and objects
US10147233B2 (en) Systems and methods for generating a 3-D model of a user for a virtual try-on product
US9443353B2 (en) Methods and systems for capturing and moving 3D models and true-scale metadata of real world objects
US9214137B2 (en) Methods and systems for realistic rendering of digital objects in augmented reality
WO2017092303A1 (zh) 虚拟现实场景模型建立方法及装置
US20180276882A1 (en) Systems and methods for augmented reality art creation
WO2022021965A1 (zh) 虚拟对象的调整方法、装置、电子设备、计算机存储介质及程序
CN110321048A (zh) 三维全景场景信息处理、交互方法及装置
US20130016098A1 (en) Method for creating a 3-dimensional model from a 2-dimensional source image
JP2022077148A (ja) 画像処理方法、プログラムおよび画像処理システム
WO2020173222A1 (zh) 物品虚拟化处理方法、装置、电子设备及存储介质
JP6852224B2 (ja) 全視角方向の球体ライトフィールドレンダリング方法
WO2022141886A1 (zh) 一种家装素材调整方法、装置、计算机设备和存储介质
CN110349269A (zh) 一种目标穿戴物试戴方法及系统
CN114820980A (zh) 三维重建方法、装置、电子设备和可读存储介质
CN102169597B (zh) 一种平面图像上物体的深度设置方法和系统
WO2018151612A1 (en) Texture mapping system and method
CN114327174A (zh) 虚拟现实场景的显示方法、光标的三维显示方法和装置
GB2595445A (en) Digital sandtray
US11830140B2 (en) Methods and systems for 3D modeling of an object by merging voxelized representations of the object
JP7475625B2 (ja) 3次元空間における入力受付および入力表示方法、プログラム、および3次元空間における入力受付および入力表示装置
TWI799012B (zh) 呈現立體空間模型的電子裝置及方法
JP7417827B2 (ja) 画像編集方法、画像表示方法、画像編集システム、及び画像編集プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20762550

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.01.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20762550

Country of ref document: EP

Kind code of ref document: A1