CN107330974B - Commodity display method and device and mobile equipment - Google Patents


Info

Publication number
CN107330974B
CN107330974B (application CN201710641661.6A)
Authority
CN
China
Prior art keywords
model
commodity
environment
target commodity
structured light
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710641661.6A
Other languages
Chinese (zh)
Other versions
CN107330974A (en
Inventor
唐城
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710641661.6A
Publication of CN107330974A
Application granted
Publication of CN107330974B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention provides a commodity display method, a commodity display device and a mobile device. The method comprises: collecting a speckle pattern corresponding to a target commodity under structured light, and collecting a planar image of the target commodity; performing 3D modeling on the target commodity based on the depth information of the speckle pattern and the planar image to obtain a first 3D model; acquiring a 3D model of the environment in which the target commodity is to be placed as a second 3D model; and synthesizing the first 3D model and the second 3D model, and displaying the synthesized model. With the method and the device, the information shown when the target commodity is displayed is more comprehensive and the display angles are multidimensional; the user can see the matching display effect of the target commodity and the environment; and the display result is more three-dimensional and intuitive, improving both the commodity display effect and the user experience.

Description

Commodity display method and device and mobile equipment
Technical Field
The invention relates to the technical field of mobile equipment, in particular to a commodity display method and device and mobile equipment.
Background
With the development of mobile devices, in some usage scenarios, users may need to browse goods to be purchased through the mobile devices. For example, the user takes a picture of a commodity to be purchased, and learns the display effect of the commodity based on the obtained picture.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the invention provides a commodity display method, a commodity display device and a mobile device, which make the information shown when a target commodity is displayed more comprehensive and the display angles multidimensional, enable a user to see the matching display effect of the target commodity and the environment, make the display result more three-dimensional and intuitive, and improve both the commodity display effect and the user experience.
The commodity display method provided by the embodiment of the first aspect of the invention comprises the following steps: collecting a speckle pattern corresponding to a target commodity under structured light, and collecting a planar image of the target commodity; 3D modeling is carried out on the target commodity on the basis of the depth information of the speckle pattern and the plane image to obtain a first 3D model; acquiring a 3D model of an environment for placing the target commodity and taking the 3D model as a second 3D model; and synthesizing the first 3D model and the second 3D model, and displaying the synthesized models.
In the merchandise display method provided by the embodiment of the first aspect of the invention, the speckle pattern corresponding to the target merchandise under structured light is collected, the planar image of the target merchandise is collected, 3D modeling is performed on the target merchandise based on the depth information of the speckle pattern and the planar image to obtain the first 3D model, the 3D model of the environment for placing the target merchandise is obtained and used as the second 3D model, and the first 3D model and the second 3D model are synthesized and the synthesized model is displayed. Because 3D modeling is performed not only on the target merchandise but also on the environment in which it needs to be displayed in a matching manner, the display information is more comprehensive and multidimensional, the user can see the matching display effect of the target merchandise and the environment, the display result is more three-dimensional and intuitive, and both the merchandise display effect and the user experience are improved.
The embodiment of the second aspect of the invention provides a merchandise display device, which comprises: the first acquisition module is used for acquiring a speckle pattern corresponding to a target commodity under structured light and acquiring a planar image of the target commodity; the first modeling module is used for carrying out 3D modeling on the target commodity based on the depth information of the speckle pattern and the plane image to obtain a first 3D model; the acquisition module is used for acquiring a 3D model of the environment for placing the target commodity and taking the 3D model as a second 3D model; and the synthesis module is used for carrying out synthesis processing on the first 3D model and the second 3D model and displaying the models after the synthesis processing.
The commodity display device provided by the embodiment of the second aspect of the invention collects the speckle pattern corresponding to the target commodity under structured light, collects the planar image of the target commodity, performs 3D modeling on the target commodity based on the depth information of the speckle pattern and the planar image to obtain the first 3D model, obtains the 3D model of the environment for placing the target commodity as the second 3D model, and synthesizes the first 3D model and the second 3D model and displays the synthesized model. Because 3D modeling is performed not only on the target commodity but also on the environment in which it needs to be displayed in a matching manner, the display information is more comprehensive and multidimensional, the user can see the matching display effect of the target commodity and the environment, the display result is more three-dimensional and intuitive, and both the commodity display effect and the user experience are improved.
A merchandise display device according to an embodiment of a third aspect of the present invention includes: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: collecting a speckle pattern corresponding to a target commodity under structured light, and collecting a planar image of the target commodity; 3D modeling is carried out on the target commodity on the basis of the depth information of the speckle pattern and the plane image to obtain a first 3D model; acquiring a 3D model of an environment for placing the target commodity and taking the 3D model as a second 3D model; and synthesizing the first 3D model and the second 3D model, and displaying the synthesized models.
In the commodity display device provided by the embodiment of the third aspect of the invention, the speckle pattern corresponding to the target commodity under structured light is collected, the planar image of the target commodity is collected, 3D modeling is performed on the target commodity based on the depth information of the speckle pattern and the planar image to obtain the first 3D model, the 3D model of the environment for placing the target commodity is obtained and used as the second 3D model, and the first 3D model and the second 3D model are synthesized and the synthesized model is displayed. Because 3D modeling is performed not only on the target commodity but also on the environment in which it needs to be displayed in a matching manner, the display information is more comprehensive and multidimensional, the user can see the matching display effect of the target commodity and the environment, the display result is more three-dimensional and intuitive, and both the commodity display effect and the user experience are improved.
A fourth aspect of the present invention provides a non-transitory computer-readable storage medium, wherein instructions, when executed by a processor of a terminal, enable the terminal to perform a merchandise display method, the method comprising: collecting a speckle pattern corresponding to a target commodity under structured light, and collecting a planar image of the target commodity; 3D modeling is carried out on the target commodity on the basis of the depth information of the speckle pattern and the plane image to obtain a first 3D model; acquiring a 3D model of an environment for placing the target commodity and taking the 3D model as a second 3D model; and synthesizing the first 3D model and the second 3D model, and displaying the synthesized models.
The non-transitory computer-readable storage medium provided in the fourth aspect of the present invention collects the speckle pattern corresponding to the target commodity under structured light, collects the planar image of the target commodity, performs 3D modeling on the target commodity based on the depth information of the speckle pattern and the planar image to obtain the first 3D model, obtains the 3D model of the environment in which the target commodity is placed as the second 3D model, and synthesizes the first 3D model and the second 3D model and displays the synthesized model. Because 3D modeling is performed not only on the target commodity but also on the environment in which it needs to be displayed in a matching manner, the display information is more comprehensive and the display angles are multidimensional, so that the user can see the matching display effect of the target commodity and the environment; the display result is more three-dimensional and intuitive, and both the commodity display effect and the user experience are improved.
The fifth aspect of the present invention further provides a mobile device, which includes a memory and a processor, where the memory stores computer readable instructions, and the instructions, when executed by the processor, cause the processor to execute the merchandise display method according to the embodiment of the first aspect of the present invention.
In the mobile device provided by the embodiment of the fifth aspect of the present invention, the speckle pattern corresponding to the target commodity under structured light is collected, the planar image of the target commodity is collected, 3D modeling is performed on the target commodity based on the depth information of the speckle pattern and the planar image to obtain the first 3D model, the 3D model of the environment in which the target commodity is placed is obtained and used as the second 3D model, and the first 3D model and the second 3D model are synthesized and the synthesized model is displayed. Because 3D modeling is performed not only on the target commodity but also on the environment in which it needs to be displayed in a matching manner, the display information is more comprehensive and multidimensional, the user can see the matching display effect of the target commodity and the environment, the display result is more three-dimensional and intuitive, and both the commodity display effect and the user experience are improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart illustrating a merchandise display method applied to online shopping according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a related art structured light;
FIG. 3 is a schematic view of a projection set of structured light according to an embodiment of the present invention;
FIG. 4 is a flow chart illustrating a merchandise display method applied to online shopping according to another embodiment of the present invention;
FIG. 5 is a schematic view of an apparatus for projecting structured light;
fig. 6 is a schematic flow chart of a merchandise display method according to another embodiment of the present invention;
fig. 7 is a schematic structural diagram of a merchandise display device according to an embodiment of the present invention;
FIG. 8 is a schematic structural view of a merchandise display device according to another embodiment of the present invention;
FIG. 9 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. On the contrary, the embodiments of the invention include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
Fig. 1 is a schematic flow chart of a merchandise display method according to an embodiment of the present invention.
The present embodiment is described by taking as an example the case where the merchandise display method is configured in a merchandise display device.
The merchandise display device may be disposed in a mobile device, which is not limited in this regard.
The user can obtain more three-dimensional and visual watching experience through the commodity display device.
Mobile devices such as smart phones, tablet computers, personal digital assistants and electronic book readers are hardware devices running various operating systems.
Referring to fig. 1, the method includes:
step 101: the speckle pattern corresponding to the target commodity under the structured light is collected, and the plane image of the target commodity is collected.
The target commodity is a commodity which needs to be displayed currently.
A projection set of light beams with known spatial directions is called structured light, as shown in fig. 2, which is a schematic view of structured light in the related art. The device for generating structured light may be a projection device or instrument that projects a light spot, line, grating, grid or speckle onto the object to be measured, or may be a laser that generates a laser beam.
In an embodiment of the present invention, the structured light may be projected onto the target commodity, and image data of the target commodity under the structured light may be collected. Due to the physical characteristics of structured light, the image data collected under it can reflect the depth information of the target commodity, which can be regarded as the commodity's 3D information. The commodity is then displayed not only from its planar image but also in combination with this depth information, which improves the display effect.
Optionally, referring to fig. 3, fig. 3 is a schematic view of a projection set of structured light in the embodiment of the present invention, taking as an example structured light whose projection set is a set of points; such a set may be referred to as a speckle set.
The projection set corresponding to the structured light in the embodiment of the invention is specifically a speckle set; that is, the device for projecting the structured light projects light spots onto the object to be measured, generating a speckle set of the object under the structured light, rather than projecting lines, gratings, grids or stripes. Because the storage space required by a speckle set is small, the operating efficiency of the mobile device is not affected and the device's storage space is saved.
Further, in the embodiment of the present invention, when the speckle pattern corresponding to the target commodity under the structured light is collected, a planar image of the target commodity is also collected. For example, the target commodity can be identified by a camera of the mobile device, and a planar image of the target commodity is acquired.
Optionally, referring to fig. 4, before step 101, the method further comprises:
step 100: upon identification of the target item, structured light is projected.
Specifically, the camera of the mobile device may focus on multiple commodities within the shooting range. If the focusing time on a certain commodity is greater than or equal to a preset threshold, query information may be generated (for example, asking whether the currently focused commodity should be taken as the target commodity) and displayed to the user, and emission of structured light is triggered when the user confirms the commodity as the target commodity based on the query information. Alternatively, if only one commodity exists in the shooting range, query information may be generated (for example, asking whether the current commodity should be taken as the target commodity) and displayed to the user, and emission of structured light is triggered when the user confirms it as the target commodity. This is not limited.
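As a rough illustration, the focus-duration trigger described above might be sketched as follows. The threshold value and the function name are hypothetical, not taken from the patent:

```python
# Sketch of the trigger logic: prompt the user to confirm the target commodity
# when the camera has focused on one item long enough, or when only one item
# is in frame. FOCUS_THRESHOLD_S is an assumed value for the preset threshold.
FOCUS_THRESHOLD_S = 2.0

def should_query_user(focus_duration_s: float, commodities_in_frame: int) -> bool:
    """Return True when the device should ask whether this is the target commodity."""
    if commodities_in_frame == 1:
        return True  # only one commodity in the shooting range
    return focus_duration_s >= FOCUS_THRESHOLD_S
```

Structured light would then be projected only after the user confirms, which matches the energy-saving rationale given below.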
In an embodiment of the present invention, a device capable of projecting structured light may be configured in advance in the mobile device, and when the target commodity is identified, the device for projecting structured light may be turned on to transmit the structured light.
Referring to fig. 5, fig. 5 is a schematic diagram of an apparatus for projecting structured light, illustrated with lines as the projection set; the same principle applies to structured light whose projection set is speckle. The apparatus may include an optical projector and a camera. The optical projector projects a pattern of structured light onto the surface of the object to be measured, forming on that surface a three-dimensional image of lines modulated by the surface shape of the object. The three-dimensional image is captured by a camera at another position to obtain a two-dimensional distorted image of the lines. The degree of distortion of the lines depends on the relative position between the optical projector and the camera and on the surface contour of the measured object. Intuitively, the displacement (or offset) shown along a line is proportional to the surface height of the measured object, kinks in the lines represent changes of plane, and discontinuities show physical gaps in the surface. When the relative position between the optical projector and the camera is fixed, the three-dimensional contour of the object's surface can be reconstructed from the two-dimensional distorted image coordinates of the lines.
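The proportionality between pattern offset and surface position described above is the standard structured-light triangulation relation. A minimal sketch under a pinhole-camera assumption follows; the function name and the baseline, focal-length and offset values are illustrative, not from the patent:

```python
def depth_from_offset(offset_px: float, baseline_m: float, focal_px: float) -> float:
    """Pinhole triangulation: the observed offset (disparity, in pixels) of a
    projected pattern point is inversely proportional to the depth of the
    surface point that reflects it."""
    if offset_px <= 0:
        raise ValueError("offset must be positive")
    return baseline_m * focal_px / offset_px
```

For example, with an assumed 8 cm projector-camera baseline and a 500 px focal length, an observed offset of 50 px corresponds to a depth of 0.8 m.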
By projecting structured light only when a target commodity is identified, energy consumption of the mobile device can be saved.
Step 102: and 3D modeling is carried out on the target commodity based on the depth information of the speckle pattern and the plane image to obtain a first 3D model.
The depth information may specifically be, for example, a contour of the target product and a distance of the target product, where the contour may be, for example, a coordinate value of each point on the target product in a rectangular spatial coordinate system, and the distance may be, for example, a distance of each point on the target product relative to a reference position, and the reference position may be a certain position (for example, a position where the camera is located) on the mobile device, which is not limited thereto.
In particular, depth information may be obtained from distortion of the speckle image.
According to the physical characteristics of the structured light, if the structured light is projected on a three-dimensional object to be measured, speckle distortion occurs in a speckle image of a projection set, that is, the arrangement mode of some speckles is offset from other speckles.
Therefore, in the embodiment of the present invention, based on the offset information, the coordinates of the distorted two-dimensional speckle image are determined as corresponding depth information, and the contour of the target commodity and the distance of the target commodity are directly restored according to the depth information.
The planar image is obtained by taking a picture of the target product based on a camera, or may be read from an album of the mobile device, which is not limited in this respect.
The planar image is a two-dimensional image.
In the embodiment of the present invention, the contour of the target product and the distance of the target product restored from the depth information may be combined with the plane image to perform 3D image synthesis processing, and the 3D model obtained by the synthesis processing may be used as the first 3D model.
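One way to sketch this synthesis step is to back-project a per-pixel depth map through a pinhole model and attach the colour sampled from the planar image, yielding a textured point set. All names and the intrinsics are hypothetical; the patent does not prescribe this representation:

```python
def build_textured_point_cloud(depth_map, color_image, fx, fy, cx, cy):
    """Back-project each pixel with valid depth through a pinhole model
    (focal lengths fx, fy; principal point cx, cy) and attach the colour
    from the planar image at the same pixel, giving (x, y, z, colour) points."""
    points = []
    for v, row in enumerate(depth_map):
        for u, z in enumerate(row):
            if z is None or z <= 0:
                continue  # no reliable depth recovered at this pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z, color_image[v][u]))
    return points
```

The resulting point set plays the role of the first 3D model in the steps that follow.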
The target commodity is thus 3D-modeled from the depth information of the speckle pattern and the planar image; that is, the depth information obtained from the speckle pattern collected under structured light and the planar image are synthesized into a 3D model. This modeling method is simple and easy to implement, and the data source is accurate, which improves the commodity display effect as well as the effect and efficiency of 3D modeling of the commodity.
Step 103: A 3D model of the environment in which the target commodity is to be placed is acquired and used as a second 3D model.
The environment in which the target product is placed is an environment in which the target product needs to be displayed in a matching manner when the target product is displayed, for example, if the target product is furniture, the environment is a room in which the furniture is to be placed, and if the target product is clothing, the environment is a wardrobe in which the clothing is to be placed or a 3D human body model, which is not limited thereto.
Further, the environment may be 3D modeled by the same method as that for 3D modeling of the target product, or the 3D model of the environment may be directly downloaded from the network database, which is not limited thereto.
The network database may be pre-established. Specifically, the database may be built statistically, for example by background personnel compiling statistics on data from commodity websites and storing 3D models of frequently matched environments in the database. Alternatively, the database may be built by machine learning, for example by using web techniques such as crawlers to obtain 3D models of well-matched environments from web pages and store them in the database.
In the embodiment of the invention, the 3D modeling is carried out not only on the target commodity, but also on the environment which needs to be matched with the target commodity for display when the target commodity is displayed, so that the information is more comprehensive when the target commodity is displayed, the angle is multidimensional, and the display effect is improved.
Step 104: and carrying out synthesis processing on the first 3D model and the second 3D model, and displaying the models after the synthesis processing.
The synthesized model is used for displaying the placement effect of the target commodity in the environment.
In some scenarios, for example, if the target commodity is furniture, the environment is the room where the furniture is to be placed. To better show the placement effect of the furniture, the furniture and the room can be displayed together: the first 3D model corresponding to the furniture and the second 3D model corresponding to the environment are synthesized, and the synthesized model is displayed, so that the user can see the matching display effect of the target commodity and the environment. The display result is more three-dimensional and intuitive, which improves both the commodity display effect and the user experience.
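If both models are point sets in their own coordinate frames, the synthesis can be sketched as translating the commodity model to an anchor position in the environment's frame and merging the two sets. The function name and anchor convention are assumptions for illustration:

```python
def synthesize(commodity_points, environment_points, anchor):
    """Place the commodity model so its origin sits at `anchor` in the
    environment's coordinate frame, then merge the two point sets into
    one displayable model."""
    ax, ay, az = anchor
    placed = [(x + ax, y + ay, z + az) for (x, y, z) in commodity_points]
    return environment_points + placed
```

A renderer would then display the merged set as the synthesized model.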
In this embodiment, the speckle pattern corresponding to the target commodity under structured light is collected, the planar image of the target commodity is collected, 3D modeling is performed on the target commodity based on the depth information of the speckle pattern and the planar image to obtain the first 3D model, the 3D model of the environment for placing the target commodity is obtained and used as the second 3D model, and the first 3D model and the second 3D model are synthesized and the synthesized model is displayed. Because 3D modeling is performed not only on the target commodity but also on the environment in which it is to be displayed, the display information is more comprehensive and multidimensional, the display result is more three-dimensional and intuitive, and both the commodity display effect and the user experience are improved.
Fig. 6 is a flowchart illustrating a merchandise display method according to another embodiment of the present invention.
Referring to fig. 6, the method includes:
step 601: and collecting the corresponding speckle pattern of the environment under the structured light.
The environment is an environment that needs to be displayed in a matching manner with the target product when the target product is displayed, for example, if the target product is furniture, the environment is a room where the furniture is to be placed, and if the target product is clothing, the environment is a wardrobe where the clothing is to be placed or a 3D human body model, which is not limited to this.
In the embodiment of the present invention, when the depth information of the environment needs to be collected, the device for projecting the structured light may be started, and the structured light is projected to the environment based on the device, so as to collect the speckle pattern corresponding to the environment under the structured light, or the speckle pattern corresponding to the structured light may be directly downloaded from the internet, which is not limited to this.
Step 602: and starting the camera to acquire a plane image corresponding to the environment.
Further, in embodiments of the present invention, when collecting the speckle pattern corresponding to the environment under the structured light, a planar image of the environment is also collected. For example, the environment may be identified by a camera of the mobile device and a planar image of the environment is acquired, or the planar image corresponding to the environment may be directly downloaded from the internet, which is not limited to this.
Step 603: and 3D modeling is carried out on the environment according to the depth information of the corresponding speckle pattern and the corresponding plane image to obtain a 3D model of the environment.
The environment may be 3D modeled by the same method as the 3D modeling of the target product, or the 3D model of the environment may be directly downloaded from the network database, which is not limited to this.
The network database may be pre-established. Specifically, the database may be built statistically, for example by background personnel compiling statistics on data from commodity websites and storing 3D models of frequently matched environments in the database. Alternatively, the database may be built by machine learning, for example by using web techniques such as crawlers to obtain 3D models of well-matched environments from web pages and store them in the database.
The environment is thus 3D-modeled from the depth information of the speckle pattern and the planar image; that is, the depth information obtained from the speckle pattern collected under structured light and the planar image are synthesized into a 3D model of the environment. This modeling method is simple and easy to implement, the data source is accurate, and the effect and efficiency of 3D modeling of the environment are improved.
Step 604: a 3D model of the environment is stored in a local storage.
By storing the 3D model of the environment in local storage, the model can subsequently be read directly from local storage, which improves processing efficiency.
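The local-storage step above can be sketched as a simple cache; JSON serialisation, the file path convention and the function names are assumptions chosen for illustration:

```python
import json
import os

def save_environment_model(model, path):
    """Persist the environment's 3D model (here any JSON-serialisable
    structure) to local storage so later displays can skip re-modelling."""
    with open(path, "w") as f:
        json.dump(model, f)

def load_environment_model(path):
    """Return the cached environment model, or None if nothing is stored yet."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```

Step 607 below then amounts to calling the loader instead of re-running the modeling pipeline.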
Step 605: the speckle pattern corresponding to the target commodity under the structured light is collected, and the plane image of the target commodity is collected.
Further, in the embodiment of the present invention, when the speckle pattern corresponding to the target commodity under the structured light is collected, a planar image of the target commodity is also collected. For example, the target commodity can be identified by a camera of the mobile device, and a planar image of the target commodity is acquired.
Step 606: and 3D modeling is carried out on the target commodity based on the depth information of the speckle pattern and the plane image to obtain a first 3D model.
The depth information may specifically be, for example, a contour of the target product and a distance of the target product, where the contour may be, for example, a coordinate value of each point on the target product in a rectangular spatial coordinate system, and the distance may be, for example, a distance of each point on the target product relative to a reference position, and the reference position may be a certain position (for example, a position where the camera is located) on the mobile device, which is not limited thereto.
In particular, depth information may be obtained from distortion of the speckle image.
According to the physical characteristics of the structured light, if the structured light is projected on a three-dimensional object to be measured, speckle distortion occurs in a speckle image of a projection set, that is, the arrangement mode of some speckles is offset from other speckles.
Therefore, in the embodiment of the present invention, based on the offset information, the coordinates of the distorted two-dimensional speckle image are determined as corresponding depth information, and the contour of the target commodity and the distance of the target commodity are directly restored according to the depth information.
The planar image is obtained by taking a picture of the target product based on a camera, or may be read from an album of the mobile device, which is not limited in this respect.
The planar image is a two-dimensional image.
In the embodiment of the present invention, the contour of the target product and the distance of the target product restored from the depth information may be combined with the plane image to perform 3D image synthesis processing, and the 3D model obtained by the synthesis processing may be used as the first 3D model.
The target commodity is thus 3D-modeled from the depth information of the speckle pattern and the planar image; that is, the depth information obtained from the speckle pattern collected under structured light and the planar image are synthesized into a 3D model. This modeling method is simple and easy to implement, and the data source is accurate, which improves the commodity display effect as well as the effect and efficiency of 3D modeling of the commodity.
Step 607: and directly acquiring the pre-stored 3D model of the environment of the target commodity from the local storage and using the pre-stored 3D model as a second 3D model.
Directly obtaining the pre-stored 3D model of the environment of the target commodity from local storage and using it as the second 3D model improves processing efficiency.
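Step 607's read-or-rebuild behavior can be sketched as a small cache helper. The function names, pickle format, and cache path here are illustrative assumptions, not the patent's implementation.

```python
import os
import pickle
import tempfile

def get_environment_model(env_id, build_fn, cache_dir=None):
    """Return the environment's 3D model, reading a pre-stored copy from local
    storage when available and rebuilding (then caching) it otherwise."""
    cache_dir = cache_dir or tempfile.gettempdir()
    path = os.path.join(cache_dir, f"env_{env_id}.pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:       # cache hit: skip expensive modeling
            return pickle.load(f)
    model = build_fn()                    # expensive: speckle capture + 3D modeling
    with open(path, "wb") as f:
        pickle.dump(model, f)
    return model

calls = []
def build():
    calls.append(1)
    return {"vertices": [(0.0, 0.0, 0.0)]}

with tempfile.TemporaryDirectory() as d:
    m1 = get_environment_model("livingroom", build, cache_dir=d)
    m2 = get_environment_model("livingroom", build, cache_dir=d)
# the second call hits the cache, so build() runs only once
```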
Step 608: and carrying out synthesis processing on the first 3D model and the second 3D model, and displaying the models after the synthesis processing.
The synthesized model is used for displaying the shelving effect of the target commodity in the environment.
In some scenarios, for example, if the target commodity is furniture, the environment is the room where the furniture is to be placed. To better display the placement effect, the furniture and the room can be displayed together: the first 3D model corresponding to the furniture and the second 3D model corresponding to the environment are synthesized, and the synthesized model is displayed. The user can thus see how the target commodity looks in its environment; the display result is more three-dimensional and intuitive, the commodity display effect is improved, and the user experience is enhanced.
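A minimal sketch of the synthesis in step 608, treating both 3D models as point clouds: the product model is translated to a placement position inside the environment model and the two are merged for display. The function name and placement values are hypothetical.

```python
import numpy as np

def composite_models(product_pts, env_pts, placement):
    """Place the product point cloud (first 3D model) at `placement` inside the
    environment point cloud (second 3D model) and merge them for display."""
    placed = product_pts + np.asarray(placement, dtype=np.float64)
    return np.vstack([env_pts, placed])

env = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])   # toy room reference points
sofa = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # toy furniture model
scene = composite_models(sofa, env, placement=(2.0, 0.0, 1.0))
```

A production renderer would also handle orientation, scale, and occlusion, but the merged geometry is the essence of the "shelving effect" display.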
Step 609: and receiving an operation instruction of the user on the model after the synthesis processing.
The operation instruction can be used for enabling a user to adjust the placement position of the first 3D model in the synthesized model.
Specifically, the operation instruction may be, for example, the user selecting the first 3D model and dragging it in a preset direction; this is not limited here.
Step 610: and adjusting the placing position of the first 3D model in the synthesized model according to the operation instruction.
Adjusting the placement position of the first 3D model in the synthesized model according to the operation instruction can meet users' diverse demands for commodity display, present different display effects, facilitate adjustment and switching among them, and improve the user experience.
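Steps 609 and 610 might be sketched as follows, assuming the drag gesture arrives as a screen-pixel delta and a simple pixel-to-meter scale maps it into the scene; all names, the scale factor, and the room bounds are illustrative assumptions.

```python
import numpy as np

def apply_drag(position, drag_px, px_to_m=0.01,
               room_min=(0.0, 0.0, 0.0), room_max=(5.0, 3.0, 4.0)):
    """Translate a drag gesture (screen-pixel delta) into a new placement for
    the first 3D model, clamped so the product stays inside the environment."""
    dx, dy = drag_px
    moved = np.asarray(position, dtype=np.float64) \
        + np.array([dx * px_to_m, 0.0, dy * px_to_m])  # drag moves on the floor plane
    return np.clip(moved, room_min, room_max)          # keep inside the room bounds

pos = apply_drag((2.0, 0.0, 2.0), drag_px=(50, -30))
# 50 px right -> +0.5 m in x; 30 px "up" -> -0.3 m in z
```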
In this embodiment, the environment is modeled in 3D from the depth information of its speckle pattern and its planar image: the depth information derived from the speckle pattern acquired under structured light is synthesized with the planar image. The modeling method is simple and easy to implement, the data source is accurate, and the modeling effect and efficiency of the environment 3D model are improved. Storing the 3D model of the environment in local storage allows it to be read directly from local storage later, improving processing efficiency.

By collecting the speckle pattern corresponding to the target commodity under structured light, collecting the planar image of the target commodity, performing 3D modeling on the target commodity based on the depth information of the speckle pattern and the planar image to obtain a first 3D model, obtaining a 3D model of the environment in which the target commodity is placed as a second 3D model, synthesizing the first and second 3D models, and displaying the synthesized model, 3D modeling is performed not only on the target commodity but also on the environment in which it is to be displayed. The information presented when displaying the target commodity is therefore more comprehensive and multidimensional: the user can see how the target commodity looks in its environment, the display result is more three-dimensional and intuitive, the commodity display effect is improved, and the user experience is enhanced.
Fig. 7 is a schematic structural diagram of a merchandise display device according to an embodiment of the invention.
Referring to fig. 7, the apparatus 700 includes: a first acquisition module 701, a first modeling module 702, an acquisition module 703, and a synthesis module 704, wherein,
the first collecting module 701 is configured to collect a speckle pattern corresponding to a target commodity under structured light, and collect a planar image of the target commodity.
The first modeling module 702 is configured to perform 3D modeling on the target commodity based on the depth information of the speckle pattern and the planar image to obtain a first 3D model.
An obtaining module 703 is configured to obtain a 3D model of an environment where the target product is placed as a second 3D model.
Optionally, in some embodiments, the obtaining module 703 is specifically configured to:
and directly acquiring the pre-stored 3D model of the environment of the target commodity from the local storage and using the pre-stored 3D model as a second 3D model.
And a synthesizing module 704, configured to perform synthesizing processing on the first 3D model and the second 3D model, and display the synthesized models.
Optionally, in some embodiments, referring to fig. 8, the apparatus 700 further comprises:
and a projection module 705 for projecting structured light when the target commodity is identified.
And a second collecting module 706, configured to collect a speckle pattern corresponding to the ambient light.
And a third collecting module 707, configured to start a camera and collect a planar image corresponding to the environment.
And a second modeling module 708, configured to perform 3D modeling on the environment according to the depth information of the corresponding speckle pattern and the corresponding planar image, so as to obtain a 3D model of the environment.
A storage module 709 for storing the 3D model of the environment in a local storage.
And the receiving module 710 is configured to receive an operation instruction of the synthesized model from the user.
And the adjusting module 711 is configured to adjust the placement position of the first 3D model in the synthesized model according to the operation instruction.
It should be noted that the explanation of the embodiment of the merchandise display method in the foregoing embodiments of fig. 1 to fig. 6 is also applicable to the merchandise display device 700 of the embodiment, and the implementation principle is similar, and is not repeated here.
In this embodiment, the speckle pattern corresponding to the target commodity under structured light is collected, the planar image of the target commodity is collected, 3D modeling is performed on the target commodity based on the depth information of the speckle pattern and the planar image to obtain the first 3D model, the 3D model of the environment in which the target commodity is placed is obtained as the second 3D model, the first and second 3D models are synthesized, and the synthesized model is displayed, thereby improving the commodity display effect and the user experience.
The embodiment of the invention also provides the mobile equipment. The mobile device includes an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 9 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 9, for convenience of explanation, only aspects of the image processing technique related to the embodiment of the present invention are shown.
As shown in fig. 9, the image processing circuit includes an imaging device 910, an ISP processor 930, and control logic 940. The imaging device 910 may include a camera with one or more lenses 912, an image sensor 914, and a structured light projector 916. The structured light projector 916 projects structured light onto the object to be measured. The structured light pattern may be laser stripes, a Gray code, sinusoidal stripes, or a randomly arranged speckle pattern. The image sensor 914 captures the structured light image projected onto the object to be measured and transmits it to the ISP processor 930, which demodulates the structured light image to obtain depth information of the object to be measured. At the same time, the image sensor 914 may also capture color information of the object to be measured. Of course, the structured light image and the color information may instead be captured by two separate image sensors 914.
Taking speckle structured light as an example, the ISP processor 930 demodulates the structured light image. Specifically, this includes acquiring a speckle image of the measured object from the structured light image, performing image data calculation on this speckle image and a reference speckle image according to a predetermined algorithm, and obtaining the shift of each speckle point in the measured object's speckle image relative to the corresponding reference speckle point in the reference image. The depth value of each speckle point is then calculated by triangulation, and the depth information of the measured object is obtained from these depth values.
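The demodulation step described above, matching the captured speckle image against a reference speckle image to find per-point shifts, can be illustrated with a toy 1D sum-of-squared-differences search; the window size, search range, and sign convention of the shift are arbitrary choices here, not the patent's "predetermined algorithm".

```python
import numpy as np

def speckle_shift(row, ref_row, win=5, max_shift=10):
    """Estimate the horizontal shift of a speckle row relative to a reference
    row by exhaustive SSD matching around the row center (toy demodulation)."""
    n = len(ref_row)
    center = n // 2
    patch = row[center - win:center + win + 1]
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        lo, hi = center - win + s, center + win + 1 + s
        if lo < 0 or hi > n:               # skip windows falling off the edge
            continue
        cost = np.sum((patch - ref_row[lo:hi]) ** 2)
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

rng = np.random.default_rng(0)
ref = rng.random(64)                       # synthetic reference speckle row
shifted = np.roll(ref, 3)                  # simulate speckles displaced by 3 px
s = speckle_shift(shifted, ref)
```

The recovered shift would then feed a triangulation relation (as in the preceding paragraph) to yield a depth value per speckle point.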
Of course, the depth image information may also be acquired by binocular vision or by a time-of-flight (TOF) method; the method is not limited here, as long as the depth information of the object to be measured can be acquired or calculated, and all such methods fall within the scope of this embodiment.
After ISP processor 930 receives the color information of the measured object captured by image sensor 914, it may process the corresponding image data. ISP processor 930 analyzes the image data to obtain image statistics that may be used to determine one or more control parameters of imaging device 910. Image sensor 914 may include an array of color filters (e.g., a Bayer filter); it can acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data to be processed by ISP processor 930.
ISP processor 930 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 930 may perform one or more image processing operations on the raw image data, collecting image statistics about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 930 may also receive pixel data from image memory 920. Image memory 920 may be a portion of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving the raw image data, ISP processor 930 may perform one or more image processing operations.
After the ISP processor 930 acquires the color information and the depth information of the measured object, they may be fused to obtain a three-dimensional image. Features of the measured object may be extracted by appearance contour extraction, contour feature extraction, or both. For example, features may be extracted by methods such as the active shape model (ASM), active appearance model (AAM), principal component analysis (PCA), or discrete cosine transform (DCT); this is not limited here. The features extracted from the depth information and those extracted from the color information are then registered and fused. The fusion may directly combine the features extracted from the depth and color information, combine the same features in different images after weight setting, or generate the three-dimensional image from the fused features in some other fusion mode.
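One of the fusion modes mentioned above, combining the same features after weight setting, can be sketched as a weighted sum of already-registered feature vectors; the function name and weights are arbitrary illustrative values.

```python
import numpy as np

def fuse_features(depth_feat, color_feat, w_depth=0.6, w_color=0.4):
    """Weighted fusion of registered feature vectors extracted from the depth
    channel and the color channel (one of the fusion modes the text mentions)."""
    depth_feat = np.asarray(depth_feat, dtype=np.float64)
    color_feat = np.asarray(color_feat, dtype=np.float64)
    # registration must have aligned the two feature sets element-for-element
    assert depth_feat.shape == color_feat.shape, "features must be registered first"
    return w_depth * depth_feat + w_color * color_feat

fused = fuse_features([1.0, 0.0], [0.0, 1.0])
```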
The image data for the three-dimensional image may be sent to image memory 920 for additional processing before being displayed. ISP processor 930 receives the processed data from image memory 920 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data for the three-dimensional image may be output to display 960 for viewing by a user and/or further processing by a graphics processing unit (GPU). Further, the output of ISP processor 930 may also be sent to image memory 920, and display 960 may read the image data from image memory 920. In one embodiment, image memory 920 may be configured to implement one or more frame buffers. The output of ISP processor 930 may also be transmitted to encoder/decoder 950 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on display 960. The encoder/decoder 950 may be implemented by a CPU, a GPU, or a coprocessor.
The image statistics determined by ISP processor 930 may be sent to control logic 940 unit. Control logic 940 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 910 based on the received image statistics.
In the embodiment of the present invention, the steps of implementing the merchandise display method by using the image processing technology in fig. 9 may be referred to the above embodiments, and are not described herein again.
In order to implement the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor of a terminal, enable the terminal to perform a merchandise display method, the method comprising: collecting a speckle pattern corresponding to a target commodity under structured light, and collecting a planar image of the target commodity; performing 3D modeling on a target commodity based on the depth information of the speckle pattern and the plane image to obtain a first 3D model; acquiring a 3D model of an environment for placing a target commodity and taking the 3D model as a second 3D model; and carrying out synthesis processing on the first 3D model and the second 3D model, and displaying the models after the synthesis processing.
With the non-transitory computer-readable storage medium in this embodiment, the speckle pattern corresponding to the target commodity under structured light is collected, the planar image of the target commodity is collected, 3D modeling is performed on the target commodity based on the depth information of the speckle pattern and the planar image to obtain the first 3D model, the 3D model of the environment in which the target commodity is placed is obtained as the second 3D model, the first and second 3D models are synthesized, and the synthesized model is displayed. The display result is more three-dimensional and intuitive, the commodity display effect is improved, and the user experience is enhanced.
In order to implement the above embodiments, the present invention further provides a computer program product, wherein when instructions in the computer program product are executed by a processor, a method for displaying a product is performed, and the method includes: collecting a speckle pattern corresponding to a target commodity under structured light, and collecting a planar image of the target commodity; performing 3D modeling on a target commodity based on the depth information of the speckle pattern and the plane image to obtain a first 3D model; acquiring a 3D model of an environment for placing a target commodity and taking the 3D model as a second 3D model; and carrying out synthesis processing on the first 3D model and the second 3D model, and displaying the models after the synthesis processing.
With the computer program product in this embodiment, the speckle pattern corresponding to the target commodity under structured light is collected, the planar image of the target commodity is collected, 3D modeling is performed on the target commodity based on the depth information of the speckle pattern and the planar image to obtain the first 3D model, the 3D model of the environment in which the target commodity is placed is obtained as the second 3D model, the first and second 3D models are synthesized, and the synthesized model is displayed. The display result is more three-dimensional and intuitive, the commodity display effect is improved, and the user experience is enhanced.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Alternate implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by hardware under the direction of program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A merchandise display method, comprising the steps of:
upon identifying a target commodity, projecting structured light, wherein projecting the structured light upon identifying the target commodity comprises: focusing on a plurality of commodities within a photographing range, and if the focusing time on a certain commodity is greater than or equal to a preset threshold, generating inquiry information and displaying it to a user, and triggering emission of the structured light when the user confirms the commodity as the target commodity based on the inquiry information; or, when only one commodity exists within the photographing range, directly generating the inquiry information and displaying it to the user, and triggering emission of the structured light when the user confirms the commodity as the target commodity based on the inquiry information;
collecting a speckle pattern corresponding to a target commodity under structured light, and collecting a planar image of the target commodity;
3D modeling is carried out on the target commodity on the basis of the depth information of the speckle pattern and the plane image to obtain a first 3D model;
acquiring a 3D model of an environment for placing the target commodity and taking the 3D model as a second 3D model;
and synthesizing the first 3D model and the second 3D model, and displaying the synthesized models.
2. The merchandise display method of claim 1, wherein said obtaining a 3D model of an environment in which the target merchandise is shelved and as a second 3D model comprises:
and directly acquiring a pre-stored 3D model of the environment of the target commodity from a local storage and using the pre-stored 3D model as the second 3D model.
3. The merchandise display method of claim 2, further comprising, prior to said capturing the corresponding speckle pattern of the target merchandise under structured light:
collecting a speckle pattern corresponding to the environment under the structured light;
starting a camera, and collecting a plane image corresponding to the environment;
3D modeling is carried out on the environment according to the depth information of the corresponding speckle pattern and the corresponding plane image, and a 3D model of the environment is obtained;
storing a 3D model of the environment in the local storage.
4. The merchandise display method of any one of claims 1-3, further comprising, after the displaying the synthetic processed model:
receiving an operation instruction of a user on the model after the synthesis processing;
and adjusting the placing position of the first 3D model in the synthesized model according to the operation instruction.
5. A merchandise display device, comprising:
the projection module, configured to project structured light when a target commodity is identified, wherein projecting the structured light when the target commodity is identified comprises: focusing on a plurality of commodities within a photographing range, and if the focusing time on a certain commodity is greater than or equal to a preset threshold, generating inquiry information and displaying it to a user, and triggering emission of the structured light when the user confirms the commodity as the target commodity based on the inquiry information; or, when only one commodity exists within the photographing range, directly generating the inquiry information and displaying it to the user, and triggering emission of the structured light when the user confirms the commodity as the target commodity based on the inquiry information;
the first acquisition module is used for acquiring a speckle pattern corresponding to a target commodity under structured light and acquiring a planar image of the target commodity;
the first modeling module is used for carrying out 3D modeling on the target commodity based on the depth information of the speckle pattern and the plane image to obtain a first 3D model;
the acquisition module is used for acquiring a 3D model of the environment for placing the target commodity and taking the 3D model as a second 3D model;
and the synthesis module is used for carrying out synthesis processing on the first 3D model and the second 3D model and displaying the models after the synthesis processing.
6. The merchandise display device of claim 5, wherein the acquisition module is specifically configured to:
and directly acquiring a pre-stored 3D model of the environment of the target commodity from a local storage and using the pre-stored 3D model as the second 3D model.
7. The merchandise display device of claim 6, further comprising:
the second acquisition module is used for acquiring the corresponding speckle pattern of the environment under the structured light;
the third acquisition module is used for starting the camera and acquiring a plane image corresponding to the environment;
the second modeling module is used for carrying out 3D modeling on the environment according to the depth information of the corresponding speckle pattern and the corresponding plane image to obtain a 3D model of the environment;
a storage module to store the 3D model of the environment in the local storage.
8. The merchandise display device of any one of claims 5-7, further comprising:
the receiving module is used for receiving an operation instruction of a user on the synthesized model;
and the adjusting module is used for adjusting the placing position of the first 3D model in the synthesized model according to the operation instruction.
9. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the merchandise display method of any one of claims 1-4.
10. A mobile device comprising a memory and a processor, the memory having stored therein computer readable instructions that, when executed by the processor, cause the processor to perform a merchandise display method according to any one of claims 1 to 4.
CN201710641661.6A 2017-07-31 2017-07-31 Commodity display method and device and mobile equipment Expired - Fee Related CN107330974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710641661.6A CN107330974B (en) 2017-07-31 2017-07-31 Commodity display method and device and mobile equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710641661.6A CN107330974B (en) 2017-07-31 2017-07-31 Commodity display method and device and mobile equipment

Publications (2)

Publication Number Publication Date
CN107330974A CN107330974A (en) 2017-11-07
CN107330974B true CN107330974B (en) 2021-01-15

Family

ID=60200450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710641661.6A Expired - Fee Related CN107330974B (en) 2017-07-31 2017-07-31 Commodity display method and device and mobile equipment

Country Status (1)

Country Link
CN (1) CN107330974B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107967095A (en) * 2017-11-24 2018-04-27 天脉聚源(北京)科技有限公司 A kind of image display method and device
CN110021062A (en) * 2018-01-08 2019-07-16 佛山市顺德区美的电热电器制造有限公司 A kind of acquisition methods and terminal, storage medium of product feature
CN113077306B (en) * 2021-03-25 2023-07-07 中国联合网络通信集团有限公司 Image processing method, device and equipment
CN113298619A (en) * 2021-05-24 2021-08-24 成都威爱新经济技术研究院有限公司 3D commodity live broadcast display method and system based on free viewpoint technology

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794056A (en) * 2010-02-05 2010-08-04 明基电通有限公司 Photographing setting control method and photographing device
CN103308149A (en) * 2013-06-24 2013-09-18 中国航空工业集团公司北京长城计量测试技术研究所 Machine vision synchronous focusing scanning type laser vibration measuring device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8773526B2 (en) * 2010-12-17 2014-07-08 Mitutoyo Corporation Edge detection using structured illumination
CN103810748B (en) * 2012-11-08 2019-02-12 北京京东尚科信息技术有限公司 The building of 3D simulation system, management method and 3D simulator
CN103702100B (en) * 2013-12-17 2017-06-06 Tcl商用信息科技(惠州)股份有限公司 The 3D methods of exhibiting and system of a kind of scene
CN104182880B (en) * 2014-05-16 2015-10-28 孙锋 A kind of net purchase method and system based on true man and/or 3D model in kind
CN104935893B (en) * 2015-06-17 2019-02-22 浙江大华技术股份有限公司 Monitor method and apparatus
CN106504283A (en) * 2016-09-26 2017-03-15 深圳奥比中光科技有限公司 Information broadcasting method, apparatus and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794056A (en) * 2010-02-05 2010-08-04 明基电通有限公司 Photographing setting control method and photographing device
CN103308149A (en) * 2013-06-24 2013-09-18 中国航空工业集团公司北京长城计量测试技术研究所 Machine vision synchronous focusing scanning type laser vibration measuring device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on three-dimensional measurement methods using digitally projected structured light; Zhang Wanzhen; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20160115; full text *

Also Published As

Publication number Publication date
CN107330974A (en) 2017-11-07

Similar Documents

Publication Publication Date Title
CN109118569B (en) Rendering method and device based on three-dimensional model
JP6560480B2 (en) Image processing system, image processing method, and program
CN107481304B (en) Method and device for constructing virtual image in game scene
US9514537B2 (en) System and method for adaptive depth map reconstruction
CN107517346B (en) Photographing method and device based on structured light and mobile device
US8848035B2 (en) Device for generating three dimensional surface models of moving objects
EP2824923B1 (en) Apparatus, system and method for projecting images onto predefined portions of objects
CN107330974B (en) Commodity display method and device and mobile equipment
US20120242795A1 (en) Digital 3d camera using periodic illumination
Tian et al. Handling occlusions in augmented reality based on 3D reconstruction method
US9049369B2 (en) Apparatus, system and method for projecting images onto predefined portions of objects
US20150302648A1 (en) Systems and methods for mapping an environment using structured light
CN107480615B (en) Beauty treatment method and device and mobile equipment
Takimoto et al. 3D reconstruction and multiple point cloud registration using a low precision RGB-D sensor
CN107209007A (en) Method, circuit, equipment, accessory, system and the functionally associated computer-executable code of IMAQ are carried out with estimation of Depth
CN107392874B (en) Beauty treatment method and device and mobile equipment
CN107610171B (en) Image processing method and device
CN107452034B (en) Image processing method and device
WO2016018422A1 (en) Virtual changes to a real object
KR20180039013A (en) Feature data management for environment mapping on electronic devices
CN107705278B (en) Dynamic effect adding method and terminal equipment
CN110310325B (en) Virtual measurement method, electronic device and computer readable storage medium
CN107515844B (en) Font setting method and device and mobile device
JP6311461B2 (en) Gaze analysis system and gaze analysis apparatus
WO2017126072A1 (en) Image recognition system, camera state estimation device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210115