CN113221043A - Picture generation method and device, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113221043A
CN113221043A
Authority
CN
China
Prior art keywords
displayed, picture, dimensional, preset, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110602457.XA
Other languages
Chinese (zh)
Inventor
沈艳
高春旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koubei Shanghai Information Technology Co Ltd
Original Assignee
Koubei Shanghai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koubei Shanghai Information Technology Co Ltd filed Critical Koubei Shanghai Information Technology Co Ltd
Priority to CN202110602457.XA priority Critical patent/CN113221043A/en
Publication of CN113221043A publication Critical patent/CN113221043A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/957 - Browsing optimisation, e.g. caching or content distillation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a picture generation method and apparatus, a computer device, and a computer-readable storage medium, relating to the field of internet technologies. The method includes: in response to a picture generation request, collecting data on an object to be displayed in a plurality of preset orientations to obtain a plurality of pieces of three-dimensional data of the object in those orientations; constructing a virtual effect model of the object based on the three-dimensional data and generating a three-dimensional effect picture that includes the virtual effect model; and determining an object detail page of the object and adding the three-dimensional effect picture to the detail page for display.

Description

Picture generation method and device, computer equipment and computer readable storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method and an apparatus for generating a picture, a computer device, and a computer-readable storage medium.
Background
In the mobile internet era, with the rapid development of internet technology, many merchants list their in-store offerings on online platforms as virtual products. These include not only physical goods such as food and clothing but also service goods such as hairdressing, skin care, and massage. To let users understand a product comprehensively before ordering, stores typically generate, for products such as hairdressing and nail art, a series of pictures showcasing an artist's work and its effects. These effect pictures form a work set associated with the product; from the work set, users can examine details such as hairstyle and nail design, which makes it convenient for them to compare and choose.
In the related art, taking a hairdressing product as an example, the effect pictures generated for the work set are usually photos of the front, side, and back of a hairstyle. The photos are arranged in sequence in the work set, and the user can see the complete details of the hairstyle by sliding through them.
In carrying out the present application, the applicant has found that the related art has at least the following problems:
the effect pictures are usually shot and uploaded manually by merchants. Because merchants' photography skills are limited, the resulting pictures are often non-standard, and the number of pictures per work in a set is limited. This easily leads to incomplete shooting angles, missing shooting details, and similar problems, so the pictures have poor usability.
Disclosure of Invention
In view of this, the present application provides a picture generation method and apparatus, a computer device, and a computer-readable storage medium, mainly aiming to solve the problem that pictures of current works have poor usability due to insufficiently comprehensive shooting angles and missing shooting details.
According to a first aspect of the present application, there is provided a picture generation method, including:
in response to a picture generation request, collecting data on an object to be displayed in a plurality of preset orientations to obtain a plurality of pieces of three-dimensional data of the object in those orientations, where the three-dimensional data includes basic parameters of the object in each preset orientation collected by a laser radar component configured on a terminal, and image data of the object in each preset orientation collected by an image acquisition component configured on the terminal;
constructing a virtual effect model of the object to be displayed based on the plurality of pieces of three-dimensional data, and generating a three-dimensional effect picture including the virtual effect model; and
determining an object detail page of the object to be displayed, and adding the three-dimensional effect picture to the object detail page for display.
Optionally, the collecting, in response to the picture generation request, data on the object to be displayed in the plurality of preset orientations to obtain the plurality of pieces of three-dimensional data includes:
in response to the picture generation request, invoking the laser radar component and the image acquisition component, where the laser radar component is a component with an object-parameter detection function configured on the terminal used for data acquisition;
displaying a preset contour range and labeling it with an orientation to be collected, where the orientation to be collected is any one of the plurality of preset orientations;
emitting a detection signal toward the object to be displayed within the preset contour range via the laser radar component to obtain basic parameters of the object in the orientation to be collected;
capturing an image of the object to be displayed within the preset contour range via the image acquisition component to obtain image data of the object in the orientation to be collected;
taking the basic parameters and the image data as the three-dimensional data of the object in the orientation to be collected; and
selecting a new orientation to be collected from the remaining preset orientations and collecting three-dimensional data of the object in it, until all the preset orientations have been traversed and the plurality of pieces of three-dimensional data are obtained, where the remaining preset orientations are the preset orientations other than the current orientation to be collected.
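The orientation-by-orientation acquisition loop described above can be sketched in Python. This is an illustrative outline only: `read_lidar` and `read_camera` are hypothetical stand-ins for the terminal's laser radar and image acquisition components, and the data shapes are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class ThreeDData:
    orientation: str
    basic_params: dict   # e.g. contour and size from the laser radar component
    image: bytes         # raw image data from the image acquisition component

def read_lidar(orientation: str) -> dict:
    # placeholder for the laser radar measurement in one orientation
    return {"contour": f"{orientation}-contour", "size": (10, 20)}

def read_camera(orientation: str) -> bytes:
    # placeholder for the image captured in one orientation
    return f"{orientation}-image".encode()

def acquire_all(preset_orientations: list) -> list:
    # traverse every preset orientation, collecting both kinds of data
    collected = []
    for orientation in preset_orientations:
        params = read_lidar(orientation)
        image = read_camera(orientation)
        collected.append(ThreeDData(orientation, params, image))
    return collected
```

The loop terminates exactly when all preset orientations have been traversed, matching the claim's "until all the preset orientations are traversed" condition.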
Optionally, the emitting a detection signal toward the object to be displayed within the preset contour range via the laser radar component to obtain basic parameters of the object in the orientation to be collected includes:
receiving an echo signal returned by the detection signal, where the echo signal is the signal returned after the detection signal hits the object to be displayed; and
comparing the detection signal with the echo signal, and outputting, according to the energy difference between the two, the object contour and object size of the object to be displayed as the basic parameters of the object in the orientation to be collected.
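As background for the signal comparison above, a pulsed-lidar range measurement halves the round-trip time of each pulse to get a per-point distance; points notably closer than the background belong to the object, from which contour and size follow. The sketch below is not the patented implementation (the patent relies on an energy difference and does not give formulas); it only illustrates the time-of-flight idea with invented helper names.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(t_seconds: float) -> float:
    # the pulse travels to the object and back, so halve the path length
    return SPEED_OF_LIGHT * t_seconds / 2.0

def object_extent(distances: list, background_m: float) -> int:
    # count sampled points that hit the object rather than the background;
    # a real component would assemble these hits into a 2-D contour
    return sum(1 for d in distances if d < background_m)
```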
Optionally, the method further includes:
if the object contour output according to the energy difference between the detection signal and the echo signal has a break, generating an object correction reminder; and
displaying the object correction reminder, where the reminder is presented either as text or as voice.
Optionally, the constructing a virtual effect model of the object to be displayed based on the plurality of pieces of three-dimensional data includes:
obtaining the plurality of basic parameters included in the three-dimensional data, and constructing an initial virtual model from those basic parameters; and
reading the red-green-blue (RGB) channel values of the plurality of pieces of image data included in the three-dimensional data, and filling those RGB values into the initial virtual model according to the preset orientations corresponding to the image data, to obtain the virtual effect model.
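The two-stage construction above (geometry first, then per-orientation color fill) can be sketched as follows. The model representation, one face per preset orientation, is an assumption made for illustration; the patent does not fix a mesh or texture format.

```python
def build_initial_model(basic_params_by_orientation: dict) -> dict:
    # geometry only: one face per preset orientation, colors not yet filled
    return {o: {"size": p["size"], "rgb": None}
            for o, p in basic_params_by_orientation.items()}

def fill_rgb(model: dict, rgb_by_orientation: dict) -> dict:
    # paint each face with the RGB values read from that orientation's image
    for orientation, rgb in rgb_by_orientation.items():
        model[orientation]["rgb"] = rgb
    return model
```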
Optionally, the generating a three-dimensional effect picture including the virtual effect model includes:
collecting data on the background of the object to be displayed in the plurality of preset orientations to generate an original background picture, and adding the virtual effect model to the original background picture to obtain the three-dimensional effect picture; and/or
obtaining a default fill color, generating a background base map filled with the default fill color, and adding the virtual effect model to the background base map to obtain the three-dimensional effect picture.
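The second background option above, a base map filled with a default color with the model composited on top, can be sketched with pictures represented as tiny nested RGB lists (an illustrative representation, not the patent's):

```python
def solid_base_map(width: int, height: int, fill_rgb=(255, 255, 255)):
    # background base map filled with the default fill color
    return [[fill_rgb for _ in range(width)] for _ in range(height)]

def composite(background, model_pixels: dict):
    # model_pixels maps (row, col) -> rgb; paint the virtual effect
    # model over a copy of the background to form the effect picture
    picture = [row[:] for row in background]
    for (r, c), rgb in model_pixels.items():
        picture[r][c] = rgb
    return picture
```

The same `composite` step would serve the first option too, with `background` coming from the captured original background picture instead of `solid_base_map`.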
Optionally, after the constructing a virtual effect model of the object to be displayed based on the plurality of pieces of three-dimensional data and generating a three-dimensional effect picture including the virtual effect model, the method further includes:
displaying the three-dimensional effect picture;
in response to receiving a picture adjustment request based on the three-dimensional effect picture, determining the trigger point of the trigger operation on the picture, and adjusting the position of the virtual effect model in the picture according to the moving direction of the trigger point; and
in response to receiving a picture generation request based on the three-dimensional effect picture, associating the picture with the object to be displayed, caching the associated picture, collecting the plurality of pieces of three-dimensional data of the object in the plurality of preset orientations again, and generating a new three-dimensional effect picture.
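The adjustment step above, moving the virtual effect model by the displacement of the trigger point, can be sketched as follows. Positions and trigger points are plain (x, y) pairs here; the patent does not specify a coordinate model, so this is an assumption for illustration.

```python
def adjust_model_position(model_pos, trigger_start, trigger_end):
    # shift the model by the same displacement as the trigger point
    # (e.g. a touch drag from trigger_start to trigger_end)
    dx = trigger_end[0] - trigger_start[0]
    dy = trigger_end[1] - trigger_start[1]
    return (model_pos[0] + dx, model_pos[1] + dy)
```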
Optionally, the method further includes:
when it is detected that the number of cached three-dimensional effect pictures associated with the object to be displayed reaches a number threshold, displaying all currently cached three-dimensional effect pictures associated with the object; and
in response to a target three-dimensional effect picture among all those pictures being triggered, adding the target picture to the object detail page for display.
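The caching flow above can be outlined as a small class (names and structure are illustrative, not from the patent): pictures accumulate in a per-object cache; once the count reaches the threshold all of them are surfaced, and the one the user triggers goes to the detail page.

```python
class EffectPictureCache:
    def __init__(self, threshold):
        self.threshold = threshold
        self.pictures = []          # cached 3-D effect pictures for one object

    def add(self, picture):
        # cache a new picture; once the threshold is reached, return
        # every cached picture so the caller can display them all
        self.pictures.append(picture)
        if len(self.pictures) >= self.threshold:
            return list(self.pictures)
        return None

    def choose(self, target, detail_page):
        # the triggered target picture is added to the object detail page
        assert target in self.pictures
        detail_page.append(target)
        return detail_page
```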
According to a second aspect of the present application, there is provided a picture generation apparatus, comprising:
an acquisition module, configured to collect, in response to a picture generation request, data on an object to be displayed in a plurality of preset orientations to obtain a plurality of pieces of three-dimensional data of the object in those orientations, where the three-dimensional data includes basic parameters of the object in each preset orientation collected by a laser radar component configured on a terminal, and image data of the object in each preset orientation collected by an image acquisition component configured on the terminal;
the building module is used for building a virtual effect model of the object to be displayed based on the plurality of three-dimensional data and generating a three-dimensional effect picture comprising the virtual effect model;
and the first display module is used for determining an object detail page of the object to be displayed and adding the three-dimensional effect picture to the object detail page for display.
Optionally, the acquisition module is configured to: invoke the laser radar component and the image acquisition component in response to the picture generation request, where the laser radar component is a component with an object-parameter detection function configured on the terminal used for data acquisition; display a preset contour range and label it with an orientation to be collected, where the orientation to be collected is any one of the plurality of preset orientations; emit a detection signal toward the object to be displayed within the preset contour range via the laser radar component to obtain basic parameters of the object in the orientation to be collected; capture an image of the object within the preset contour range via the image acquisition component to obtain image data of the object in the orientation to be collected; take the basic parameters and the image data as the three-dimensional data of the object in the orientation to be collected; and select a new orientation to be collected from the remaining preset orientations and collect three-dimensional data of the object in it, until all the preset orientations have been traversed and the plurality of pieces of three-dimensional data are obtained, where the remaining preset orientations are the preset orientations other than the current orientation to be collected.
Optionally, the acquisition module is configured to receive an echo signal returned by the detection signal, where the echo signal is a signal returned after the detection signal hits the object to be displayed; and comparing the detection signal with the echo signal, and outputting the object outline and the object size of the object to be displayed as basic parameters of the object to be displayed in the direction to be acquired according to the energy difference between the detection signal and the echo signal.
Optionally, the acquisition module is further configured to generate an object correction prompt if the object contour output according to the energy difference between the detection signal and the echo signal is broken; and displaying the object correction reminder, wherein the display mode of the object correction reminder is any one of character display or voice display.
Optionally, the building module is configured to obtain a plurality of basic parameters included in the plurality of three-dimensional data, and build an initial virtual model according to the plurality of basic parameters; and reading red, green and blue channel RGB values of a plurality of image data included in the plurality of three-dimensional data, and filling the RGB values of the plurality of image data into the initial virtual model according to the preset directions corresponding to the plurality of image data to obtain the virtual effect model.
Optionally, the building module is configured to perform data acquisition on an object background of the object to be displayed in the plurality of preset orientations, generate an original background picture, and add the virtual effect model to the original background picture to obtain the three-dimensional effect picture; and/or acquiring default filling color, generating a background base map filled with the default filling color, and adding the virtual effect model to the background base map to obtain the three-dimensional effect picture.
Optionally, the apparatus further comprises:
the second display module is used for displaying the three-dimensional effect picture;
an adjusting module, configured to determine, in response to receiving a picture adjustment request based on the three-dimensional effect picture, a trigger point of the trigger operation on the three-dimensional effect picture, and adjust a position of the virtual effect model in the three-dimensional effect picture according to a moving direction of the trigger point;
the acquisition module is further configured to, in response to receiving a picture generation request based on the three-dimensional effect picture, associate the three-dimensional effect picture with the object to be displayed, cache the associated picture, and collect the plurality of pieces of three-dimensional data of the object in the plurality of preset orientations again to generate a new three-dimensional effect picture.
Optionally, the second display module is further configured to display all currently cached three-dimensional effect pictures associated with the object to be displayed when it is detected that the number of cached three-dimensional effect pictures associated with the object to be displayed reaches a number threshold;
the first display module is further configured to add the target three-dimensional effect picture to the object detail page for display in response to the target three-dimensional effect picture in all the three-dimensional effect pictures being triggered.
According to a third aspect of the present application, there is provided a computer device including a memory storing a computer program, and a processor that implements the steps of the method of any one of the first aspects when executing the computer program.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the method of any one of the first aspects.
Through the above technical solutions, the present application provides a picture generation method and apparatus, a computer device, and a computer-readable storage medium. When a user requests a three-dimensional effect picture of an object to be displayed, data is collected on the object in a plurality of preset orientations, a virtual effect model of the object is constructed from the resulting three-dimensional data, a three-dimensional effect picture including the model is generated, and the picture is added to the object's detail page for display. By synthesizing the three-dimensional data collected in multiple preset orientations into a picture containing a three-dimensional virtual effect model, the solution restores the appearance of the object in a surround view, enriches the forms in which the object can be presented, and improves the usability of the picture.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the application clearer and implementable according to this description, and to make the above and other objects, features, and advantages more readily understandable, a detailed description of the application follows.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a schematic flow chart of a picture generation method provided in an embodiment of the present application;
fig. 2A shows a schematic flowchart of a picture generation method provided in an embodiment of the present application;
fig. 2B is a schematic diagram illustrating a picture generation method provided in an embodiment of the present application;
fig. 3A is a schematic structural diagram illustrating a picture generation apparatus according to an embodiment of the present application;
fig. 3B is a schematic structural diagram of a picture generation apparatus provided in an embodiment of the present application;
fig. 4 shows a schematic device structure diagram of a computer apparatus according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
An embodiment of the present application provides a picture generation method, as shown in fig. 1, the method includes:
101. and responding to the picture generation request, performing data acquisition on the object to be displayed in a plurality of preset directions to obtain a plurality of three-dimensional data of the object to be displayed in the plurality of preset directions, wherein the three-dimensional data comprises basic parameters of the object to be displayed in the preset directions acquired based on a laser radar assembly configured on the terminal and image data of the object to be displayed in the preset directions acquired based on an image acquisition assembly configured on the terminal.
102. Construct a virtual effect model of the object to be displayed based on the plurality of pieces of three-dimensional data, and generate a three-dimensional effect picture including the virtual effect model.
103. Determine an object detail page of the object to be displayed, and add the three-dimensional effect picture to the object detail page for display.
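The three numbered steps above can be summarized as one pipeline. All helper names below are illustrative placeholders, not APIs from the patent; the sensor-facing and rendering logic is injected so the sketch stays self-contained.

```python
def generate_detail_page_picture(obj, preset_orientations,
                                 acquire, build_model, render):
    # step 101: collect three-dimensional data in every preset orientation
    data = [acquire(obj, o) for o in preset_orientations]
    # step 102: build the virtual effect model and render the effect picture
    picture = render(build_model(data))
    # step 103: attach the picture to the object's detail page
    detail_page = {"object": obj, "pictures": [picture]}
    return detail_page
```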
In the method provided by this embodiment, when a user requests a three-dimensional effect picture of an object to be displayed, data is collected on the object in a plurality of preset orientations. The basic parameters of the object in each preset orientation, collected by a laser radar component configured on the terminal, together with the image data of the object in each preset orientation, collected by an image acquisition component configured on the terminal, serve as the plurality of pieces of three-dimensional data. A virtual effect model of the object is constructed from this data, a three-dimensional effect picture including the model is generated, and the picture is added to the object detail page for display. By synthesizing the three-dimensional data collected in multiple preset orientations into a picture containing a three-dimensional virtual effect model, the method restores the appearance of the object in a surround view, enriches the forms in which the object can be presented, and improves the usability of the picture.
An embodiment of the present application provides a picture generation method, as shown in fig. 2A, the method includes:
201. and responding to the picture generation request, and performing data acquisition on the object to be displayed in a plurality of preset positions to obtain a plurality of three-dimensional data of the object to be displayed in the plurality of preset positions.
At present, to let users understand online goods comprehensively, many platforms that provide online consumption functions set up online work sets for goods such as nail art and hairdressing. A work set gathers multi-orientation photos of a work, such as the front, side, and back of a nail design or hairstyle, and by browsing the work set corresponding to a product the user can learn its details and design. However, the applicant recognized that most work sets provided by platforms arrange the multi-orientation photos in a fixed order to form a photo queue, so the user must actively swipe through the work set to see all the detail pictures of a product. This places a high operational demand on the user, and some users are unwilling to browse work sets at all, so the detail pictures are viewed infrequently. Moreover, most pictures in work sets are shot by store merchants at angles of their own choosing; their photography skills are limited and the pictures are non-standard, which easily leads to incomplete shooting angles, missing details, and similar problems, so the pictures have poor usability.
Therefore, the application provides a picture generation method. When a user requests a three-dimensional effect picture of an object to be displayed, data is collected on the object in a plurality of preset orientations to obtain a plurality of pieces of three-dimensional data, a virtual effect model of the object is constructed from that data, a three-dimensional effect picture including the model is generated, and the picture is added to the object detail page for display. By synthesizing the three-dimensional data collected in multiple preset orientations into a picture containing a three-dimensional virtual effect model, the method restores the appearance of the object in a surround view, enriches the forms in which the object can be presented, and improves the usability of the picture.
The platform provides a picture generation entry. When the platform detects that the entry has been triggered, it determines that a picture generation request has been received from the user, and collects data on the object to be displayed in a plurality of preset orientations to obtain a plurality of pieces of three-dimensional data. The platform may be one that provides online consumption functions, or one that provides social services and can display physical objects; the type of platform is not limited in this application. In response to the picture generation request, the platform invokes a laser radar component and an image acquisition component: the laser radar component configured on the terminal collects the basic parameters of the object in each preset orientation, and the image acquisition component configured on the terminal collects the image data of the object in each preset orientation. The basic parameters and image data collected in the multiple orientations together form the plurality of pieces of three-dimensional data. In this way, the three-dimensional data captures not only the object's outline in each preset orientation, such as its height and length, but also its color, pattern, and the like in that orientation.
Specifically, the laser radar component is a component with a laser scanning function, such as a LiDAR (light detection and ranging) component, configured on the handheld mobile terminal used when the user requests the platform to generate a picture. When the user operates the terminal to request picture generation, the LiDAR component configured on the terminal is invoked and turned on. The LiDAR component emits pulsed laser beams at the object to be displayed and measures basic parameters such as its height, width, and contour. The image acquisition component is a component of the terminal with a shooting function, such as a camera. Provided the user has granted access to these components, the platform can instruct the terminal to start the laser radar component and the image acquisition component, and can therefore schedule both. In addition, the above is described using a LiDAR component as an example; in practice, if the terminal is not equipped with a LiDAR component, the object to be displayed and its environment may instead be sensed with a RealSense (real-sense technology) component configured on the terminal, which learns interactively from the environment and the object to obtain the object's basic parameters. Alternatively, the object may be captured using the 3D (three-dimensional) function of a PrimeSense (short-range motion-sensing camera) component configured on the terminal to obtain its basic parameters.
Alternatively, the object to be displayed may be modeled in three dimensions in real time using a Project Tango component (a component for real-time three-dimensional modeling) configured on the terminal. Because Project Tango models in real time, the 3D model it builds can be used directly as the virtual effect model of the object, with no reconstruction needed. The present application does not specifically limit the type of component the terminal uses for data acquisition.
Then, in order for the three-dimensional data collected for the object to be displayed in the plurality of preset orientations to be comprehensive and standard-compliant, the platform displays a preset contour range at the front end provided to the user, which is used to constrain the user's shooting range. Taking the object to be displayed being a hair style as an example, the preset contour range may be a range covering the head and upper body, displayed in the center of the screen of the user terminal, as shown in fig. 2B, so that the user can photograph the hair style according to the specification.
Because the present application requires data acquisition on the object to be displayed in a plurality of preset orientations, the user holds the terminal and photographs the object to be displayed from different orientations. To let the user know which orientation currently needs to be photographed, the platform may mark the preset contour range with the orientation to be acquired when displaying the preset contour range, where the orientation to be acquired is any one of the plurality of preset orientations, so that the user performs data acquisition on the object to be displayed in that orientation. For example, "front" may be displayed directly above the preset contour range, so that the user knows the platform currently needs data acquired on the front of the object to be displayed. It should be noted that a plurality of preset orientations are involved in the present application, and in order to let the user clearly know the acquisition progress, a sequence may be set for the plurality of preset orientations, a queue of preset orientations formed in order, and the queue displayed directly above the preset contour range. Further, the orientation currently to be acquired is highlighted in the queue by means of a colored font, an enlarged font, or a label, so that the user knows which orientation's data is currently being acquired. Specifically, as shown in fig. 2B, a queue of "front → side → back → finish" is generated; when the user is required to acquire front data, the font size of "front" in the queue is increased to highlight it, and the user is reminded below the preset contour range with text such as "please scan the front".
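As a minimal, illustrative sketch of the capture-progress queue described above (the function names, the bracket-highlight convention, and the reminder text format are assumptions for illustration, not specified by this application), the front-end state might be modeled as:

```python
def render_orientation_queue(orientations, current):
    """Render the capture-progress queue, marking the orientation whose
    data is currently being acquired (a real front end would map the
    marker to an enlarged or colored font, as described above)."""
    parts = []
    for o in orientations:
        parts.append(f"[{o}]" if o == current else o)
    return " -> ".join(parts)


def prompt_for(current):
    # Reminder text shown below the preset contour range.
    return f"Please scan the {current}"
```

For example, with the queue from fig. 2B and "front" as the orientation to be acquired, `render_orientation_queue(["front", "side", "back", "finish"], "front")` highlights the first entry.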
It should be noted that the preset orientations of front, side, and back and their sequence are only an example; in practical application, the preset orientations may be further refined into orientations such as left side, right side, and so on.
After the platform displays the preset contour range and marks it with the orientation to be acquired, data acquisition can be started on the object to be displayed that the user has placed within the preset contour range. Because the laser radar component and the image acquisition component are called simultaneously by the platform, on one hand, the platform can transmit a detection signal to the object to be displayed within the preset contour range based on the laser radar component, obtaining basic parameters of the object to be displayed in the orientation to be acquired. The basic parameters specifically include an object contour and an object size. Specifically, an echo signal returned by the detection signal may be received, where the echo signal is the signal returned after the detection signal hits the object to be displayed. Because the detection signal loses energy when it hits an obstacle, an energy difference exists between the returned echo signal and the detection signal, and information about the object to be displayed, such as its posture, shape, height, and width, can be obtained by calculating this energy difference. The platform therefore processes the detection signal and the echo signal, compares them, and, according to the energy difference between the two, outputs the object contour and object size of the object to be displayed as the basic parameters of the object to be displayed in the orientation to be acquired, so that a virtual effect model with the same shape and posture as the object to be displayed can be established based on these basic parameters.
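The application states only that the basic parameters are derived from the energy difference between the detection signal and its echo; the concrete mapping below (a per-sample energy-loss threshold marking which samples lie on the object, with the threshold value, grid layout, and size measure all invented for illustration) is purely a hedged sketch of one way that comparison could work:

```python
def basic_parameters(probe_energies, echo_energies, threshold):
    """Illustrative sketch: compare each emitted pulse with its echo and
    mark sample points whose energy loss meets a threshold as lying on
    the object to be displayed, yielding a crude contour mask and a
    sample-count size estimate. Not the application's actual method."""
    contour_mask = []
    for probe, echo in zip(probe_energies, echo_energies):
        loss = probe - echo           # energy difference for this sample
        contour_mask.append(loss >= threshold)
    object_size = sum(contour_mask)   # number of samples on the object
    return {"contour": contour_mask, "size": object_size}
```

A real implementation would of course operate on a two-dimensional scan grid and recover posture and dimensions, not a flat sample list.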
On the other hand, the platform can perform image acquisition on the object to be displayed placed within the preset contour range based on the image acquisition component, obtaining image data of the object to be displayed in the orientation to be acquired. Specifically, a picture of the object to be displayed in the orientation to be acquired may be taken as the image data of the object to be displayed in that orientation, which is not specifically limited in the present application.
After the basic parameters and image data of the object to be displayed in the orientation to be acquired have been obtained based on the laser radar component and the image acquisition component, the platform takes the basic parameters and the image data as the three-dimensional data of the object to be displayed in that orientation, completing data acquisition for the orientation. The platform then selects a new orientation to be acquired from the other preset orientations and performs the data acquisition process again, acquiring the three-dimensional data of the object to be displayed in the new orientation, until all preset orientations have been traversed and the plurality of three-dimensional data obtained, where the other preset orientations are the preset orientations other than those already acquired among the plurality of preset orientations.
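The traversal over preset orientations can be sketched as the loop below; the callables standing in for the laser radar and image acquisition components, and the record shape, are illustrative assumptions:

```python
def collect_all(preset_orientations, scan_lidar, capture_image):
    """Traverse every preset orientation in sequence, pairing the
    lidar-derived basic parameters with the image data captured for
    that orientation, as described in the acquisition step above.
    scan_lidar / capture_image stand in for the component calls."""
    three_d_data = []
    for orientation in preset_orientations:
        params = scan_lidar(orientation)       # basic parameters
        image = capture_image(orientation)     # image data
        three_d_data.append({
            "orientation": orientation,
            "basic_parameters": params,
            "image_data": image,
        })
    return three_d_data
```

Once the list covers all preset orientations, it is the "plurality of three-dimensional data" passed to model construction in step 202.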
It should be noted that, although the preset contour range may be displayed at the front end to constrain the user's shooting range, some users may still let the object to be displayed exceed the preset contour range while acquiring three-dimensional data. In order to correct the user in time, an object correction reminder may be generated and displayed when the object to be displayed exceeds the preset contour range. Specifically, the contour of the object to be displayed can be recognized when the object does not exceed the preset contour range, but when it does exceed the range, the recognized contour is broken and cannot be identified at some positions; whether the object to be displayed exceeds the preset contour range can therefore be determined according to whether the object contour is broken. That is, if the object contour output according to the energy difference between the detection signal and the echo signal is broken, it is determined that the object to be displayed exceeds the preset contour range, and an object correction reminder is generated and displayed. The object correction reminder may be displayed in either of two modes: text display, in which a prompt containing text such as "beyond outline" is generated and shown on the screen of the terminal, or voice display, in which text such as "beyond outline" is broadcast by voice. The display mode of the object correction reminder may be either of the two, which is not specifically limited in the present application.
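The break-detection rule above can be sketched as follows; representing an unresolvable contour position as `None` is an assumption made for illustration, as are the function names and the reminder structure:

```python
def contour_is_broken(contour_points):
    """A contour recovered from the echo signals is treated as broken if
    any position could not be resolved (None here) -- in the terms used
    above, the object to be displayed exceeds the preset contour range."""
    return any(p is None for p in contour_points)


def correction_reminder(contour_points, mode="text"):
    """Generate the object correction reminder only when the contour is
    broken; mode is either text display or voice display."""
    if not contour_is_broken(contour_points):
        return None
    return {"mode": mode, "message": "Object exceeds the preset contour range"}
```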
202. And constructing a virtual effect model of the object to be displayed based on the plurality of three-dimensional data.
In the embodiment of the application, after the plurality of three-dimensional data of the object to be displayed have been acquired in the plurality of preset orientations, the platform constructs a virtual effect model of the object to be displayed based on the plurality of three-dimensional data, so as to realize a 360-degree surrounding three-dimensional display of the object to be displayed on the terminal.
When constructing the virtual effect model, the platform first acquires the plurality of basic parameters included in the plurality of three-dimensional data. Because the basic parameters indicate the contour, posture, shape, and so on of the object to be displayed, the platform can lay out wiring according to these basic parameters and then construct, in a virtual space, an initial virtual model whose posture, shape, and so on are consistent with those of the object to be displayed. Then, because the initial virtual model is only a line model in the virtual space and is not yet filled with related content such as color and material, while the image data in the three-dimensional data records exactly the color, material, and so on of the object to be displayed in each preset orientation, the platform reads the RGB (Red Green Blue channel) values of the plurality of image data included in the plurality of three-dimensional data and fills the RGB values onto the initial virtual model according to the preset orientations corresponding to the image data, obtaining the virtual effect model.
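The two-stage construction above (a line model wired from the basic parameters, then per-orientation RGB fill from the image data) can be sketched as below; the record shapes and keys are illustrative assumptions, not a rendering pipeline from the application:

```python
def build_virtual_effect_model(three_d_data):
    """Sketch of the two-stage construction: first wire an initial
    line-only model from the basic parameters of every orientation,
    then fill the RGB values read from the image data onto the
    matching orientation of the initial model."""
    # Stage 1: initial virtual model -- lines only, no color or material.
    initial_model = {
        d["orientation"]: {"lines": d["basic_parameters"]}
        for d in three_d_data
    }
    # Stage 2: fill the per-orientation RGB values onto the model.
    for d in three_d_data:
        initial_model[d["orientation"]]["rgb"] = d["image_data"]["rgb"]
    return initial_model
```

In practice this corresponds to the model rendering components mentioned below, which render the lidar and image data directly.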
It should be noted that the process of generating the virtual effect model described above is, in effect, a process of rendering the plurality of acquired three-dimensional data; at present, some model rendering components can directly render the data acquired by the laser radar component and the image acquisition component and output a virtual effect model of the object to be displayed.
203. A three-dimensional effect picture including a virtual effect model is generated.
In the embodiment of the application, after the virtual effect model is generated, since it needs to be displayed for the user's reference, and in order to make it more attractive, the platform adds some background elements to the virtual effect model and generates a three-dimensional effect picture including the virtual effect model.
Specifically, the platform can collect data on the object background of the object to be displayed in the plurality of preset orientations to generate an original background picture, and add the virtual effect model to the original background picture to obtain the three-dimensional effect picture; that is, the scene background of the scene where the object to be displayed currently sits is used as the background for generating the three-dimensional effect picture, increasing its sense of reality. Alternatively, the platform may set a default filling color, such as white or black, obtain the default filling color, generate a background base map filled with it, and add the virtual effect model to the background base map to obtain the three-dimensional effect picture. That is, the virtual effect model is displayed on a white or black background picture to generate the three-dimensional effect picture, enhancing its artistic sense.
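The choice between the two backgrounds can be sketched as below; the dictionary structure and default color are assumptions for illustration:

```python
def compose_effect_picture(model, scene_background=None, fill_color="white"):
    """Compose the three-dimensional effect picture: prefer the captured
    scene background when one is available (for realism); otherwise use
    a base map filled with a default color (for a cleaner look)."""
    if scene_background is not None:
        background = {"kind": "scene", "data": scene_background}
    else:
        background = {"kind": "fill", "data": fill_color}
    return {"background": background, "model": model}
```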
In practical application, considering that a user may be dissatisfied with the generated three-dimensional effect picture and need to regenerate it, or may want to adjust the position or angle of the virtual effect model within it, the platform displays the three-dimensional effect picture so that the user can see the generation result, conveniently choose whether to regenerate the picture, and be provided with a function for manually adjusting the three-dimensional effect picture. Further, in response to receiving a picture adjustment request based on the three-dimensional effect picture, a trigger point of the trigger operation on the three-dimensional effect picture is determined, and the position of the virtual effect model in the picture is adjusted according to the moving direction of the trigger point. For example, if the moving direction of the trigger point indicates that the position of the virtual effect model should be adjusted to the right, its position in the three-dimensional effect picture may be adjusted according to the distance the trigger point moves to the right. It should be noted that, in practical application, the angle of the virtual effect model in the three-dimensional effect picture may also be adjusted; for example, if the user triggers the three-dimensional effect picture with two fingers, the platform determines the two trigger points of the two fingers on the picture and rotates the virtual effect model according to the angle formed between the line connecting the two trigger points and a horizontal or vertical line.
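A hedged sketch of the two gestures described above follows; the application specifies only that position follows the trigger point's movement and rotation follows the angle of the two-finger line against the horizontal, so the function signature, coordinate convention, and angle computation here are illustrative assumptions:

```python
import math

def adjust_model(position, angle_deg, start_points, end_points):
    """One trigger point: translate the virtual effect model by the
    point's displacement. Two trigger points: rotate the model to the
    angle the line between them makes with the horizontal."""
    if len(start_points) == 1:
        (sx, sy), (ex, ey) = start_points[0], end_points[0]
        position = (position[0] + ex - sx, position[1] + ey - sy)
    elif len(start_points) == 2:
        (ax, ay), (bx, by) = end_points
        angle_deg = math.degrees(math.atan2(by - ay, bx - ax))
    return position, angle_deg
```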
Further, in response to receiving a picture generation request based on the three-dimensional effect picture, the platform determines that the user wants to regenerate it. In order to keep a record of the generated three-dimensional effect picture, the platform associates the picture with the object to be displayed, caches the associated picture, and collects a plurality of three-dimensional data of the object to be displayed in the plurality of preset orientations again to generate a new three-dimensional effect picture. It should be noted that, to prevent a user from caching so many three-dimensional effect pictures for a given object to be displayed that they occupy a large cache space on the platform, a number threshold may be set in the platform to limit how many three-dimensional effect pictures each object to be displayed can cache; for example, a number such as 5 or 10 may be set as the threshold. In this way, when the platform detects that the number of cached three-dimensional effect pictures associated with the object to be displayed has reached the number threshold, all currently cached three-dimensional effect pictures associated with that object are displayed for the user to preview, so that the user can select one for subsequent display. Further, in response to a target three-dimensional effect picture among all the three-dimensional effect pictures being triggered, the platform determines that the user wants the target picture displayed subsequently, and therefore adds the target three-dimensional effect picture to the object detail page for display.
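The per-object cache with a number threshold can be sketched as follows; the dictionary-backed cache, the default threshold of 5 (one of the example values above), and the convention of returning the cached list once the threshold is reached are all illustrative assumptions:

```python
def cache_picture(cache, object_id, picture, threshold=5):
    """Associate a newly generated three-dimensional effect picture with
    its object and cache it. Once the per-object count reaches the
    threshold, return the cached pictures so the front end can display
    them all for the user to preview and select from; otherwise None."""
    pictures = cache.setdefault(object_id, [])
    pictures.append(picture)
    if len(pictures) >= threshold:
        return list(pictures)  # show all cached pictures for selection
    return None
```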
It should be noted that, when the platform displays all three-dimensional effect pictures for the user to preview, a deletion entry may be provided, so that the user triggers the deletion entry to delete one or more three-dimensional effect pictures, or delete all three-dimensional effect pictures.
204. And determining an object detail page of the object to be displayed, and adding the three-dimensional effect picture to the object detail page for displaying.
In the embodiment of the application, in order that a customer browsing the detail page of the object to be displayed in the platform can directly see the three-dimensional effect picture there, the platform determines the object detail page of the object to be displayed and adds the three-dimensional effect picture to the object detail page for display. In particular, the platform may add the three-dimensional effect picture to the bottom right, top left, top right, or other position of the plan view of the object to be displayed in the object detail page. In addition, in order to give the customer a comprehensive understanding of the object to be displayed based on the three-dimensional effect picture, the virtual effect model displayed in the three-dimensional effect picture on the object detail page provides a model adjustment function: the customer can trigger the three-dimensional effect picture to rotate, move, or otherwise manipulate the virtual effect model, improving the visibility of the picture. Alternatively, the platform can add the three-dimensional effect picture to a product set associated with the object to be displayed or with the store providing it, enriching the forms in which the product set is expressed.
In the method provided by the embodiment of the application, when a user requests a three-dimensional effect picture of an object to be displayed, data acquisition is performed on the object in a plurality of preset orientations. The basic parameters of the object in each preset orientation, acquired by the laser radar component configured on the terminal, together with the image data of the object in that orientation, acquired by the image acquisition component configured on the terminal, serve as the plurality of three-dimensional data of the object in the plurality of preset orientations. A virtual effect model of the object is constructed based on these three-dimensional data, a three-dimensional effect picture including the virtual effect model is generated, and the picture is added to the object detail page of the object for display. By synthesizing the three-dimensional data acquired in the plurality of preset orientations and generating a three-dimensional effect picture that includes a three-dimensional virtual effect model, the effect of the object to be displayed is restored in a three-dimensional, surrounding manner, the forms in which the object can be represented are enriched, and the usability of the picture is improved.
Further, as a specific implementation of the method shown in fig. 1, an embodiment of the present application provides an image generating apparatus, and as shown in fig. 3A, the apparatus includes: an acquisition module 301, a construction module 302, and a first presentation module 303.
The acquisition module 301 is configured to, in response to a picture generation request, perform data acquisition on an object to be displayed in a plurality of preset orientations to obtain a plurality of three-dimensional data of the object to be displayed in the plurality of preset orientations, where the three-dimensional data includes basic parameters of the object to be displayed in a preset orientation acquired based on a laser radar component configured on a terminal and image data of the object to be displayed in the preset orientation acquired based on an image acquisition component configured on the terminal;
the constructing module 302 is configured to construct a virtual effect model of the object to be displayed based on the plurality of three-dimensional data, and generate a three-dimensional effect picture including the virtual effect model;
the first display module 303 is configured to determine an object detail page of the object to be displayed, and add the three-dimensional effect picture to the object detail page for display.
In a specific application scenario, the acquisition module 301 is configured to respond to the picture generation request and call the laser radar component and the image acquisition component, where the laser radar component is a component with an object parameter detection function configured on the terminal used in data acquisition; displaying a preset contour range, and marking the preset contour range by adopting an orientation to be acquired, wherein the orientation to be acquired is any one of the plurality of preset orientations; transmitting a detection signal to the object to be displayed within the preset contour range based on the laser radar component to obtain basic parameters of the object to be displayed in the direction to be collected; acquiring an image of the object to be displayed within the preset contour range based on the image acquisition assembly to obtain image data of the object to be displayed in the position to be acquired; taking the basic parameters and the image data as three-dimensional data of the object to be displayed in the position to be acquired; selecting a new position to be collected from other preset positions, collecting three-dimensional data of the object to be displayed in the new position to be collected until all the preset positions are traversed to obtain the plurality of three-dimensional data, wherein the other preset positions are preset positions except the position to be collected in the plurality of preset positions.
In a specific application scenario, the acquisition module 301 is configured to receive an echo signal returned by the detection signal, where the echo signal is a signal returned after the detection signal hits the object to be displayed; and comparing the detection signal with the echo signal, and outputting the object outline and the object size of the object to be displayed as basic parameters of the object to be displayed in the direction to be acquired according to the energy difference between the detection signal and the echo signal.
In a specific application scenario, the acquisition module 301 is further configured to generate an object correction prompt if the object contour output according to the energy difference between the detection signal and the echo signal is broken; and displaying the object correction reminder, wherein the display mode of the object correction reminder is any one of character display or voice display.
In a specific application scenario, the constructing module 302 is configured to obtain a plurality of basic parameters included in the plurality of three-dimensional data, and construct an initial virtual model according to the plurality of basic parameters; and reading red, green and blue channel RGB values of a plurality of image data included in the plurality of three-dimensional data, and filling the RGB values of the plurality of image data into the initial virtual model according to the preset directions corresponding to the plurality of image data to obtain the virtual effect model.
In a specific application scenario, the constructing module 302 is configured to perform data acquisition on the object background of the object to be displayed in the plurality of preset orientations, generate an original background picture, and add the virtual effect model to the original background picture to obtain the three-dimensional effect picture; and/or acquiring default filling color, generating a background base map filled with the default filling color, and adding the virtual effect model to the background base map to obtain the three-dimensional effect picture.
In a specific application scenario, as shown in fig. 3B, the apparatus further includes: a second presentation module 304 and an adjustment module 305.
The second display module 304 is configured to display the three-dimensional effect picture;
the adjusting module 305 is configured to, in response to receiving a picture adjustment request based on the three-dimensional effect picture, determine a trigger point of the trigger operation on the three-dimensional effect picture, and adjust a position of the virtual effect model in the three-dimensional effect picture according to a moving direction of the trigger point;
the acquiring module 301 is further configured to, in response to receiving a picture generation request based on the three-dimensional effect picture, associate the three-dimensional effect picture with the object to be displayed, cache the associated three-dimensional effect picture, and acquire a plurality of three-dimensional data of the object to be displayed in the plurality of preset orientations again to generate a new three-dimensional effect picture.
In a specific application scenario, the second display module 304 is further configured to display all currently cached three-dimensional effect pictures associated with the object to be displayed when it is detected that the number of the cached three-dimensional effect pictures associated with the object to be displayed reaches a number threshold;
the first displaying module 303 is further configured to add the target three-dimensional effect picture to the object detail page for displaying in response to the target three-dimensional effect picture in all the three-dimensional effect pictures being triggered.
In the device provided by the embodiment of the application, when a user requests a three-dimensional effect picture of an object to be displayed, data acquisition is performed on the object in a plurality of preset orientations. The basic parameters of the object in each preset orientation, acquired by the laser radar component configured on the terminal, together with the image data of the object in that orientation, acquired by the image acquisition component configured on the terminal, serve as the plurality of three-dimensional data of the object in the plurality of preset orientations. A virtual effect model of the object is constructed based on these three-dimensional data, a three-dimensional effect picture including the virtual effect model is generated, and the picture is added to the object detail page of the object for display. By synthesizing the three-dimensional data acquired in the plurality of preset orientations and generating a three-dimensional effect picture that includes a three-dimensional virtual effect model, the effect of the object to be displayed is restored in a three-dimensional, surrounding manner, the forms in which the object can be represented are enriched, and the usability of the picture is improved.
It should be noted that other corresponding descriptions of the functional units related to the image generating device provided in the embodiment of the present application may refer to the corresponding descriptions in fig. 1 and fig. 2A, and are not repeated herein.
In an exemplary embodiment, referring to fig. 4, there is further provided a device including a communication bus, a processor, a memory, and a communication interface, and further including an input/output interface and a display device, wherein the functional units may communicate with each other through the bus. The memory stores computer programs, and the processor is used for executing the programs stored in the memory and executing the picture generation method in the embodiment.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the picture generation method.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by hardware, and also by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the implementation scenarios of the present application.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application.
Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (10)

1. A picture generation method, comprising:
responding to a picture generation request, carrying out data acquisition on an object to be displayed in a plurality of preset directions to obtain a plurality of three-dimensional data of the object to be displayed in the plurality of preset directions, wherein the three-dimensional data comprises basic parameters of the object to be displayed in the preset directions acquired based on a laser radar component configured on a terminal and image data of the object to be displayed in the preset directions acquired based on an image acquisition component configured on the terminal;
constructing a virtual effect model of the object to be displayed based on the plurality of three-dimensional data, and generating a three-dimensional effect picture comprising the virtual effect model;
and determining an object detail page of the object to be displayed, and adding the three-dimensional effect picture to the object detail page for displaying.
2. The method according to claim 1, wherein the acquiring data of the object to be displayed in a plurality of preset orientations in response to the request for generating the picture to obtain a plurality of three-dimensional data of the object to be displayed in the plurality of preset orientations comprises:
responding to the picture generation request, and calling the laser radar component and the image acquisition component, wherein the laser radar component is a component which is configured on the terminal and has an object parameter detection function and used in data acquisition;
displaying a preset contour range, and marking the preset contour range by adopting an orientation to be acquired, wherein the orientation to be acquired is any one of the plurality of preset orientations;
transmitting a detection signal to the object to be displayed within the preset contour range based on the laser radar component to obtain basic parameters of the object to be displayed in the direction to be collected;
acquiring an image of the object to be displayed within the preset contour range based on the image acquisition assembly to obtain image data of the object to be displayed in the position to be acquired;
taking the basic parameters and the image data as three-dimensional data of the object to be displayed in the position to be acquired;
selecting a new position to be collected from other preset positions, collecting three-dimensional data of the object to be displayed in the new position to be collected until all the preset positions are traversed to obtain the plurality of three-dimensional data, wherein the other preset positions are preset positions except the position to be collected in the plurality of preset positions.
3. The method according to claim 2, wherein the transmitting a detection signal to the object to be displayed within the preset contour range based on the laser radar component to obtain the basic parameters of the object to be displayed in the orientation to be acquired comprises:
receiving an echo signal returned by the detection signal, wherein the echo signal is the signal returned after the detection signal hits the object to be displayed;
and comparing the detection signal with the echo signal, and outputting, according to the energy difference between the detection signal and the echo signal, the object contour and the object size of the object to be displayed as the basic parameters of the object to be displayed in the orientation to be acquired.
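One way to picture the echo comparison of claim 3: sample points whose echo came back attenuated (energy above zero but below the emitted energy) are treated as hits on the object, and the hit set yields a rough contour and size. The energy model and data layout here are assumptions for illustration, not the claimed signal processing.

```python
# Illustrative contour/size derivation from emitted vs. echo energies.

def basic_params_from_echoes(emitted_energy, echoes):
    """echoes: list of (x, y, echo_energy) sample points; points with no
    echo (energy 0) missed the object and are excluded from the contour."""
    hits = [(x, y) for x, y, e in echoes if 0 < e < emitted_energy]
    if not hits:
        return None  # nothing returned: no object within the contour range
    xs = [x for x, _ in hits]
    ys = [y for _, y in hits]
    return {"contour": hits,
            "size": (max(xs) - min(xs), max(ys) - min(ys))}

params = basic_params_from_echoes(
    1.0,
    [(0, 0, 0.4), (2, 0, 0.5), (2, 3, 0.3), (5, 5, 0.0)])  # last point: no echo
```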
4. The method of claim 3, further comprising:
if the object contour output according to the energy difference between the detection signal and the echo signal has a break, generating an object correction reminder;
and displaying the object correction reminder, wherein the display mode of the object correction reminder is either text display or voice playback.
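A minimal sketch of the break check in claim 4: treat the contour as broken when two consecutive contour points lie farther apart than a gap threshold, and produce a reminder in that case. The threshold, the point-list representation, and the reminder wording are all hypothetical.

```python
import math

def contour_has_break(contour, max_gap=1.5):
    # A gap wider than max_gap between consecutive points counts as a break.
    return any(math.hypot(x2 - x1, y2 - y1) > max_gap
               for (x1, y1), (x2, y2) in zip(contour, contour[1:]))

def correction_reminder(contour):
    if contour_has_break(contour):
        # Could equally be played back as voice; text is shown here.
        return "Contour incomplete: please reposition the object and rescan."
    return None
```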
5. The method according to claim 1, wherein the constructing a virtual effect model of the object to be displayed based on the plurality of three-dimensional data comprises:
acquiring a plurality of basic parameters included in the plurality of three-dimensional data, and constructing an initial virtual model according to the plurality of basic parameters;
and reading red-green-blue (RGB) channel values of a plurality of image data included in the plurality of three-dimensional data, and filling the RGB values of the plurality of image data into the initial virtual model according to the preset orientations corresponding to the plurality of image data, to obtain the virtual effect model.
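The two steps of claim 5 can be sketched as: build an initial model from the basic parameters, then fill each orientation with RGB values read from that orientation's image data. `read_rgb` is a hypothetical decoder and the "model" is a plain dict rather than a real mesh; both are simplifications.

```python
def read_rgb(image_data):
    # Stand-in decoder: map image bytes to one representative RGB triple.
    return (image_data[0], 0, 0) if image_data else (0, 0, 0)

def build_virtual_effect_model(three_d_data):
    # Step 1: initial virtual model from every orientation's basic parameters.
    model = {d["orientation"]: {"size": d["basic_params"]["size"]}
             for d in three_d_data}
    # Step 2: fill each orientation with the RGB values of its image data.
    for d in three_d_data:
        model[d["orientation"]]["rgb"] = read_rgb(d["image_data"])
    return model

effect_model = build_virtual_effect_model([
    {"orientation": "front", "basic_params": {"size": (1, 2)},
     "image_data": b"\xff"},
    {"orientation": "back", "basic_params": {"size": (1, 2)},
     "image_data": b"\x00"},
])
```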
6. The method of claim 1, wherein generating the three-dimensional effect picture including the virtual effect model comprises:
acquiring data of an object background of the object to be displayed in the plurality of preset orientations to generate an original background picture, and adding the virtual effect model to the original background picture to obtain the three-dimensional effect picture; and/or,
acquiring a default filling color, generating a background base map filled with the default filling color, and adding the virtual effect model to the background base map to obtain the three-dimensional effect picture.
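The two composition paths of claim 6 can be condensed into one function: use an acquired background picture when one exists, otherwise generate a base map filled with a default color. Pictures are simplified to dicts here; the white default fill is an assumption.

```python
def compose_three_d_picture(model, background=None,
                            default_fill=(255, 255, 255)):
    """Place the virtual effect model on the given background picture, or on
    a base map filled with default_fill when no background was acquired."""
    if background is None:
        background = {"kind": "base_map", "fill": default_fill}
    return {"background": background, "model": model}

with_bg = compose_three_d_picture({"id": 1}, background={"kind": "captured"})
without_bg = compose_three_d_picture({"id": 1})
```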
7. The method according to claim 1, wherein after constructing the virtual effect model of the object to be displayed based on the plurality of three-dimensional data and generating the three-dimensional effect picture including the virtual effect model, the method further comprises:
displaying the three-dimensional effect picture;
in response to receiving a picture adjustment request based on the three-dimensional effect picture, determining a trigger point of the request's triggering operation on the three-dimensional effect picture, and adjusting the position of the virtual effect model in the three-dimensional effect picture according to the moving direction of the trigger point;
and in response to receiving a picture generation request based on the three-dimensional effect picture, associating the three-dimensional effect picture with the object to be displayed, caching the associated three-dimensional effect picture, collecting a plurality of three-dimensional data of the object to be displayed in the plurality of preset orientations again, and generating a new three-dimensional effect picture.
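The position adjustment in claim 7 amounts to shifting the model by the trigger point's displacement across the picture. A minimal sketch, assuming 2D pixel coordinates and press/release points as the trigger operation's start and end (both assumptions):

```python
def adjust_model_position(position, press_point, release_point):
    """Shift the virtual effect model's position by the displacement of the
    trigger point between where the touch started and where it ended."""
    dx = release_point[0] - press_point[0]
    dy = release_point[1] - press_point[1]
    return (position[0] + dx, position[1] + dy)

new_pos = adjust_model_position((100, 100),
                                press_point=(10, 10),
                                release_point=(25, 4))
```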
8. A picture generation apparatus, comprising:
an acquisition module, configured to collect, in response to a picture generation request, data of an object to be displayed in a plurality of preset orientations to obtain a plurality of pieces of three-dimensional data of the object to be displayed in the plurality of preset orientations, wherein the three-dimensional data comprise basic parameters of the object to be displayed in the preset orientations acquired by a laser radar component configured on a terminal, and image data of the object to be displayed in the preset orientations acquired by an image acquisition component configured on the terminal;
a building module, configured to build a virtual effect model of the object to be displayed based on the plurality of three-dimensional data and to generate a three-dimensional effect picture including the virtual effect model;
and a first display module, configured to determine an object detail page of the object to be displayed and to add the three-dimensional effect picture to the object detail page for display.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110602457.XA 2021-05-31 2021-05-31 Picture generation method and device, computer equipment and computer readable storage medium Pending CN113221043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110602457.XA CN113221043A (en) 2021-05-31 2021-05-31 Picture generation method and device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110602457.XA CN113221043A (en) 2021-05-31 2021-05-31 Picture generation method and device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113221043A true CN113221043A (en) 2021-08-06

Family

ID=77081766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110602457.XA Pending CN113221043A (en) 2021-05-31 2021-05-31 Picture generation method and device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113221043A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117056630A (en) * 2023-08-28 2023-11-14 广东保伦电子股份有限公司 Webpage layout picture display method, system, terminal equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101873341A (en) * 2010-05-17 2010-10-27 孙煜 Information issuing method, virtual show method, system, terminal and server
CN104699842A (en) * 2015-03-31 2015-06-10 百度在线网络技术(北京)有限公司 Method and device for displaying pictures
CN106504339A (en) * 2016-11-09 2017-03-15 四川长虹电器股份有限公司 Historical relic 3D methods of exhibiting based on virtual reality
US20180261015A1 (en) * 2017-03-07 2018-09-13 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and Methods for Receiving and Detecting Dimensional Aspects of a Malleable Target Object
CN108876878A (en) * 2017-05-08 2018-11-23 腾讯科技(深圳)有限公司 Head portrait generation method and device
CN109410313A (en) * 2018-02-28 2019-03-01 南京恩瑞特实业有限公司 A kind of meteorology three-dimensional information 3D simulation inversion method
CN112308982A (en) * 2020-11-11 2021-02-02 安徽山水空间装饰有限责任公司 Decoration effect display method and device

Similar Documents

Publication Publication Date Title
US10832039B2 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
KR101636027B1 (en) Methods and systems for capturing and moving 3d models and true-scale metadata of real world objects
CN108573527B (en) Expression picture generation method and equipment and storage medium thereof
KR102120046B1 (en) How to display objects
KR20210019552A (en) Object modeling and movement methods and devices, and devices
US10204404B2 (en) Image processing device and image processing method
JP5093053B2 (en) Electronic camera
CN113994396A (en) User guidance system based on augmented reality and/or gesture detection technology
US10467793B2 (en) Computer implemented method and device
JP2014238731A (en) Image processor, image processing system, and image processing method
CN109144252B (en) Object determination method, device, equipment and storage medium
KR20090001667A (en) Apparatus and method for embodying contents using augmented reality
EP2343685A1 (en) Information processing device, information processing method, program, and information storage medium
US20220329770A1 (en) Information processing apparatus, video generation method and program
KR20190043925A (en) Method, system and non-transitory computer-readable recording medium for providing hair styling simulation service
CN114125421A (en) Image processing method, mobile terminal and storage medium
CN116523579A (en) Display equipment, virtual fitting system and method
CN115861575A (en) Commodity virtual trial effect display method and electronic equipment
US10079966B2 (en) Systems and techniques for capturing images for use in determining reflectance properties of physical objects
CN113221043A (en) Picture generation method and device, computer equipment and computer readable storage medium
US10636223B2 (en) Method and apparatus for placing media file, storage medium, and virtual reality apparatus
CN111798549A (en) Dance editing method and device and computer storage medium
CN108932055B (en) Method and equipment for enhancing reality content
CN116524088B (en) Jewelry virtual try-on method, jewelry virtual try-on device, computer equipment and storage medium
CN112604279A (en) Special effect display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination