CN109472873B - Three-dimensional model generation method, device and hardware device


Info

Publication number
CN109472873B
CN109472873B (application CN201811303618.XA)
Authority
CN
China
Prior art keywords: dimensional model, terminal equipment, control, generating, trigger signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811303618.XA
Other languages
Chinese (zh)
Other versions
CN109472873A (en)
Inventor
陈曼仪
陈怡�
潘皓文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd filed Critical Beijing Microlive Vision Technology Co Ltd
Priority to CN201811303618.XA
Publication of CN109472873A
Application granted
Publication of CN109472873B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a three-dimensional model generation method, apparatus, and hardware device. The method includes: a terminal device displays a first control; the terminal device receives a trigger signal for the first control and generates a first three-dimensional model; the terminal device displays a second control; and the terminal device receives a trigger signal for the second control and generates a second three-dimensional model from the amount of movement of the terminal device and the first three-dimensional model. Starting from a basic three-dimensional model, the method allows the shape of the model to be modified directly by moving the terminal device, which improves the flexibility and convenience of three-dimensional model generation.

Description

Three-dimensional model generation method, device and hardware device
Technical Field
The present disclosure relates to the field of three-dimensional model generation, and in particular to a three-dimensional model generation method, apparatus, and hardware device.
Background
Augmented reality (AR) is a technology that computes the position and orientation of the camera image in real time and overlays corresponding images, videos, and virtual objects onto it. Its goal is to fit the virtual world over the real world on screen and allow interaction between the two.
Augmented reality is realized by placing virtual objects in a real scene, that is, by superimposing the real environment and virtual objects onto the same picture or space in real time. After superposition, a virtual object either moves along a preset trajectory or is driven through preset actions by a control. The virtual object in augmented reality is typically a three-dimensional model that is prefabricated in a third-party authoring tool and loaded into the real scene.
In the augmented reality technology described above, the three-dimensional model cannot be modified directly; any change must be made in the authoring tool, which is cumbersome and inflexible.
Disclosure of Invention
According to one aspect of the present disclosure, the following technical solutions are provided:
A three-dimensional model generation method, comprising: the terminal device displays a first control; the terminal device receives a trigger signal for the first control and generates a first three-dimensional model; the terminal device displays a second control; and the terminal device receives a trigger signal for the second control and generates a second three-dimensional model from the amount of movement of the terminal device and the first three-dimensional model.
Further, the terminal device receiving a trigger signal for the first control and generating a first three-dimensional model includes: the terminal device receives the trigger signal for the first control and acquires an image of a real scene through an image sensor of the terminal device; the terminal device identifies a plane in the image; and in response to identifying the plane, the terminal device generates the first three-dimensional model on the plane.
Further, the terminal device generating the first three-dimensional model on the plane in response to identifying the plane includes: in response to identifying the plane, the terminal device displays a third control; and the terminal device receives a trigger signal for the third control and generates the first three-dimensional model on the plane.
Further, after the terminal device receives the trigger signal for the first control and generates the first three-dimensional model, the method further includes: the terminal device displays a fourth control; and the terminal device receives a trigger signal for the fourth control, captures the screen of the terminal device, and generates a picture or video of the screen.
Further, after the terminal device receives the trigger signal for the fourth control and captures the screen of the terminal device, the method further includes: the terminal device displays a sixth control; and the terminal device receives a trigger signal for the sixth control and edits the picture or video of the screen.
Further, after the terminal device receives the trigger signal for the first control and generates the first three-dimensional model, the method further includes: the terminal device displays a fifth control; and the terminal device receives a trigger signal for the fifth control and generates a third three-dimensional model from the first three-dimensional model or the second three-dimensional model.
Further, after the terminal device receives the trigger signal for the fifth control and generates the third three-dimensional model from the first or second three-dimensional model, the method further includes: the terminal device displays a first state of the fifth control.
Further, the terminal device displaying a second control includes: the terminal device detects the distance between the terminal device and the first three-dimensional model; in response to the distance being within a first threshold, the terminal device displays a first state of the second control; and in response to the distance being outside the first threshold, the terminal device displays a second state of the second control.
Further, the terminal device receiving a trigger signal for the second control and generating a second three-dimensional model from the amount of movement of the terminal device and the first three-dimensional model includes: the terminal device receives the trigger signal for the second control, detects the amount of movement of the terminal device, and resolves it into a vertical movement component and a horizontal movement component; and the terminal device generates the second three-dimensional model from the vertical movement component, the horizontal movement component, and the first three-dimensional model.
Further, the terminal device generating the second three-dimensional model from the vertical movement component, the horizontal movement component, and the first three-dimensional model includes: moving a keypoint of the first three-dimensional model according to the vertical movement component and the horizontal movement component, and generating the second three-dimensional model from the moved keypoint.
According to another aspect of the present disclosure, the following technical solution is also provided:
A three-dimensional model generation apparatus, comprising:
a first control display module, configured to cause the terminal device to display a first control;
a first model generation module, configured to cause the terminal device to receive a trigger signal for the first control and generate a first three-dimensional model;
a second control display module, configured to cause the terminal device to display a second control; and
a second model generation module, configured to cause the terminal device to receive a trigger signal for the second control and generate a second three-dimensional model from the amount of movement of the terminal device and the first three-dimensional model.
Further, the first model generation module includes:
an image acquisition module, configured to cause the terminal device to receive the trigger signal for the first control and acquire an image of the real scene through an image sensor of the terminal device;
a plane identification module, configured to cause the terminal device to identify a plane in the image; and
a first model generation sub-module, configured to cause the terminal device to generate the first three-dimensional model on the plane in response to identifying the plane.
Further, the first model generation sub-module further includes:
a third control display module, configured to cause the terminal device to display a third control in response to identifying the plane; and
a first model generation first sub-module, configured to cause the terminal device to receive a trigger signal for the third control and generate the first three-dimensional model on the plane.
Further, the three-dimensional model generation apparatus further includes:
a fourth control display module, configured to cause the terminal device to display a fourth control; and
a shooting module, configured to cause the terminal device to receive a trigger signal for the fourth control, capture the screen of the terminal device, and generate a picture or video of the screen.
Further, the shooting module further includes:
a sixth control display module, configured to cause the terminal device to display a sixth control; and
an editing module, configured to cause the terminal device to receive a trigger signal for the sixth control and edit the picture or video of the screen.
Further, the three-dimensional model generation apparatus further includes:
a fifth control display module, configured to cause the terminal device to display a fifth control; and
a third model generation module, configured to cause the terminal device to receive a trigger signal for the fifth control and generate a third three-dimensional model from the first three-dimensional model or the second three-dimensional model.
Further, the three-dimensional model generation apparatus further includes:
a fifth control state changing module, configured to cause the terminal device to display a first state of the fifth control.
Further, the second control display module further includes:
a distance detection module, configured to cause the terminal device to detect the distance between the terminal device and the first three-dimensional model; and
a second control state changing module, configured to cause the terminal device to display a first state of the second control in response to the distance being within a first threshold, and to display a second state of the second control in response to the distance being outside the first threshold.
Further, the second model generation module further includes:
a movement detection module, configured to cause the terminal device to receive a trigger signal for the second control, detect the amount of movement of the terminal device, and resolve it into a vertical movement component and a horizontal movement component; and
a second model generation sub-module, configured to cause the terminal device to generate the second three-dimensional model from the vertical movement component, the horizontal movement component, and the first three-dimensional model.
Further, the second model generation sub-module is further configured to:
move a keypoint of the first three-dimensional model according to the vertical movement component and the horizontal movement component, and generate the second three-dimensional model from the moved keypoint.
According to still another aspect of the present disclosure, the following technical solutions are also provided:
an electronic device, comprising: a memory for storing non-transitory computer readable instructions; and a processor configured to execute the computer readable instructions, such that the processor performs the steps of any of the above methods for generating a three-dimensional model.
According to still another aspect of the present disclosure, the following technical solutions are also provided:
a computer readable storage medium storing non-transitory computer readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the methods described above.
The present disclosure provides a three-dimensional model generation method, apparatus, and hardware device. The method includes: a terminal device displays a first control; the terminal device receives a trigger signal for the first control and generates a first three-dimensional model; the terminal device displays a second control; and the terminal device receives a trigger signal for the second control and generates a second three-dimensional model from the amount of movement of the terminal device and the first three-dimensional model. Starting from a basic three-dimensional model, the method allows the shape of the model to be modified directly by moving the terminal device, which improves the flexibility and convenience of three-dimensional model generation.
The foregoing is only an overview of the technical solutions of the present disclosure. To make the above and other objects, features, and advantages of the present disclosure clearer and easier to implement, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow diagram of a method of generating a three-dimensional model according to one embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a method of calculating vertical and horizontal components of movement according to one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of generating a second three-dimensional model by moving keypoints according to one embodiment of the disclosure;
FIGS. 4a-4e are example schematic diagrams of a method of generating a three-dimensional model according to one embodiment of the present disclosure;
FIG. 5 is a schematic structural view of a three-dimensional model generating apparatus according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described below by way of specific examples, from which other advantages and effects of the present disclosure will become readily apparent to those skilled in the art. It is apparent that the described embodiments are merely some, not all, of the embodiments of the present disclosure. The disclosure may also be embodied or practiced in other, different specific embodiments, and the details in this specification may be modified or changed from various points of view and for various applications without departing from the spirit of the disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with one another. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this disclosure without inventive effort fall within the scope of this disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the disclosure schematically. The drawings show only the components related to the disclosure rather than the number, shape, and size of the components in an actual implementation; in practice, the form, number, and proportion of the components may vary arbitrarily, and the component layout may be more complicated.
In addition, in the following description, specific details are provided in order to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
An embodiment of the present disclosure provides a three-dimensional model generation method. The method of this embodiment may be performed by a computing device, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a server, a terminal device, or the like. As shown in fig. 1, the method mainly includes the following steps S101 to S104. Wherein:
Step S101: the terminal device displays a first control;
In this step, the terminal device may be a mobile terminal device with a display apparatus and an image sensor; typically it is a smart phone, a tablet computer, a personal digital assistant, or the like. The first control may be a control of any form, such as a virtual button or a slider. Typically, the terminal device is a smart phone that includes a touch screen, and a virtual button displayed on the touch screen serves as the first control; the first control may be located at any position on the screen of the terminal device.
Step S102: the terminal device receives a trigger signal for the first control and generates a first three-dimensional model;
In this embodiment, the first three-dimensional model is a preset three-dimensional model. Several different types or kinds of preset models may be provided; the user may select the model to display from among them, or one may be displayed at random.
In one embodiment, when the image sensor of the terminal device is turned on, an image of the real scene is acquired through the image sensor. The image contains a plane of the scene, which may be a desktop, the ground, a wall, or a plane in various other real scenes; the present disclosure does not limit this. After the plane is identified, the first three-dimensional model is generated on it. In a specific example of this embodiment, the user opens the rear camera of a smart phone; the rear camera captures images and identifies planes in the current scene. When a desktop in the current scene is scanned, a preset three-dimensional vase is generated on the desktop in the image, and the desktop and the three-dimensional vase are displayed on the display screen of the smart phone.
In one embodiment, in response to identifying the plane, the terminal device displays a third control; the terminal device then receives a trigger signal for the third control and generates the first three-dimensional model on the plane. The third control may be a control of any form. In a specific example of this embodiment, the user opens the rear camera of a smart phone; the rear camera captures images and identifies planes in the current scene. When a desktop in the current scene is scanned, the terminal device displays a placement button on the screen; after the user taps the button, a preset three-dimensional vase is generated on the desktop in the image, and the desktop and the three-dimensional vase are displayed on the display screen of the smart phone.
In one embodiment, in response to identifying the plane, a configuration file of the first three-dimensional model is read, and the first three-dimensional model is generated on the plane according to the model configuration parameters in the configuration file. In this embodiment, each preset first three-dimensional model is described by a set of configuration parameters stored in a configuration file. When a plane is scanned, the configuration file of the preset three-dimensional model is read, its configuration parameters are obtained, and the first three-dimensional model is rendered on the terminal according to those parameters. Typical configuration parameters include the coordinates of the feature points of the model, the color of the model, the material of the model, the default position of the model, and so on. It should be understood that these configuration parameters are merely examples, not a limitation of the disclosure; any configuration parameters capable of configuring a three-dimensional model can be used in the technical solutions of the disclosure.
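As an illustration of this embodiment, the sketch below assumes a JSON configuration file and invents the field names feature_points, color, material, and default_position; the patent lists only the kinds of parameters, not a concrete format.

```python
import json

# Hypothetical configuration file for a preset first three-dimensional model.
# The field names are illustrative assumptions, not taken from the patent.
CONFIG_JSON = """
{
  "feature_points": [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.2, 0.0]],
  "color": [200, 180, 160],
  "material": "ceramic",
  "default_position": [0.0, 0.0, 0.0]
}
"""

def load_first_model(config_text):
    """Parse the configuration file and return the parameters from which
    the first three-dimensional model would be rendered on the plane."""
    cfg = json.loads(config_text)
    return {
        "feature_points": cfg["feature_points"],
        "color": tuple(cfg["color"]),
        "material": cfg["material"],
        "position": cfg["default_position"],
    }

model = load_first_model(CONFIG_JSON)
```

A real implementation would hand these parameters to a rendering engine; here they are simply returned as a dictionary.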
Step S103: the terminal device displays a second control;
In this step, the terminal device displays a second control on the screen, and the second control is displayed together with the first three-dimensional model.
In one embodiment, the second control has two states, an activated state and an inactivated state, determined by the distance between the terminal device and the first three-dimensional model. The terminal device detects this distance; in response to the distance being within a first threshold, the terminal device displays the first state of the second control, and in response to the distance being outside the first threshold, it displays the second state. To detect the distance, a perpendicular may be drawn from the center point of the terminal device to the plane in which the terminal device lies, forming an intersection point with the first three-dimensional model; the distance between the center point and the intersection point is taken as the distance between the terminal device and the first three-dimensional model. When the distance is within the preset threshold range, the second control is set to the activated state; when the distance is outside the preset threshold range, the second control cannot be operated and is set to the inactivated state. It is understood that, since the terminal device and the first three-dimensional model are in the same world coordinate system, the distance from the center point to the intersection point can be calculated from their world coordinates.
The first threshold is a preset value. It is understood that the distance in this step may be replaced by another parameter. In one embodiment, the area occupied by the first three-dimensional model on the display apparatus of the terminal device is detected: when the area is within a preset threshold range, the first three-dimensional model can be operated and the second control is set to the activated state; when the area is outside the preset threshold range, the first three-dimensional model cannot be operated and the second control is set to the inactivated state.
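The threshold check that switches the second control between its two states can be sketched as follows; the function name, state labels, and coordinate representation are assumptions, not taken from the patent.

```python
import math

def second_control_state(device_center, intersection, first_threshold):
    """Return the state of the second control from the distance between the
    device center point and its intersection point with the first model,
    both given in the shared world coordinate system."""
    distance = math.dist(device_center, intersection)
    return "activated" if distance <= first_threshold else "inactivated"
```

The same shape of check applies to the screen-area variant; only the measured quantity changes.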
Step S104: the terminal device receives a trigger signal for the second control and generates a second three-dimensional model from the amount of movement of the terminal device and the first three-dimensional model.
In this embodiment, the user may move while holding the terminal device. When the terminal device detects movement, the amount of movement is recorded, the generation parameters of the second three-dimensional model are obtained from the amount of movement, and the second three-dimensional model is generated on the basis of the first three-dimensional model according to those parameters. The movement may further be resolved into a movement component in the vertical direction and a movement component in the horizontal direction, where vertical and horizontal refer to the vertical and horizontal directions of the plane in which the terminal device lies. The movement of the terminal device may be detected with an acceleration sensor built into the device, typically a gyroscope or a gravity sensor; alternatively, images may be collected with the image sensor of the terminal device and the movement detected from changes in the images; or specific signals may be used to determine the start and end points of the movement of the terminal device.
In one embodiment, when movement of the terminal device is detected, the direction and distance of the movement are determined. The direction may be represented by the angle between the horizontal direction and the line connecting the original position of the terminal device with its position after the movement, and the distance by the length of that line; the movement components in the vertical and horizontal directions can then be calculated from the angle and the length. Specifically, as shown in fig. 2, the terminal device moves from point A to point B, and the angle between AB and the horizontal direction is θ. Drawing perpendiculars from point B to the vertical and horizontal axes gives intersection points B1 and B2, so that AB1 and AB2 are the components of the movement in the vertical and horizontal directions respectively, and can be calculated as AB1 = sin θ · AB and AB2 = cos θ · AB.
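The decomposition AB1 = sin θ · AB and AB2 = cos θ · AB can be expressed directly; the helper name is illustrative.

```python
import math

def resolve_movement(ab_length, theta):
    """Resolve the movement AB (length ab_length, angle theta to the
    horizontal, in radians) into the vertical component AB1 = sin(theta) * AB
    and the horizontal component AB2 = cos(theta) * AB, as in fig. 2."""
    ab1 = math.sin(theta) * ab_length  # vertical component
    ab2 = math.cos(theta) * ab_length  # horizontal component
    return ab1, ab2
```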
In one embodiment, determining the direction and distance of movement in response to detecting movement of the terminal device includes: determining the start point of the movement in response to detecting a trigger signal; determining the end point of the movement in response to detecting the disappearance of the trigger signal; and determining the direction and distance of the movement from the start point and the end point. In this embodiment, a trigger signal determines the start and end points of the movement of the terminal device. Typically, a trigger control may be provided on the terminal device, for example a virtual button on the touch screen of a smart phone. While the user keeps pressing the virtual button, the current position of the terminal device is taken as the start point of the movement; when the user releases the virtual button, the trigger signal disappears and the position of the terminal device at that moment is taken as the end point. The angle between the horizontal direction and the line connecting the start and end positions is taken as the direction of the movement, and the length of that line as the distance of the movement.
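Deriving the direction and distance of movement from the press and release positions of the trigger control might look like this minimal sketch; the planar positions and function name are assumptions.

```python
import math

def movement_from_endpoints(start, end):
    """Given the device position when the virtual button is pressed (start)
    and when it is released (end), return the movement distance and the
    angle of the connecting line to the horizontal, in radians."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)
```

The returned pair feeds directly into the component decomposition of the previous paragraph.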
In one embodiment, the height of the first three-dimensional model is changed according to the vertical movement component, and the width of the first three-dimensional model is changed according to the horizontal movement component, thereby generating a second three-dimensional model.
In one embodiment, a keypoint of the first three-dimensional model is moved according to the vertical and horizontal movement components, and the second three-dimensional model is generated from the moved keypoint. Specifically, after the first three-dimensional model is generated, a perpendicular is drawn from the center point of the terminal device to the plane in which the terminal device lies, forming an intersection point with the first three-dimensional model. The keypoint of the first three-dimensional model closest to that intersection point is determined, moved in the vertical direction by the vertical component and in the horizontal direction by the horizontal component, and the second three-dimensional model is generated from the moved position of the keypoint.
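A minimal sketch of this keypoint edit, assuming two-dimensional keypoints on a contour and illustrative names:

```python
import math

def deform_first_model(keypoints, intersection, vertical, horizontal):
    """Move the keypoint nearest to the intersection point by the horizontal
    and vertical movement components and return the updated keypoints,
    from which the second model would be generated."""
    nearest = min(range(len(keypoints)),
                  key=lambda i: math.dist(keypoints[i], intersection))
    x, y = keypoints[nearest]
    moved = list(keypoints)
    moved[nearest] = (x + horizontal, y + vertical)
    return moved
```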
In another embodiment, after the keypoint is determined, the contour curve of the first three-dimensional model on which the keypoint lies is determined from the keypoint; the keypoint is moved in the vertical direction by the distance of the vertical movement component and in the horizontal direction by the distance of the horizontal movement component; a new contour curve is generated from the moved keypoint; and the new contour curve is rotated around the central axis of the first three-dimensional model to generate the second three-dimensional model. A typical scenario for this embodiment is ceramic making: the first three-dimensional model is the clay blank of a ceramic pot, and when the user moves a keypoint on the blank through a smartphone, the blank is stretched or squeezed according to the distance and direction of the keypoint's movement. The stretching or squeezing is applied to the whole three-dimensional blank by rotating it, forming a new blank and thereby completing the blank-shaping process. In this embodiment, the contour curve may be a spline curve generated from a plurality of keypoints on the three-dimensional model.
FIG. 3 shows an example of this embodiment, in which point C is a keypoint on the first three-dimensional model, L is the central axis of the first three-dimensional model, and the first three-dimensional model is the cylinder drawn with a dotted line. When the user moves the terminal device, point C moves along with it. For simplicity, consider a movement containing only a horizontal component: after the horizontal movement distance of the terminal device is calculated, point C moves horizontally to point C1. The contour curve on which C1 lies, that is, the generatrix of the second three-dimensional model (here a straight line), is then recalculated, and the second three-dimensional model, the cylinder drawn with a solid line in FIG. 3, is generated from that generatrix.
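Rotating the new contour curve around the central axis, as in the ceramic-blank scenario, amounts to constructing a solid of revolution from a profile. The sketch below is a simplified illustration under assumed conventions, not the disclosure's actual implementation: the profile is a list of (radius, height) samples of the generatrix, the central axis is taken as the z-axis, and `revolve_profile` is a hypothetical helper name.

```python
import math

def revolve_profile(profile, segments=16):
    """Sweep a 2-D profile curve, given as (radius, height) samples of the
    generatrix, around the vertical central axis (z-axis), producing the
    vertices of the resulting solid of revolution."""
    vertices = []
    for r, h in profile:
        for s in range(segments):
            theta = 2 * math.pi * s / segments
            # each profile sample becomes a ring of points at height h
            vertices.append((r * math.cos(theta), r * math.sin(theta), h))
    return vertices
```

For the cylinder example in FIG. 3, the profile would be a straight generatrix at constant radius; after point C moves to C1, the profile is resampled at the new radius and swept again.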
In one embodiment, the method for generating a three-dimensional model further includes: in response to detecting a first-position selection signal, moving the first three-dimensional model or the second three-dimensional model to the first position. In this embodiment, the user may select a point outside the area occupied by either the first or the second three-dimensional model, such as another point on the plane where the first three-dimensional model sits, or a point on a different plane; the model is then moved to that point, and its state is adjusted according to the state of the plane at the new position, where the state may be the angle between that plane and the horizontal direction. This step may be performed after any step following step S102, for example after the first three-dimensional model is generated, after the first three-dimensional model is changed, or after the second three-dimensional model is generated; the present disclosure does not limit this.
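Moving the model to a newly selected position and adjusting it to the angle of the new plane can be illustrated roughly as below. The function name, the choice of the x-axis as the tilt axis, and the degree-valued tilt parameter are assumptions made for the sketch; the disclosure does not specify these details.

```python
import math

def reposition_model(vertices, anchor, plane_tilt_deg=0.0):
    """Tilt the model about the x-axis by the angle between the target plane
    and the horizontal, then translate it so its origin sits at the selected
    anchor point."""
    t = math.radians(plane_tilt_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    out = []
    for x, y, z in vertices:
        # rotate about the x-axis to match the plane's tilt
        yr = y * cos_t - z * sin_t
        zr = y * sin_t + z * cos_t
        # translate to the newly selected anchor position
        out.append((x + anchor[0], yr + anchor[1], zr + anchor[2]))
    return out
```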
In one embodiment, after step S102, the method may further include: the terminal device displays a fourth control; the terminal device receives a trigger signal of the fourth control, captures the screen of the terminal device, and generates a picture or video of the screen. In this step, the fourth control is used to trigger capturing, which may produce a picture or a video of the screen, implementing a screenshot or screen-recording function. In this embodiment, triggering the fourth control makes it possible to record the generation process of the three-dimensional model.
In one embodiment, after the terminal device receives the trigger signal of the fourth control and captures the screen of the terminal device, the method further includes: the terminal device displays a sixth control; the terminal device receives a trigger signal of the sixth control and edits the picture or video of the screen. In this embodiment, after capturing is completed, the terminal device may display an edit control for further editing the captured pictures and videos, including but not limited to: editing background music, adding special effects, adding stickers, adding filters, and the like.
In one embodiment, after step S102, the method may further include: the terminal device displays a fifth control; the terminal device receives a trigger signal of the fifth control and generates a third three-dimensional model from the first three-dimensional model or the second three-dimensional model. In this embodiment, the fifth control may change attributes of the three-dimensional model without changing its shape or size; for example, it may change the texture and/or material of the model. In a specific example, the first three-dimensional model is the clay blank of a ceramic pot, the fifth control is a firing button, and after the user clicks the firing button, the material of the blank is replaced by the material of the finished ceramic pot. Of course, the fifth control may change other attributes of the three-dimensional model; it is not limited to textures and materials, and the details are not repeated here. In this embodiment, the state of the fifth control may change after it is triggered: in the specific example above, after the firing button is triggered, it may be displayed in gray, indicating that firing is complete and the button cannot be triggered again.
FIGS. 4a-4e show a specific example of an embodiment of the present disclosure. As shown in FIG. 4a, the terminal device displays a first control on its screen; in this example the first control is a Create button. As shown in FIG. 4b, when the user clicks the Create button, a plane-scanning step is entered ("Find a desk and scan it"), in which the user scans, by moving the terminal device, a plane on which a three-dimensional model is to be placed. As shown in FIG. 4c, after the plane is identified, the terminal device displays a third control, in this example an arrow-shaped placement button, together with a prompt ("Tap to place object"); when the user clicks the placement button, a first three-dimensional model is generated. As shown in FIG. 4d, in this example the first three-dimensional model is the clay blank of a ceramic pot, and the terminal device displays a second control, in this example a button shaped like a human hand. As shown in FIG. 4e, after the user presses the hand-shaped button and moves the terminal device, the first three-dimensional model is changed to generate a second three-dimensional model; in this example the shapes of the neck and belly of the pot are changed to look more like a vase, the initial shape of the blank being the first three-dimensional model and the changed shape being the second three-dimensional model.
The present disclosure provides a method, an apparatus, and a hardware device for generating a three-dimensional model. The method for generating a three-dimensional model includes: a terminal device displays a first control; the terminal device receives a trigger signal of the first control and generates a first three-dimensional model; the terminal device displays a second control; and the terminal device receives a trigger signal of the second control and generates a second three-dimensional model according to the movement amount of the terminal device and the first three-dimensional model. With this method, starting from a basic three-dimensional model, the shape of the three-dimensional model can be modified directly by moving the terminal device, improving the flexibility and convenience of three-dimensional model generation.
Although the steps in the foregoing method embodiments are described in the above order, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in that order; they may also be performed in reverse order, in parallel, or interleaved, and those skilled in the art may add other steps on the basis of the above. These obvious variations and equivalents also fall within the protection scope of the present disclosure and are not repeated here.
The following is an embodiment of the disclosed apparatus, which may be used to perform steps implemented by an embodiment of the disclosed method, and for convenience of explanation, only those portions relevant to the embodiment of the disclosed method are shown, and specific technical details are not disclosed, referring to the embodiment of the disclosed method.
The embodiment of the present disclosure provides an apparatus for generating a three-dimensional model. The apparatus may perform the steps described in the above embodiments of the method for generating a three-dimensional model. As shown in FIG. 5, the apparatus 500 mainly includes: a first control display module 501, a first model generation module 502, a second control display module 503, and a second model generation module 504. Among them:
the first control display module 501 is configured to display a first control by a terminal device;
the first model generating module 502 is configured to generate a first three-dimensional model when the terminal device receives a trigger signal of the first control;
a second control display module 503, configured to display a second control by the terminal device;
and the second model generating module 504 is configured to generate a second three-dimensional model according to the movement amount of the terminal device and the first three-dimensional model when the terminal device receives the trigger signal of the second control.
Further, the first model generating module 502 includes:
The image acquisition module is used for receiving the trigger signal of the first control by the terminal equipment and acquiring an image of the real scene through an image sensor of the terminal equipment;
the plane identification module is used for identifying a plane in the image by the terminal equipment;
and the first model generation sub-module is used for generating a first three-dimensional model on the plane by the terminal equipment in response to the identification of the plane.
Further, the first model generating sub-module further includes:
the third control display module is used for responding to the identification of the plane, and the terminal equipment displays a third control;
the first model generation first sub-module is used for the terminal equipment to receive the trigger signal of the third control and generate a first three-dimensional model on the plane.
Further, the three-dimensional model generating device 500 further includes:
the fourth control display module is used for displaying a fourth control by the terminal equipment;
and the shooting module is used for receiving the trigger signal of the fourth control by the terminal equipment, shooting the screen picture of the terminal equipment and generating a picture or video of the screen picture.
Further, the shooting module further comprises:
the sixth control display module is used for displaying a sixth control by the terminal equipment;
And the editing module is used for receiving a trigger signal of the sixth control by the terminal equipment and editing the picture or the video of the screen picture.
Further, the three-dimensional model generating device 500 further includes:
the fifth control display module is used for displaying a fifth control by the terminal equipment;
the third model generation module is used for receiving a trigger signal of the fifth control by the terminal equipment and generating a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model.
Further, the three-dimensional model generating device 500 further includes:
and the fifth control state changing module is used for displaying the first state of the fifth control by the terminal equipment.
Further, the second control display module 503 further includes:
the distance detection module is used for detecting the distance between the terminal equipment and the first three-dimensional model by the terminal equipment;
the second control state changing module is used for responding to the fact that the distance is within a first threshold value, and the terminal displays a first state of the second control; and responding to the distance being outside the first threshold, and displaying a second state of the second control by the terminal.
Further, the second model generating module 504 further includes:
The movement detection module is used for the terminal equipment to receive a trigger signal of the second control, detect the movement amount of the terminal equipment, and resolve the movement amount into a vertical movement component and a horizontal movement component;
and the second model generation submodule is used for generating a second three-dimensional model by the terminal equipment according to the vertical movement component, the horizontal movement component and the first three-dimensional model.
Further, the second model generating sub-module is further configured to:
and moving the key points of the first three-dimensional model according to the vertical movement component and the horizontal movement component, and generating a second three-dimensional model according to the key points after movement.
The apparatus shown in fig. 5 may perform the method of the embodiment shown in fig. 1, and reference is made to the relevant description of the embodiment shown in fig. 1 for parts of this embodiment not described in detail. The implementation process and the technical effect of this technical solution refer to the description in the embodiment shown in fig. 1, and are not repeated here.
Referring now to fig. 6, a schematic diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects an internet protocol address from the at least two internet protocol addresses and returns the internet protocol address; receiving an Internet protocol address returned by the node evaluation equipment; wherein the acquired internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the above features; it also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, solutions formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).

Claims (11)

1. A method for generating a three-dimensional model, comprising:
the terminal equipment displays a first control;
the terminal equipment receives a trigger signal of the first control and generates a first three-dimensional model;
the terminal equipment displays a second control;
The terminal equipment receives a trigger signal of the second control, and generates a second three-dimensional model according to the movement amount of the terminal equipment and the first three-dimensional model;
the terminal equipment receives a trigger signal of the second control, and generates a second three-dimensional model according to the movement amount of the terminal equipment and the first three-dimensional model, wherein the method comprises the following steps:
the terminal equipment receives a trigger signal of the second control, detects the movement amount of the terminal equipment, and analyzes the movement amount into a vertical movement component and a horizontal movement component, wherein the vertical movement component is the movement amount of the plane where the terminal equipment is located in the vertical direction, and the horizontal movement component is the movement amount of the plane where the terminal equipment is located in the horizontal direction;
the terminal equipment generates a second three-dimensional model according to the vertical movement component, the horizontal movement component and the first three-dimensional model;
the terminal equipment generates a second three-dimensional model according to the vertical movement component, the horizontal movement component and the first three-dimensional model, and the method comprises the following steps:
and moving the key points of the first three-dimensional model according to the vertical movement component and the horizontal movement component, and generating a second three-dimensional model according to the key points after movement.
2. The method for generating the three-dimensional model according to claim 1, wherein the terminal device receives a trigger signal of the first control, and generates the first three-dimensional model, comprising:
the terminal equipment receives a trigger signal of the first control, and acquires an image of a real scene through an image sensor of the terminal equipment;
the terminal equipment identifies a plane in the image;
in response to identifying the plane, the terminal device generates a first three-dimensional model on the plane.
3. The method of generating a three-dimensional model of claim 2, wherein the terminal device generates a first three-dimensional model on the plane in response to identifying the plane, comprising:
responsive to identifying the plane, the terminal device displays a third control;
and the terminal equipment receives the trigger signal of the third control and generates a first three-dimensional model on the plane.
4. The method for generating a three-dimensional model according to claim 1, wherein after the terminal device receives the trigger signal of the first control, generating the first three-dimensional model further comprises:
the terminal equipment displays a fourth control;
and the terminal equipment receives the trigger signal of the fourth control, shoots a screen picture of the terminal equipment, and generates a picture or video of the screen picture.
5. The method for generating a three-dimensional model according to claim 4, wherein after the terminal device receives the trigger signal of the fourth control and shoots the screen of the terminal device, the method further comprises:
the terminal equipment displays a sixth control;
and the terminal equipment receives a trigger signal of the sixth control and edits the picture or the video of the screen picture.
6. The method for generating a three-dimensional model according to claim 1, wherein after the terminal device receives the trigger signal of the first control, generating the first three-dimensional model further comprises:
the terminal equipment displays a fifth control;
and the terminal equipment receives the trigger signal of the fifth control and generates a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model.
7. The method for generating a three-dimensional model according to claim 6, further comprising, after the terminal device receives the trigger signal of the fifth control, generating a third three-dimensional model according to the first three-dimensional model or the second three-dimensional model:
the terminal device displays the first state of the fifth control.
8. The method for generating a three-dimensional model according to claim 1, wherein the terminal device displays a second control, comprising:
The terminal equipment detects the distance between the terminal equipment and the first three-dimensional model;
responsive to the distance being within a first threshold, the terminal displays a first state of a second control;
and responding to the distance being outside the first threshold, and displaying a second state of the second control by the terminal.
9. A three-dimensional model generation device, comprising:
the first control display module is used for displaying the first control by the terminal equipment;
the first model generation module is used for receiving a trigger signal of the first control by the terminal equipment and generating a first three-dimensional model;
the second control display module is used for displaying a second control by the terminal equipment;
the second model generation module is used for receiving a trigger signal of the second control by the terminal equipment and generating a second three-dimensional model according to the movement amount of the terminal equipment and the first three-dimensional model;
the second model generating module further includes:
the movement detection module is used for the terminal equipment to receive a trigger signal of the second control, detect the movement amount of the terminal equipment, and resolve the movement amount into a vertical movement component and a horizontal movement component, wherein the vertical movement component is the movement amount of the plane where the terminal equipment is located in the vertical direction, and the horizontal movement component is the movement amount of the plane where the terminal equipment is located in the horizontal direction;
The second model generation submodule is used for generating a second three-dimensional model by the terminal equipment according to the vertical movement component, the horizontal movement component and the first three-dimensional model;
the second model generation sub-module is further configured to:
and moving the key points of the first three-dimensional model according to the vertical movement component and the horizontal movement component, and generating a second three-dimensional model according to the key points after movement.
10. An electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the processor, when executed, implements a method of generating a three-dimensional model according to any one of claims 1-8.
11. A computer readable storage medium storing non-transitory computer readable instructions which, when executed by a computer, cause the computer to perform the method of generating a three-dimensional model according to any one of claims 1-8.
CN201811303618.XA 2018-11-02 2018-11-02 Three-dimensional model generation method, device and hardware device Active CN109472873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811303618.XA CN109472873B (en) 2018-11-02 2018-11-02 Three-dimensional model generation method, device and hardware device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811303618.XA CN109472873B (en) 2018-11-02 2018-11-02 Three-dimensional model generation method, device and hardware device

Publications (2)

Publication Number Publication Date
CN109472873A CN109472873A (en) 2019-03-15
CN109472873B true CN109472873B (en) 2023-09-19

Family

ID=65666713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811303618.XA Active CN109472873B (en) 2018-11-02 2018-11-02 Three-dimensional model generation method, device and hardware device

Country Status (1)

Country Link
CN (1) CN109472873B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120101B (en) * 2019-04-30 2021-04-02 中国科学院自动化研究所 Cylinder augmented reality method, system and device based on three-dimensional vision

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130101622A (en) * 2012-02-14 2013-09-16 구대근 Apparatus and system for 3 dimensional design using augmented reality and method for design evaluation
CN104282041A (en) * 2014-09-30 2015-01-14 小米科技有限责任公司 Three-dimensional modeling method and device
CN105338391A (en) * 2015-12-11 2016-02-17 腾讯科技(深圳)有限公司 Intelligent television control method and mobile terminal
CN105493154A (en) * 2013-08-30 2016-04-13 高通股份有限公司 System and method for determining the extent of a plane in an augmented reality environment
CN105513137A (en) * 2014-09-23 2016-04-20 小米科技有限责任公司 Three-dimensional model and scene creation method and apparatus based on mobile intelligent terminal
CN105974804A (en) * 2016-05-09 2016-09-28 北京小米移动软件有限公司 Method and device for controlling equipment
CN106373187A (en) * 2016-06-28 2017-02-01 上海交通大学 Two-dimensional image to three-dimensional scene realization method based on AR
CN106896940A (en) * 2017-02-28 2017-06-27 杭州乐见科技有限公司 Method and device for controlling presentation effect of virtual objects
CN107452074A (en) * 2017-07-31 2017-12-08 上海联影医疗科技有限公司 Image processing method and system
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display method and display device for mobile terminal
CN107945285A (en) * 2017-10-11 2018-04-20 浙江慧脑信息科技有限公司 Three-dimensional model texture map replacement and deformation method
CN108273265A (en) * 2017-01-25 2018-07-13 网易(杭州)网络有限公司 Display method and device of virtual objects

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9443353B2 (en) * 2011-12-01 2016-09-13 Qualcomm Incorporated Methods and systems for capturing and moving 3D models and true-scale metadata of real world objects
CN103472985B (en) * 2013-06-17 2017-12-26 展讯通信(上海)有限公司 User editing method for a three-dimensional shopping platform display interface
US10200819B2 (en) * 2014-02-06 2019-02-05 Position Imaging, Inc. Virtual reality and augmented reality functionality for mobile devices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on construction of three-dimensional visualization scenes for rockfill dam engineering based on augmented reality; Wang Zhining; Cui Bo; Ren Bingyu; Wu Binping; Guan Tao; Water Power (05); pp. 57-60 *

Also Published As

Publication number Publication date
CN109472873A (en) 2019-03-15

Similar Documents

Publication Publication Date Title
CN111242881A (en) Method, device, storage medium and electronic equipment for displaying special effects
CN109582122B (en) Augmented reality information providing method and device and electronic equipment
CN104081307A (en) Image processing apparatus, image processing method, and program
CN112764845B (en) Video processing method and device, electronic equipment and computer readable storage medium
WO2022007565A1 (en) Image processing method and apparatus for augmented reality, electronic device and storage medium
US11561651B2 (en) Virtual paintbrush implementing method and apparatus, and computer readable storage medium
CN112672185B (en) Augmented reality-based display method, device, equipment and storage medium
CN112965780B (en) Image display method, device, equipment and medium
CN113806306B (en) Media file processing method, device, equipment, readable storage medium and product
CN113163135B (en) Animation adding method, device, equipment and medium for video
CA3119609A1 (en) Augmented reality (ar) imprinting methods and systems
CN109472873B (en) Three-dimensional model generation method, device and hardware device
CN109636917B (en) Three-dimensional model generation method, device and hardware device
CN110070617B (en) Data synchronization method, device and hardware device
WO2017024954A1 (en) Method and device for image display
CN111862342A (en) Texture processing method and device for augmented reality, electronic equipment and storage medium
US11810336B2 (en) Object display method and apparatus, electronic device, and computer readable storage medium
CN113961066B (en) Visual angle switching method and device, electronic equipment and readable medium
CN110070600B (en) Three-dimensional model generation method, device and hardware device
CN113592997A (en) Object drawing method, device and equipment based on virtual scene and storage medium
CN112449210A (en) Sound processing method, sound processing device, electronic equipment and computer readable storage medium
CN111353929A (en) Image processing method and device and electronic equipment
CN110047520B (en) Audio playing control method and device, electronic equipment and computer readable storage medium
CN110060355B (en) Interface display method, device, equipment and storage medium
KR102534449B1 (en) Image processing method, device, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant