CN116977545A - Three-dimensional model display method, three-dimensional model display device, computer equipment and storage medium


Info

Publication number
CN116977545A
Authority
CN
China
Prior art keywords
model
dimensional
dimensional model
contour
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310316621.XA
Other languages
Chinese (zh)
Inventor
王星星
李佳禧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Chengdu Co Ltd
Original Assignee
Tencent Technology Chengdu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Chengdu Co Ltd filed Critical Tencent Technology Chengdu Co Ltd
Priority to CN202310316621.XA priority Critical patent/CN116977545A/en
Publication of CN116977545A publication Critical patent/CN116977545A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application relates to artificial intelligence cloud services, and in particular to a three-dimensional model display method, a three-dimensional model display device, a computer device, a storage medium and a computer program product. The method includes the following steps: in response to a trigger operation on an image acquisition control, invoking an image acquisition device to capture a real scene, and displaying a live-action picture of the captured real scene; when the live-action picture includes a recognizable live-action object graphic, displaying in the live-action picture a three-dimensional model corresponding to the recognizable live-action object graphic, the three-dimensional model having a model contour surface and a model depth surface, where the contour of the model contour surface matches the object contour of the live-action object graphic and the model depth surface cooperates with the contour surface to characterize the depth of the three-dimensional model; and displaying the three-dimensional model moving in the live-action picture. The method can improve the display diversity of live-action objects.

Description

Three-dimensional model display method, three-dimensional model display device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a three-dimensional model display method, apparatus, computer device, and storage medium.
Background
With the development of science and technology, live-action picture acquisition technology has emerged. It refers to capturing a real scene by means of an image acquisition device; for example, a user can capture a real scene in real time through a camera in a terminal to obtain a live-action picture. At present, however, only the two-dimensional graphics of the live-action objects present in the real scene are displayed in the live-action picture, which greatly limits the display diversity of live-action objects.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a three-dimensional model display method, apparatus, computer device, computer-readable storage medium, and computer program product capable of improving the display diversity of live-action objects.
In a first aspect, the present application provides a three-dimensional model display method, the method including:
in response to a trigger operation on an image acquisition control, invoking an image acquisition device to capture a real scene, and displaying a live-action picture of the captured real scene;
when the live-action picture includes a recognizable live-action object graphic, displaying, in the live-action picture, a three-dimensional model corresponding to the recognizable live-action object graphic, the three-dimensional model having a model contour surface and a model depth surface, where the contour of the model contour surface matches the object contour of the live-action object graphic and the model depth surface cooperates with the contour surface to characterize the depth of the three-dimensional model; and
displaying the three-dimensional model moving in the live-action picture.
In one embodiment, a start-recognition control is displayed in the live-action picture, and displaying the three-dimensional model corresponding to the recognizable live-action object graphic when the live-action picture includes one comprises the following steps:
in response to a trigger operation on the start-recognition control, changing the start-recognition control into a stop-recognition control for display, the stop-recognition control being used to stop recognition of live-action object graphics in the live-action picture; and
when a recognizable live-action object graphic exists in the live-action picture, displaying the corresponding three-dimensional model and changing the stop-recognition control back into the start-recognition control for display.
In one embodiment, the live-action picture includes a plurality of collection graphics, and the method further comprises the following steps:
when a to-be-lit collection graphic matching the recognizable live-action object graphic exists among the plurality of collection graphics, lighting up the to-be-lit collection graphic for display; and
displaying preset information when every one of the plurality of collection graphics has been lit up.
In one embodiment, the method further comprises:
in response to an export operation on the three-dimensional model, obtaining a three-dimensional material corresponding to the three-dimensional model, and sending the three-dimensional material to a server, the three-dimensional material being used to trigger the server to generate a combined three-dimensional model based on the received three-dimensional material.
In a second aspect, the present application also provides a three-dimensional model display device, the device including:
a picture display module, configured to invoke an image acquisition device to capture a real scene in response to a trigger operation on an image acquisition control, and to display a live-action picture of the captured real scene;
a model display module, configured to display, when the live-action picture includes a recognizable live-action object graphic, a three-dimensional model corresponding to the recognizable live-action object graphic in the live-action picture, the three-dimensional model having a model contour surface and a model depth surface, where the contour of the model contour surface matches the object contour of the live-action object graphic and the model depth surface cooperates with the contour surface to characterize the depth of the three-dimensional model; and
a model motion module, configured to display the three-dimensional model moving in the live-action picture.
In one embodiment, when the live-action picture includes a recognizable live-action object graphic, the model display module is further configured to segment the recognizable live-action object graphic from the live-action picture to obtain a segmented image; generate an initial three-dimensional model from the segmented image; generate the three-dimensional model corresponding to the recognizable live-action object graphic from the segmented image and the initial three-dimensional model; and display the three-dimensional model.
In one embodiment, the model display module is further configured to generate, from the segmented image, two symmetric model contours separated by a preset spacing, each model contour matching the image contour of the segmented image; fill each model contour with three-dimensional material to obtain two symmetric model contour surfaces; connect the two symmetric model contour surfaces with three-dimensional material to obtain a three-dimensional model to be optimized; and optimize the three-dimensional model to be optimized to obtain the initial three-dimensional model.
In one embodiment, the model display module is further configured to, for each contour pixel point in the segmented image, convert the contour pixel point from the pixel coordinate system to the world coordinate system to obtain the corresponding world coordinate point; split ("fission") each world coordinate point, according to the preset spacing, into a fission coordinate point pair; and generate the two symmetric model contours with the preset spacing from the fission coordinate point pairs of all contour pixel points.
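As a minimal sketch of this step, the fragment below back-projects each contour pixel with an assumed pinhole camera (intrinsic matrix K, fixed back-projection depth) and performs the fission along the viewing (z) axis; the function names, the axis choice and the parameter values are illustrative assumptions rather than the patent's concrete implementation.

```python
import numpy as np

def pixel_to_world(u, v, depth, K):
    # Back-project pixel (u, v) to a 3D point at the given depth,
    # assuming a pinhole camera with intrinsics K and identity pose.
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

def fission_contour(contour_pixels, K, depth=1.0, spacing=0.1):
    # Split each contour pixel's world coordinate point into a symmetric
    # pair offset by +/- spacing/2, yielding the two model contours.
    front, back = [], []
    half = np.array([0.0, 0.0, spacing / 2.0])
    for u, v in contour_pixels:
        p = pixel_to_world(u, v, depth, K)
        front.append(p - half)  # point on the first model contour
        back.append(p + half)   # its fission pair on the second contour
    return np.array(front), np.array(back)
```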
In one embodiment, the two symmetric model contour surfaces include a first model contour surface and a second model contour surface; the model display module is further configured to determine, for each first edge contour point on the first model contour surface, the symmetric target second edge contour point on the second model contour surface, and to connect each first edge contour point with its symmetric target second edge contour point through three-dimensional material to obtain the three-dimensional model to be optimized.
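Continuing the sketch above, the connection step can stitch each first edge contour point to its symmetric counterpart; the quad-to-two-triangles stitching below is one plausible reading of "connecting through three-dimensional material", not the patent's stated scheme.

```python
import numpy as np

def stitch_depth_surface(front, back):
    # front[i] and back[i] are symmetric edge contour points. Connect each
    # pair and its neighbour into two triangles, forming the side strip
    # (model depth surface) between the two contour surfaces.
    n = len(front)
    vertices = np.vstack([front, back])  # back points live at index i + n
    faces = []
    for i in range(n):
        j = (i + 1) % n  # next contour point, wrapping around the loop
        faces.append((i, j, n + i))      # triangle 1 of the quad
        faces.append((j, n + j, n + i))  # triangle 2 of the quad
    return vertices, np.array(faces)
```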
In one embodiment, the model display module is further configured to scan the three-dimensional model to be optimized to determine the holes on it, and to fill the holes with three-dimensional material to obtain the initial three-dimensional model.
In one embodiment, the model display module is further configured to scan the three-dimensional model to be optimized to obtain its three-dimensional space mask, the three-dimensional space mask including a plurality of spatial mask points; to detect, for each of the spatial mask points, whether three-dimensional material has been added at that point; and, when no three-dimensional material has been added at a spatial mask point, to determine that a hole exists at the corresponding position on the three-dimensional model to be optimized.
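A toy version of this scan, representing the three-dimensional space mask as a mapping from each spatial mask point to whether material has been added there (the data layout is an assumption):

```python
def find_holes(space_mask):
    # space_mask: dict mapping each spatial mask point (e.g. an (x, y, z)
    # grid tuple) to True once three-dimensional material has been added.
    # Every point still without material marks a hole to be filled.
    return [point for point, has_material in space_mask.items() if not has_material]

# Example: two mask points scanned, one still empty -> one hole reported.
mask = {(0, 0, 0): True, (0, 0, 1): False}
assert find_holes(mask) == [(0, 0, 1)]
```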
In one embodiment, a start-recognition control is displayed in the live-action picture; the model display module is further configured to change the start-recognition control into a stop-recognition control for display in response to a trigger operation on the start-recognition control, the stop-recognition control being used to stop recognition of live-action object graphics in the live-action picture; and, when a recognizable live-action object graphic exists in the live-action picture, to display the corresponding three-dimensional model and change the stop-recognition control back into the start-recognition control for display.
In one embodiment, the model contour surface is covered with a model map, and the pattern in the model map matches the pattern in the recognizable live-action object graphic.
In one embodiment, the model display module is further configured to display a model map list in response to a trigger operation on a model map switching control, and, in response to a selection operation on the map list, to switch the model map covering the model contour surface to the target model map selected by the selection operation for display.
In one embodiment, the model motion module is further configured to display a three-dimensional animation in the live-action picture, the three-dimensional animation including the three-dimensional model moving in a motion mode matching the recognizable live-action object graphic.
In one embodiment, a three-dimensional animation switching control is displayed in the live-action picture; the model motion module is further configured to display a motion action list in response to a trigger operation on the three-dimensional animation switching control, and, in response to a selection operation on the motion action list, to switch the three-dimensional model in the three-dimensional animation from moving in the motion mode matching the recognizable live-action object graphic to moving in the target motion mode selected by the selection operation.
In one embodiment, the live-action picture includes a plurality of collection graphics; the model display module is further configured to light up a to-be-lit collection graphic for display when a to-be-lit collection graphic matching the recognizable live-action object graphic exists among the plurality of collection graphics, and to display preset information when every one of the plurality of collection graphics has been lit up.
In one embodiment, the live-action picture is a picture captured of a real scene by an image acquisition device; the model display module is further configured to cancel display of the three-dimensional model corresponding to the recognizable live-action object graphic when the live-action picture captured by the image acquisition device changes from including the recognizable live-action object graphic to no longer including it.
In one embodiment, the three-dimensional model display device further includes a combination module, configured to obtain a three-dimensional material corresponding to the three-dimensional model in response to an export operation on the three-dimensional model, and to send the three-dimensional material to a server, the three-dimensional material being used to trigger the server to generate a combined three-dimensional model based on the received three-dimensional material.
In one embodiment, a model combination control is displayed in the live-action picture; the three-dimensional model display device further includes a combination module, configured to display a three-dimensional material list in response to a trigger operation on the model combination control, the list including a plurality of three-dimensional materials, each obtained in response to an export operation on the three-dimensional model in its corresponding live-action picture; and, in response to a selection operation on the three-dimensional material list, to display the corresponding combined three-dimensional model, which includes the three-dimensional model components corresponding to the multiple target three-dimensional materials selected by the selection operation.
In one embodiment, the combination module is further configured to determine, in response to a selection operation on the three-dimensional material list, the multiple target three-dimensional materials selected by the selection operation; to perform model reconstruction processing on each target three-dimensional material to obtain the corresponding three-dimensional model component; and, in response to an editing operation on the three-dimensional model components, to obtain and display the combined three-dimensional model.
In one embodiment, the editing operation includes at least one of a spatial position moving operation, a rotation angle adjusting operation, or a size scaling operation; the space position moving operation is used for adjusting the space position of the three-dimensional model component; the rotation angle adjusting operation is used for adjusting the rotation angle of the three-dimensional model component; the size scaling operation is used for adjusting the size of the three-dimensional model component.
In one embodiment, the combination module is further configured to display a recipient list in response to a sharing operation on the combined three-dimensional model, and, in response to a selection operation on the recipient list, to send a model poster corresponding to the combined three-dimensional model to the target recipient selected by the selection operation, the sent model poster being used to display the corresponding combined three-dimensional model when the target recipient triggers it.
In a third aspect, the present application further provides a computer device including a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of any of the three-dimensional model display methods provided by the embodiments of the present application.
In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the three-dimensional model display methods provided by the embodiments of the present application.
In a fifth aspect, the present application further provides a computer program product including a computer program which, when executed by a processor, implements the steps of any of the three-dimensional model display methods provided by the embodiments of the present application.
With the three-dimensional model display method, device, computer equipment, storage medium and computer program product, a live-action picture of a real scene is displayed, and when the live-action picture includes a recognizable live-action object graphic, a three-dimensional model corresponding to that graphic can be displayed and shown moving in the live-action picture. Compared with the traditional approach of displaying only the two-dimensional live-action object graphic in the live-action picture, displaying the corresponding three-dimensional model greatly improves the display diversity of live-action objects. In addition, displaying a moving three-dimensional model corresponding to the live-action object in the live-action picture creates a realistic interaction effect, achieving a combination of the virtual and the real and further improving the display effect of live-action objects.
Drawings
FIG. 1 is an application environment diagram of a three-dimensional model display method in one embodiment;
FIG. 2 is a flow chart of a three-dimensional model display method according to an embodiment;
FIG. 3 is a schematic diagram of a live-action object graphic in one embodiment;
FIG. 4 is a schematic diagram of a three-dimensional model in one embodiment;
FIG. 5 is a schematic illustration of a model profile surface and a model depth surface in one embodiment;
FIG. 6 is a schematic diagram of three-dimensional model motion in one embodiment;
FIG. 7 is a schematic sharing of a three-dimensional model in one embodiment;
FIG. 8 is a schematic diagram of a model map in one embodiment;
FIG. 9 is a schematic diagram of lighting a collection graphic in one embodiment;
FIG. 10 is a schematic diagram of a combined three-dimensional model in one embodiment;
FIG. 11 is a schematic diagram of the generation of a combined three-dimensional model in one embodiment;
FIG. 12 is a schematic diagram of the generation of a three-dimensional model to be optimized in one embodiment;
FIG. 13 is a flow chart of a three-dimensional model display method in one embodiment;
FIG. 14 is a schematic diagram of a three-dimensional model generation architecture in one embodiment;
FIG. 15 is a block diagram showing a structure of a three-dimensional model display device in one embodiment;
fig. 16 is an internal structural view of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The three-dimensional model display method provided by the embodiments of the present application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located in the cloud or on other servers. When the terminal 102 displays a live-action picture of a real scene, the terminal 102 may send the live-action picture to the server 104, so that the server 104 recognizes the live-action picture and obtains the live-action object graphic in it. The server 104 generates three-dimensional model data corresponding to the live-action object graphic and returns it to the terminal 102, so that the terminal 102 displays the corresponding three-dimensional model in the live-action picture based on the received data and controls the three-dimensional model to move in the live-action picture. The terminal 102 may be, but is not limited to, a desktop computer, notebook computer, smartphone, vehicle-mounted terminal, tablet computer, Internet of Things device, or portable wearable device; Internet of Things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices and the like, and portable wearable devices may be smart watches, smart bracelets, head-mounted devices and the like. The server 104 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing cloud computing services.
The present application relates to artificial intelligence cloud services; for example, the three-dimensional models of the present application can be generated through such a service. An artificial intelligence cloud service is also commonly called AIaaS (AI as a Service). This is currently the mainstream service mode for artificial intelligence platforms: an AIaaS platform splits several common types of AI services and provides them independently or as packages in the cloud. This service mode is similar to an AI-themed marketplace: any developer can access one or more of the platform's artificial intelligence services through an API interface, and some experienced developers can also use the AI framework and AI infrastructure provided by the platform to deploy, operate and maintain their own dedicated cloud artificial intelligence services.
It should be noted that the present application relates to extended reality technology; for example, the terminal of the present application may be an extended reality device. An extended reality device is implemented using extended reality (XR) technology, which fuses the real and the virtual through computer technology and wearable devices to create a virtual environment supporting human-computer interaction. XR encompasses the technical characteristics of VR (Virtual Reality), AR (Augmented Reality) and MR (Mixed Reality), bringing the experiencer a sense of immersion with seamless transition between the virtual world and the real world. VR technology generates a realistic virtual world with multiple sensory experiences such as three-dimensional vision, touch and smell by means of computing devices, so that a person in the virtual world feels immersed; it is mostly used in game and entertainment scenarios through VR glasses, VR displays and VR all-in-one machines. AR technology superimposes virtual information on the real world, even going beyond reality, and is to some extent an extension of VR technology; comparatively, AR device products are small, light and portable. MR technology is a further development of VR and AR: by presenting virtual scenes in real scenes, it builds a closed communication loop between users and greatly enhances the user experience. XR technology combines the characteristics of all three and has broad application prospects, for example remote teaching of science and experiment courses in education and training; immersive entertainment such as immersive film watching and games, and exhibition activities such as concerts, dramas and museums; 3D home decoration and architectural design in industrial modeling and design; and novel consumption scenarios such as cloud shopping and cloud fitting.
It should be noted that the terms "first", "second" and the like used herein do not denote any order, quantity or importance, but are merely used to distinguish one element from another. Singular forms such as "a", "an" or "the" do not denote a limitation of quantity but rather the presence of at least one, unless the context clearly dictates otherwise. "Plural" or "multiple" in the embodiments of the present application means "at least two".
In one embodiment, as shown in fig. 2, a three-dimensional model display method is provided. The method is described, by way of illustration, as applied to the terminal in fig. 1, and includes the following steps:
step 202, responding to a triggering operation for an image acquisition control, calling an image acquisition device to acquire a real scene, and displaying a real scene picture of the acquired real scene.
Specifically, when a live-action picture needs to be acquired, a user can capture the real scene through the image acquisition device to obtain the live-action picture. For example, the user can capture the real scene through a camera in the terminal, and the captured live-action picture is displayed in real time on the terminal's display screen.
Step 204: when the live-action picture includes a recognizable live-action object graphic, display, in the live-action picture, a three-dimensional model corresponding to the recognizable live-action object graphic; the three-dimensional model has a model contour surface and a model depth surface, where the contour of the model contour surface matches the object contour of the live-action object graphic and the model depth surface cooperates with the contour surface to characterize the depth of the three-dimensional model.
Specifically, a graphic recognition model for recognizing live-action object graphics is deployed in the terminal and can be used to recognize the live-action object graphics in the live-action picture. The graphic recognition model is a machine learning model, and a live-action object graphic is the two-dimensional picture of a live-action object displayed in the live-action picture. For example, referring to fig. 3, when a yogurt bottle 301 in a real scene is photographed by a camera in a terminal, the live-action picture 302 displayed on the terminal's display screen may include a yogurt bottle graphic 303, which is a live-action object graphic. A recognizable live-action object graphic is one the graphic recognition model can recognize. The graphic recognition model may be trained in advance on training samples so that it can recognize the live-action object graphics of certain kinds of live-action objects; for example, it can be trained on large numbers of pictures of apples, pictures of water bottles and pictures of other such objects, so that the trained model can recognize apples, water bottles and the like in a live-action picture. FIG. 3 illustrates a schematic diagram of a live-action object graphic in one embodiment.
Further, when it is determined that the live-action picture includes a recognizable live-action object graphic, the terminal may display the corresponding three-dimensional model in the live-action picture. For example, referring to fig. 4, when the real scene includes a penguin picture, the live-action picture captured of that scene includes the live-action object graphic 401 of the penguin picture; since this graphic is recognizable, the terminal may display the penguin three-dimensional model 402 in the live-action picture. To be clear, the "penguin picture" in this example is the live-action object that actually exists in the real scene; the "live-action object graphic of the penguin picture" is the penguin graphic in the live-action picture captured of that object; and the three-dimensional model is a model whose contour matches that live-action object graphic. FIG. 4 illustrates a schematic diagram of a three-dimensional model in one embodiment. The terminal may also display the three-dimensional model in other pages.
In one embodiment, the three-dimensional model has a model contour surface whose contour matches the object contour of the live-action object graphic, and a model depth surface that cooperates with the contour surface to characterize the depth of the three-dimensional model. For example, referring to fig. 4, the model contour surfaces may be the surfaces displayed with the penguin outline (the front and back of the three-dimensional model), and the model depth surface may be the surface connecting the different model contour surfaces (the side of the three-dimensional model). A contour is the line forming the outer edge of a figure or object: the contour of the model contour surface is the line of the outer edge of that surface, and the object contour of the live-action object graphic is the line of the outer edge of that graphic.
In one embodiment, the three-dimensional model may have two symmetric model contour surfaces and a model depth surface connecting them. Referring to fig. 5, when the live-action object graphic is a circle, the three-dimensional model displayed in the live-action picture may be the corresponding cylinder, whose model contour surfaces are 501 and 502 and whose model depth surface is 503. The model depth surface cooperates with the model contour surfaces to characterize the depth of the three-dimensional model, that is, its extent in three-dimensional space; for example, the depth may characterize the distance between model contour surface 501 and model contour surface 502. FIG. 5 shows a schematic diagram of a model contour surface and a model depth surface in one embodiment.
In one embodiment, the three-dimensional model may have two or more model contour surfaces. For example, with three model contour surfaces the three-dimensional model may take the form of a triangular prism.
In one embodiment, contour adaptation means that the difference between the contour of the model contour surface and the object contour of the live-action object graphic is less than or equal to a preset difference threshold.
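The patent does not fix a concrete difference measure; purely as one possible instantiation, the sketch below uses the symmetric mean nearest-point distance between sampled contour points, with the function names and the default threshold being assumptions.

```python
import numpy as np

def contour_difference(contour_a, contour_b):
    # Symmetric mean nearest-point distance between two sampled contours,
    # each an (N, 2) array of points along the outline.
    def mean_nearest(src, dst):
        return float(np.mean([np.min(np.linalg.norm(dst - p, axis=1)) for p in src]))
    return max(mean_nearest(contour_a, contour_b), mean_nearest(contour_b, contour_a))

def contours_adapted(model_contour, object_contour, threshold=2.0):
    # "Adapted" when the difference does not exceed the preset threshold.
    return contour_difference(model_contour, object_contour) <= threshold
```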
In one embodiment, the model contour surface in the three-dimensional model may be planar, convex, or concave; that is, it may be a flat surface without relief information, or a concave or convex surface carrying relief information.
Step 206: display the three-dimensional model moving in the live-action picture.
Specifically, the terminal may display the moving three-dimensional model in the live-action picture. For example, when the three-dimensional model corresponding to the penguin picture is displayed, the terminal may control the three-dimensional model to rotate in place, to move along a preset motion track, and so on.
In one embodiment, referring to fig. 6, when the recognizable live-action object graphic included in the live-action picture is the water bottle graphic 601, the terminal may display a three-dimensional animation showing the water bottle three-dimensional model 602 rotating about its own centre line as the rotation axis. FIG. 6 illustrates a schematic diagram of three-dimensional model motion in one embodiment.
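A sketch of one frame of such self-rotation is shown below; treating the bottle's centre line as the vertical (y) axis through the model centre is an assumption, as are the function name and parameters.

```python
import numpy as np

def spin_step(vertices, center, angle):
    # Rotate all model vertices by `angle` radians about the vertical
    # line through `center` (rotation about the y axis), producing the
    # next frame of the self-rotation animation.
    c, s = np.cos(angle), np.sin(angle)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
    return (vertices - center) @ rot_y.T + center
```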
In one embodiment, the terminal may be an extended reality device, which can add a virtual three-dimensional model to the displayed live-action picture and control the three-dimensional model to move in it, thereby combining the virtual with the real.
In one embodiment, the user may share the displayed three-dimensional model. For example, the terminal may display a recipient list in response to a sharing operation on the three-dimensional model, and send the three-dimensional model to the selected target recipient in response to a selection operation on the recipient list, so that the receiving terminal of the target recipient can display the received three-dimensional model. For example, referring to fig. 7, a user may share a displayed three-dimensional model through an instant messaging application: in response to the user's sharing operation, a message card 701 corresponding to the shared three-dimensional model is displayed in the session interface of the receiving terminal, and clicking the message card triggers the receiving terminal to display the shared three-dimensional model 702. FIG. 7 illustrates a sharing schematic of a three-dimensional model in one embodiment.
In the three-dimensional model display method, by displaying a live-action picture of a real scene, when the live-action picture includes a recognizable live-action object graphic, the three-dimensional model corresponding to that graphic can be displayed and shown moving in the live-action picture. Compared with the traditional approach of displaying only the two-dimensional live-action object graphic, displaying the corresponding three-dimensional model greatly improves the display diversity of live-action objects. In addition, displaying a moving three-dimensional model corresponding to the live-action object creates a realistic interaction effect, achieving a combination of the virtual and the real and further improving the display effect of live-action objects.
In one embodiment, a start-recognition control is displayed in the live-action picture, and displaying the three-dimensional model corresponding to the recognizable live-action object graphic when the live-action picture includes one comprises: in response to a trigger operation on the start-recognition control, changing the start-recognition control into a stop-recognition control for display, the stop-recognition control being used to stop recognition of live-action object graphics in the live-action picture; and, when a recognizable live-action object graphic exists in the live-action picture, displaying the corresponding three-dimensional model and changing the stop-recognition control back into the start-recognition control for display.
Specifically, a start-recognition control can be displayed in the live-action picture shown by the terminal, and the user can trigger recognition of live-action object graphics by clicking it. For example, referring to fig. 6, the user may click the start-recognition control 603. While the terminal is recognizing, it can display prompt information such as "recognizing" in the live-action picture, and change the start-recognition control 603 into the stop-recognition control 604 for display, so that the user can pause recognition by clicking the stop-recognition control 604. When the terminal has recognized a live-action object graphic, it can cancel the prompt information, change the stop-recognition control back into the start-recognition control, and display the three-dimensional model corresponding to the recognized graphic in the live-action picture.
In one embodiment, when the terminal has recognized a recognizable live-action object graphic, it may display that graphic in a small window, for example one opened in the upper right corner of the live-action picture. In this way the user can see from which live-action object graphic the three-dimensional model in the live-action picture was generated, greatly improving the user experience.
In this embodiment, displaying the start-recognition control makes it convenient for the user to trigger recognition of live-action object graphics in the live-action picture, and changing it into the stop-recognition control during recognition makes it convenient for the user to stop that recognition, greatly improving the user experience.
In one embodiment, the model contour surface is covered with a model map, and the pattern in the model map matches the pattern in the recognizable live-action object graphic.
Specifically, the model map may be a picture posted on a model contour surface, where the contour of the picture is consistent with the contour of the model contour surface and the pattern on the picture is consistent with the pattern in the live-action object graphic. For example, referring to FIG. 4, the model map may be the penguin image posted on the model contour surface.
In one embodiment, the initial three-dimensional model may be a model generated from three-dimensional material, and its initial model contour surfaces, likewise generated from three-dimensional material, carry no pattern. The terminal can segment the recognizable live-action object graphic from the live-action picture, take the segmented graphic as the model map, and post it on the model contour surfaces to generate the three-dimensional model displayed in the live-action picture. For example, referring to fig. 8, the generated initial three-dimensional model may be the material-only model 801, the model map segmented from the live-action picture may be 802, and the terminal may post the model map on each model contour surface to generate the three-dimensional model 803. FIG. 8 illustrates a schematic diagram of model maps in one embodiment.
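One way such posting could work is to hand the renderer per-vertex texture coordinates that point back into the segmented image; the sketch below assumes this approach, and the v-flip normalisation convention is a guess about the renderer, not something the patent specifies.

```python
def contour_surface_uvs(contour_pixels, image_width, image_height):
    # For each contour-surface vertex that originated at pixel (u, v) of
    # the segmented image, emit normalised texture coordinates so the
    # segmented live-action graphic lands on the surface as the model map.
    return [(u / image_width, 1.0 - v / image_height) for u, v in contour_pixels]
```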
In the above embodiment, displaying a model map matching the pattern in the live-action object graphic on the model contour surface makes the generated three-dimensional model closer to the recognizable live-action object graphic, further improving the display effect of the three-dimensional model.
In one embodiment, a model map switching control is displayed in the live-action picture, and the model contour surface is covered with a model map; the method further comprises: displaying a model map list in response to a trigger operation on the model map switching control; and, in response to a selection operation on the map list, switching the model map covering the model contour surface to the target model map selected by the selection operation for display.
Specifically, a model map switching control may be displayed in the live-action picture; for example, referring to fig. 6, the terminal may display the control 605 for switching the model map. When the user clicks the control, the terminal displays a map list containing multiple model maps, from which the user can select the target model map to switch to; the terminal then, in response to the user's selection operation on the map list, switches the model map on the model contour surface to the selected target model map.
In one embodiment, when the three-dimensional model has multiple model contour surfaces, the model map on each surface may be replaced separately: for example, the map on contour surface A may be replaced with model map a, the map on surface B with model map b, and the map on surface C with model map c. Alternatively, the maps on surfaces A to C may all be replaced at once through the model map switching control, for example all with model map d, achieving one-click replacement of the model maps.
In the above embodiment, providing the model map switching control makes it convenient to replace the model map on the model contour surface, improving the switching efficiency of model maps and further improving the diversity of the displayed three-dimensional model.
In one embodiment, displaying the three-dimensional model moving in the live-action picture includes displaying a three-dimensional animation in the live-action picture, the three-dimensional animation including the three-dimensional model moving in a motion mode matching the recognizable live-action object graphic.
Specifically, a three-dimensional animation may be superimposed on the live-action picture, containing the moving three-dimensional model, whose motion mode may be adapted to the live-action object graphic. For example, when the live-action object graphic is a static graphic such as a water bottle or an apple, the three-dimensional model may rotate in place or dance in the live-action picture; when it is a dynamic graphic such as a horse or a penguin, the three-dimensional model may move along a preset motion track, for example along the edge of the live-action picture. In one embodiment, the correspondence between motion modes and live-action object graphic categories may be stored in the terminal: when the terminal recognizes a recognizable live-action object graphic, it determines the category the graphic belongs to, looks up the target motion mode for that category in the stored correspondence, and then displays the corresponding three-dimensional model moving in the target motion mode, as sketched below.
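The stored correspondence can be as simple as a lookup table from graphic category to motion mode; the concrete entries below are illustrative assumptions, since the patent only requires that some such correspondence is stored on the terminal.

```python
# Assumed category-to-motion table; entries are illustrative only.
MOTION_BY_CATEGORY = {
    "static": "spin_in_place",     # e.g. water bottle, apple
    "dynamic": "move_along_edge",  # e.g. horse, penguin
}

def target_motion_mode(category):
    # Resolve the recognised graphic's category to its target motion mode,
    # defaulting to spinning in place for unknown categories.
    return MOTION_BY_CATEGORY.get(category, "spin_in_place")
```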
In this embodiment, displaying the moving three-dimensional model further enriches the display diversity of live-action objects, and triggering the model to move in a mode matching the live-action object graphic lets the displayed model embody the characteristics of that graphic, further improving the display effect.
In one embodiment, a three-dimensional animation switching control is displayed in the live-action picture; the method further comprises: displaying a motion action list in response to a trigger operation on the three-dimensional animation switching control; and, in response to a selection operation on the motion action list, switching the three-dimensional model in the three-dimensional animation from moving in the motion mode matching the recognizable live-action object graphic to moving in the target motion mode selected by the selection operation.
Specifically, a three-dimensional animation switching control may be displayed in the live-action picture; for example, referring to fig. 6, the control 606 for switching the three-dimensional animation may be displayed. Through this control, the three-dimensional animation displayed in the live-action picture, that is, the motion mode of the three-dimensional model in it, can be switched. When the user clicks the control, the terminal displays a motion action list containing multiple motion actions, from which the user can select the target motion action; the terminal then, in response to the selection operation, switches the three-dimensional model to moving in the selected target motion mode.
In this embodiment, displaying the three-dimensional animation switching control makes it convenient to switch the three-dimensional animation superimposed on the live-action picture, further enriching the display diversity of live-action objects and greatly improving the user experience.
In one embodiment, the live-action picture includes a plurality of collection graphics; the method further comprises: when a to-be-lit collection graphic matching the recognizable live-action object graphic exists among the plurality of collection graphics, lighting up the to-be-lit collection graphic for display; and displaying preset information when every one of the plurality of collection graphics has been lit up.
Specifically, a plurality of collection graphics may be displayed in the live-action picture. For example, referring to fig. 9, when the terminal captures a real scene through the camera, a plurality of collection graphics 901 can be displayed on the terminal screen. A collection graphic is the two-dimensional graphic of a digital collectible: for example, several three-dimensional digital collectibles can be designed in advance, and the two-dimensional plane graphic of each serves as its collection graphic. The terminal can judge whether each live-action object graphic in the live-action picture matches a collection graphic; if a matching to-be-lit collection graphic exists among the plurality of collection graphics, the terminal can light it up, for example by changing it from a grayscale graphic to a colour graphic, or by highlighting it, and display the corresponding three-dimensional model in the live-action picture. For example, referring to fig. 9, when the to-be-lit collection graphic is collection graphic A, the user may print collection graphic A and scan the printout with the camera in the terminal; the scanned live-action picture then contains a live-action object graphic matching collection graphic A, so the terminal can light up collection graphic A and display the corresponding three-dimensional model 902 in the live-action picture. FIG. 9 illustrates a schematic diagram of lighting up a collection graphic in one embodiment.
In one embodiment, the user may also find a live-action object bearing a collection graphic in the real scene, and scan the object with the camera in the terminal to light up the corresponding collection graphic.
In one embodiment, referring to fig. 9, when a collection graphic is lit up, the terminal may display a collection card 903 carrying introduction information for the lit collection graphic, so that the user can learn about the found collectible from the displayed introduction.
In one embodiment, when the user has lit up all the collection graphics, the terminal may display preset information, for example the prompt "you have lit up all the collection graphics", and may trigger a resource transfer to the user, for example transferring resources from a preset account to the user's account, as sketched below.
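A toy sketch of the lighting logic, under the assumptions that the collection state is a dictionary of lit flags and that show_prompt is a stand-in for the terminal's preset-information display:

```python
def show_prompt(message):
    # Stand-in for the terminal's preset-information display.
    print(message)

def light_up(collection_state, matched_id):
    # collection_state maps each collection graphic id to a lit flag.
    if matched_id in collection_state:
        collection_state[matched_id] = True  # grey art switches to colour
    if all(collection_state.values()):
        show_prompt("You have lit up all the collection graphics")

state = {"A": True, "B": False}
light_up(state, "B")  # lights B and, since all are now lit, shows the prompt
```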
In one embodiment, when the user lights up a collection graphic, the user may share the lighting information with friends, so that the friends also participate in collecting and lighting up the collection graphics.
In this embodiment, displaying the lit and to-be-lit collection graphics, and lighting up the corresponding collection graphic when a live-action object bearing it is found in the real scene, greatly improves the fun of the three-dimensional model display and the diversity of the displayed three-dimensional models.
In one embodiment, the live-action picture is a picture captured of the real scene by the image acquisition device; the method further comprises: when the live-action picture captured by the image acquisition device changes from including the recognizable live-action object graphic to no longer including it, canceling display of the corresponding three-dimensional model.
Specifically, while the image acquisition device in the terminal captures the real scene in real time, the terminal displays the captured live-action picture in real time on its display screen. When the user moves the terminal so that the live-action picture no longer includes the recognizable live-action object graphic, the terminal can cancel display of the corresponding three-dimensional model; that is, when the terminal stops capturing the recognizable live-action object, the model is no longer shown.
Accordingly, when the user moves the terminal back so that the live-action picture again includes the recognizable live-action object graphic, the terminal may display the corresponding three-dimensional model again.
In this embodiment, canceling display of the three-dimensional model when the recognizable live-action object graphic leaves the live-action picture makes the model follow changes in the captured live-action picture, achieving real-time display of the three-dimensional model.
In one embodiment, the method further comprises: in response to an export operation on the three-dimensional model, obtaining a three-dimensional material corresponding to the three-dimensional model and sending it to a server, the three-dimensional material being used to trigger the server to generate a combined three-dimensional model based on the received three-dimensional material.
Specifically, a one-click export control can be displayed in the live-action picture, through which the user can export the three-dimensional model to obtain the three-dimensional material. A three-dimensional material is a digitized three-dimensional model: since the model is composed of many three-dimensional space points, the material may include the coordinate information of each space point together with the material information of the three-dimensional materials composing the model, so that a three-dimensional model can be reconstructed from it. For instance, when three-dimensional material A is exported from the three-dimensional model of a water bottle, performing model reconstruction processing on material A restores the water bottle three-dimensional model.
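As a sketch only, the exported material could be organised as below; the field names are illustrative assumptions, since the patent specifies only that coordinate information for each space point and the material information are included.

```python
from dataclasses import dataclass, field

@dataclass
class ThreeDMaterial:
    # Digitized form of the displayed model: the space points making it up
    # plus the material info needed to rebuild it server-side.
    model_id: str
    points: list                   # (x, y, z) world coordinates of each space point
    faces: list                    # vertex-index triples for reconstruction
    material_info: dict = field(default_factory=dict)  # e.g. texture, colour
```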
Further, when the terminal obtains the three-dimensional material, it sends it to the server for storage. When the server has received multiple three-dimensional materials, for example several sent by a single terminal over a period of time, or several sent by multiple terminals, it may combine them into a combined three-dimensional model: the server performs model reconstruction processing on each material to generate the corresponding three-dimensional model, and recombines the generated models. For ease of distinction, a three-dimensional model obtained by reconstructing a three-dimensional material is hereinafter called a three-dimensional model component. For example, referring to fig. 10, when the server receives materials A and B, reconstructs a water bottle model from material A and a penguin model from material B, it may combine the two into a combined three-dimensional model. FIG. 10 illustrates a schematic diagram of a combined three-dimensional model in one embodiment.
In one embodiment, when a plurality of identifiable real object graphics are displayed in the real picture, the terminal may superimpose and display a plurality of three-dimensional models in the real picture. The terminal can respond to the export operation for the plurality of three-dimensional models to obtain a plurality of three-dimensional materials.
In one embodiment, when multiple three-dimensional model components are obtained, the server may adjust information such as relative positions, relative rotation angles, and relative sizes between the three-dimensional model components to obtain a combined three-dimensional model.
In one embodiment, referring to FIG. 11, a combined three-dimensional model generation platform may be deployed on a server, and the combined three-dimensional model may be generated through this platform. The user may upload a parameter file, a combining rule, and a plurality of three-dimensional materials to the combined three-dimensional model generation platform. The parameter file records the position coordinates corresponding to each three-dimensional model component, and the combining rule records the combination mode among the plurality of three-dimensional model components. When a combined three-dimensional model needs to be generated, the combined three-dimensional model generation platform can generate a plurality of three-dimensional model components from the plurality of three-dimensional materials, and combine the plurality of three-dimensional model components according to the parameter file and the combining rule to obtain an initial combined three-dimensional model. When no combined three-dimensional model needs to be generated, the combined three-dimensional model generation platform can perform only model reconstruction processing on the three-dimensional materials to obtain three-dimensional model components. Further, when the initial combined three-dimensional model or a three-dimensional model component is obtained, the combined three-dimensional model generation platform can compress it to reduce its storage volume, obtaining a compressed combined three-dimensional model or three-dimensional model component. Further, the combined three-dimensional model generation platform converts the combined three-dimensional model or the three-dimensional model component into a two-dimensional image to obtain a model poster, and stores the model poster. The combined three-dimensional model generation platform can display the compressed combined three-dimensional model or three-dimensional model component together with the corresponding model poster, and publish the combined three-dimensional model or the three-dimensional model component on a chain when the displayed combined three-dimensional model, three-dimensional model component and model poster are all displayed without error. When the combined three-dimensional model displays abnormally, the parameter file is adjusted and the combined three-dimensional model is regenerated. When the model poster displays abnormally, the combined three-dimensional model or the three-dimensional model component can be adjusted, or the model poster can be manually modified, to obtain a model poster that displays normally. FIG. 11 illustrates a schematic of the generation of a combined three-dimensional model in one embodiment.
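The platform flow above can be condensed into the sketch below. Every function here (reconstruct, compress, render_poster, generate) is a hypothetical stub standing in for the platform's real processing; only the control flow mirrors the description.

```python
def reconstruct(material):
    # model reconstruction: rebuild one three-dimensional model component
    return {"points": list(material["points"])}

def compress(model):
    # stub: a real platform would shrink the storage volume here
    return {**model, "compressed": True}

def render_poster(model):
    # stub: convert the model into a two-dimensional image (the model poster)
    return [(x, y) for x, y, _ in model["points"]]

def generate(materials, parameter_file=None, combining_rule=None):
    components = [reconstruct(m) for m in materials]
    if parameter_file is None or combining_rule is None:
        # no combined model requested: only reconstruct and compress components
        return [compress(c) for c in components]
    # place each component at its coordinates from the parameter file; the
    # combining rule (unused in this stub) would govern how they are merged
    points = [(x + px, y + py, z + pz)
              for comp, (px, py, pz) in zip(components, parameter_file)
              for x, y, z in comp["points"]]
    combined = compress({"points": points})
    return combined, render_poster(combined)

model, poster = generate([{"points": [(0, 0, 0)]}, {"points": [(1, 0, 0)]}],
                         parameter_file=[(0, 0, 0), (2, 0, 0)],
                         combining_rule="side_by_side")
```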
By publishing the three-dimensional model component or the combined three-dimensional model on the chain, a user can download the published three-dimensional model component or combined three-dimensional model, and store and share what is downloaded, thereby improving the sharing efficiency of the three-dimensional model component or the combined three-dimensional model. By generating the model poster, a user can likewise download, store and share the model poster, thereby improving the sharing efficiency of the model poster.
In the above embodiment, by generating the combined three-dimensional model, the terminal may be triggered to display the generated combined three-dimensional model, thereby improving the diversity of the displayed three-dimensional model.
In one embodiment, a model combination control is displayed in the live-action picture; the method further comprises: responding to the triggering operation for the model combination control, displaying a three-dimensional material list, wherein the three-dimensional material list comprises a plurality of three-dimensional materials, and each three-dimensional material is obtained in response to an export operation on the three-dimensional model in the corresponding live-action picture; and responding to the selection operation for the three-dimensional material list, displaying the corresponding combined three-dimensional model, wherein the combined three-dimensional model comprises three-dimensional model components corresponding to the plurality of target three-dimensional materials selected by the selection operation.
Specifically, a model combination control can be displayed in the live-action picture, and the three-dimensional model can be combined through the model combination control. When the user clicks the model combination control, the terminal may display a three-dimensional material list in response to a trigger operation for the model combination control. The three-dimensional material list includes a plurality of three-dimensional materials, and the plurality of three-dimensional materials may include three-dimensional materials corresponding to a three-dimensional model displayed in a current live-action picture, that is, three-dimensional materials obtained by performing an export operation on the three-dimensional model displayed in the current live-action picture. Correspondingly, the three-dimensional material list can also comprise three-dimensional materials corresponding to the three-dimensional model displayed on the historical live-action picture. The three-dimensional material list can also comprise three-dimensional materials downloaded from a network.
Further, the user can select a plurality of target three-dimensional materials from the three-dimensional material list, so that the terminal responds to the selection operation of the user on the three-dimensional material list, determines the plurality of target three-dimensional materials selected by the selection operation, generates a combined three-dimensional model according to the plurality of target three-dimensional materials, and displays the combined three-dimensional model. For example, a pop-up window displays the combined three-dimensional model, displays the combined three-dimensional model in a live-action screen, and so on.
In one embodiment, in response to a selection operation for a three-dimensional material list, displaying a corresponding combined three-dimensional model includes: determining a plurality of target three-dimensional materials selected by the selection operation in response to the selection operation for the three-dimensional material list; respectively carrying out model reconstruction processing on each target three-dimensional material to obtain a three-dimensional model component corresponding to each target three-dimensional material; and responding to the editing operation of the three-dimensional model component, obtaining a combined three-dimensional model and displaying the combined three-dimensional model.
Specifically, for each of a plurality of target three-dimensional materials selected by a user, the terminal performs model reconstruction processing on the targeted target three-dimensional material to restore the targeted target three-dimensional material to a three-dimensional model, so as to obtain a three-dimensional model assembly. Further, when generating the three-dimensional model component corresponding to each target three-dimensional material, the terminal may display each generated three-dimensional model component, and the user may edit each displayed three-dimensional model component to obtain a combined three-dimensional model. For example, a user may adjust the relative position, coverage relationship, etc. between the three-dimensional model components.
In one embodiment, the terminal may display an edit control for editing the three-dimensional model component, so that a user may edit the displayed three-dimensional model component through the edit control. The terminal may respond to an editing operation for the three-dimensional model component, and adjust information such as a position, a rotation angle, a size and the like of the corresponding three-dimensional model component in the screen based on the editing operation, so as to obtain a combined three-dimensional model.
In one embodiment, the editing operation includes at least one of a spatial position moving operation, a rotation angle adjusting operation, or a size scaling operation; a spatial position moving operation for adjusting a spatial position of the three-dimensional model component; rotation angle adjustment operation for adjusting the rotation angle of the three-dimensional model component; and a size scaling operation for adjusting the size of the three-dimensional model component.
Specifically, when the terminal displays the three-dimensional model components corresponding to each target three-dimensional material, the user can adjust the relative positional relationship among the three-dimensional model components, the rotation angle and the size of each three-dimensional model component and the like through editing operation, so as to obtain a combined three-dimensional model. For example, when having three-dimensional model components a to C, the user may adjust three-dimensional model component a to the left side of three-dimensional model component B, adjust three-dimensional model component C to the right side of three-dimensional model component B by editing operation, increase the size of three-dimensional model component a, decrease the size of three-dimensional model component C, and adjust the rotation angle of three-dimensional model component B to obtain adjusted three-dimensional model components, and the adjusted three-dimensional model components form a combined three-dimensional model.
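The three editing operations amount to affine adjustments of a component's point set. The sketch below is an assumed illustration only: rotation is taken about a single axis and components are plain point arrays, which need not match the embodiment's editor.

```python
import numpy as np

def edit_component(points, translation=(0.0, 0.0, 0.0), yaw_deg=0.0, scale=1.0):
    """Apply a size scaling, a rotation angle adjustment (about the z axis
    here) and a spatial position move to one component's points."""
    pts = np.asarray(points, dtype=float)
    t = np.radians(yaw_deg)
    rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                    [np.sin(t),  np.cos(t), 0.0],
                    [0.0,        0.0,       1.0]])
    return (pts * scale) @ rot.T + np.asarray(translation)

# components A..C as in the example: A moved left and enlarged, C moved
# right and shrunk, B rotated; together they form the combined model
square = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
a = edit_component(square, translation=(-2, 0, 0), scale=1.5)
b = edit_component(square, yaw_deg=45.0)
c = edit_component(square, translation=(2, 0, 0), scale=0.5)
combined = np.vstack([a, b, c])
```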
In the above embodiment, by displaying the three-dimensional material list, the user can freely select the target three-dimensional material for generating the combined three-dimensional model based on the displayed three-dimensional material list, thereby greatly improving the degree of freedom of generating the combined three-dimensional model. By displaying the three-dimensional model components corresponding to the target three-dimensional materials, a user can freely edit the three-dimensional model components to generate a corresponding combined three-dimensional model, and the degree of freedom of generating the combined three-dimensional model is further improved. Because the combined three-dimensional model can be freely generated, not only are the generation modes of the combined three-dimensional model enriched, but also the types of the generated combined three-dimensional model are enriched.
In one embodiment, the method further comprises: responding to the sharing operation for the combined three-dimensional model, and displaying a receiver list; responding to the selection operation for the receiver list, and sending a model poster corresponding to the combined three-dimensional model to the target receiver selected by the selection operation; and the sent model poster is used for displaying the corresponding combined three-dimensional model when the target receiver triggers the model poster.
Specifically, when the user desires to share the generated combined three-dimensional model, the user may trigger a sharing operation, so that the terminal may display the recipient list in response to the sharing operation. The user may select the target recipient from the displayed recipient list, and the terminal may generate a model poster corresponding to the combined three-dimensional model and send the model poster to the terminal corresponding to the target recipient. When the terminal corresponding to the target receiver receives the model poster, the terminal corresponding to the target receiver can display the model poster, and when the target receiver triggers the model poster, for example, when the target receiver clicks the model poster, the terminal corresponding to the target receiver can display the combined three-dimensional model.
In one embodiment, the terminal may draw a picture of the combined three-dimensional model, generate a two-dimensional picture of the combined three-dimensional model, and use the two-dimensional picture as a model poster.
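One plausible realization, assuming the combined model is available as a point set, is a simple orthographic projection onto the XY plane; this is a sketch, not the embodiment's actual drawing routine.

```python
import numpy as np

def draw_poster(points, size=256):
    """Project the combined model's points orthographically and rasterize
    them into a two-dimensional picture usable as a model poster."""
    xy = np.asarray(points, dtype=float)[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    # normalize world coordinates into pixel coordinates
    px = ((xy - mins) / np.maximum(maxs - mins, 1e-9) * (size - 1)).astype(int)
    poster = np.zeros((size, size), dtype=np.uint8)
    poster[px[:, 1], px[:, 0]] = 255      # one white dot per projected point
    return poster                          # far smaller than the 3-D model

poster = draw_poster([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)])
```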
In the above embodiment, since the model poster is a two-dimensional picture, its storage volume is smaller than that of the combined three-dimensional model; that is, the data volume of the model poster is smaller than the data volume of the combined three-dimensional model. Therefore, compared with directly transmitting the combined three-dimensional model to the terminal corresponding to the target receiver, transmitting the model poster saves bandwidth and other transmission resources.
In one embodiment, when the real-scene picture includes an identifiable real-scene object graphic, displaying a three-dimensional model corresponding to the identifiable real-scene object graphic in the real-scene picture includes: when the real-scene picture includes the identifiable real-scene object graphic, segmenting the identifiable real-scene object graphic from the real-scene picture to obtain a segmented image; and generating an initial three-dimensional model according to the segmented image, and generating and displaying a three-dimensional model corresponding to the identifiable real-scene object graphic according to the segmented image and the initial three-dimensional model.
Specifically, when the live-action picture is obtained, the terminal can perform object recognition on the identifiable live-action object graph in the live-action picture to obtain edge contour information of the identifiable live-action object graph, and divide the identifiable live-action object graph from the live-action picture according to the edge contour information, for example, when the identifiable live-action object graph is a water bottle graph, the terminal divides the water bottle graph from the live-action picture to obtain a divided image. Further, the terminal may generate an initial three-dimensional model based on the segmented image, and generate and display a three-dimensional model corresponding to the identifiable real object pattern based on the segmented image and the initial three-dimensional model.
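The embodiment does not prescribe a particular recognition or segmentation algorithm. As a minimal OpenCV-based sketch, one could assume the largest closed edge contour in the frame belongs to the identifiable live-action object graphic:

```python
import cv2
import numpy as np

def segment_object(frame_bgr):
    """Find the largest contour in the frame (standing in for the object
    recognition step), derive the edge contour information, and segment
    the enclosed region out of the live-action picture."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None, None                         # no identifiable graphic
    contour = max(contours, key=cv2.contourArea)  # edge contour information
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    segmented = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    return segmented, contour                     # segmented image + contour
```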
In one embodiment, the initial three-dimensional model may be a model having only three-dimensional contours, for example, the initial three-dimensional model may be 801 as shown in FIG. 8. The terminal may determine the size of the model contour surface in the initial three-dimensional model and adjust the size of the segmented image to the size of the model contour surface to obtain a model map, e.g., 802 as shown in fig. 8. Further, the terminal posts the model map to a model contour surface in the initial three-dimensional model to obtain a three-dimensional model corresponding to the identifiable live-action object graph.
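The map-fitting step then reduces to a resize, assuming the model contour surface's size is known in pixels; the sketch below continues from the segmentation above.

```python
import cv2

def make_model_map(segmented, contour_surface_size):
    """Adjust the segmented image to the size of the model contour surface
    so it can be pasted onto the initial three-dimensional model as the
    model map (as with 801/802 in the FIG. 8 example)."""
    w, h = contour_surface_size   # assumed to be measured from the model
    return cv2.resize(segmented, (w, h), interpolation=cv2.INTER_LINEAR)
```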
In the above embodiment, the segmented image is segmented from the live-action picture, so that the segmented image can be used as a reference to generate the corresponding three-dimensional model, thereby realizing the conversion from two dimensions to three dimensions. In addition, since a corresponding three-dimensional model can be generated from a single two-dimensional image, the amount of data for generating the three-dimensional model is reduced, computing resources for processing the data are saved, the generation efficiency of the three-dimensional model is improved, and the purpose of converting the live-action object graphic into a three-dimensional model is achieved.
In one embodiment, generating an initial three-dimensional model from the segmented image includes: generating two symmetrical model contours with preset intervals according to the segmented image; filling each model contour through a three-dimensional material to obtain two symmetrical model contour surfaces; connecting two symmetrical model contour surfaces through a three-dimensional material to obtain a three-dimensional model to be optimized; and optimizing the three-dimensional model to be optimized to obtain an initial three-dimensional model.
Specifically, the terminal may generate two symmetrical model contours with a preset interval from the segmented image. Wherein the model contour is adapted to the contour of the segmented image, e.g. the model contour corresponds to the contour of the segmented image. Further, the terminal can fill each model contour through preset three-dimensional materials respectively so as to obtain two symmetrical model contour surfaces. And connecting the two symmetrical model contour surfaces through the three-dimensional material by the terminal to obtain the three-dimensional model to be optimized. The terminal can directly take the three-dimensional model to be optimized as an initial three-dimensional model, and the terminal can optimize the three-dimensional model to be optimized to obtain the initial three-dimensional model. For example, the three-dimensional model to be optimized may have a cavity without the three-dimensional model material added thereto, and then the three-dimensional model may be obtained by adding the three-dimensional model material to the cavity.
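A minimal sketch of the contour generation, assuming the segmented contour lies in the XZ plane and the two symmetrical model contours are offset by half the preset interval along the Y axis, as the fission embodiment below details:

```python
import numpy as np

def split_contours(contour_xz, spacing):
    """Fission a 2-D contour into the two symmetrical model contours of the
    three-dimensional model to be optimized."""
    pts = np.asarray(contour_xz, dtype=float)            # (n, 2) points
    n = len(pts)
    first  = np.column_stack([pts[:, 0], np.full(n,  spacing / 2), pts[:, 1]])
    second = np.column_stack([pts[:, 0], np.full(n, -spacing / 2), pts[:, 1]])
    # each (first[i], second[i]) pair is later joined with three-dimensional
    # material, forming the model depth surface between the contour surfaces
    return first, second

first, second = split_contours([(0, 0), (1, 0), (1, 1), (0, 1)], spacing=0.2)
```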
In one embodiment, referring to fig. 12, when the identifiable real-scene graph is a circle, two symmetrical model contours generated by the circle may be 1201, a model contour surface obtained by filling the model contours with a three-dimensional material may be 1202, and a three-dimensional model to be optimized obtained by connecting the two symmetrical model contour surfaces with the three-dimensional material may be 1203. FIG. 12 illustrates a schematic of the generation of a three-dimensional model to be optimized in one embodiment.
In one embodiment, generating two symmetrical model contours with preset spacing from the segmented image includes: for each contour pixel point in the segmented image, converting the targeted contour pixel point from a pixel coordinate system to a world coordinate system to obtain a world coordinate point corresponding to the targeted contour pixel point; fissioning world coordinate points corresponding to the targeted contour pixel points according to the preset distance to obtain fissioning coordinate point pairs corresponding to the targeted contour pixel points; and generating two symmetrical model contours with preset intervals according to the fission coordinate point pairs corresponding to each contour pixel point.
Specifically, the contour of the segmented image may be composed of a plurality of contour pixel points, where a contour pixel point is a pixel point on the contour of the segmented image. For example, when the segmented image is a water bottle pattern, the plurality of contour pixel points on the segmented image together form the water bottle pattern. For each contour pixel point in the segmented image, the terminal converts the targeted contour pixel point from the pixel coordinate system to the image coordinate system to obtain an image coordinate point, and then converts the image coordinate point from the image coordinate system to the world coordinate system to obtain the world coordinate point corresponding to the targeted contour pixel point. The terminal obtains a preset interval, increases the Y-axis coordinate of the world coordinate point by half of the preset interval to obtain a first fission coordinate, decreases the Y-axis coordinate of the world coordinate point by half of the preset interval to obtain a second fission coordinate, and combines the first fission coordinate and the second fission coordinate into a fission coordinate point pair. The terminal then combines the fission coordinate point pairs corresponding to all contour pixel points to obtain two symmetrical model contours with the preset interval. For example, the terminal combines the first fission coordinates in each fission coordinate point pair to obtain a first model contour, and combines the second fission coordinates in each fission coordinate point pair to obtain a second model contour, the distance between the first model contour and the second model contour being the preset interval.
In one embodiment, the terminal may convert the targeted contour pixel point to a world coordinate point by the following formula:

Z_c · [u, v, 1]^T = K1 · K2 · [x, y, z, 1]^T, with K1 = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]] and K2 = [R | T]

wherein u and v are the coordinate values of the contour pixel point in the pixel coordinate system; f(x, y, 1) denotes the corresponding image coordinate point in the image coordinate system, with f_x and f_y its x and y focal-length components in K1; K1 is the camera intrinsic matrix; and K2 is the extrinsic matrix composed of the rotation matrix R and the translation vector T, both of which are empirical values. The inverse matrices of K1 and K2 can be calculated respectively and applied by matrix operation to obtain the x, y and z values in the world coordinate system. Then, the y value in the world coordinate system is increased by half of the preset interval to obtain the y value of the first fission coordinate, and decreased by half of the preset interval to obtain the y value of the second fission coordinate.
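A sketch of the back-projection and fission; the intrinsic matrix K1 and extrinsic matrix K2 below are placeholder values, and a fixed depth is assumed because a single image cannot recover true depth.

```python
import numpy as np

def pixel_to_world(u, v, K1, K2, depth=1.0):
    """Invert the projection formula above: pixel -> camera coordinates via
    the inverse of K1, then camera -> world via the inverse of R in K2."""
    uv1 = np.array([u, v, 1.0]) * depth        # Z_c * [u, v, 1]^T
    ray_cam = np.linalg.inv(K1) @ uv1          # = R @ X_world + T
    R, T = K2[:, :3], K2[:, 3]
    return np.linalg.inv(R) @ (ray_cam - T)    # world coordinate point

def fission(world_pt, spacing):
    """Split a world coordinate point into its fission coordinate point pair,
    offset by +/- half the preset interval along the Y axis."""
    x, y, z = world_pt
    return (x, y + spacing / 2, z), (x, y - spacing / 2, z)

K1 = np.array([[800.0, 0.0, 320.0],            # assumed intrinsics
               [0.0, 800.0, 240.0],
               [0.0,   0.0,   1.0]])
K2 = np.hstack([np.eye(3), np.zeros((3, 1))])  # assumed [R | T]
pair = fission(pixel_to_world(100, 120, K1, K2), spacing=0.2)
```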
In this embodiment, the world coordinate points are fissioned according to the preset interval, and the fission coordinate point pairs can be obtained rapidly through the fission process, thereby improving the generation efficiency of the model contours.
In one embodiment, the two symmetrical model contour surfaces include a first model contour surface and a second model contour surface; connecting two symmetrical model contour surfaces through three-dimensional materials to obtain a three-dimensional model to be optimized, wherein the method comprises the following steps: for each first edge contour point on the first model contour surface, determining a target second edge contour point on the second model contour surface, which is symmetrical to the first edge contour point aimed at; and respectively connecting each first edge contour point with a corresponding target second edge contour point through the three-dimensional material to obtain a three-dimensional model to be optimized.
Specifically, when the first model contour is filled with the three-dimensional material, a first model contour surface is obtained, and when the second model contour is filled with the three-dimensional material, a second model contour surface is obtained. The contour of the first model contour surface may be composed of first edge contour points, and the contour of the second model contour surface may be composed of second edge contour points, where an edge contour point is a pixel point on the contour of the corresponding model contour surface. For each first edge contour point on the first model contour surface, the terminal determines the target second edge contour point on the second model contour surface that is symmetrical to the targeted first edge contour point, and connects the targeted first edge contour point with the symmetrical target second edge contour point through the three-dimensional material, thereby connecting the two model contour surfaces and obtaining the three-dimensional model to be optimized. For example, at the preset interval p, the coordinates of a first edge contour point may be (x1, y1 - p/2, z1), and the coordinates of the second edge contour point symmetrical to it may be (x1, y1 + p/2, z1), so the terminal connects (x1, y1 - p/2, z1) with (x1, y1 + p/2, z1) through the preset three-dimensional material. It will be readily appreciated that when each first edge contour point is connected to its symmetrical second edge contour point, a model depth surface that characterizes the depth of the three-dimensional model is formed.
In one embodiment, the terminal may connect the first edge contour point with the symmetrical second edge contour point by the following formula:

H(M(x1, y1, z1), N(x2, y2, z2)) = Connect(M(x1, y1, z1), N(x2, y2, z2))

wherein M(x1, y1, z1) represents a first edge contour point, N(x2, y2, z2) represents the symmetrical second edge contour point, and Connect represents connecting the first edge contour point and the second edge contour point through the three-dimensional material.
In the above embodiment, each pair of symmetrical edge contour points is connected through the three-dimensional material, so that every edge contour point is covered with the three-dimensional material.
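The Connect step can be sketched as a triangulated strip between the two contour surfaces, assuming both carry the same number of edge contour points in matching order:

```python
import numpy as np

def connect_surfaces(first, second):
    """Join each first edge contour point to its symmetrical second edge
    contour point, triangulating the band in between; the resulting faces,
    covered with three-dimensional material, form the model depth surface."""
    first, second = np.asarray(first), np.asarray(second)
    n, faces = len(first), []
    for i in range(n):
        j = (i + 1) % n                     # wrap around the closed contour
        # two triangles per quad (first[i], first[j], second[j], second[i])
        faces.append((first[i], first[j], second[j]))
        faces.append((first[i], second[j], second[i]))
    return faces
```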
In one embodiment, optimizing the three-dimensional model to be optimized to obtain an initial three-dimensional model includes: scanning the three-dimensional model to be optimized to determine a cavity on the three-dimensional model to be optimized; filling the cavity by the three-dimensional material to obtain an initial three-dimensional model.
Specifically, in the process of converting a two-dimensional graphic into a three-dimensional model, there is a probability that some edge gaps remain unclosed after the corresponding edge contour points are connected, so the unclosed cavities in the three-dimensional model to be optimized need to be repaired. The terminal is provided with a three-dimensional cavity repairing device, which can scan and inspect the outer surface of the three-dimensional model to be optimized to determine the cavities not covered with the three-dimensional material. When a cavity on the three-dimensional model to be optimized is determined, the terminal can fill the cavity with the three-dimensional material to obtain the initial three-dimensional model. A cavity refers to a coordinate point on the three-dimensional model to be optimized that is not covered with the three-dimensional material.
In this embodiment, by filling the three-dimensional material into the cavity, each point on the initial three-dimensional model is covered with the corresponding three-dimensional material, so that the model map is covered on the model contour surface of the initial three-dimensional model, and the model map and the model contour surface are more adapted.
In one embodiment, scanning the three-dimensional model to be optimized to determine a cavity on the three-dimensional model to be optimized includes: scanning the three-dimensional model to be optimized to obtain a three-dimensional space mask of the three-dimensional model to be optimized, the three-dimensional space mask comprising a plurality of spatial mask points; for each spatial mask point in the plurality of spatial mask points, detecting the targeted spatial mask point to determine whether a three-dimensional material has been added to it; and when no three-dimensional material has been added to the targeted spatial mask point, determining that a cavity exists at the position corresponding to the targeted spatial mask point on the three-dimensional model to be optimized.
Specifically, the three-dimensional cavity repairing device can scan the three-dimensional model to be optimized to obtain a three-dimensional space mask of the three-dimensional model to be optimized, and determine the cavity on the three-dimensional model to be optimized by determining whether each space mask point on the three-dimensional space mask is covered with a three-dimensional material. The three-dimensional space mask may include a plurality of space mask points, where each space mask point is obtained by scanning a three-dimensional model to be optimized, for example, the three-dimensional cavity repairing device may scan the three-dimensional model to be optimized to obtain a point on an outer surface of the three-dimensional model to be optimized. For convenience of description, points on the outer surface will be referred to as outer surface points hereinafter. The three-dimensional cavity repairing device can determine the position coordinates of the obtained outer surface points in the world coordinate system and the material information of the outer surface points, and integrate the position coordinates and the material information to obtain the space mask points. As will be readily appreciated, the three-dimensional hole repairer scans each external surface point on the external surface of the three-dimensional model to be optimized and generates a spatial mask point corresponding to each external surface point, and synthesizes the spatial mask points to obtain a three-dimensional spatial mask.
Because the three-dimensional space mask can be composed of a plurality of space mask points, for each space mask point in the plurality of space mask points, the three-dimensional cavity repairing device can determine whether the space mask point is covered with a three-dimensional material or not according to the material information carried on the space mask point, and if the space mask point is not covered with the three-dimensional material, the three-dimensional cavity repairing device determines that a cavity exists at a position corresponding to the space mask point on the three-dimensional model to be optimized. Because the spatial mask points are obtained by integrating the position coordinates and the material information, the position coordinates carried by the aimed spatial mask points are the positions corresponding to the aimed spatial mask points on the three-dimensional model to be optimized, namely, the position coordinates carried by the aimed spatial mask points are the positions with holes on the three-dimensional model to be optimized.
In one embodiment, the terminal may fill the cavities with the three-dimensional material according to the following procedure:

H = max_boundary(w); for each spatial mask point q_i in H, i = 1, ..., n: if no three-dimensional material has been added at the outer surface point corresponding to q_i, fill q_i with the three-dimensional material; otherwise, do nothing.

wherein w is the three-dimensional model to be optimized; max_boundary(w) acquires the three-dimensional space mask H of the three-dimensional model w to be optimized; n is the total number of outer surface points; and q_i is a three-dimensional coordinate point on the three-dimensional model to be optimized. After the traversal is completed, a complete initial three-dimensional model is obtained.
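The traversal can be sketched with a minimal spatial mask point record; the material field below is an assumed stand-in for the embodiment's material information.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SpatialMaskPoint:
    """One scanned outer surface point: position plus material information."""
    position: Tuple[float, float, float]
    material: Optional[str]     # None means no three-dimensional material yet

def repair_holes(mask_points, fill_material="three_dimensional_material"):
    """Traverse every spatial mask point of the three-dimensional space mask;
    where no material has been added, a cavity exists at that position, so
    fill it, yielding the complete initial three-dimensional model."""
    cavities = []
    for p in mask_points:
        if p.material is None:              # cavity detected
            cavities.append(p.position)
            p.material = fill_material      # fill with three-dimensional material
    return cavities                         # positions that were repaired
```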
In the embodiment, by traversing each space mask point, whether three-dimensional materials are added to each external surface point on the three-dimensional model to be optimized or not can be determined based on the traversing process, so that the comprehensive detection of the three-dimensional model to be optimized is realized, and the detection accuracy is further improved.
It is to be easily understood that the above-described generation process of the three-dimensional model may also be performed by a server.
In one embodiment, referring to fig. 13, fig. 13 illustrates a three-dimensional model display method in one specific embodiment, the three-dimensional model display method includes:
in step 1302, the terminal displays a live-action picture of the real scene, and changes the start recognition control to stop recognition control for display in response to the trigger operation for the start recognition control.
In step 1304, when the real scene includes the identifiable real scene object graph, the terminal segments the identifiable real scene object graph from the real scene to obtain a segmented image.
Step 1306, the terminal generates two symmetrical model contours with preset intervals according to the segmented image; the model contour is adapted to the image contour of the segmented image.
Step 1308, the terminal fills each model contour through a three-dimensional material to obtain two symmetrical model contour surfaces; the two symmetrical model contour surfaces include a first model contour surface and a second model contour surface.
Step 1310, for each first edge contour point on the first model contour surface, determining a target second edge contour point on the second model contour surface that is symmetrical to the first edge contour point; and respectively connecting each first edge contour point with a corresponding symmetrical target second edge contour point through the three-dimensional material to obtain a three-dimensional model to be optimized.
Step 1312, the terminal scans the three-dimensional model to be optimized to determine a cavity on the three-dimensional model to be optimized; filling the cavity by the three-dimensional material to obtain an initial three-dimensional model.
In step 1314, the terminal generates and displays a three-dimensional model corresponding to the identifiable real-scene object graph according to the segmented image and the initial three-dimensional model, and changes the identification stopping control into the identification starting control for display.
In step 1316, the terminal displays a three-dimensional animation in the live-action picture, the three-dimensional animation including a moving three-dimensional model.
Step 1318, the terminal responds to the triggering operation for the model map switching control, and displays a model map list; and in response to the selection operation for the model map list, switches the model map covering the model contour surface to the target model map selected by the selection operation for display.
Step 1320, the terminal responds to the triggering operation of the three-dimensional animation switching control to display a motion action list; and responding to the selection operation for the motion action list, and switching the three-dimensional model in the three-dimensional animation, which moves according to the motion mode matched with the pattern of the real object, into the three-dimensional model which moves according to the target motion mode selected by the selection operation.
Step 1322, the terminal responds to the export operation for the three-dimensional model to obtain the three-dimensional material corresponding to the three-dimensional model; responds to the triggering operation for the model combination control to display a three-dimensional material list, the three-dimensional material list comprising a plurality of three-dimensional materials; and responds to the selection operation for the three-dimensional material list to display the corresponding combined three-dimensional model, the combined three-dimensional model comprising three-dimensional model components corresponding to the plurality of target three-dimensional materials selected by the selection operation.
Step 1324, the terminal responds to the sharing operation for the combined three-dimensional model, and displays a receiver list; responding to the selection operation for the receiver list, and sending a model poster corresponding to the combined three-dimensional model to the target receiver selected by the selection operation; and the sent model poster is used for displaying the corresponding combined three-dimensional model when the target receiver triggers the model poster.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
The application also provides an application scene, which applies the three-dimensional model display method. Specifically, the application of the three-dimensional model display method in the application scene is as follows:
referring to fig. 14, a user may scan a real object in a real scene through an image capturing device in the terminal, so that a terminal screen may display a real picture including a graphic of the real object. The terminal acquires an identifiable real-scene object graph in a real-scene picture by utilizing an object identification technology, and performs object segmentation after acquiring a rectangular frame of the real-scene object graph to obtain object edge contour information. The terminal converts two-dimensional edge pixel coordinate points into three-dimensional coordinate points by utilizing object edge contour information, the thickness of the three-dimensional model is set to be M, and two model contours with M intervals are generated in a three-dimensional space according to the M. And filling the two model contours through the three-dimensional material by the terminal to obtain two model contour surfaces, and linking the three-dimensional material with the corresponding edge three-dimensional coordinate points on the model contour surfaces to obtain the three-dimensional model to be optimized. The terminal can also detect along the three-dimensional model to be optimized through a three-dimensional cavity repairing device, and repair is performed when the existence of the cavity is detected, so that an initial three-dimensional model is obtained. And the terminal covers the identifiable real-scene object graph segmented from the real-scene picture on the model contour surface of the initial three-dimensional model to obtain the three-dimensional model and displays the three-dimensional model. FIG. 14 illustrates a schematic of a generation architecture of a three-dimensional model in one embodiment.
The application further provides an application scene, and the application scene applies the three-dimensional model display method. Specifically, the application of the three-dimensional model display method in the application scene is as follows:
A user may wear an extended reality device and collect a live-action picture of the real scene through the extended reality device; when the live-action picture includes an identifiable live-action object graphic, the three-dimensional model corresponding to the identifiable live-action object graphic is superimposed and displayed in the live-action picture, thereby achieving the purpose of combining the virtual with the real.
The above application scenario is only illustrative, and it is to be understood that the application of the three-dimensional model display method provided by the embodiments of the present application is not limited to the above scenario.
Based on the same inventive concept, the embodiment of the application also provides a three-dimensional model display device for realizing the three-dimensional model display method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of the three-dimensional model display device provided below may be referred to the limitation of the three-dimensional model display method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 15, there is provided a three-dimensional model display apparatus 1500, comprising: a picture display module 1502, a model display module 1504, and a model motion module 1506, wherein:
the screen display module 1502 is configured to, in response to a triggering operation for the image acquisition control, invoke the image acquisition device to acquire a real scene, and display a real scene of the acquired real scene.
A model display module 1504 for displaying a three-dimensional model corresponding to the identifiable real object graphic in the real picture when the real picture includes the identifiable real object graphic; the three-dimensional model is provided with a model contour surface and a model depth surface, wherein the contour of the model contour surface is matched with the object contour of the real object graph, and the model depth surface is matched with the contour surface to represent the depth of the three-dimensional model.
The model movement module 1506 is configured to display movement of the three-dimensional model in the live-action picture.
In one embodiment, the model display module 1504 is further configured to, when the real-scene picture includes the identifiable real-scene object graphic, segment the identifiable real-scene object graphic from the real-scene picture to obtain a segmented image; and generate an initial three-dimensional model according to the segmented image, and generate and display a three-dimensional model corresponding to the identifiable real-scene object graphic according to the segmented image and the initial three-dimensional model.
In one embodiment, the model display module 1504 is further configured to generate two symmetrical model contours with preset pitches from the segmented image; the model contour is matched with the image contour of the segmented image; filling each model contour through a three-dimensional material to obtain two symmetrical model contour surfaces; connecting two symmetrical model contour surfaces through a three-dimensional material to obtain a three-dimensional model to be optimized; and optimizing the three-dimensional model to be optimized to obtain an initial three-dimensional model.
In one embodiment, the model display module 1504 is further configured to, for each contour pixel point in the segmented image, convert the contour pixel point from a pixel coordinate system to a world coordinate system, and obtain a world coordinate point corresponding to the contour pixel point; according to the preset interval, the world coordinate points corresponding to the targeted contour pixel points are subjected to fission, and fission coordinate point pairs corresponding to the targeted contour pixel points are obtained; and generating two symmetrical model contours with preset intervals according to the fission coordinate point pairs corresponding to each contour pixel point.
In one embodiment, the two symmetrical model contour surfaces include a first model contour surface and a second model contour surface; the model display module 1504 is further configured to determine, for each first edge contour point on the first model contour surface, a target second edge contour point on the second model contour surface that is symmetrical to the first edge contour point for which it is intended; and respectively connecting each first edge contour point with a corresponding symmetrical target second edge contour point through the three-dimensional material to obtain a three-dimensional model to be optimized.
In one embodiment, the model display module 1504 is further configured to scan the three-dimensional model to be optimized to determine a void on the three-dimensional model to be optimized; filling the cavity by the three-dimensional material to obtain an initial three-dimensional model.
In one embodiment, the model display module 1504 is further configured to scan the three-dimensional model to be optimized to obtain a three-dimensional space mask of the three-dimensional model to be optimized, the three-dimensional space mask comprising a plurality of spatial mask points; for each spatial mask point in the plurality of spatial mask points, detect the targeted spatial mask point to determine whether a three-dimensional material has been added to it; and when no three-dimensional material has been added to the targeted spatial mask point, determine that a cavity exists at the position corresponding to the targeted spatial mask point on the three-dimensional model to be optimized.
In one embodiment, the live-action screen displays a start recognition control; the model display module 1504 is further configured to change the start recognition control to the stop recognition control for display in response to a trigger operation for the start recognition control; the stop recognition control is used for stopping recognition of the live-action object graph in the live-action picture; when the identifiable real-scene object graph exists in the real-scene picture, displaying the corresponding three-dimensional model, and changing the identification stopping control into the identification starting control for displaying.
In one embodiment, the model contour surface is covered with a model map; the pattern in the model map is adapted to the pattern in the recognizable live-action object graphic.
In one embodiment, the model display module 1504 is further configured to display a model map list in response to a trigger operation for a model map switching control; and in response to a selection operation for the model map list, switch the model map covering the model contour surface to the target model map selected by the selection operation for display.
In one embodiment, the model motion module 1506 is further configured to display a three-dimensional animation in the live-action screen; the three-dimensional animation includes a three-dimensional model that moves in a motion pattern that is compatible with the recognizable live-action object graphics.
In one embodiment, a three-dimensional animation switching control is displayed in the live-action picture; the model motion module 1506 is further configured to display a motion action list in response to a trigger operation for the three-dimensional animation switching control; and responding to the selection operation for the motion action list, switching the three-dimensional model in the three-dimensional animation, which moves according to the motion mode matched with the figure of the identifiable real-scene object, into the three-dimensional model which moves according to the target motion mode selected by the selection operation.
In one embodiment, the live-action picture includes a plurality of collection graphics; the model display module 1504 is further configured to, when a to-be-lit collection graphic adapted to the identifiable live-action object graphic exists among the plurality of collection graphics, light up and display that collection graphic; and when each of the plurality of collection graphics has been lit, display preset information.
In one embodiment, the live-action picture is a picture collected by the image acquisition device for the real scene; the model display module 1504 is further configured to cancel display of the three-dimensional model corresponding to the identifiable live-action object graphic when the real scene collected by the image acquisition device changes from including the identifiable live-action object graphic to no longer including it.
In one embodiment, the three-dimensional model display device 1500 further includes a combination module, configured to obtain a three-dimensional material corresponding to the three-dimensional model in response to an export operation for the three-dimensional model, and send the three-dimensional material to the server; the three-dimensional material is used for triggering the server to generate a combined three-dimensional model based on the received three-dimensional material.
In one embodiment, a model combination control is displayed in the live-action picture; the three-dimensional model display device 1500 further includes a combination module configured to display a three-dimensional material list in response to a trigger operation for the model combination control, the three-dimensional material list comprising a plurality of three-dimensional materials, each obtained in response to an export operation on the three-dimensional model in the corresponding live-action picture; and to display the corresponding combined three-dimensional model in response to a selection operation for the three-dimensional material list, the combined three-dimensional model comprising three-dimensional model components corresponding to the plurality of target three-dimensional materials selected by the selection operation.
In one embodiment, the combining module is further configured to determine, in response to a selection operation for the three-dimensional material list, a plurality of target three-dimensional materials selected by the selection operation; respectively carrying out model reconstruction processing on each target three-dimensional material to obtain a three-dimensional model component corresponding to each target three-dimensional material; and responding to the editing operation of the three-dimensional model component, obtaining a combined three-dimensional model and displaying the combined three-dimensional model.
In one embodiment, the editing operation includes at least one of a spatial position moving operation, a rotation angle adjusting operation, or a size scaling operation; a spatial position moving operation for adjusting a spatial position of the three-dimensional model component; rotation angle adjustment operation for adjusting the rotation angle of the three-dimensional model component; and a size scaling operation for adjusting the size of the three-dimensional model component.
In one embodiment, the combining module is further configured to display a recipient list in response to a sharing operation for the combined three-dimensional model; responding to the selection operation for the receiver list, and sending a model poster corresponding to the combined three-dimensional model to the target receiver selected by the selection operation; and the sent model poster is used for displaying the corresponding combined three-dimensional model when the target receiver triggers the model poster.
The respective modules in the three-dimensional model display device described above may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure may be as shown in fig. 16. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal, where the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by the processor to implement a three-dimensional model display method. The display unit of the computer device is used for forming a visual picture and may be a display screen, a projection device or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a track ball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
It will be appreciated by those skilled in the art that the structure shown in FIG. 16 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the method embodiments described above when the computer program is executed by the processor.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processor referred to in the embodiments provided herein may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, as long as a combination of technical features contains no contradiction, it should be regarded as falling within the scope of this specification.
The foregoing examples represent only a few embodiments of the present application, and while they are described in some detail, they are not to be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (20)

1. A method for displaying a three-dimensional model, the method comprising:
in response to a triggering operation on an image acquisition control, invoking an image acquisition device to acquire a real scene, and displaying a live-action picture of the acquired real scene;
when the live-action picture comprises an identifiable live-action object graphic, displaying, in the live-action picture, a three-dimensional model corresponding to the identifiable live-action object graphic, wherein the three-dimensional model has a model contour surface and a model depth surface, a contour of the model contour surface matches an object contour of the live-action object graphic, and the model depth surface cooperates with the model contour surface to represent a depth of the three-dimensional model;
and displaying the three-dimensional model moving in the live-action picture.
2. The method of claim 1, wherein, when the live-action picture comprises an identifiable live-action object graphic, the displaying in the live-action picture a three-dimensional model corresponding to the identifiable live-action object graphic comprises:
when the live-action picture comprises an identifiable live-action object graphic, segmenting the identifiable live-action object graphic from the live-action picture to obtain a segmented image;
and generating an initial three-dimensional model according to the segmented image, generating a three-dimensional model corresponding to the identifiable live-action object graphic according to the segmented image and the initial three-dimensional model, and displaying the three-dimensional model.
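Purely as an editor's illustration of the segmentation step in claim 2 — not the claimed implementation — a contour-based cut-out might be prototyped as below. The Otsu threshold, the minimum-area cutoff, and the assumption of a dark drawing on a light background are all hypothetical choices.

```python
import cv2
import numpy as np

def segment_object_graphic(frame_bgr: np.ndarray) -> np.ndarray | None:
    """Cut an identifiable object graphic out of a live-action frame.

    Returns an RGBA image in which pixels outside the object contour are
    fully transparent, or None if no plausible contour is found.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding separates a dark drawing from a light background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < 500:  # hypothetical "identifiable" threshold
        return None
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    segmented = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
    segmented[mask == 0] = 0  # make everything outside the object transparent
    return segmented
```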
3. The method of claim 2, wherein the generating an initial three-dimensional model according to the segmented image comprises:
generating two symmetrical model contours with a preset spacing according to the segmented image, wherein each model contour matches an image contour of the segmented image;
filling each model contour with a three-dimensional material to obtain two symmetrical model contour surfaces;
connecting the two symmetrical model contour surfaces through the three-dimensional material to obtain a three-dimensional model to be optimized;
and optimizing the three-dimensional model to be optimized to obtain the initial three-dimensional model.
4. The method of claim 3, wherein the generating two symmetrical model contours with a preset spacing according to the segmented image comprises:
for each contour pixel point in the segmented image, converting the targeted contour pixel point from a pixel coordinate system to a world coordinate system to obtain a world coordinate point corresponding to the targeted contour pixel point;
splitting, by fission according to the preset spacing, the world coordinate point corresponding to the targeted contour pixel point to obtain a fission coordinate point pair corresponding to the targeted contour pixel point;
and generating the two symmetrical model contours with the preset spacing according to the fission coordinate point pairs corresponding to the contour pixel points.
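As a hedged illustration of claim 4, the sketch below back-projects contour pixels through an assumed pinhole camera and then "fissions" each world point into a symmetric pair. The intrinsic matrix K, the pose matrix cam_to_world, the fixed plane depth, and the choice of the viewing axis as the fission direction are the editor's assumptions, not details stated in the claim.

```python
import numpy as np

def fission_contour(points_px: np.ndarray, K: np.ndarray, cam_to_world: np.ndarray,
                    depth: float, spacing: float) -> tuple[np.ndarray, np.ndarray]:
    """Back-project 2D contour pixels to world space, then split each world
    point into a symmetric pair offset by +/- spacing/2 along the camera's
    viewing axis (a stand-in for the 'fission' of claim 4).

    points_px:    (N, 2) pixel coordinates of the contour
    K:            (3, 3) camera intrinsic matrix
    cam_to_world: (4, 4) camera pose
    """
    ones = np.ones((points_px.shape[0], 1))
    rays_cam = (np.linalg.inv(K) @ np.hstack([points_px, ones]).T).T  # (N, 3)
    pts_cam = rays_cam * depth                   # place the contour at an assumed depth
    pts_cam_h = np.hstack([pts_cam, ones])       # homogeneous coordinates
    pts_world = (cam_to_world @ pts_cam_h.T).T[:, :3]
    # Viewing axis in world space: the camera's +Z column of the pose matrix.
    axis = cam_to_world[:3, 2]
    front = pts_world + axis * (spacing / 2.0)
    back = pts_world - axis * (spacing / 2.0)
    return front, back   # the two symmetrical model contours
```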
5. The method of claim 3, wherein the two symmetrical model contour surfaces comprise a first model contour surface and a second model contour surface, and the connecting the two symmetrical model contour surfaces through the three-dimensional material to obtain a three-dimensional model to be optimized comprises:
for each first edge contour point on the first model contour surface, determining a target second edge contour point on the second model contour surface that is symmetrical to the targeted first edge contour point;
and connecting each first edge contour point with the corresponding symmetrical target second edge contour point through the three-dimensional material, to obtain the three-dimensional model to be optimized.
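The connection step of claim 5 can be pictured as bridging corresponding edge contour points into side-wall triangles (the model depth surface). A minimal sketch, assuming the two contour rings share vertex ordering so that symmetric points share an index:

```python
import numpy as np

def bridge_contours(front: np.ndarray, back: np.ndarray) -> np.ndarray:
    """Connect two symmetrical contour rings into a triangulated side wall.

    front/back are (N, 3) arrays whose i-th entries are mutually
    symmetrical edge contour points.
    """
    n = front.shape[0]
    triangles = []
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the ring
        f0, f1 = front[i], front[j]
        b0, b1 = back[i], back[j]
        # Each quad (f0, f1, b1, b0) becomes two triangles.
        triangles.append([f0, f1, b1])
        triangles.append([f0, b1, b0])
    return np.asarray(triangles)  # (2N, 3, 3)
```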
6. The method of claim 3, wherein the optimizing the three-dimensional model to be optimized to obtain the initial three-dimensional model comprises:
scanning the three-dimensional model to be optimized to determine a hole on the three-dimensional model to be optimized;
and filling the hole with the three-dimensional material to obtain the initial three-dimensional model.
7. The method of claim 6, wherein the scanning the three-dimensional model to be optimized to determine a hole on the three-dimensional model to be optimized comprises:
scanning the three-dimensional model to be optimized to obtain a three-dimensional space mask of the three-dimensional model to be optimized, wherein the three-dimensional space mask comprises a plurality of spatial mask points;
for each spatial mask point of the plurality of spatial mask points, detecting the targeted spatial mask point to determine whether the three-dimensional material has been added at the targeted spatial mask point;
and when the three-dimensional material is not added at the targeted spatial mask point, determining that a hole exists at the corresponding position on the three-dimensional model to be optimized.
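Claims 6 and 7 read naturally as an occupancy test over a discretized space mask. The following sketch models the three-dimensional space mask and the added material as boolean voxel grids; the voxel representation is the editor's assumption, not something the claims prescribe.

```python
import numpy as np

def find_and_fill_holes(surface_mask: np.ndarray, filled: np.ndarray) -> np.ndarray:
    """surface_mask: (X, Y, Z) bool grid of spatial mask points the model
    surface should occupy. filled: same-shaped bool grid marking where
    three-dimensional material has actually been added.

    Positions expected by the mask but missing material are treated as
    holes; the function returns the repaired occupancy grid.
    """
    holes = surface_mask & ~filled   # material expected but none added
    if holes.any():
        print(f"filling {int(holes.sum())} hole voxels")
    return filled | holes            # add material at every detected hole
```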
8. The method of claim 1, wherein the model contour surface is covered with a model map, and a pattern in the model map is adapted to a pattern in the identifiable live-action object graphic.
9. The method of claim 1, wherein a model map switching control is displayed in the live-action picture, and the model contour surface is covered with a model map; the method further comprises:
in response to a triggering operation on the model map switching control, displaying a model map list;
and in response to a selection operation on the model map list, switching the model map covering the model contour surface to a target model map selected by the selection operation for display.
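The map-switching flow of claim 9 is a simple list-then-select interaction. A schematic sketch with hypothetical ModelSurface and MapSwitcher types — no real rendering-engine API is implied:

```python
from dataclasses import dataclass, field

@dataclass
class ModelSurface:
    """Hypothetical stand-in for the model contour surface."""
    current_map: str = "default.png"

@dataclass
class MapSwitcher:
    surface: ModelSurface
    map_list: list[str] = field(
        default_factory=lambda: ["wood.png", "metal.png", "paper.png"])

    def on_switch_control_triggered(self) -> list[str]:
        # Triggering the switching control surfaces the model map list.
        return self.map_list

    def on_map_selected(self, index: int) -> None:
        # Selecting an entry swaps the map covering the contour surface.
        self.surface.current_map = self.map_list[index]

# Usage: switcher = MapSwitcher(ModelSurface()); switcher.on_map_selected(1)
# leaves the contour surface covered with "metal.png".
```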
10. The method of claim 1, wherein the displaying the three-dimensional model moving in the live-action picture comprises:
displaying a three-dimensional animation in the live-action picture, wherein the three-dimensional animation comprises the three-dimensional model moving in a motion mode matched with the identifiable live-action object graphic.
11. The method of claim 10, wherein a three-dimensional animation switching control is displayed in the live-action picture; the method further comprises:
in response to a triggering operation on the three-dimensional animation switching control, displaying a motion action list;
and in response to a selection operation on the motion action list, switching the three-dimensional model that moves in the motion mode matched with the identifiable live-action object graphic to a three-dimensional model that moves in a target motion mode selected by the selection operation.
12. The method of claim 1, wherein the live-action picture is a picture acquired by the image acquisition device for the real scene; the method further comprises:
when the live-action picture acquired in real time by the image acquisition device changes from including the identifiable live-action object graphic to not including the identifiable live-action object graphic, canceling display of the three-dimensional model corresponding to the identifiable live-action object graphic.
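Claim 12's cancel-display behavior amounts to a per-frame visibility rule. A trivial sketch, with the detection result assumed to come from an upstream recognizer:

```python
def update_model_visibility(object_detected: bool, model_visible: bool) -> bool:
    """Per-frame rule: hide the model as soon as the identifiable object
    graphic drops out of the acquired picture, and show it again when the
    graphic is re-detected."""
    if model_visible and not object_detected:
        return False   # cancel display of the three-dimensional model
    if not model_visible and object_detected:
        return True    # (re)display on re-detection
    return model_visible
```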
13. The method of claim 1, wherein a model combination control is displayed in the live-action picture; the method further comprises:
in response to a triggering operation on the model combination control, displaying a three-dimensional material list, wherein the three-dimensional material list comprises a plurality of three-dimensional materials, each obtained in response to an export operation on a three-dimensional model in a corresponding live-action picture;
and in response to a selection operation on the three-dimensional material list, displaying a corresponding combined three-dimensional model, wherein the combined three-dimensional model comprises three-dimensional model components corresponding to a plurality of target three-dimensional materials selected by the selection operation.
14. The method of claim 13, wherein the displaying a corresponding combined three-dimensional model in response to a selection operation on the three-dimensional material list comprises:
in response to the selection operation on the three-dimensional material list, determining the plurality of target three-dimensional materials selected by the selection operation;
performing model reconstruction processing on each target three-dimensional material to obtain a three-dimensional model component corresponding to each target three-dimensional material;
and in response to an editing operation on the three-dimensional model components, obtaining and displaying the combined three-dimensional model.
15. The method of claim 14, wherein the editing operation comprises at least one of a spatial position movement operation, a rotation angle adjustment operation, or a size scaling operation; the spatial position movement operation is used to adjust a spatial position of a three-dimensional model component, the rotation angle adjustment operation is used to adjust a rotation angle of a three-dimensional model component, and the size scaling operation is used to adjust a size of a three-dimensional model component.
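The three editing operations of claim 15 are conventionally composed as a single affine transform per model component. A NumPy sketch, restricted to Z-axis rotation and uniform scale for brevity (both restrictions are the editor's simplifications, not limits of the claim):

```python
import numpy as np

def component_transform(translation, angle_z_rad, scale) -> np.ndarray:
    """Compose move, rotate, and scale into one 4x4 matrix applied to a
    three-dimensional model component. Rotation is about Z only."""
    tx, ty, tz = translation
    c, s = np.cos(angle_z_rad), np.sin(angle_z_rad)
    T = np.array([[1, 0, 0, tx], [0, 1, 0, ty],
                  [0, 0, 1, tz], [0, 0, 0, 1]], dtype=float)
    R = np.array([[c, -s, 0, 0], [s, c, 0, 0],
                  [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
    S = np.diag([scale, scale, scale, 1.0])
    return T @ R @ S  # scale first, then rotate, then move

# Applying it to the (N, 3) vertices of a component:
# verts_h = np.hstack([verts, np.ones((len(verts), 1))])
# verts_out = (component_transform((0, 1, 0), np.pi / 4, 2.0) @ verts_h.T).T[:, :3]
```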
16. The method of claim 13, wherein the method further comprises:
in response to a sharing operation on the combined three-dimensional model, displaying a receiver list;
and in response to a selection operation on the receiver list, transmitting a model poster corresponding to the combined three-dimensional model to a target receiver selected by the selection operation, wherein the transmitted model poster is used to display the corresponding combined three-dimensional model when the target receiver triggers the model poster.
17. A three-dimensional model display device, the device comprising:
a picture display module, configured to invoke an image acquisition device to acquire a real scene in response to a triggering operation on an image acquisition control, and to display a live-action picture of the acquired real scene;
a model display module, configured to display, when the live-action picture comprises an identifiable live-action object graphic, a three-dimensional model corresponding to the identifiable live-action object graphic in the live-action picture, wherein the three-dimensional model has a model contour surface and a model depth surface, a contour of the model contour surface matches an object contour of the live-action object graphic, and the model depth surface cooperates with the model contour surface to represent a depth of the three-dimensional model;
and a model motion module, configured to display the three-dimensional model moving in the live-action picture.
18. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 16 when executing the computer program.
19. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 16.
20. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 16.
CN202310316621.XA 2023-03-28 2023-03-28 Three-dimensional model display method, three-dimensional model display device, computer equipment and storage medium Pending CN116977545A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310316621.XA | 2023-03-28 | 2023-03-28 | Three-dimensional model display method, three-dimensional model display device, computer equipment and storage medium

Publications (1)

Publication Number | Publication Date
CN116977545A | 2023-10-31

Family ID: 88473742

Family Applications (1)

Application Number | Status | Publication | Title
CN202310316621.XA | Pending | CN116977545A (en) | Three-dimensional model display method, three-dimensional model display device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication