WO2024156180A1 - Model appearance updating method and apparatus, and computing device - Google Patents


Info

Publication number: WO2024156180A1
Application number: PCT/CN2023/117340
Authority: WIPO (PCT)
Prior art keywords: model, coordinate, texture map, appearance, target position
Other languages: English (en), Chinese (zh)
Inventors: 冯浩霖, 郑宇航, 罗月花
Original Assignee: 超聚变数字技术有限公司
Application filed by 超聚变数字技术有限公司
Publication of WO2024156180A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/20: Design optimisation, verification or simulation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures

Definitions

  • the present application relates to the field of computer technology, and in particular to a model appearance updating method, device and computing equipment.
  • a three-dimensional model of the physical device can be created and displayed on the computer to simulate the three-dimensional model of the physical device.
  • the remodeling process includes modifying the model skeleton, adding model textures, and adding code. For example, to add an indicator-light color block to the indicator light area simulated on the three-dimensional model of a simulated hard disk that has already been created and displayed on a web page, the three-dimensional model of the simulated hard disk must be remodeled through the modeling software and then republished. This makes the steps for updating the model appearance cumbersome.
  • the embodiments of the present application provide a model appearance updating method, apparatus and computing device, which simplify the operation of updating the appearance of a three-dimensional model, thereby improving the efficiency of model appearance updating.
  • the present application provides a method for updating a model appearance, the method comprising: displaying a first model, the first model being a three-dimensional model having a first texture appearance generated by rendering a first texture map; in response to receiving a trigger operation on a first position, determining a first coordinate of a target position, wherein the first position is a corresponding position of the first model displayed on a display interface, the target position is a position where the first position is mapped on a surface of the first model, and the first coordinate of the target position is a three-dimensional coordinate of the target position in the three-dimensional space where the first model is located; based on the first coordinate of the target position, determining a second coordinate of the target position, the second coordinate of the target position is a two-dimensional coordinate of the first coordinate of the target position mapped on the two-dimensional space where the first texture map is located; based on the second coordinate of the target position and the second texture map, updating the appearance of the first model.
  • the first coordinate of the target position mapped by the first position onto the surface of the three-dimensional model can be determined, the second coordinate of the target position mapped onto the two-dimensional space where the first texture map is located can be determined according to the first coordinate of the target position, and the appearance of the first model can be updated according to the second coordinate.
  • the area on the first model where the second texture map needs to be displayed can be determined, thereby converting the three-dimensional problem of changing the appearance of the three-dimensional model into the two-dimensional problem of modifying the mapped texture map, based on which the first model is updated.
  • the method comprises: determining a target area based on a second coordinate of a target position; and superimposing a second texture map on the target area of the first texture map to update the appearance of the first model.
  • the appearance of the model can be updated.
  • the target area is an area on the first texture map that contains a pixel point of the second coordinate of the target position and conforms to the size and shape of a preset two-dimensional graphic.
  • the area containing the pixel point of the second coordinate and conforming to the size and shape of the preset two-dimensional graphic can be determined as the target area according to the size and shape of the preset two-dimensional graphic. This allows the second texture map to be superimposed on the first texture map according to the size and shape of the preset two-dimensional graphic, thereby improving the accuracy of the model appearance update.
  • the target area is an area on the first texture map that has the same color as the pixel point at the second coordinate of the target position or has a color error within a specified range with the pixel point at the second coordinate of the target position.
  • the area with the same color or with a color error within a specified range is determined as the target area, which facilitates accurately determining the position area on the first texture map where the second texture map needs to be superimposed, thereby improving the accuracy of the model appearance update.
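The color-tolerance variant above can be sketched as a flood fill over the first texture map. This is an illustrative reading, not code from the application: the RGB distance metric and 4-connected region growth are assumptions.

```javascript
// Illustrative sketch: selecting the target area as the connected region of
// pixels whose color matches the pixel at the second coordinate within a
// given tolerance. The texture map is a flat RGBA array, row-major.
function colorDistance(data, i, j) {
  // Euclidean distance in RGB space between pixels i and j (alpha ignored)
  const dr = data[4 * i] - data[4 * j];
  const dg = data[4 * i + 1] - data[4 * j + 1];
  const db = data[4 * i + 2] - data[4 * j + 2];
  return Math.sqrt(dr * dr + dg * dg + db * db);
}

function targetAreaByColor(data, width, height, seedX, seedY, tolerance) {
  const seed = seedY * width + seedX; // pixel at the second coordinate
  const visited = new Set([seed]);
  const region = [];
  const queue = [seed];
  while (queue.length > 0) {
    const p = queue.shift();
    if (colorDistance(data, p, seed) > tolerance) continue; // color mismatch
    region.push(p);
    const x = p % width, y = Math.floor(p / width);
    // grow the region through the 4-connected neighbours
    for (const [nx, ny] of [[x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]]) {
      const n = ny * width + nx;
      if (nx >= 0 && nx < width && ny >= 0 && ny < height && !visited.has(n)) {
        visited.add(n);
        queue.push(n);
      }
    }
  }
  return region; // pixel indices forming the target area
}
```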
  • the method also includes: in response to receiving a trigger operation on a second position, determining a third coordinate of a third position, wherein the second position is a corresponding position other than the first position displayed on the display interface of the first model, the third position is a position where the second position is mapped on the surface of the first model, and the third coordinate of the third position is a three-dimensional coordinate of the third position in the three-dimensional space where the first model is located; based on the third coordinate of the third position, determining a fourth coordinate of the third position, the fourth coordinate of the third position is a two-dimensional coordinate where the third coordinate of the third position is mapped on the two-dimensional space where the first texture map is located; superimposing the second texture map at a position corresponding to the fourth coordinate of the first texture map; and updating the appearance of the first model according to the second texture map.
  • the third coordinate of the third position corresponding to the second position can be obtained, and the second texture map can be superimposed at the corresponding position on the first texture map based on the third coordinate, so as to achieve batch triggering of each second position and superimposing the second texture map at the corresponding position of each first texture map, and updating the appearance of the first model according to the superimposed second texture map, which simplifies the operation of updating the appearance of multiple positions of the first model, realizes batch model appearance update operation, and improves the efficiency of model appearance update.
  • the appearance of the first model is updated based on the second coordinates of the target position and the second texture map, including: superimposing the second texture map on the position corresponding to the second coordinates of the first texture map corresponding to the target position to generate a third texture map; and updating the appearance of the first model according to the third texture map.
  • the first texture at the corresponding position on the first texture map is replaced with the second texture of the second texture map to generate a third texture map, and the appearance of the first model is updated according to the third texture map.
  • the appearance of the first model can be updated.
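A minimal sketch of the superimposition step that generates the third texture map, assuming both maps are flat RGBA pixel arrays (the application does not specify a pixel format, so that representation and the clipping behaviour are assumptions):

```javascript
// Illustrative sketch: superimposing the second texture map onto the first
// texture map at the position given by the second coordinate, producing the
// third texture map. Both maps are flat RGBA arrays, row-major.
function superimpose(base, baseW, baseH, overlay, ovW, ovH, destX, destY) {
  const out = base.slice(); // third texture map starts as a copy of the first
  for (let y = 0; y < ovH; y++) {
    for (let x = 0; x < ovW; x++) {
      const bx = destX + x, by = destY + y;
      if (bx < 0 || bx >= baseW || by < 0 || by >= baseH) continue; // clip
      const src = 4 * (y * ovW + x);
      const dst = 4 * (by * baseW + bx);
      // replace the first texture at this position with the second texture
      for (let c = 0; c < 4; c++) out[dst + c] = overlay[src + c];
    }
  }
  return out;
}
```

The first texture map is left untouched, which matches the description of generating a new third texture map rather than editing the first one in place.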
  • the method further includes: determining a first state of the second texture map, the first state being used to indicate whether the second texture map is displayed or hidden; and in response to receiving a preview trigger operation, displaying the appearance of the updated first model according to the first state.
  • the appearance of the first model can be updated by displaying or hiding the second texture map.
  • determining a first state of the second texture map includes: determining the first state according to a state switching frequency, the state switching frequency being the frequency at which the second texture map switches between the displayed state and the hidden state.
  • the state switching frequency causes the second texture map of the target object to switch between the display state and the hidden state according to the frequency, so that the updated first model can be dynamically displayed.
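The state switching frequency can be sketched as a pure function of time. The exact semantics of the frequency (state switches per second) are an assumption here, not fixed by the application:

```javascript
// Illustrative sketch: deriving the display/hide state of the second texture
// map (e.g. an indicator-light map) from a state switching frequency.
// With f switches per second, the state toggles every 1/f seconds.
function mapState(tSeconds, switchesPerSecond) {
  return Math.floor(tSeconds * switchesPerSecond) % 2 === 0
    ? "displayed"
    : "hidden";
}
```

Driving this function from an animation loop makes the updated first model blink dynamically, as described above.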
  • the first model is a model used to simulate the first device
  • the second texture map is an indicator light map used to indicate the operating status of the first device.
  • the first model can be a model for simulating the first device
  • the second texture map can be an indicator light map for indicating the operating status of the first device, thereby adding an indicator light map to the model simulating the first device and improving the simulation effect of the model.
  • the method also includes: obtaining the operating status of the first device; based on the operating status of the first device, determining the first state of the indicator light map at the current moment; the first state is used to indicate whether the indicator light map is displayed or hidden; according to the first state of the indicator light map, updating the appearance of the first model to simulate indicating the operating status of the first device through the indicator light.
  • the model simulating the first device can dynamically display the indicator light map according to the actual operating status of the first device, so that the simulated indicator light can indicate the operating status of the first device, thereby improving the simulation effect of the model.
  • the method further includes: generating a web page package; the web page package includes a model skeleton of the first model and a texture map of the first model after the appearance is updated, and the web page package is used for downloading and use by other devices.
  • the updated first model can generate a web page package for download and use by other devices, thereby improving the utilization rate of the first model after the appearance is updated.
  • the present application provides a model appearance updating device, which is used to execute any one of the model appearance updating methods provided in the first aspect.
  • the present application may divide the model appearance update device into functional modules according to the method provided in the first aspect above.
  • each functional module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the present application may divide the model appearance update device into a display module, a processing module, and an update module, etc. according to the function.
  • the description of the possible technical solutions and beneficial effects executed by each of the functional modules divided above can refer to the technical solutions provided in the first aspect above or its corresponding possible implementation, and will not be repeated here.
  • an embodiment of the present application provides a computing device, which includes a processor and a memory, wherein the processor is coupled to the memory; the memory is used to store computer instructions, which are loaded and executed by the processor to enable the computing device to implement the model appearance updating method described in the above aspects.
  • an embodiment of the present application provides a computer-readable storage medium, in which at least one computer program instruction is stored, and the computer program instruction is loaded and executed by a processor to implement the model appearance updating method as described in the above aspects.
  • an embodiment of the present application provides a computer program product, the computer program product including computer instructions, the computer instructions stored in a computer-readable storage medium.
  • a processor of a computing device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computing device executes the model appearance updating method provided in various optional implementations of the first aspect above.
  • FIG1 is a schematic diagram of a computing device according to an exemplary embodiment
  • FIG2 is a schematic flow chart of a method for updating model appearance according to an exemplary embodiment
  • FIG3 is a schematic diagram of a digital twin system production platform interface involved in the embodiment shown in FIG2 ;
  • FIG4 is a schematic diagram of determining a target position on a first model involved in the embodiment shown in FIG2 ;
  • FIG5 is a schematic diagram of a perspective projection involved in the embodiment shown in FIG2 ;
  • FIG6 is a schematic diagram of a texture mapping UV mapping process involved in the embodiment shown in FIG2 ;
  • FIG7 is a schematic diagram of a preset two-dimensional graphic selection involved in the embodiment shown in FIG2 ;
  • FIG. 8 is a schematic diagram of the structure of a model appearance updating device provided by an exemplary embodiment of the present application.
  • A and/or B can mean: A exists alone, A and B exist at the same time, or B exists alone.
  • the character "/" generally indicates that the related objects are in an "or" relationship.
  • "plural" means two or more.
  • "at least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items.
  • at least one of a, b, or c can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can be single or multiple.
  • the words “first”, “second” and the like are used to distinguish the same items or similar items with substantially the same functions and effects. Those skilled in the art will understand that the words “first”, “second” and the like do not limit the quantity and execution order, and the words “first”, “second” and the like do not necessarily limit the differences.
  • the words “exemplary” or “for example” are used to indicate examples, illustrations or explanations. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or design. Specifically, the use of words such as “exemplary” or “for example” is intended to present related concepts in a concrete manner for ease of understanding.
  • IT: internet technology
  • digital twins can be a simulation process that fully utilizes data such as physical models, sensor updates, and operation histories, integrates multi-disciplinary, multi-physical quantities, multi-scales, and multi-probabilities, and completes mapping in virtual space, thereby reflecting the entire life cycle of the corresponding physical equipment.
  • Digital twins are a concept that transcends reality and can be regarded as a digital mapping system for one or more important, interdependent equipment systems.
  • a digital twin can be a digital version of a "clone" created on the basis of a device or system, which can include virtual representations of the real world including physical objects, processes, relationships, and behaviors.
  • the digital twin skeleton can refer to the geometry of each digital twin in the digital twin system.
  • a finite shape enclosed by several geometric faces can be called a geometric body, the faces that enclose a geometric body are called interfaces or surfaces of the geometric body, the intersections of different interfaces are called edges of the geometric body, and the intersections of different edges are called vertices of the geometric body.
  • a geometric body can also be regarded as a finite space area divided by several geometric faces in space.
  • Materials can represent the surface properties of drawn geometric bodies, including the color used and the brightness. A material can reference one or more textures, and one or more textures can be used to wrap an image onto the surface of the geometry. A texture usually represents an image loaded from a file, generated on a canvas, or rendered from another scene. In a digital twin system, because of the high fidelity required, a texture can be composed of one or more map files; a texture in a digital twin system can be called a digital twin texture map.
  • the digital twin, i.e., the three-dimensional model, of IT equipment displayed in the digital twin system only includes the fixed display appearance of the IT equipment.
  • the fixed display appearance may refer to the model appearance of the simulated IT equipment that is not affected by the operating status of the IT equipment.
  • the three-dimensional model of the IT equipment displayed in the digital twin system does not include indicator lights or other indicating components for indicating the operating status of the IT equipment.
  • SAS/SATA hard disks have two indicator lights, one is the fault indicator light, which can be mainly used to prompt the occurrence of faults, and the other is the active indicator light, which can be mainly used to show the normal working condition of the hard disk.
  • the following embodiment of the present application provides a model appearance updating method.
  • On the basis of displaying the three-dimensional model of the IT equipment in the digital twin system as in the related art, when the user triggers a certain position on the three-dimensional model of the IT equipment, a texture map can be added at the position where the trigger operation is received, thereby avoiding re-modeling the three-dimensional model of the IT equipment.
  • the appearance of the three-dimensional model is updated by directly superimposing the texture map on the three-dimensional model according to the triggered position, thereby simplifying the operation of updating the appearance of the three-dimensional model, and further improving the efficiency of adding indicator light display to the three-dimensional model.
  • FIG1 shows a schematic diagram of a computing device provided by an embodiment of the present application.
  • the computing device may include a central processing unit (CPU) 101, a graphics processing unit (GPU) 102, a display device 103, a memory 104, etc.
  • the computing device 100 may have the function of running a digital twin system and displaying a three-dimensional model in the digital twin system.
  • the computing device 100 may run a Three.js engine and support web graphics library (WebGL) technology and Web3D technology.
  • WebGL is a technology that presents interactive 2D and 3D graphics in any compatible web browser without the use of plug-ins.
  • WebGL can be fully integrated into all web standards of the browser, and GPU-accelerated use of image processing and effects can be used as part of the web page canvas.
  • WebGL elements can be added to other hypertext markup language (HTML) elements and mixed with other parts of the web page or web page background.
  • WebGL programs can be composed of handles written in JavaScript and shader code written in the OpenGL Shading Language (GLSL) and executed on the GPU of the computing device.
  • Web3D can refer to a method of displaying three-dimensional graphics via a web browser.
  • Three.js is a cross-browser Web3D engine that uses JavaScript function libraries or application program interfaces (APIs) to create and display animated three-dimensional graphics in web browsers.
  • Three.js allows creating GPU-accelerated 3D animation elements in web pages using JavaScript.
  • the memory 104 may store logic codes corresponding to the steps, described in the following embodiments, executed by the computing device 100, and the CPU 101 executes these steps.
  • the external display device 103 may have an interface display function, which can display the digital twin system production platform interface.
  • the digital twin system production platform interface can be used to display the three-dimensional model in the digital twin system and perform appearance update operations on the three-dimensional model.
  • model appearance updating method provided by the present application is exemplarily introduced below in conjunction with the accompanying drawings.
  • the model appearance updating method is applicable to the computing device shown in FIG. 1 .
  • FIG2 shows a flow chart of a model appearance updating method provided by an exemplary embodiment of the present application.
  • the model appearance updating method comprises the following steps:
  • the first model is a three-dimensional model generated by rendering according to the first texture map, the first texture map is a pattern mapped and displayed on the first model, and the first texture map may include at least one of a pattern, a design or a color.
  • the computing device may display the first model through an external display device.
  • the computing device displays the digital twin system production platform interface through the external display device, and displays the first model on the digital twin system production platform interface.
  • FIG3 is a schematic diagram of a digital twin system production platform interface involved in an embodiment of the present application.
  • a first model 201 can be displayed in the digital twin system production platform 200, and the first model 201 can be a three-dimensional model simulating a server.
  • S102 in response to receiving a trigger operation on a first position, determining a first coordinate of a target position, wherein the first position is a corresponding position of a first model displayed on a display interface, and the target position is a position where the first position is mapped on a surface of the first model.
  • the first position is the corresponding position of the first model displayed on the display interface
  • the target position is the position of the first position mapped on the surface of the first model
  • the first coordinate of the target position is the three-dimensional coordinate of the target position in the three-dimensional space where the first model is located.
  • the target position can be any position on the first model, and the first position can be selected and triggered by the user at a certain position point on the first model displayed on the display interface.
  • the user can select and trigger a certain position point on the first model on the display interface by clicking the mouse; if the first model is displayed by a computing device with a touch screen, the user can click the touch screen by manual triggering to select and trigger the position point displayed on the display interface of the first model.
  • Figure 4 is a schematic diagram of determining a target position on a first model involved in an embodiment of the present application. As shown in Figure 4, when a user triggers a position point 202 on the displayed first model, it can be determined that the target position on the first model is position point 202.
  • each location point on the first model can be clicked, selected and triggered, the user can select and trigger any location on the first model, so the location of the target location on the first model is not limited here.
  • the computing device may first determine the two-dimensional coordinates, on the display interface, of the location point of the triggered first position. The three-dimensional space where the first model, that is, the three-dimensional model, is located is pre-set with a spatial rectangular coordinate system, and each point on the first model has a corresponding three-dimensional coordinate. In addition to the first model, there is also a camera in the three-dimensional space, and the position coordinates of the camera are three-dimensional coordinates determined in the above-mentioned spatial rectangular coordinate system.
  • the computing device obtains the position coordinates of the camera; then, the three-dimensional coordinates of the first position in the three-dimensional space (first spatial rectangular coordinate system) where the three-dimensional model is located can be determined according to the two-dimensional coordinates of the position point of the first position on the display interface.
  • the computing device can determine a ray through the three-dimensional coordinates of the position point on the first model corresponding to the first position in the three-dimensional space.
  • the ray can be obtained by connecting the position point indicated by the camera coordinates and the position point on the first model corresponding to the first position.
  • the intersection point of the ray and the first model is determined as the target position, and the first coordinates of the target position are determined by obtaining the skeleton information of the position point in the three-dimensional space. It can be understood that the target position is the position of the first position mapped on the surface of the first model.
  • the image of the first model captured at the camera position is the image displayed on the display interface, and the skeleton information of the position point in the three-dimensional space can indicate the geometric body where the position point is located, including point, line or surface information.
  • Fig. 5 is a schematic diagram of a perspective projection involved in an embodiment of the present application.
  • the image on the near clipping plane 302 is an image obtained by shooting a three-dimensional model in space by a camera 301.
  • the position point 304 of the determined target position on the display interface is projected onto the three-dimensional model in the three-dimensional space, which can correspond to the position point 305 of the first position in the three-dimensional space.
  • determining the three-dimensional coordinates of the first position in the three-dimensional space (first spatial rectangular coordinate system) where the three-dimensional model is located according to the two-dimensional coordinates of the first position may include: first establishing a second spatial rectangular coordinate system where the display interface 302 is located based on the plane rectangular coordinate system of the two-dimensional plane where the near clipping plane (display interface) 302 is located.
  • the x-axis and y-axis of the plane rectangular coordinate system are used as the x-axis and y-axis of the second spatial rectangular coordinate system, and the straight line passing through the origin of the plane rectangular coordinate system and perpendicular to the x-axis and the y-axis is determined as the z-axis.
  • the first spatial rectangular coordinate system is the spatial rectangular coordinate system where the three-dimensional model (first model) is located.
  • the three-dimensional coordinates of the first position in the second space rectangular coordinate system are (1, 1, 1). According to the coordinate correspondence between the first space rectangular coordinate system and the second space rectangular coordinate system, it can be determined that the three-dimensional coordinates of the first position in the first space rectangular coordinate system are (3, 3, 1).
  • the three-dimensional coordinates of the first position in the first space rectangular coordinate system can be determined, and the first ray can be determined according to the three-dimensional coordinate point and the coordinate point of the camera position in the first space rectangular coordinate system.
  • the coordinates of the intersection point are obtained through the three.js API, so as to further determine the three-dimensional coordinates (first coordinates) of the target position in the first space rectangular coordinate system.
  • the computing device may take the intersection point closest to the mouse click point as the target position.
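The ray test itself, which the application delegates to the three.js API, can be sketched with the standard Möller–Trumbore ray/triangle intersection, keeping the closest hit as the target position. Representing the model as a list of triangles is an assumption made for illustration:

```javascript
// Illustrative sketch: intersect the ray from the camera through the first
// position with each triangle of the model and keep the closest hit.
function closestIntersection(orig, dir, triangles) {
  const sub = (p, q) => [p[0] - q[0], p[1] - q[1], p[2] - q[2]];
  const cross = (p, q) => [
    p[1] * q[2] - p[2] * q[1],
    p[2] * q[0] - p[0] * q[2],
    p[0] * q[1] - p[1] * q[0],
  ];
  const dot = (p, q) => p[0] * q[0] + p[1] * q[1] + p[2] * q[2];
  const EPS = 1e-9;
  let best = null;
  for (const [a, b, c] of triangles) {
    // Möller–Trumbore ray/triangle intersection
    const e1 = sub(b, a), e2 = sub(c, a);
    const pvec = cross(dir, e2);
    const det = dot(e1, pvec);
    if (Math.abs(det) < EPS) continue;     // ray parallel to triangle plane
    const inv = 1 / det;
    const tvec = sub(orig, a);
    const u = dot(tvec, pvec) * inv;
    if (u < 0 || u > 1) continue;          // outside the triangle
    const qvec = cross(tvec, e1);
    const v = dot(dir, qvec) * inv;
    if (v < 0 || u + v > 1) continue;      // outside the triangle
    const t = dot(e2, qvec) * inv;         // distance along the ray
    if (t > EPS && (best === null || t < best.t)) {
      best = {
        t,
        point: [orig[0] + t * dir[0], orig[1] + t * dir[1], orig[2] + t * dir[2]],
      };
    }
  }
  return best; // closest hit: the first coordinate of the target position
}
```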
  • the computing device may first obtain the camera coordinates of the camera in the three-dimensional space where the three-dimensional model is located, and obtain the two-dimensional coordinates of the position point of the first position clicked by the user in the two-dimensional space of the display interface: when the user clicks on the screen, the web page obtains the point coordinates (S_x, S_y) on the screen through the mouse click event and, through a normalized calculation, converts the point coordinates (S_x, S_y) into the two-dimensional coordinates (x, y) of the first position in the two-dimensional space of the display interface.
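In three.js-style raycasting, the normalized calculation mentioned above converts screen pixels to normalized device coordinates in [-1, 1]; a minimal sketch:

```javascript
// Illustrative sketch: convert a mouse click point (sx, sy) in screen pixels
// to the normalized two-dimensional coordinates (x, y) of the first position.
function screenToNdc(sx, sy, width, height) {
  return {
    x: (sx / width) * 2 - 1,   // left edge -> -1, right edge -> +1
    y: -(sy / height) * 2 + 1, // screen y grows downward, NDC y grows upward
  };
}
```

The resulting (x, y) pair is what a raycaster combines with the camera coordinates to build the ray described in the surrounding steps.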
  • the second coordinate of the target position is the two-dimensional coordinate of the first coordinate of the target position mapped to the two-dimensional space where the first texture map is located.
  • the computing device determines the position point on the first texture map corresponding to the first coordinate when the first texture map is covered on the model skeleton of the first model by acquiring the first coordinate of the target position, and determines the two-dimensional coordinates of the position point on the first texture map as the second coordinate.
  • the method of determining the second coordinate of the target position according to the first coordinate of the target position can be a method of texture map UV mapping.
  • the first texture map can be regarded as a two-dimensional image tiled on the desktop, the left and right direction of the two-dimensional image is used as the U axis, the up and down direction of the two-dimensional image is used as the V axis, and the plane formed by the U axis and the V axis can be used as a texture space coordinate system.
  • Superimposing the first texture map on the model skeleton of the first model achieves the purpose of adding texture to the appearance of the first model, so each position point on the first texture map has a correspondence with a position point on the model skeleton of the first model; expressed in coordinates, the UV coordinates of the first texture map have a correspondence with the three-dimensional coordinates of the first model in the three-dimensional space. Therefore, the first coordinate of the target position can be converted into the second coordinate of the target position by UV mapping, thereby determining the position point on the first texture map indicated by the second coordinate.
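For a target position lying inside one triangle of the model skeleton, the UV mapping step can be sketched with barycentric interpolation of the per-vertex UV coordinates. The per-triangle mesh representation is an assumption for illustration; the application relies on the engine's own texture mapping:

```javascript
// Illustrative sketch: given the triangle (a, b, c) of the model skeleton
// containing the first coordinate p, and the UV coordinates of its vertices,
// compute the second coordinate of the target position on the texture map.
function barycentricUv(p, a, b, c, uvA, uvB, uvC) {
  const sub = (u, v) => [u[0] - v[0], u[1] - v[1], u[2] - v[2]];
  const dot = (u, v) => u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
  const v0 = sub(b, a), v1 = sub(c, a), v2 = sub(p, a);
  const d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
  const d20 = dot(v2, v0), d21 = dot(v2, v1);
  const denom = d00 * d11 - d01 * d01;
  const wb = (d11 * d20 - d01 * d21) / denom; // barycentric weight of b
  const wc = (d00 * d21 - d01 * d20) / denom; // barycentric weight of c
  const wa = 1 - wb - wc;                     // barycentric weight of a
  return {
    u: wa * uvA[0] + wb * uvB[0] + wc * uvC[0],
    v: wa * uvA[1] + wb * uvB[1] + wc * uvC[1],
  };
}
```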
  • Figure 6 is a schematic diagram of a texture map UV mapping process involved in an embodiment of the present application.
  • the example texture map 51 can be regarded as a two-dimensional image tiled on the desktop.
  • Superimposing the example texture map on the model skeleton of the example model 52 can achieve the purpose of adding texture to the example model 52.
  • S104 Determine the target area based on the second coordinates of the target position.
  • the computing device may determine the area on the first model where the second texture map needs to be superimposed according to the second coordinate, and use the area as the target area.
  • the target area is used to indicate an area on the first model where the second texture map is superimposed.
  • the second texture map may be a pre-configured texture map, for example, a map of the same color.
  • the target area may be an area on the first model that contains a pixel point of the second coordinate of the target position and conforms to the shape and size of a preset two-dimensional graphic.
  • the computing device may obtain a preset two-dimensional graphic, and determine an area having a pixel point containing the second coordinate of the target position and having a shape and size that conforms to the preset two-dimensional graphic as the target area.
  • the preset two-dimensional graphic may be a pre-set two-dimensional graphic or a two-dimensional graphic selected by the user.
  • the preset two-dimensional graphic may be a regular graphic, such as a square, a circle or an ellipse; or an irregular graphic, such as a graphic of a shape and size selected by the user or a custom graphic drawn by the user.
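Assuming the preset two-dimensional graphic is a square, the pixel region on the first texture map can be computed as in this minimal sketch; the clamping to the texture bounds and all names are assumptions.

```javascript
// Given the pixel (px, py) at the second coordinate of the target position,
// return the square pixel region of the preset size that contains it,
// clamped to the bounds of the texture map.
function squareTargetArea(px, py, halfSize, texWidth, texHeight) {
  return {
    x0: Math.max(0, px - halfSize),
    y0: Math.max(0, py - halfSize),
    x1: Math.min(texWidth - 1, px + halfSize),
    y1: Math.min(texHeight - 1, py + halfSize),
  };
}
```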
  • Figure 7 is a schematic diagram of a preset two-dimensional graphic selection involved in an embodiment of the present application.
  • a selection pop-up window 401 for selecting a preset two-dimensional graphic can be displayed in the form of a pop-up window.
  • a trigger operation with the mouse pointer can be used to select the shape of the sticker to be added to the first model. For example, if the selected two-dimensional graphic is a square, a square sticker can be added when the sticker is subsequently applied.
  • the target area is where an indicator light needs to be added on the first model, and is an area of a preset two-dimensional graphic shape that contains the target position.
  • the target area may also be an area on the first model that includes the target pixel point.
  • the color at the target pixel is the first color or a color whose error with the first color is within a specified interval.
  • the first color may be the color of the pixel at the second coordinate of the target position on the first texture map.
  • the computing device can obtain the first color of the pixel point at the second coordinate of the target position on the first texture map, determine the target pixel points around the second coordinate on the first texture map whose color is the same as the first color or whose color error with the first color is within a specified threshold, and determine the area on the first model containing the target pixel points as the target area.
  • the computing device can diffuse outward from the second coordinate of the target position, searching the first texture map for pixel points whose color is the same as or similar to that of the pixel point at the second coordinate, until pixel points of a different color are encountered; these are taken as the boundary pixel points of the target object.
  • the preset two-dimensional graph can be used as the diffusion direction, and finally the area whose pixel color is the same as the pixel color of the target position and conforms to the preset two-dimensional graph is determined as the target area.
  • a small preset graphic (for example, a square) can be formed first, and the computing device can determine whether the color of the pixels on the outer layer of the preset graphic is the same as the color of the pixels inside it. If they are the same, the graphic expands outward until it reaches a boundary where the pixel colors differ.
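The outward diffusion described above can be sketched as a breadth-first flood fill from the target pixel, collecting connected pixels whose color error relative to the seed color is within a tolerance. The texture is modeled as a 2D array of grayscale values purely for illustration (a real texture map would carry RGB(A) channels), and all names are assumptions.

```javascript
// Breadth-first flood fill: starting from (startX, startY), collect all
// 4-connected pixels whose value differs from the seed pixel's value by at
// most `tolerance`. Pixels of a different color act as the region boundary.
function floodFillRegion(texture, startX, startY, tolerance) {
  const h = texture.length, w = texture[0].length;
  const seed = texture[startY][startX];
  const seen = new Set([startY * w + startX]);
  const queue = [[startX, startY]];
  const region = [];
  while (queue.length > 0) {
    const [x, y] = queue.shift();
    region.push([x, y]);
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      const key = ny * w + nx;
      if (nx < 0 || ny < 0 || nx >= w || ny >= h || seen.has(key)) continue;
      if (Math.abs(texture[ny][nx] - seed) <= tolerance) {
        seen.add(key);
        queue.push([nx, ny]);
      }
    }
  }
  return region; // connected pixels of the same (or near-same) color
}
```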
  • when the first model is a model for simulating the first device and the second texture map is an indicator light map for indicating the operating status of the first device, the area where the indicator light needs to be added can be determined as the target area by diffusing outward from the second coordinate of the target position to find the area of same-colored pixel points on the first texture map; the part of the first texture map covering the target area is the area reserved by the developer for adding the indicator light map.
  • the black square area on the first model 201 can be the area reserved on the first texture map when the first model is modeled.
  • the area with all black colors can be diffused outward from the position corresponding to the second coordinate of the target position, and until pixels of different colors are encountered as boundaries, it can be determined that the target area is a black reserved area containing the target position.
  • S105 Update the appearance of the first model according to the target area and the second texture map.
  • the second texture map may be superimposed and displayed on the target area, thereby displaying the appearance of the updated first model.
  • the shape, position and size of the second texture map superimposed on the first model can be fine-tuned, that is, the user adjusts the shape, position and size of the second texture map superimposed on the first model through operations such as dragging and scaling.
  • when the first model is a model for simulating a first device and the second texture map is an indicator light map for indicating the operating status of the first device, the color of the second texture map can be pre-set, or reset by the user after fine-tuning the superimposed second texture map.
  • after the computing device automatically determines the shape and size of the added indicator light map, the user can determine whether they meet expectations; if not, the user can modify and adjust the shape, size and even position of the indicator light map by triggering an operation.
  • the appearance of the first model may be updated by generating a third texture map according to the first texture map, the position of the second coordinate, and the second texture map, wherein the third texture map is a texture map generated by superimposing the second texture map with a determined shape size at the second coordinate position on the first texture map at the target position, and the appearance of the first model is updated according to the third texture map.
  • S103 to S105 may be replaced by the following steps S106 to S107:
  • S106 The computing device superimposes the second texture map of the target area on the position corresponding to the second coordinate of the first texture map corresponding to the target position to generate a third texture map.
  • S107 The computing device updates the appearance of the first model according to the third texture map of the target area.
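The compositing in S106 and S107 can be sketched as follows; textures are again modeled as simple 2D grayscale arrays for illustration, and centering the second texture map on the target coordinate is an assumption.

```javascript
// Superimpose the second texture map (a sticker) onto a copy of the first
// texture map, centered on the pixel (px, py) at the second coordinate of
// the target position, producing the third texture map.
function composeThirdTexture(firstTex, sticker, px, py) {
  const out = firstTex.map(row => row.slice()); // copy; the first texture map stays intact
  const sh = sticker.length, sw = sticker[0].length;
  const x0 = px - Math.floor(sw / 2), y0 = py - Math.floor(sh / 2);
  for (let y = 0; y < sh; y++) {
    for (let x = 0; x < sw; x++) {
      const tx = x0 + x, ty = y0 + y;
      if (ty >= 0 && ty < out.length && tx >= 0 && tx < out[0].length) {
        out[ty][tx] = sticker[y][x]; // sticker pixel overwrites the base pixel
      }
    }
  }
  return out; // the third texture map
}
```

The first model's appearance is then updated by rendering with the returned third texture map instead of the first.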
  • the second texture map with adjusted shape and content can be further superimposed on the objects corresponding to other positions, so that the operation of adding the map can be performed in batches, thereby improving the efficiency of adding the map on the first model.
  • the following steps may be included:
  • the second position is the other corresponding position displayed on the display interface of the first model except the first position
  • the third position is the position of the second position mapped on the surface of the first model
  • the third coordinate of the third position is the three-dimensional coordinate of the third position in the three-dimensional space where the first model is located.
  • the computing device when the computing device receives a trigger operation on a second position other than the first position on the first model, the three-dimensional coordinates of the second position mapped in the first space rectangular coordinate system where the first model is located can be determined based on the two-dimensional coordinates of the second position in the display interface.
  • the specific method for determining the third coordinate of the third position is the same as the method for determining the first coordinate of the target position, as shown in S102, and will not be repeated here.
  • the fourth coordinate of the third position is a two-dimensional coordinate of the third coordinate of the third position mapped to the two-dimensional space where the first texture map is located.
  • the specific manner in which the computing device determines the fourth coordinate of the third position through the third coordinate of the third position is the same as the manner in which the second coordinate of the target position is determined through the first coordinate of the target position, as shown in S103, and will not be repeated here.
  • the computing device can superimpose the second texture map with determined shape and content at the position corresponding to the fourth coordinate of the first texture map corresponding to the third position.
  • the computing device after the computing device superimposes the second texture map of the target area on the position of the first texture map indicated by the fourth coordinate of the third position, it can be shown on the display interface that the same texture as that at the target position has been added to other positions of the first model. At this time, the user can also fine-tune the texture maps at various other positions to ensure that the appearance of the updated first model meets expectations.
  • a batch application control can be displayed on the digital twin system production platform 200. After the user adds a texture map to the target position, the batch application control can be triggered. After the computing device receives the trigger operation on the batch application control, the function of obtaining other positions can be started. That is, when the user triggers other positions on the first model except the target position, the same texture map as the target position can be added to each other position.
  • the appearance of the first model after the online update can be displayed.
  • the specific display process can be as follows:
  • the first state may be used to indicate whether the second texture map of the target area is displayed or hidden.
  • the computing device can determine the first state of the second texture map according to a state switching frequency, where the state switching frequency is the frequency of switching between the display state and the hidden state of the second texture map.
  • the state switching frequency may be a preset parameter, or may be a parameter set by the user for at least one second texture map before previewing.
  • the computing device can first display the default state after receiving the preview trigger operation. If the default state is a hidden state, the second texture maps added to the first model are first hidden, and then the state of each second texture map is switched once per second according to the preset state switching frequency, thereby achieving a flashing effect once per second.
  • the user can number the at least one second texture map added to the first model and set the state switching frequency of each map according to the number; after the user completes the setting and the computing device receives the preview trigger operation, each second texture map can be displayed or hidden according to the state switching frequency corresponding to each number.
  • the state switching frequency of the second texture map numbered 1 is set to once per second
  • the state switching frequency of the second texture map numbered 2 is set to once every two seconds
  • the state of the second texture maps can be switched according to the preset state switching frequency, thereby achieving the effect that the second texture map numbered 1 flashes once per second and the second texture map numbered 2 flashes once every two seconds.
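Determining the first state from a state switching frequency can be sketched as below; the default initial (hidden) state and all names are assumptions.

```javascript
// Given the seconds elapsed since the preview started and the switching
// period of one second-texture-map, decide whether it is currently in the
// display state or the hidden state. Each elapsed period toggles the state.
function firstState(elapsedSeconds, switchPeriodSeconds, initiallyVisible = false) {
  const switches = Math.floor(elapsedSeconds / switchPeriodSeconds);
  const visible = (switches % 2 === 0) ? initiallyVisible : !initiallyVisible;
  return visible ? 'displayed' : 'hidden';
}
```

With a one-second period the map numbered 1 flashes once per second; with a two-second period the map numbered 2 flashes once every two seconds, matching the example above.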
  • the computing device can obtain the operating status of the first device, and determine the first state of the indicator light map at the current moment based on the operating status of the first device; according to the content of the indicator light map in the target area and the first state of the indicator light map, the appearance of the updated first model is displayed to simulate indicating the operating status of the first device through the indicator light.
  • for example, the first model is used to simulate a SAS/SATA hard disk, and the second texture map is an indicator light map.
  • there are two types of indicator lights on a SAS/SATA hard disk: one is a fault indicator light, which can be mainly used to indicate the occurrence of a fault;
  • the other is an active indicator light, which can be mainly used to show the normal working condition of the hard disk.
  • when the active indicator light flashes green and the fault indicator light is off, it can indicate that the hard disk is reading and writing data;
  • when the active indicator light is always green or flashes green, and the fault indicator light flashes yellow, it can indicate that the hard disk is being located or RAID is being reconstructed;
  • when the active indicator light is always green or off, and the fault indicator light is always yellow, it can indicate a hard disk fault.
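A hedged sketch mapping the operating states above to the behaviors of the two indicator lights; the status strings and return shape are illustrative assumptions, not part of the patent.

```javascript
// Map a hard disk operating status to the behavior of its active and fault
// indicator lights, following the state table described above.
function diskIndicatorLights(status) {
  switch (status) {
    case 'reading-writing':
      return { active: 'flashing-green', fault: 'off' };
    case 'locating-or-raid-rebuild':
      return { active: 'green-or-flashing-green', fault: 'flashing-yellow' };
    case 'fault':
      return { active: 'green-or-off', fault: 'steady-yellow' };
    default:
      return { active: 'off', fault: 'off' }; // unknown status: assume idle
  }
}
```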
  • the computing device when the computing device automatically generates an indicator light map and the user determines that the shape, size and position of the indicator light map do not need to be modified, the user can set a number for the indicator light map (such as Fault-1) and write a custom script for the state switching frequency of the indicator light map, and the custom script includes a function for determining the state switching frequency according to the operating state of the first device.
  • the computing device may dynamically display the updated first model according to the second texture map of the target area and the first state determined at the current moment.
  • a computing device may generate a web page package; the web page package may include a model skeleton of a first model, a first texture map, and a second texture map of a target area superimposed on the first texture map; the web page package may be downloaded and used by other devices.
  • after downloading the webpage package, another device can load the webpage package and read the three-dimensional information of the first model provided in it.
  • by loading the three.js 3D engine, all the model skeletons and texture maps in the digital twin system in the webpage package are loaded.
  • the indicator light texture maps previously generated by the platform can all be in the hidden state. Therefore, in the webpage currently seen by the user, the indicator lights are all in an off state.
  • the indicator light map is switched between the display state and the hidden state so that the indicator light shows a flashing effect; alternatively, the indicator light map is continuously displayed and hidden according to a specified frequency, or according to the frequency of change of the actual device operation data transmitted back from the background.
  • the user can observe that the indicator lights of the simulated device in the webpage are flashing according to the real state.
  • the model appearance updating device includes at least one of the hardware structures and software modules corresponding to the execution of each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed in the form of hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Professional and technical personnel can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the present application.
  • the embodiment of the present application can divide the model appearance updating device into functional units according to the above method example.
  • each functional unit can be divided according to each function, or two or more functions can be integrated into one processing unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of software functional units. It should be noted that the division of units in the embodiment of the present application is schematic and is only a logical functional division. There may be other division methods in actual implementation.
  • FIG8 shows a schematic diagram of the structure of a model appearance updating device 400 provided by an exemplary embodiment of the present application.
  • the model appearance updating device 400 is applied to a computing device, or the model appearance updating device 400 may be a computer device.
  • the model appearance updating device 400 includes:
  • the display module 410 is used to display a first model, where the first model is a three-dimensional model with a first texture appearance generated by rendering according to a first texture map.
  • the processing module 420 is used to determine the first coordinates of the target position in response to receiving a trigger operation on the first position, wherein the first position is the corresponding position of the first model displayed on the display interface, the target position is the position of the first position mapped on the surface of the first model, and the first coordinates of the target position are the three-dimensional coordinates of the target position in the three-dimensional space where the first model is located; based on the first coordinates of the target position, determine the second coordinates of the target position, and the second coordinates of the target position are the two-dimensional coordinates of the first coordinates of the target position mapped on the two-dimensional space where the first texture map is located.
  • the updating module 430 is used to update the appearance of the first model based on the second coordinates of the target position and the second texture map.
  • the display module 410 may be used to execute S101 as shown in FIG. 2
  • the processing module 420 may be used to execute S102 to S104 as shown in FIG. 2
  • the update module 430 may be used to execute S105 as shown in FIG. 2 .
  • processing module 420 is further configured to:
  • the second texture map is superimposed on the target area of the first texture map to update the appearance of the first model.
  • the target area is an area on the first texture map that contains a pixel point of the second coordinate of the target position and conforms to a size and shape of a preset two-dimensional graphic.
  • the target area is an area on the first texture map that has the same color as a pixel at the second coordinate of the target position or has a color error within a specified range with the pixel at the second coordinate of the target position.
  • the processing module 420 is further used to determine the third coordinate of the third position in response to receiving a trigger operation on the second position, wherein the second position is the corresponding position of the first model displayed on the display interface other than the first position, the third position is the position of the second position mapped on the surface of the first model, and the third coordinate of the third position is the three-dimensional coordinate of the third position in the three-dimensional space where the first model is located; based on the third coordinate of the third position, determine the fourth coordinate of the third position, the fourth coordinate of the third position is the two-dimensional coordinate of the third coordinate of the third position mapped on the two-dimensional space where the first texture map is located; superimpose the second texture map on the position corresponding to the fourth coordinate of the first texture map.
  • the updating module 430 is also used to update the appearance of the first model according to the second texture map and the superimposed position of the second texture map.
  • the update module 430 is further used to superimpose the second texture map at the position corresponding to the second coordinate of the first texture map corresponding to the target position to generate a third texture map; and update the appearance of the first model according to the third texture map.
  • the processing module 420 is further used to determine a first state of the second texture map, where the first state is used to indicate whether the second texture map is displayed or hidden; the updating module 430 is further used to display the updated appearance of the first model according to the first state in response to a received preview trigger operation.
  • the processing module 420 is further configured to determine the first state of the second texture map according to a state switching frequency, where the state switching frequency is a frequency of switching between a display state and a hidden state of the second texture map.
  • the first model is a model for simulating a first device
  • the second texture map is an indicator light map for indicating a running status of the first device
  • the processing module 420 is further used to obtain the operating status of the first device
  • the update module 430 is also used to determine the first state of the indicator light map at the current moment based on the operating state of the first device; the first state is used to indicate whether the indicator light map of the target area is displayed or hidden; according to the content of the indicator light map of the target area and the first state of the indicator light map, the appearance of the updated first model is displayed to simulate indicating the operating state of the first device through the indicator light.
  • the processing module 420 is further used to generate a web page package; the web page package includes the model skeleton of the first model and the texture map of the first model after the appearance is updated, and the web page package is used for downloading and use by other devices.
  • some or all of the functions implemented in the display module 410, the processing module 420 and the update module 430 in the model appearance updating device can be executed by the computing device 100 in Figure 1, wherein the display module 410 can be executed by the external display device 103 of the computing device 100 in Figure 1, and the processing module 420 and the update module 430 can be executed collaboratively by the central processing unit 101, the graphics processor 102 and the memory 104 of the computing device 100 in Figure 1.
  • a computer-readable storage medium is also provided, which is used to store at least one instruction, at least one program, a code set or an instruction set, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement all or part of the steps of the above-mentioned model appearance updating method.
  • the machine-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
  • a computer program product or a computer program is also provided, the computer program product or the computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computing device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computing device performs all or part of the steps of the method shown in any embodiment of FIG. 2 above.
  • the methods shown in the embodiments of the present application may be implemented as computer program instructions encoded in a machine-readable format on a computer-readable storage medium or encoded on other non-transitory media or products.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the modules or units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another device, or some features can be ignored or not executed.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to enable a device (which can be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application relates to the field of computer technology, and provides a model appearance updating method and apparatus, and a computing device, which simplify the operation of updating the appearance of a three-dimensional model, thereby improving the efficiency of model appearance updating. The method comprises the following steps: displaying a first model, the first model being a three-dimensional model having a first texture appearance and generated by rendering according to a first texture map; determining a first coordinate of a target position in response to receiving a trigger operation on a first position on the first model, the target position being the position of the first position mapped on the surface of the first model, and the first coordinate of the target position being a three-dimensional coordinate of the target position in the three-dimensional space where the first model is located; determining a second coordinate of the target position based on the first coordinate of the target position, the second coordinate of the target position being a two-dimensional coordinate of the first coordinate of the target position mapped to the two-dimensional space where the first texture map is located; and updating the appearance of the first model.
PCT/CN2023/117340 2023-01-29 2023-09-06 Procédé et appareil de mise à jour d'aspect de modèle, et dispositif informatique WO2024156180A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310081090.0A CN116310037A (zh) 2023-01-29 2023-01-29 模型外观更新方法、装置及计算设备
CN202310081090.0 2023-01-29

Publications (1)

Publication Number Publication Date
WO2024156180A1 true WO2024156180A1 (fr) 2024-08-02

Family

ID=86787911

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/117340 WO2024156180A1 (fr) 2023-01-29 2023-09-06 Procédé et appareil de mise à jour d'aspect de modèle, et dispositif informatique

Country Status (2)

Country Link
CN (1) CN116310037A (fr)
WO (1) WO2024156180A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310037A (zh) * 2023-01-29 2023-06-23 超聚变数字技术有限公司 模型外观更新方法、装置及计算设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012037157A2 (fr) * 2010-09-13 2012-03-22 Alt Software (Us) Llc Système et procédé destinés à afficher des données qui présentent des coordonnées spatiales
US20150187135A1 (en) * 2013-12-31 2015-07-02 Nvidia Corporation Generating indirection maps for texture space effects
CN106570822A (zh) * 2016-10-25 2017-04-19 宇龙计算机通信科技(深圳)有限公司 一种人脸贴图方法及装置
CN111489428A (zh) * 2020-04-20 2020-08-04 北京字节跳动网络技术有限公司 图像生成方法、装置、电子设备及计算机可读存储介质
CN116310037A (zh) * 2023-01-29 2023-06-23 超聚变数字技术有限公司 模型外观更新方法、装置及计算设备


Also Published As

Publication number Publication date
CN116310037A (zh) 2023-06-23

Similar Documents

Publication Publication Date Title
Kessenich et al. OpenGL Programming Guide: The official guide to learning OpenGL, version 4.5 with SPIR-V
KR101286318B1 (ko) 렌더링된 그래픽 엘리먼트들을 위한 성능 메트릭들의 시각적 표현의 디스플레이
US8773433B1 (en) Component-based lighting
EP1594091B1 (fr) Système et méthode pour améliorer un pipeline graphique
US8587593B2 (en) Performance analysis during visual creation of graphics images
US20130063460A1 (en) Visual shader designer
KR100928192B1 (ko) 내장형 디바이스에서의 3d 콘텐츠에 대한 오프라인 최적화파이프라인
KR101267120B1 (ko) 성능 분석 동안 관련된 그래픽스 데이터에 대한 그래픽스 명령들의 매핑
KR102573787B1 (ko) 광 프로브 생성 방법 및 장치, 저장 매체 및 컴퓨터 디바이스
WO2024156180A1 (fr) Procédé et appareil de mise à jour d'aspect de modèle, et dispositif informatique
CN111429561A (zh) 一种虚拟仿真渲染引擎
WO2023197762A1 (fr) Procédé et appareil de rendu d'image, dispositif électronique, support de stockage lisible par ordinateur et produit-programme d'ordinateur
KR101431311B1 (ko) 그래픽 이미지들의 시각적 창작 동안의 성능 분석
Ghayour et al. Real-time 3D graphics with WebGL 2: build interactive 3D applications with JavaScript and WebGL 2 (OpenGL ES 3.0)
CN114549708A (zh) 游戏对象的编辑方法、装置和电子设备
JP2005165873A (ja) Web3D画像表示システム
Bateman et al. The Essential Guide to 3D in Flash
Ragan-Kelley Practical interactive lighting design for RenderMan scenes
CN117215592B (zh) 渲染程序生成方法、装置、电子设备和存储介质
US20240033625A1 (en) Rendering method and apparatus for virtual scene, electronic device, computer-readable storage medium, and computer program product
CN116863067A (zh) 模型生成方法及计算设备
CN117839223A (zh) 游戏编辑预览方法、装置、存储介质与电子设备
CN117786951A (zh) 数字孪生系统的页面显示方法及计算设备
Hubbold et al. GKS-3D and PHIGS—Theory and Practice
CN116796089A (zh) 数据获取方法及计算设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23918181

Country of ref document: EP

Kind code of ref document: A1