CN111597628B - Model marking method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN111597628B
CN111597628B (application CN202010720141.6A)
Authority
CN
China
Prior art keywords
marked
marking
information
component
image
Prior art date
Legal status
Active
Application number
CN202010720141.6A
Other languages
Chinese (zh)
Other versions
CN111597628A (en)
Inventor
石永贵
叶伍根
Current Assignee
Guangdong Bozhilin Robot Co Ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd filed Critical Guangdong Bozhilin Robot Co Ltd
Priority to CN202010720141.6A priority Critical patent/CN111597628B/en
Publication of CN111597628A publication Critical patent/CN111597628A/en
Application granted granted Critical
Publication of CN111597628B publication Critical patent/CN111597628B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/10: Geometric CAD
    • G06F 30/13: Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Civil Engineering (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a model marking method and device, a storage medium, and electronic equipment. The method includes: acquiring a three-dimensional model and identifying scene information of the three-dimensional model and component information of the components to be marked; performing two-dimensional rendering on the scene information and the component information to generate a two-dimensional image to be marked; determining attribute information of the marking assembly used to mark each component to be marked, based on the distribution state of the components to be marked in the image to be marked; and marking each component in the image based on the attribute information of its marking assembly and its marking content. Because the attribute information of each marking assembly is determined from the distribution state of the components to be marked, the marking assemblies are distributed evenly in the image to be marked, the mark confusion caused by densely packed marks is avoided, and marking clarity is improved.

Description

Model marking method and device, storage medium and electronic equipment
Technical Field
The embodiment of the invention relates to computer technology, in particular to a model marking method, a model marking device, a storage medium and electronic equipment.
Background
With rapid social and economic development, applying virtual simulation technology in building engineering construction can reveal defects in building construction in time and improve construction quality, while fundamentally saving the cost and investment of the building project, laying a foundation for its future investment and use.
For the virtual model of the building engineering, the marking efficiency is low and the cost is high by an artificial marking mode at present.
Disclosure of Invention
The invention provides a model marking method, a model marking device, a storage medium and electronic equipment, which are used for realizing automatic marking of each part in a model.
In a first aspect, an embodiment of the present invention provides a model marking method, including:
acquiring a three-dimensional model, and identifying scene information of the three-dimensional model and component information of a component to be marked;
performing two-dimensional rendering on the scene information and the component information to generate a two-dimensional image to be marked;
determining attribute information of a marking assembly for marking each part to be marked based on the distribution state of each part to be marked in the image to be marked;
and marking each part to be marked in the image to be marked based on the attribute information of the marking assembly corresponding to each part to be marked and the marking content of the part to be marked.
In a second aspect, an embodiment of the present invention further provides a model marking apparatus, including:
the information identification module is used for acquiring a three-dimensional model and identifying scene information of the three-dimensional model and component information of a component to be marked;
the image to be marked generating module is used for performing two-dimensional rendering on the scene information and the component information to generate a two-dimensional image to be marked;
the attribute information determining module is used for determining attribute information of a marking assembly for marking each part to be marked based on the distribution state of each part to be marked in the image to be marked;
and the model marking module is used for marking each part to be marked in the image to be marked based on the attribute information of the marking assembly corresponding to each part to be marked and the marking content of the part to be marked.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the model marking method of any of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the model marking method of any one of the embodiments of the present invention.
According to the technical scheme provided by the embodiment of the invention, the scene of a three-dimensional model and the components to be marked are identified, and the scene information and the component information of the components to be marked are rendered in two dimensions to form a two-dimensional image to be marked; the attribute information of each marking assembly used to mark the components is calculated from the distribution state of the components to be marked in the image, the marking assemblies are placed in the image based on that attribute information, and the automatic marking of each component to be marked is completed. Rendering the image to be marked from the scene information and the component information preserves the authenticity of each component of the three-dimensional model. At the same time, determining the attribute information of each marking assembly from the distribution state of the components ensures that the marking assemblies are evenly distributed in the image to be marked, avoids the mark confusion caused by densely packed marks, and improves marking clarity.
Drawings
Fig. 1 is a schematic flowchart of a model marking method according to an embodiment of the present invention;
FIG. 2 is an exemplary diagram of an image to be marked according to an embodiment of the present invention;
FIG. 3 is an exemplary diagram of a marker image provided by an embodiment of the present invention;
FIG. 4 is a diagram illustrating a queue store according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a model marking method according to a second embodiment of the present invention;
FIG. 6 is a schematic diagram of the intersection of the marking lines provided by the second embodiment of the present invention;
FIG. 7 is a schematic diagram of the location relationship of the mark indicating areas according to the second embodiment of the present invention;
fig. 8 is a schematic structural diagram of a model marking apparatus according to a third embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic flowchart of a model marking method according to an embodiment of the present invention, where the present embodiment is applicable to a case of automatically marking a component in a three-dimensional building model, and the method may be executed by a model marking apparatus according to an embodiment of the present invention, where the apparatus may be integrated in an electronic device such as a computer or a server, and specifically includes the following steps:
s110, obtaining a three-dimensional model, and identifying scene information of the three-dimensional model and component information of a component to be marked.
And S120, performing two-dimensional rendering on the scene information and the component information to generate a two-dimensional image to be marked.
S130, determining attribute information of a marking assembly for marking each part to be marked based on the distribution state of each part to be marked in the image to be marked.
And S140, marking each part to be marked in the image to be marked based on the attribute information of the marking assembly corresponding to each part to be marked and the marking content of the part to be marked.
The three-dimensional model in the present embodiment may be a three-dimensional model of any object, including but not limited to a three-dimensional building model, a three-dimensional building-group model, or a three-dimensional environment model. Taking a three-dimensional building model as an example, in order to display each part of the model clearly and intuitively, each part in the three-dimensional model is automatically identified and automatically marked.
The three-dimensional model to be marked may be loaded in a 3D engine, for example, the three-dimensional model may be imported from a model creation platform or a model creation application, and the three-dimensional model may be in fbx format. And identifying each part in the three-dimensional model, and rendering on the marking page to generate a two-dimensional image to be marked.
Optionally, identifying scene information of the three-dimensional model and component information of a component to be marked includes: setting a main virtual camera and an interactive virtual camera in the three-dimensional model; identifying scene information of the three-dimensional model based on the master virtual camera; identifying component information for the component to be marked based on the interactive virtual camera. Taking the three-dimensional building model as an example, the scene information of the three-dimensional model may be scene information of a building, and for example, when the three-dimensional building model is a three-dimensional model of a room, the scene information may be scene information composed of a floor, a wall, a ceiling, and the like of the room, and the component to be marked is a component arranged in the three-dimensional building model, such as a table, a chair, a sofa, or a space isolated by a wall (e.g., a living room, a bedroom, a kitchen, and the like) in the room.
In some embodiments, each component in the three-dimensional model is provided with a label, and the interactive virtual camera determines whether a component is to be marked by identifying its label. The label may be a preset identifier, for example a UI identifier; a component carrying the UI label is determined to be a component to be marked. Optionally, the 3D engine may store a list of parts to be marked, which may include the labels and marking contents of the parts to be marked. The label may be a part name: the label identified by the interactive virtual camera is matched against the labels in the list of parts to be marked to determine whether the part is to be marked. If the label is in the list, the part corresponding to the label is a part to be marked; if not, it is not. Optionally, identifying the scene information and the part information further includes identifying the three-dimensional coordinates of the building scene and of the part to be marked. The three-dimensional coordinates of the building scene may be edge coordinates and/or feature-point coordinates in the three-dimensional scene, where the feature points may be break points on the building scene (e.g., a wall surface or ceiling). Identifying the three-dimensional coordinates of the part to be marked may identify its center-point coordinates, edge coordinates, and so on.
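The label matching described above can be sketched in plain Python (an illustration only, not the patent's 3D-engine code; the list structure and function names are assumptions):

```python
# Illustrative sketch: deciding whether a component is a part to be marked
# by matching its label against the stored list of parts to be marked,
# which maps each label to its marking content.
parts_to_mark = {
    "sofa": "Fabric sofa, 2.1 m wide",
    "dining_table": "Oak dining table",
}

def is_part_to_mark(label):
    # A component whose label appears in the list is a part to be marked.
    return label in parts_to_mark

def mark_content(label):
    # Marking content for the label, or None when the part is not marked.
    return parts_to_mark.get(label)
```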
And performing two-dimensional rendering on the scene information and the component information to obtain a two-dimensional image to be marked for marking. Optionally, performing two-dimensional rendering on the scene information and the component information, including: respectively carrying out coordinate conversion on the scene information of the three-dimensional model and the component information of the component to be marked to obtain two-dimensional coordinates corresponding to the scene information and the component information respectively; rendering the scene information and the component to be marked based on the two-dimensional coordinates respectively corresponding to the scene information and the component information.
Specifically, the coordinate conversion may be performed to convert world coordinates of the three-dimensional model into screen coordinates, and convert the screen coordinates into UI coordinates, i.e., two-dimensional coordinates, of the rendering interface. The conversion relation between the world coordinate system and the screen coordinate system and the conversion relation between the screen coordinate system and the UI coordinate system are predetermined, and the coordinate conversion is realized based on the conversion relation of the coordinate systems.
Optionally, a coordinate conversion code is preset in the 3D engine, and after the three-dimensional coordinates of the scene information and the component information are identified, the coordinate conversion code is called to obtain two-dimensional coordinates corresponding to the scene information and the component information of each component to be marked. For example, the coordinate transformation code may be:
Vector2 world2ScreenPos = Camera.main.WorldToScreenPoint(worldPosition); // world coordinates to screen coordinates
Vector2 uiPos = new Vector2();
RectTransformUtility.ScreenPointToLocalPointInRectangle(rect, world2ScreenPos, uiCamera, out uiPos); // screen coordinates to UI coordinates
rect.anchoredPosition = new Vector2(uiPos.x * ui.rectTransform.localScale.x, uiPos.y * ui.rectTransform.localScale.y); // assignment
P0 = uiPos;
Here, worldPosition is the world coordinate of the part to be marked, rect is the canvas object of the rendering interface, and ui is an empty component of the rendering interface used to handle overall scaling. P0 is the two-dimensional coordinate of the scene or part to be marked, i.e., uiPos.
Rendering the scene and the components to be marked on the rendering interface based on the converted two-dimensional coordinates forms the image to be marked. It should be noted that the depth of the interactive virtual camera is greater than that of the main virtual camera, so that the parts to be marked are not occluded by the scene information, improving the comprehensiveness and accuracy of the marking. Identifying scene information and component information by placing virtual cameras in the three-dimensional model, and forming the image to be marked by two-dimensional rendering, allows the marked model or components to be described and explained in detail.
In this embodiment, the parts to be marked are marked by the marking component, the 3D engine is provided with the marking component in advance, and the marking component is called to mark each part to be marked. The marking assembly comprises marking points, marking lines and marking description areas, wherein the marking lines can be but are not limited to straight lines, broken lines or curved lines, and the marking description areas are text display areas.
By calculating the attribute information of the marking assemblies, the position and the size of the marking assembly of each part to be marked in the image to be marked can be determined, the marking content of each part to be marked is added into the corresponding marking assembly, and the marking of the part to be marked is realized. In this embodiment, the attribute information of the marking assembly of each component to be marked is determined based on a distribution algorithm, so that the marking assemblies of each component to be marked are uniformly distributed in the image to be marked.
Optionally, determining attribute information of a marking assembly for marking each to-be-marked component based on a distribution state of each to-be-marked component in the to-be-marked image, where the attribute information includes: determining distribution information of the current part to be marked in the current coordinate direction based on the length of the image to be marked in the current coordinate direction, the maximum first distance of each part to be marked in the current coordinate direction and the second distance between the current part to be marked and an adjacent marking point in the current coordinate direction; determining the marking center point position of the current part to be marked based on the two-dimensional coordinates of the current part to be marked in the image to be marked and the distribution information in each coordinate direction; determining attribute information of the marking line and the marking specification area based on the marking point position.
Exemplarily, referring to fig. 2, fig. 2 is an exemplary diagram of an image to be marked according to an embodiment of the present invention. The image to be marked is in a two-dimensional coordinate system, such as an XY coordinate system, including an X coordinate direction and a Y coordinate direction. And respectively calculating the distribution state of the parts to be marked in the images to be marked in each coordinate direction.
Specifically, when the current coordinate direction is the X coordinate direction, the distribution information of the to-be-marked component in the current coordinate direction may be determined by the following formula:
OffsetX = (UIXMaxValue * minXDistance) / maxXDistance;
Here, OffsetX is the distribution information of the component to be marked in the X coordinate direction; UIXMaxValue is the length of the image to be marked in the X coordinate direction; minXDistance is the second distance, between the current component to be marked and its adjacent marking point in the X coordinate direction, where the adjacent marking point belongs to the component to be marked closest to the current marking component; and maxXDistance is the first distance, the maximum spacing of the components to be marked in the X coordinate direction, i.e., the distance between the two components to be marked that are farthest apart in the X coordinate direction of the image to be marked.
Similarly, when the current coordinate direction is the Y coordinate direction, the distribution information of the to-be-marked component in the current coordinate direction may be determined by the following formula:
OffsetY = (UIYMaxValue * minYDistance) / maxYDistance
Here, OffsetY is the distribution information of the component to be marked in the Y coordinate direction; UIYMaxValue is the length of the image to be marked in the Y coordinate direction; minYDistance is the second distance, between the current component to be marked and its adjacent marking point in the Y coordinate direction, where the adjacent marking point belongs to the component to be marked closest to the current marking component; and maxYDistance is the first distance, the maximum spacing of the components to be marked in the Y coordinate direction, i.e., the distance between the two components to be marked that are farthest apart in the Y coordinate direction of the image to be marked.
The two-dimensional coordinate of the current part to be marked in the image to be marked may be the two-dimensional coordinate obtained by converting the three-dimensional center point of the current part to be marked, denoted P0(Px, Py). P0(Px, Py) may be taken as the positioning point of the component to be marked, and the coordinates (Qx, Qy) of the marking point Q are determined from the two-dimensional coordinate of the current component and the distribution information in each coordinate direction. Specifically, the marking-point coordinate in each direction is the difference between the two-dimensional coordinate and the distribution information in that direction: Qx = Px − OffsetX, Qy = Py − OffsetY.
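The distribution formulas and the marking-point computation can be sketched as follows (a Python illustration under the naming above; the helper functions are not from the patent):

```python
# Sketch of the distribution formulas: OffsetX = UIXMaxValue * minXDistance
# / maxXDistance (and analogously for Y), and the marking point Q obtained
# by shifting the positioning point P0 by the offsets in each direction.
def axis_offset(ui_max, min_dist, max_dist):
    # Distribution information along one coordinate axis.
    return ui_max * min_dist / max_dist

def marking_point(p0, offsets):
    # Qx = Px - OffsetX, Qy = Py - OffsetY
    px, py = p0
    ox, oy = offsets
    return (px - ox, py - oy)
```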
Referring to fig. 2, the marking line is a broken line and the marking point Q is the turning point of the marking line. The attribute information of each marking component is determined from the positioning point P0 and the marking point Q. The marking component may include marking lines P0Q and QP1, where P1(Ex, Ey) is the end point of marking line QP1, with Qx = Ex − L, Qy = Ey, and L a labeling constant. P1(Ex, Ey) may also be the start coordinate of the mark description area.
Correspondingly, the attribute information of the mark line further includes:
Angle of marking line P0Q: θ = arctan[(Qy − Py) / (Qx − Px)]
Length of marking line P0Q: |P0Q| = √((Qx − Px)² + (Qy − Py)²)
Center point of marking line P0Q: QP0(QPx, QPy) = ((Qx + Px)/2, (Qy + Py)/2)
Center point of marking line QP1: P1Q(P1Qx, P1Qy) = ((Ex + Qx)/2, (Ey + Qy)/2)
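The marking-line attributes listed above can be sketched in Python (an illustration; plain tuples stand in for the engine's vector type, and the function name is an assumption):

```python
import math

# Sketch of the marking-line attributes: the angle and length of P0Q and
# the center point of P0Q. atan2 is used as a quadrant-safe variant of
# theta = arctan[(Qy - Py) / (Qx - Px)].
def line_attributes(p0, q):
    px, py = p0
    qx, qy = q
    angle = math.atan2(qy - py, qx - px)       # angle of P0Q
    length = math.hypot(qx - px, qy - py)      # |P0Q|
    midpoint = ((qx + px) / 2, (qy + py) / 2)  # center point of P0Q
    return angle, length, midpoint
```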
And calling the marking points, the marking lines and the marking description area based on the attribute information, adding the marking points, the marking lines and the marking description area into the image to be marked, and adding the marking content of the part to be marked into the marking description area to finish marking. Optionally, the mark points and the mark lines may be image components, that is, a mark point image and a mark line image, and the mark description area is a text component.
In some embodiments, the marking line is a curve or a straight line, and the positioning point P0 and the marking point Q are the two end points of the marking line; the marking point Q may be the start coordinate point of the mark description area. Optionally, the size of the mark description area is determined by the marking content of the component to be marked: the number of characters in the marking content is positively correlated with the size of the mark description area. The marking content may be mark description information (type, name, etc.) and/or component attributes (material, size, etc.) of the component to be marked.
Accordingly, the attribute information of the mark line may include:
Angle of marking line P0Q: θ = arctan[(Qy − Py) / (Qx − Px)]
Length of marking line P0Q: |P0Q| = √((Qx − Px)² + (Qy − Py)²)
Center point of marking line P0Q: QP0(QPx, QPy) = ((Qx + Px)/2, (Qy + Py)/2)
And similarly, calling the marking points, the marking lines and the marking description area, setting the positions of the marking points, the positions, the lengths and the angles of the marking lines and the positions of the marking description area based on the attribute information, respectively adding the positions, the lengths and the angles of the marking lines and the positions of the marking description area to the image to be marked, and adding the marking content of the part to be marked to the marking description area to finish marking.
Exemplarily, referring to fig. 3, fig. 3 is an exemplary diagram of a marker image according to an embodiment of the present invention.
According to the technical scheme provided by the embodiment, a scene of a three-dimensional model and a component to be marked are identified, two-dimensional rendering is carried out on the scene information and the component information of the component to be marked, a two-dimensional image to be marked is formed, attribute information of each marking assembly used for marking the component to be marked is calculated in the image to be marked based on the distribution state of the component to be marked, the marking assembly is set in the image to be marked based on the attribute information, and automatic marking of the component to be marked is completed. The image to be marked is formed by rendering scene information and component information of the component to be marked, authenticity of each component to be marked in the three-dimensional model is kept, meanwhile, attribute information of the corresponding marking assembly is determined based on the distribution state of each component to be marked, uniform distribution of each component to be marked in the image to be marked is guaranteed, the situation that a plurality of marks are distributed densely to cause mark confusion is avoided, and marking definition is improved.
On the basis of the above embodiment, a tag information list of the three-dimensional model is set in the 3D engine; the list includes the labels and marking contents of the components to be marked. During virtual simulation of the three-dimensional model, the marking information needs to change continuously, for example when the marking content of a part to be marked is updated. Correspondingly, the method further includes: when the pre-stored marking content of any part is updated, updating the marking content in the image to be marked based on the label of that part. Specifically, when it is detected that the marking content corresponding to any label in the tag information list has been updated, it may be determined whether the component corresponding to that label has been marked, and if so, the marking content of the corresponding component in the marked image may be updated based on the new content. Whether the marking content of each label has been updated can be checked at a preset time interval, so that the marks are updated automatically as the three-dimensional model is updated, replacing manual modification and saving mark maintenance cost.
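The update check described above can be sketched as follows (illustrative Python; the dictionaries stand in for the tag information list and the rendered marks, and are assumptions for illustration):

```python
# Sketch: refresh rendered marks whose stored content has changed.
# tag_info maps label -> current marking content (the tag information list);
# rendered maps label -> content currently shown for marked components.
def refresh_marks(tag_info, rendered):
    """Return the labels whose rendered content was stale and update them."""
    updated = []
    for label, content in tag_info.items():
        # Only already-marked components are refreshed.
        if label in rendered and rendered[label] != content:
            rendered[label] = content
            updated.append(label)
    return updated
```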
On the basis of the above embodiment, the method further includes: and respectively setting the display time stamp of each part to be marked, and storing the display time stamp and the marked content of each part to be marked in a queue. Referring to fig. 4, fig. 4 is a schematic diagram of queue storage according to an embodiment of the present invention. Segment in fig. 4 is tagged data of a component, the tagged data includes tagged content and a display time stamp, and the display time stamp can be a start display time stamp and an end display time stamp. The display time stamp can be determined according to the display requirements of the user for each part. Each slot, such as slot0, slot1 … slot, is a marked datum. The marked data of each part are stored, so that the marked data of the three-dimensional model can be read and displayed conveniently.
Further, the method further comprises: reading the display time stamp and the marking content of each part to be marked in the corresponding storage queue in the three-dimensional model; and dynamically displaying the marking content of each part to be marked based on the display time stamp. When the three-dimensional model is marked and displayed, the marked data of each part of the three-dimensional model enters a queue, the marked data of each part in the queue is read, the marked data is played through a timeline time event frame, and the marked data is marked and displayed based on a display timestamp in the marked data. Illustratively, the display timestamp of the tag A is 5s-10s, that is, the tag content of the tag A is displayed when the 5s time event frame is played, and the tag content of the tag A is removed when the 10s time event frame is played.
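The timestamped playback described above can be sketched as follows (a Python illustration of the queue idea; the field names and sample data are assumptions, not the patent's timeline event-frame mechanism):

```python
from collections import deque

# Sketch: each queue entry holds a component's marking content plus its
# start and end display timestamps; a time value selects visible marks.
marked_data = deque([
    {"label": "A", "content": "Load-bearing wall", "start": 5, "end": 10},
    {"label": "B", "content": "Window frame",      "start": 8, "end": 15},
])

def visible_marks(t):
    # A mark is shown from its start timestamp until its end timestamp.
    return [d["label"] for d in marked_data if d["start"] <= t < d["end"]]
```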
And finishing displaying the marked data of the current three-dimensional model, or dequeuing the marked data of the current three-dimensional model based on a displaying instruction of the next three-dimensional model.
According to the technical scheme, the marked contents of all the components in the three-dimensional model are dynamically displayed according to the display time stamp, so that the flow display of a digital process is realized, and the vividness and intuitiveness of the marked contents are improved.
Example two
Fig. 5 is a flowchart illustrating a model marking method according to a second embodiment of the present invention. This embodiment is optimized on the basis of the above embodiment, and the method includes:
s210, acquiring a three-dimensional model, and identifying scene information of the three-dimensional model and part information of a part to be marked.
S220, performing two-dimensional rendering on the scene information and the component information to generate a two-dimensional image to be marked.
And S230, determining attribute information of a marking assembly for marking each part to be marked based on the distribution state of each part to be marked in the image to be marked, wherein the marking assembly comprises a marking point, a marking line and a marking description area.
S240, judging whether the marking lines of the parts to be marked intersect and/or whether the marking description areas intersect; if not, executing step S260, and if so, executing step S250.
And S250, adjusting the attribute information of each marking assembly, and returning to execute the step S240.
And S260, marking each part to be marked in the image to be marked based on the attribute information of the marking assembly corresponding to each part to be marked and the marking content of the part to be marked.
In this embodiment, after the attribute information of the marking assembly of each to-be-marked component is determined, it is verified whether the marking assemblies of different to-be-marked components intersect or overlap; specifically, it is determined whether the marking lines of different to-be-marked components intersect and whether their marking description areas overlap. Intersecting marking lines and overlapping marking description areas are adjusted, avoiding an unclear display effect.
Referring to fig. 6, fig. 6 is a schematic diagram of the intersection of marking lines provided in the second embodiment of the present invention. For any two marking lines ab and cd, the segment ab intersects the segment cd if and only if points a and b lie on opposite sides of segment cd and points c and d lie on opposite sides of segment ab. Taking the marking line cd as the standard line and point c as the starting point, the line segments from the starting point to the other three vertices of the two marking lines are determined, yielding the directed segments cd, ca and cb, where vertex d lies on the same marking line as vertex c, and vertex a lies on the same marking line as vertex b. If the cross products of these directed segments satisfy (cd × ca)·(cd × cb) ≤ 0, then points a and b lie on opposite sides of segment cd; the symmetric test with ab as the standard line shows whether c and d lie on opposite sides of ab, and the marking lines ab and cd intersect when both tests hold.
Optionally, a marking line judgment code may be called, and whether any two marking lines intersect may be judged based on the marking line judgment code. For example, the marking line judgment code may be:
// u and v are the cross products giving the sides of c and d relative to segment ab;
// w and z are the cross products giving the sides of a and b relative to segment cd.
double u = (c.x - a.x) * (b.y - a.y) - (b.x - a.x) * (c.y - a.y);
double v = (d.x - a.x) * (b.y - a.y) - (b.x - a.x) * (d.y - a.y);
double w = (a.x - c.x) * (d.y - c.y) - (d.x - c.x) * (a.y - c.y);
double z = (b.x - c.x) * (d.y - c.y) - (d.x - c.x) * (b.y - c.y);
return (u * v <= 0.00000001 && w * z <= 0.00000001);
Here u, v, w and z in the code are the cross-product values of the corresponding directed segments (signed areas indicating on which side of a segment each endpoint lies), and x and y are the X-axis and Y-axis coordinate values of the points a, b, c and d.
When the marking line judgment code returns true, that is, when (u * v <= 0.00000001 && w * z <= 0.00000001) holds, the two marking lines are judged to intersect; otherwise, they do not intersect.
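The judgment code can be exercised as a self-contained routine; the following Python port (the point representation as (x, y) tuples and the function name are illustrative assumptions) mirrors the four cross products and the tolerance test term for term:

```python
def segments_intersect(a, b, c, d, eps=0.00000001):
    """Return True if marking lines ab and cd intersect.

    u and v give the side of c and d relative to segment ab;
    w and z give the side of a and b relative to segment cd,
    exactly as in the marking line judgment code.
    """
    u = (c[0] - a[0]) * (b[1] - a[1]) - (b[0] - a[0]) * (c[1] - a[1])
    v = (d[0] - a[0]) * (b[1] - a[1]) - (b[0] - a[0]) * (d[1] - a[1])
    w = (a[0] - c[0]) * (d[1] - c[1]) - (d[0] - c[0]) * (a[1] - c[1])
    z = (b[0] - c[0]) * (d[1] - c[1]) - (d[0] - c[0]) * (b[1] - c[1])
    # Opposite signs (product <= tolerance) in both tests means the
    # endpoints straddle each other's segment, so the segments cross.
    return u * v <= eps and w * z <= eps
```

For crossing diagonals such as (0,0)-(2,2) and (0,2)-(2,0) both products are negative and the routine reports an intersection; for parallel or disjoint segments it does not.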
Referring to fig. 7, fig. 7 is a schematic diagram illustrating the position relationships of the mark description areas according to the second embodiment of the present invention. In case 1 through case 5 of fig. 7, the two mark description areas have an overlapping region, i.e., they intersect. In this embodiment, whether two mark description areas intersect is determined from the distance between their center points and their side lengths. Specifically, the two mark description areas are determined to intersect when, in each coordinate direction, the distance between their center points is less than or equal to half the sum of their side lengths in that direction.
Specifically, let two mark description areas A and B be given by corner coordinates A[x01, y01, x02, y02] and B[x11, y11, x12, y12]. The distance Lx between the physical center points of area A and area B in the X direction is Lx = abs((x01 + x02)/2 - (x11 + x12)/2), and the distance Ly in the Y direction is Ly = abs((y01 + y02)/2 - (y11 + y12)/2). The side lengths of area A and area B in the X direction are Sax = abs(x01 - x02) and Sbx = abs(x11 - x12); the side lengths in the Y direction are Say = abs(y01 - y02) and Sby = abs(y11 - y12).
Accordingly, when Lx <= (Sax + Sbx)/2 && Ly <= (Say + Sby)/2, the two mark description areas A and B are determined to intersect.
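The center-distance test above can be sketched as a short routine (Python for illustration; the rectangle encoding [x1, y1, x2, y2] follows the description above, while the function name is an assumption):

```python
def regions_intersect(A, B):
    """Return True if two mark description areas, each given as
    [x1, y1, x2, y2] corner coordinates, have an overlapping region."""
    x01, y01, x02, y02 = A
    x11, y11, x12, y12 = B
    # Distances between physical center points in the X and Y directions.
    Lx = abs((x01 + x02) / 2 - (x11 + x12) / 2)
    Ly = abs((y01 + y02) / 2 - (y11 + y12) / 2)
    # Side lengths of each area in the X and Y directions.
    Sax, Sbx = abs(x01 - x02), abs(x11 - x12)
    Say, Sby = abs(y01 - y02), abs(y11 - y12)
    return Lx <= (Sax + Sbx) / 2 and Ly <= (Say + Sby) / 2
```

Note that with the <= comparison, areas that merely touch along an edge are also reported as intersecting, which is the conservative choice for avoiding crowded labels.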
For any two objects to be marked, when their marking lines intersect and/or their marking description areas intersect, the attribute parameters of either object to be marked are adjusted, and the marking lines and marking description areas are judged again in a loop until no marking lines intersect and no marking description areas intersect.
Specifically, adjusting the attribute parameter of any object to be marked may be modifying the labeling constant L of its marking line, for example, increasing or decreasing the labeling constant.
Each object to be marked is then marked based on the adjusted attribute information of its marking assembly, which improves the marking accuracy and avoids crossing and overlapping of the marking assemblies.
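The adjust-and-recheck cycle of steps S240-S250 might be organized as in the following sketch; the data shapes, the pluggable predicates, and the round limit are hypothetical, standing in for the intersection tests and the labeling-constant adjustment described above:

```python
def layout_marks(marks, intersects, adjust, max_rounds=100):
    """Re-adjust marking assemblies until no pair intersects.

    marks:      list of marking-assembly attribute records (shape assumed).
    intersects: predicate on two marks, true when their marking lines cross
                or their marking description areas overlap.
    adjust:     function returning an adjusted mark, e.g. with its labeling
                constant L increased or decreased.
    """
    for _ in range(max_rounds):  # guard against a layout that never settles
        clash = next(((i, j) for i in range(len(marks))
                      for j in range(i + 1, len(marks))
                      if intersects(marks[i], marks[j])), None)
        if clash is None:        # step S240: no intersection remains
            return marks
        # step S250: adjust one of the clashing marks, then re-judge
        marks[clash[0]] = adjust(marks[clash[0]])
    raise RuntimeError("mark layout did not converge")
```

With scalar stand-ins for marks (numbers that "clash" when closer than 1, adjusted by adding 1), the loop pushes marks apart until no pair clashes.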
According to the technical scheme provided by this embodiment, after the marking assemblies of the parts to be marked are determined, the attribute information of the marking assemblies of the different parts to be marked is verified; when the marking assemblies of any two parts to be marked intersect, the attribute information of the marking assemblies is adjusted until no intersection remains. This ensures that the marking assemblies of the parts to be marked are mutually independent and the marks are clear and intuitive, improving the marking accuracy.
EXAMPLE III
Fig. 8 is a schematic structural diagram of a model marking apparatus according to a third embodiment of the present invention, where the apparatus includes:
the information identification module 310 is configured to obtain a three-dimensional model, and identify scene information of the three-dimensional model and component information of a component to be marked;
a to-be-marked image generating module 320, configured to perform two-dimensional rendering on the scene information and the component information, and generate a two-dimensional to-be-marked image;
an attribute information determining module 330, configured to determine attribute information of a marking assembly that marks each to-be-marked component based on a distribution state of each to-be-marked component in the to-be-marked image;
a model marking module 340, configured to mark, in the image to be marked, each component to be marked based on the attribute information of the marking assembly corresponding to each component to be marked and the marking content of the component to be marked.
Optionally, the information identifying module 310 is configured to:
setting a main virtual camera and an interactive virtual camera in the three-dimensional model;
identifying scene information of the three-dimensional model based on the master virtual camera;
identifying component information for the component to be marked based on the interactive virtual camera.
Optionally, the to-be-marked image generating module 320 is configured to:
respectively carrying out coordinate conversion on the scene information of the three-dimensional model and the component information of the component to be marked to obtain two-dimensional coordinates corresponding to the scene information and the component information respectively;
rendering the scene information and the component to be marked based on the two-dimensional coordinates respectively corresponding to the scene information and the component information.
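In the simplest case, the coordinate conversion performed by this module reduces to projecting 3D coordinates through a virtual camera onto 2D screen coordinates. A minimal pinhole-projection sketch (the camera model and parameter names are assumptions for illustration, not the module's actual interface):

```python
def world_to_screen(point, fov_scale, width, height):
    """Project a 3D point (x, y, z) in camera space to 2D pixel
    coordinates using a simple pinhole model. Assumes z > 0 (the
    point is in front of the camera)."""
    x, y, z = point
    sx = width / 2 + fov_scale * x / z   # perspective divide in X
    sy = height / 2 - fov_scale * y / z  # Y flipped: screen Y grows downward
    return (sx, sy)
```

A point on the optical axis lands at the screen center, and points further from the camera (larger z) are drawn closer to it, which is the behavior the two-dimensional rendering of scene and component information relies on.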
Optionally, the marking assembly includes a marking point, a marking line and a marking description area;
the attribute information determination module 330 includes:
the distribution information determining unit is used for determining the distribution information of the current to-be-marked component in the current coordinate direction based on the length of the to-be-marked image in the current coordinate direction, the maximum first distance of each to-be-marked component in the current coordinate direction and the second distance between the current to-be-marked component and an adjacent marking point in the current coordinate direction;
a marking point position determining unit, configured to determine a marking point position of the current component to be marked based on a two-dimensional coordinate of the current component to be marked in the image to be marked and distribution information in each coordinate direction;
and the attribute information determining unit is used for determining the attribute information of the marking line and the marking description area based on the marking point position.
Optionally, the attribute information determining module 330 further includes:
the marking judging unit is used for judging whether the marking lines of the parts to be marked intersect and/or whether the marking description areas intersect;
and the mark adjusting unit is used for adjusting the attribute information of each marking assembly if the marking lines intersect and/or the marking description areas intersect.
Optionally, a tag information list of the three-dimensional model is provided, where the tag information list includes tags and tag contents of the components to be tagged.
Optionally, the attribute information determining module 330 further includes:
and the mark content updating unit is used for updating the mark content in the image to be marked based on the label when the mark content in the mark information list is updated.
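The tag-based update can be sketched as a dictionary keyed by component label, so that editing the mark information list re-targets the content shown in the image to be marked (the field names and example contents are illustrative assumptions):

```python
# Mark information list: label -> mark content (an assumed layout).
mark_info = {"wall-01": "C30 concrete wall", "beam-02": "steel beam"}

# Marks currently shown in the image to be marked, each referencing a label.
image_marks = [{"label": "wall-01", "content": "C30 concrete wall"}]

def update_mark_content(label, new_content):
    """Update the mark information list entry, then propagate the new
    content to every mark in the image that carries that label."""
    mark_info[label] = new_content
    for mark in image_marks:
        if mark["label"] == label:
            mark["content"] = new_content

update_mark_content("wall-01", "C35 concrete wall")
```

Keying the update on the label rather than on screen position means the displayed mark follows the component even after the layout is re-adjusted.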
Optionally, the apparatus further comprises:
the mark storage module is used for respectively setting the display time stamps of the components to be marked and storing the display time stamps and the mark contents of the components to be marked in a queue;
the mark reading module is used for reading the display time stamp and the mark content of each part to be marked in the corresponding storage queue in the three-dimensional model;
and the mark display module is used for dynamically displaying the mark content of each part to be marked based on the display time stamp.
The apparatus can execute the method provided by any embodiment of the present invention, and has the functional modules corresponding to the executed method together with its beneficial effects.
Example four
Fig. 9 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. Fig. 9 illustrates a block diagram of an electronic device 412 suitable for use in implementing embodiments of the present invention. The electronic device 412 shown in fig. 9 is only an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present invention. The device 412 is typically an electronic device that undertakes the model marking functions.
As shown in fig. 9, the electronic device 412 is in the form of a general purpose computing device. The components of the electronic device 412 may include, but are not limited to: one or more processors 416, a storage device 428, and a bus 418 that couples the various system components including the storage device 428 and the processors 416.
Bus 418 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Electronic device 412 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 412 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 428 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 430 and/or cache Memory 432. The electronic device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 9, commonly referred to as a "hard drive"). Although not shown in FIG. 9, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk-Read Only Memory (CD-ROM), a Digital Video disk (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Storage 428 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program 436 having a set (at least one) of program modules 426 may be stored, for example, in storage 428, such program modules 426 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination may comprise an implementation of a network environment. Program modules 426 generally perform the functions and/or methodologies of embodiments of the invention as described herein.
The electronic device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, camera, display 424, etc.), with one or more devices that enable a user to interact with the electronic device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, the electronic device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 420. As shown, the network adapter 420 communicates with the other modules of the electronic device 412 over the bus 418. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Array of Independent Disks (RAID) systems, tape drives, and data backup storage systems, to name a few.
The processor 416 executes various functional applications and data processing, such as implementing the model labeling methods provided by the above-described embodiments of the present invention, by executing programs stored in the storage device 428.
EXAMPLE five
The fifth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the model tagging method provided by the embodiments of the present invention.
Of course, the computer program stored on the computer-readable storage medium provided by the embodiments of the present invention is not limited to the method operations described above, and may also execute the model marking method provided by any embodiment of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. A method of model labeling, comprising:
acquiring a three-dimensional model, and identifying scene information of the three-dimensional model and component information of a component to be marked;
performing two-dimensional rendering on the scene information and the component information to generate a two-dimensional image to be marked;
determining attribute information of a marking assembly for marking each part to be marked based on the distribution state of each part to be marked in the image to be marked, wherein the marking assembly comprises marking points, marking lines and a marking description area;
marking each part to be marked in the image to be marked based on the attribute information of the marking assembly corresponding to each part to be marked and the marking content of the part to be marked;
wherein the determining of the attribute information of the marking assembly for marking each to-be-marked component based on the distribution state of each to-be-marked component in the to-be-marked image includes:
determining distribution information of the current part to be marked in the current coordinate direction based on the length of the image to be marked in the current coordinate direction, the maximum first distance of each part to be marked in the current coordinate direction and the second distance between the current part to be marked and an adjacent marking point in the current coordinate direction;
determining the marking point position of the current part to be marked based on the two-dimensional coordinates of the current part to be marked in the image to be marked and the distribution information in each coordinate direction;
determining attribute information of the marking line and the marking specification area based on the marking point position.
2. The method of claim 1, wherein the identifying scene information of the three-dimensional model and part information of the part to be marked comprises:
setting a main virtual camera and an interactive virtual camera in the three-dimensional model;
identifying scene information of the three-dimensional model based on the master virtual camera;
identifying component information for the component to be marked based on the interactive virtual camera.
3. The method of claim 1, wherein the two-dimensional rendering of the scene information and the component information comprises:
respectively carrying out coordinate conversion on the scene information of the three-dimensional model and the component information of the component to be marked to obtain two-dimensional coordinates corresponding to the scene information and the component information respectively;
rendering the scene information and the component to be marked based on the two-dimensional coordinates respectively corresponding to the scene information and the component information.
4. The method according to claim 1, wherein after marking each of the parts to be marked in the image to be marked, the method further comprises:
judging whether the marking lines of the parts to be marked intersect and/or whether the marking description areas intersect;
and if the marking lines intersect and/or the marking description areas intersect, adjusting the attribute information of each marking assembly.
5. The method of claim 1, further comprising:
and when the mark content pre-stored by any part is updated, updating the mark content of the corresponding part in the image to be marked based on the label of any part.
6. The method of claim 1, further comprising:
respectively setting display time stamps of the parts to be marked, and performing queue storage on the display time stamps and the marked contents of the parts to be marked;
correspondingly, the method further comprises the following steps:
reading the display time stamp and the marking content of each part to be marked in the corresponding storage queue in the three-dimensional model;
and dynamically displaying the marking content of each part to be marked based on the display time stamp.
7. A model marking apparatus, comprising:
the information identification module is used for acquiring a three-dimensional model and identifying scene information of the three-dimensional model and component information of a component to be marked;
the image to be marked generating module is used for performing two-dimensional rendering on the scene information and the component information to generate a two-dimensional image to be marked;
the attribute information determining module is used for determining attribute information of a marking assembly for marking each part to be marked based on the distribution state of each part to be marked in the image to be marked, wherein the marking assembly comprises a marking point, a marking line and a marking description area;
the model marking module is used for marking each part to be marked in the image to be marked based on the attribute information of the marking assembly corresponding to each part to be marked and the marking content of the part to be marked;
wherein, the attribute information determination module comprises:
the distribution information determining unit is used for determining the distribution information of the current to-be-marked component in the current coordinate direction based on the length of the to-be-marked image in the current coordinate direction, the maximum first distance of each to-be-marked component in the current coordinate direction and the second distance between the current to-be-marked component and an adjacent marking point in the current coordinate direction;
a marking point position determining unit, configured to determine a marking point position of the current component to be marked based on a two-dimensional coordinate of the current component to be marked in the image to be marked and distribution information in each coordinate direction;
and the attribute information determining unit is used for determining the attribute information of the marking line and the marking description area based on the marking point position.
8. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the model labeling method of any of claims 1-6.
9. A storage medium containing computer-executable instructions for performing the model tagging method of any one of claims 1-6 when executed by a computer processor.
CN202010720141.6A 2020-07-24 2020-07-24 Model marking method and device, storage medium and electronic equipment Active CN111597628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010720141.6A CN111597628B (en) 2020-07-24 2020-07-24 Model marking method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010720141.6A CN111597628B (en) 2020-07-24 2020-07-24 Model marking method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111597628A CN111597628A (en) 2020-08-28
CN111597628B true CN111597628B (en) 2020-11-20

Family

ID=72186557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010720141.6A Active CN111597628B (en) 2020-07-24 2020-07-24 Model marking method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111597628B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112809673B (en) * 2020-12-30 2022-07-19 广东博智林机器人有限公司 Robot coordinate determination method and device
CN114529686B (en) * 2022-04-21 2022-08-02 三一筑工科技股份有限公司 Building model generation method, device, equipment and medium
CN114840902B (en) * 2022-05-19 2023-03-24 三一筑工科技股份有限公司 Target object drawing method, device, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10572970B2 (en) * 2017-04-28 2020-02-25 Google Llc Extracting 2D floor plan from 3D GRID representation of interior space
CN108427785A (en) * 2017-08-12 2018-08-21 中民筑友科技投资有限公司 A kind of method and device of the X-Y scheme automatic marking based on BIM
US10175697B1 (en) * 2017-12-21 2019-01-08 Luminar Technologies, Inc. Object identification and labeling tool for training autonomous vehicle controllers
CN108959824B (en) * 2018-08-06 2020-10-02 上海营邑城市规划设计股份有限公司 BIM design section layer layering generation method for Ying Yi planning pipeline
CN110598195A (en) * 2019-09-16 2019-12-20 杭州群核信息技术有限公司 Automatic labeling and typesetting method for decoration drawing
CN111274927A (en) * 2020-01-17 2020-06-12 北京三快在线科技有限公司 Training data generation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111597628A (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN111597628B (en) Model marking method and device, storage medium and electronic equipment
CN109165401B (en) Method and device for generating two-dimensional construction map based on civil structure three-dimensional model
US20150302649A1 (en) Position identification method and system
CN111031293B (en) Panoramic monitoring display method, device and system and computer readable storage medium
CN110914870B (en) Annotation generation for image networks
CN107566793A (en) Method, apparatus, system and electronic equipment for remote assistance
KR101553273B1 (en) Method and Apparatus for Providing Augmented Reality Service
CN110084797B (en) Plane detection method, plane detection device, electronic equipment and storage medium
CN111882634A (en) Image rendering method, device and equipment and storage medium
WO2020259682A1 (en) Three-dimensional point cloud-based initial viewing angle control and presentation method and system
JP6768123B2 (en) Augmented reality methods and equipment
CN110807161A (en) Page framework rendering method, device, equipment and medium
WO2024060952A1 (en) Method and apparatus for rendering virtual objects, device, and medium
CN110727825A (en) Animation playing control method, device, server and storage medium
CN107978018B (en) Method and device for constructing three-dimensional graph model, electronic equipment and storage medium
US8902219B1 (en) Maintaining connection to embedded content using graphical elements
CN112637541A (en) Audio and video labeling method and device, computer equipment and storage medium
US20180039715A1 (en) System and method for facilitating an inspection process
CN111836093B (en) Video playing method, device, equipment and medium
CN112509135B (en) Element labeling method, element labeling device, element labeling equipment, element labeling storage medium and element labeling computer program product
CN114004972A (en) Image semantic segmentation method, device, equipment and storage medium
CN113129362A (en) Method and device for acquiring three-dimensional coordinate data
CN113486941B (en) Live image training sample generation method, model training method and electronic equipment
CN110660313A (en) Information presentation method and device
CN114329675A (en) Model generation method, model generation device, electronic device, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant