CN111538414B - Maintenance visual automatic judgment method and system - Google Patents


Info

Publication number
CN111538414B
Authority
CN
China
Prior art keywords
dimensional space
point
current iteration
point set
observation point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010348353.6A
Other languages
Chinese (zh)
Other versions
CN111538414A (en)
Inventor
吕川
梁传圣
耿杰
张欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Shuishi Information Technology Co ltd
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202010348353.6A priority Critical patent/CN111538414B/en
Publication of CN111538414A publication Critical patent/CN111538414A/en
Application granted granted Critical
Publication of CN111538414B publication Critical patent/CN111538414B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/04: Indexing scheme for image data processing or generation, in general involving 3D image data

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a maintenance visual automatic judgment method and system. The method comprises the following steps: determining a three-dimensional space observation point set and a three-dimensional space observed point set under the current iteration times; connecting each human eye point with every maintenance object point to obtain an initial visible line segment set under the current iteration times; deleting from the initial visible line segment set the segments located outside the visual cone whose vertex is the human eye, to obtain an initial solution set under the current iteration times; carrying out intersection detection between the initial solution set and the three-dimensional model of the maintenance scene products to obtain a visible solution set under the current iteration times; and calculating the system complexity statistical metric value under the current iteration times until the absolute value of the difference between this metric value and that of the previous iteration is smaller than a set value, thereby determining the visibility distribution result of the observed points in three-dimensional space. The invention can improve both judgment efficiency and judgment accuracy.

Description

Maintenance visual automatic judgment method and system
Technical Field
The invention relates to the technical field of maintainability, and in particular to a maintenance visual automatic judgment method and system.
Background
With the development of technology, products are becoming highly integrated, automated and intelligent, and maintenance difficulty increases accordingly. To ensure that products have good operational reliability and a low whole-life-cycle cost, maintainability design and analysis have become an indispensable part of the overall design process.
Maintenance visibility is an important factor in maintainability design and evaluation. Maintenance visibility, that is, whether maintenance personnel can see the maintenance object during operation, is one of the prerequisites for maintaining a product, and therefore needs to be judged. Good visibility design enables the maintainer to find the best operating position and posture, makes the spatial layout of the product as suitable for maintenance as possible, and thus reduces maintenance time and improves work efficiency; it is an important factor in realizing convenient and rapid maintenance. Conversely, if the maintenance position is poorly visible, repair becomes very difficult.
The maintenance visibility judgment methods commonly used at present are based mainly on physical prototypes or virtual prototypes. A physical prototype consumes a large amount of time and material resources and suffers from lag: judgment can be carried out only after the product design has been finalized. Simulation with a virtual prototype depends too much on evaluation by functional modules, and the process is strongly subjective, so the relationship between the person and the judged object cannot be established accurately; the judgment result easily differs considerably from the actual situation, and a quantitative result is difficult to obtain. Existing maintenance visibility judgment methods therefore suffer from long time consumption and poor accuracy.
Disclosure of Invention
Therefore, it is necessary to provide a maintenance visual automatic judgment method and system that improve judgment efficiency and judgment accuracy, so that the spatial layout of the product is as suitable for maintenance as possible, thereby reducing maintenance time and improving work efficiency.
In order to achieve the purpose, the invention provides the following scheme:
a maintenance visual automated judgment method comprises the following steps:
determining a three-dimensional space observation point set and a three-dimensional space observed point set under the current iteration times; when the points in the three-dimensional space observation point set are human eye points, the points in the three-dimensional space observed point set are maintenance object points; when the points in the three-dimensional space observation point set are maintenance object points, the points in the three-dimensional space observed point set are human eye points;
connecting each human eye point with every maintenance object point to obtain an initial visible line segment set under the current iteration times;
deleting the initial visible line segments positioned outside the visible cone taking human eyes as vertexes in the initial visible line segment set to obtain an initial solution set under the current iteration times;
constructing a three-dimensional model of a maintenance scene product;
performing intersection detection on the initial solution set and the three-dimensional model of the maintenance scene product to obtain a visual solution set under the current iteration times;
constructing a network model under the current iteration times by the visual solution set, and calculating a system complexity statistic metric value of the network model under the current iteration times;
judging whether the absolute value of the difference value between the system complexity statistic metric value of the network model under the current iteration number and the system complexity statistic metric value of the network model under the last iteration number is smaller than a set value or not to obtain a first judgment result;
if the first judgment result is yes, determining a visual distribution result of observed points in the three-dimensional space by the network model under the current iteration times;
and if the first judgment result is negative, adding a random discrete point to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, updating the current iteration times, taking the updated set as the three-dimensional space observation point set under the current iteration times, and returning to the step of determining the three-dimensional space observation point set and the three-dimensional space observed point set under the current iteration times.
Optionally, before the constructing a network model under the current iteration number from the visual solution set and calculating a system complexity statistics metric of the network model under the current iteration number, the method further includes:
judging whether the visual solution set is empty or not to obtain a second judgment result;
if the second judgment result is yes, adding a random discrete point to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, updating the current iteration times, taking the updated set as the three-dimensional space observation point set under the current iteration times, and returning to the step of determining the three-dimensional space observation point set and the three-dimensional space observed point set under the current iteration times.
Optionally, the determining, by the network model under the current iteration number, a visual distribution result of observed points in the three-dimensional space specifically includes:
calculating the degree of each observation point in the network model under the current iteration times;
calculating the ratio of the degree to the number of observed points in the three-dimensional space observed point set; the ratio represents the visual distribution result of observed points in the three-dimensional space.
Optionally, the method for determining the three-dimensional space observation point set and the three-dimensional space observed point set under the initial iteration number is as follows:
determining an observation point moving range and an observed point moving range; the observation point moving range and the observed point moving range are both closed intervals;
and randomly generating a plurality of three-dimensional space points in the observation point moving range according to a uniform distribution mode to form a three-dimensional space observation point set, and randomly generating a plurality of three-dimensional space points in the observed point moving range according to a uniform distribution mode to form a three-dimensional space observed point set.
The invention also provides a maintenance visual automatic judgment system, which comprises:
the point set determining module is used for determining a three-dimensional space observation point set and a three-dimensional space observed point set under the current iteration times; when the points in the three-dimensional space observation point set are human eye points, the points in the three-dimensional space observed point set are maintenance object points; when the points in the three-dimensional space observation point set are maintenance object points, the points in the three-dimensional space observed point set are human eye points;
the visible line segment set determining module is used for connecting each human eye point with every maintenance object point to obtain an initial visible line segment set under the current iteration times;
an initial solution set determining module, configured to delete an initial visible line segment located outside a visible cone with human eyes as vertices in the initial visible line segment set, to obtain an initial solution set under a current iteration number;
the model building module is used for building a three-dimensional model of a maintenance scene product;
the intersection detection module is used for carrying out intersection detection on the initial solution set and the three-dimensional model of the maintenance scene product to obtain a visual solution set under the current iteration times;
the metric value calculation module is used for constructing a network model under the current iteration times from the visual solution set and calculating a system complexity statistic metric value of the network model under the current iteration times;
the first judgment module is used for judging whether the absolute value of the difference value between the system complexity statistic metric value of the network model under the current iteration times and the system complexity statistic metric value of the network model under the last iteration times is smaller than a set value or not to obtain a first judgment result;
a visual distribution determining module, configured to determine, if the first determination result is yes, a visual distribution result of observed points in the three-dimensional space by using the network model under the current iteration number;
and the updating module is used for, if the first judgment result is negative, adding a random discrete point to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, updating the current iteration times, taking the updated set as the three-dimensional space observation point set under the current iteration times, and returning to the point set determining module.
Optionally, the maintenance visual automatic judgment system further includes:
the second judgment module is used for judging whether the visible solution set is empty to obtain a second judgment result; if the second judgment result is yes, adding a random discrete point to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, updating the current iteration times, taking the updated set as the three-dimensional space observation point set under the current iteration times, and returning to the point set determining module.
Optionally, the visual distribution determining module specifically includes:
the first calculation unit is used for calculating the degree of each observation point in the network model under the current iteration times;
the second calculating unit is used for calculating the ratio of the degree to the number of observed points in the three-dimensional space observed point set; the ratio represents the visual distribution result of observed points in the three-dimensional space.
Optionally, the maintenance visual automatic judgment system further includes:
the initial point set determining module is used for determining a three-dimensional space observation point set and a three-dimensional space observed point set under the initial iteration times; the initial point set determining module specifically includes:
a point moving range determining unit for determining an observation point moving range and an observed point moving range; the observation point moving range and the observed point moving range are both closed intervals;
and the space point random generation unit is used for randomly generating a plurality of three-dimensional space points in the observation point moving range in a uniformly distributed manner to form a three-dimensional space observation point set, and randomly generating a plurality of three-dimensional space points in the observed point moving range in a uniformly distributed manner to form a three-dimensional space observed point set.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a maintenance visual automatic judgment method and a system, which respectively aim at human eyes of maintenance personnel and maintenance object points in a maintenance scene, initialize a network model comprising the human eyes and the maintenance object points in the maintenance scene, consider the visual angle of a maintenance visual space under the biological characteristics of the human eyes, and judge the spatial relationship between the human eyes and the maintenance scene by means of the network model, thereby determining the visual distribution result of observed points (the human eyes or the maintenance object points) in a three-dimensional space, greatly reducing the subjective judgment and intervention in the maintenance visual judgment, realizing the maintenance visual automatic judgment, and improving the judgment efficiency and the judgment accuracy; the visual cone based on the human eye part is considered, the visual range can fall in the visual cone, and the accuracy is higher; the visibility of a maintenance object can be measured, and the viewpoint of an operator with good visibility can be found out; human eyes and maintenance objects are networked, so that the visibility degree of operators or the maintenance objects can be calculated quantitatively.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flowchart of a method for determining maintenance visual automation according to an embodiment of the present invention;
FIG. 2 is a flowchart of steps one to four in an embodiment of the present invention;
FIG. 3 is a schematic diagram of discretized eye points and repair object points in accordance with the present invention;
FIG. 4 is a diagram of an intersection detection scenario in accordance with the present invention;
FIG. 5 is a three-dimensional map of visibility of a maintenance object point according to the present invention;
FIG. 6 is a distribution diagram of the visibility of the human eye points in three-dimensional space according to the present invention;
fig. 7 is a schematic structural diagram of a maintenance visual automated determination system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a flowchart of a maintenance visual automated determination method according to an embodiment of the present invention.
Referring to fig. 1, the visual automatic repair judgment method of the embodiment includes:
step 101: determining a three-dimensional space observation point set and a three-dimensional space observed point set under the current iteration times; when a point in the three-dimensional space observation point set is an eye point, a point in the three-dimensional space observation point set is a maintenance object point; and when the point in the three-dimensional space observation point set is a maintenance object point, the point in the three-dimensional space observation point set is a human eye point.
Step 102: connecting each human eye point with every maintenance object point to obtain an initial visible line segment set under the current iteration times.
Step 103: and deleting the initial visible line segments positioned outside the visible cone taking human eyes as vertexes in the initial visible line segment set to obtain an initial solution set under the current iteration times.
Step 104: and constructing a three-dimensional model of the maintenance scene product.
Step 105: and carrying out intersection detection on the initial solution set and the three-dimensional model of the maintenance scene product to obtain a visual solution set under the current iteration times.
Step 106: and constructing a network model under the current iteration times by the visual solution set, and calculating a system complexity statistic metric value of the network model under the current iteration times.
Step 107: and judging whether the absolute value of the difference value between the system complexity statistical metric value under the current iteration number and the system complexity statistical metric value under the last iteration number is smaller than a set value or not, and obtaining a first judgment result. If the first determination result is yes, go to step 108; if the first determination result is negative, step 109 is executed.
Step 108: and determining the visual distribution result of the observed points in the three-dimensional space by the network model under the current iteration times.
Step 109: and adding a random discrete point in the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, updating the current iteration times, combining the updated three-dimensional space observation point set into a three-dimensional space observation point set under the current iteration times, and returning to the step 101.
Wherein, after the step 105, further comprising:
and judging whether the visual solution set is empty or not to obtain a second judgment result. If the second determination result is yes, step 109 is executed. If the second determination result is negative, continue to execute step 106.
Wherein, step 108 specifically includes:
calculating the degree of each observation point in the network model under the current iteration times; calculating the ratio of the degree to the number of observed points in the three-dimensional space observed point set; the ratio represents the visual distribution result of observed points in the three-dimensional space.
As an optional implementation manner, the method for determining the three-dimensional space observation point set and the three-dimensional space observed point set at the initial iteration number is: determining an observation point moving range and an observed point moving range, both of which are closed intervals; then randomly generating a plurality of three-dimensional space points within the observation point moving range according to a uniform distribution to form the three-dimensional space observation point set, and randomly generating a plurality of three-dimensional space points within the observed point moving range according to a uniform distribution to form the three-dimensional space observed point set.
The visual automatic judgment method for maintenance can improve the judgment efficiency and the judgment accuracy, so that the spatial position of a product is suitable for maintenance as much as possible, the maintenance time is shortened, and the working efficiency is improved.
A more specific example is provided below.
The idea of the maintenance visual automatic judgment method in this embodiment is as follows: first, discretize the human eye observation region and the maintenance object region, and initialize all sight lines from the human eyes to the maintenance object; taking into account the visual cone of the human eye observation range, perform interference detection between all sight lines and the products in the three-dimensional scene; after eliminating intersecting sight lines, convert the connection relationship between the human eye points and the maintenance object points into a network model; calculate, for each human eye point set and each maintenance object item, the network indices reflecting node transmissibility; and judge the maintenance visibility of the scene, the human eye points and the maintenance object points.
Step one, solving space point cloud under human eye and maintenance object moving range
Determining the moving range of the human eyes and the maintenance object in the three-dimensional space, establishing point clouds reflecting the human eye points and the moving space of the maintenance object, and representing the spatial distribution relationship of the human eyes and the maintenance object. The method specifically comprises the following steps:
Determine the moving range of the human eyes and the maintenance object in three-dimensional space, i.e., a closed interval on each of the x, y and z axes for both the human eyes and the maintenance object, and randomly generate m points within each closed range according to a uniform distribution as the point clouds representing the human eyes and the maintenance object. Each human eye point or maintenance object point satisfies the following constraint relationship in three-dimensional space:
x_lk ≤ x ≤ x_uk,  y_lk ≤ y ≤ y_uk,  z_lk ≤ z ≤ z_uk
u_lk ≤ u ≤ u_uk,  v_lk ≤ v ≤ v_uk,  w_lk ≤ w ≤ w_uk,  k = 1, 2, …, K
where k indexes the constraints, the subscript l denotes a lower constraint limit and u an upper constraint limit, (x, y, z) are the coordinates of a human eye point, and (u, v, w) are the coordinates of a maintenance object point. For example, x_lk and x_uk are the lower and upper limits of the x coordinate under the k-th constraint on the eye point, and the remaining subscripted bounds are defined analogously for the y, z, u, v and w coordinates.
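The uniform sampling of step one can be sketched as follows; the movement ranges and the point count m are illustrative assumptions, not values taken from the patent:

```python
import random

def sample_point_cloud(m, lower, upper):
    """Uniformly sample m three-dimensional points inside the closed box
    [lower[i], upper[i]] on each axis, per the constraint relationship above."""
    return [tuple(random.uniform(lower[i], upper[i]) for i in range(3))
            for _ in range(m)]

# Hypothetical movement ranges for the human eye points and maintenance
# object points (metres, chosen only for illustration).
eye_points = sample_point_cloud(100, (0.0, 0.0, 1.4), (0.5, 0.5, 1.8))
object_points = sample_point_cloud(100, (1.0, 0.0, 0.0), (1.5, 0.5, 0.5))
```

Each call returns an independent point cloud; the two clouds together give the spatial distribution relationship between the eyes and the maintenance object.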
Step two, generating an initial visible line segment solution set of human eyes-maintenance objects
In three-dimensional space, connect each human eye point with every maintenance object point to obtain the initialized set of human eye-maintenance object visible line segments; judge whether each segment lies within the visual cone whose vertex is the human eye, and delete the segments outside the cone, completing the generation of the initial solution.
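The visual cone filter of step two can be sketched as below. The gaze direction and the cone half-angle are assumed parameters (the patent does not fix a numerical field of view here); a sight segment is kept when the angle between it and the gaze axis does not exceed the half-angle:

```python
import math

def in_view_cone(eye, target, gaze_dir, half_angle_deg):
    """True if the sight segment from eye to target lies inside the visual
    cone with apex at the eye, axis gaze_dir and half-angle half_angle_deg."""
    v = [t - e for t, e in zip(target, eye)]
    nv = math.sqrt(sum(c * c for c in v))
    ng = math.sqrt(sum(c * c for c in gaze_dir))
    if nv == 0.0 or ng == 0.0:
        return False  # degenerate segment or gaze direction
    cos_angle = sum(a * b for a, b in zip(v, gaze_dir)) / (nv * ng)
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

A target straight ahead passes the test; a target perpendicular to the gaze axis is rejected for any half-angle below 90 degrees.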
Step three, constructing a maintenance scene three-dimensional model
Constructing a virtual environment supporting computational analysis, and reconstructing the virtual environment according to the original three-dimensional model and computational requirements of the product so as to simplify the process and improve the efficiency; and then representing the model in a parameter form according to the self characteristics of the reconstructed model. The method specifically comprises the following steps:
Reconstruct the three-dimensional model of the product: if the model is regular, represent it with a bounding sphere, bounding box, cylinder, capsule body or similar primitive; if it is irregular, mesh it and represent it with triangular patches. Then obtain the geometric feature data of each product. The data recorded for each product type are shown in table 1.
TABLE 1 product geometry characteristic data type Table
(Table 1 appears only as images in the original document; it lists the geometric feature data recorded for each product primitive type.)
Step four, judging the spatial relation between the visible line segment and the product
And D, judging the spatial relationship between all the visible line segments in the step two and all the products in the maintenance scene three-dimensional model constructed in the step three to obtain a visible solution. The specific process of judging the spatial relationship is as follows:
when the line segment AB and the surrounding ball O are judged, the distance d from the center of the ball to the line segment AB is solved and compared with the radius R of the surrounding ball O, if d is larger than R, the line segment AB and the surrounding ball O are not intersected, and if d is smaller than or equal to R, the line segment AB and the ball O are intersected.
When judging the line segment AB against the cylinder C, solve the intersection points of the line AB with the planes of the upper and lower base faces and with the lateral surface of the cylinder C. If an intersection point lies both on the segment AB and on the upper base, lower base or lateral surface, the segment AB intersects the cylinder C; otherwise they do not intersect.
When judging the line segment AB against the capsule body D, solve the shortest distance d between the segment AB and the axis of the capsule and compare it with the radius R of the capsule body D; if d > R, the segment AB and the capsule body D do not intersect, and if d ≤ R, they intersect.
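The capsule rule reduces to the shortest distance between two segments: the sight segment and the capsule axis. A sketch using the standard closest-point clamping method (an implementation choice, not prescribed by the patent):

```python
import math

def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _sub(a, b): return [x - y for x, y in zip(a, b)]
def _clamp(v, lo, hi): return max(lo, min(hi, v))

def segment_segment_distance(p1, q1, p2, q2, eps=1e-12):
    """Shortest distance between segments p1-q1 and p2-q2."""
    d1, d2, r = _sub(q1, p1), _sub(q2, p2), _sub(p1, p2)
    a, e, f = _dot(d1, d1), _dot(d2, d2), _dot(d2, r)
    if a <= eps and e <= eps:            # both segments are points
        s = t = 0.0
    elif a <= eps:                       # first segment is a point
        s, t = 0.0, _clamp(f / e, 0.0, 1.0)
    else:
        c = _dot(d1, r)
        if e <= eps:                     # second segment is a point
            t, s = 0.0, _clamp(-c / a, 0.0, 1.0)
        else:
            b = _dot(d1, d2)
            denom = a * e - b * b
            s = _clamp((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, _clamp(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, _clamp((b - c) / a, 0.0, 1.0)
    c1 = [p + s * d for p, d in zip(p1, d1)]
    c2 = [p + t * d for p, d in zip(p2, d2)]
    return math.sqrt(_dot(_sub(c1, c2), _sub(c1, c2)))

def segment_intersects_capsule(a, b, axis_p, axis_q, radius):
    """Segment AB intersects capsule D iff its distance to the capsule
    axis does not exceed the capsule radius, as stated in the text."""
    return segment_segment_distance(a, b, axis_p, axis_q) <= radius
```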
When judging the line segment AB against the bounding box E, six candidate axes are tested: the 3 face normals of the bounding box E and the cross products of these 3 face normals with the direction vector of the segment. If any one of these axes is a separating axis, that is, taking it as the normal of a hyperplane places the segment AB and the bounding box E on opposite sides, then the segment AB and the bounding box E do not intersect; otherwise they intersect.
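For the axis-aligned special case of the bounding-box test, the separating-axis idea reduces to the classic slab-clipping test sketched below; the oriented-box case would add the cross-product axes described above:

```python
def segment_intersects_aabb(a, b, box_min, box_max, eps=1e-12):
    """Clip segment AB against the three axis-aligned slabs of the box;
    the segment intersects the box iff a nonempty parameter range survives."""
    t0, t1 = 0.0, 1.0
    for i in range(3):
        d = b[i] - a[i]
        if abs(d) < eps:
            # Segment parallel to this slab: must start inside it.
            if a[i] < box_min[i] or a[i] > box_max[i]:
                return False
        else:
            ta = (box_min[i] - a[i]) / d
            tb = (box_max[i] - a[i]) / d
            if ta > tb:
                ta, tb = tb, ta
            t0, t1 = max(t0, ta), min(t1, tb)
            if t0 > t1:          # this axis separates segment and box
                return False
    return True
```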
When judging the line segment AB against the triangular patch F, solve the intersection point of the line AB with the plane of the triangular patch F; if the intersection point lies both on the segment AB and inside the triangular patch F, they intersect; otherwise they do not.
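The segment-triangle rule can be implemented with the well-known Moller-Trumbore algorithm; this is one standard way (not necessarily the patent's exact procedure) of intersecting the supporting line with the triangle's plane and checking containment:

```python
def _sub(a, b): return [x - y for x, y in zip(a, b)]
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def segment_intersects_triangle(a, b, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test restricted to the parameter range of segment AB."""
    d = _sub(b, a)
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    h = _cross(d, e2)
    det = _dot(e1, h)
    if abs(det) < eps:          # segment parallel to the triangle plane
        return False
    inv = 1.0 / det
    s = _sub(a, v0)
    u = inv * _dot(s, h)        # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return False
    q = _cross(s, e1)
    v = inv * _dot(d, q)        # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return False
    t = inv * _dot(e2, q)       # parameter along AB
    return 0.0 <= t <= 1.0
```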
Each group of line segments is checked against the environment in turn: if a segment does not intersect the environment, it is kept as a visible segment; otherwise it is deleted from the visible segments. This completes the construction of the human eye-maintenance object network model. The specific process of steps one to four is shown in fig. 2.
Step five, generating a network of human eyes-maintenance objects
A network model is established according to the connection relations between human eye points and maintenance object points contained in the visual solution set of step four; the network index of each node is calculated to quantify and characterize the maintenance visibility of the product. The system complexity statistical metric is computed, steps one to four are repeated for another iteration, and the metric is computed again; if the absolute value of the difference between the metric after the iteration and that of the previous generation is smaller than the required precision, the simulation stops, otherwise the iteration continues. Specifically, the method comprises the following steps:
according to the visible states of the observed points in the system, the degree to which the discrete points deviate from the equilibrium state and the Shannon information are considered, and the system complexity statistical metric is calculated and used as the judgment index for the iteration stop condition. After the iteration terminates, the degree of each node in the network is calculated: the ratio of the degree of a maintenance object node to the number of human eye observation points quantifies the maintenance visibility of that maintenance object, and the ratio of the degree of a human eye node to the number of maintenance object points quantifies the maintenance visibility of that observation point.
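The degree-ratio quantification above can be sketched on the bipartite eye-object network as follows. The patent does not give the exact formula of the system complexity statistical metric; the sketch below assumes one plausible form, the Shannon entropy of the node-state distribution, and all names are illustrative:

```python
import math
from collections import Counter

def degree_ratios(edges, n_eyes, n_objects):
    """Degree of each node divided by the number of nodes on the
    opposite side of the bipartite eye-object visibility network.
    edges: list of visible (eye_index, object_index) pairs."""
    eye_deg = [0] * n_eyes
    obj_deg = [0] * n_objects
    for e, o in edges:
        eye_deg[e] += 1
        obj_deg[o] += 1
    eye_vis = [d / n_objects for d in eye_deg]   # visibility of each eye point
    obj_vis = [d / n_eyes for d in obj_deg]      # visibility of each object point
    return eye_vis, obj_vis

def shannon_entropy(states):
    """Shannon information of a distribution of node states; assumed here
    as the form of the 'system complexity statistical metric'."""
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in Counter(states).values())
```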
The maintenance visual automatic judgment method provided by this embodiment reduces the involvement of maintenance personnel, offering a high degree of automation and stronger objectivity; it accounts for the view cone of the human eye, so the visible range always falls within the view cone; it can measure the visibility of a maintenance object and find operator viewpoints with good visibility; and by networking the human eye points and the maintenance objects, the visibility of operators or maintenance objects can be calculated quantitatively.
In practical application, the specific maintenance visual automatic judgment method has the following flow:
(1) Solving the space point clouds within the moving ranges of the human eyes and the maintenance object
Respectively determine the moving ranges of the observation points and the maintenance object along the x, y and z axes, each forming a closed interval, where x, y and z denote the moving range of the human eye observation points and u, v and w denote the moving range of the maintenance object (-400< x < -100, 100< y <400, 100< z <400, 0< u <500, 0< v <500 and 0< w <500);
1000 three-dimensional points are randomly generated with a uniform distribution in each interval to form the point clouds of the human eye points and the maintenance object points, as shown in fig. 3: the left cube represents the human eye observation region and the right cube represents the maintenance object region.
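The point-cloud generation above can be sketched with uniform sampling over the two boxes (illustrative code; the function name and seeding are assumptions):

```python
import random

def sample_box(n, x_range, y_range, z_range, rng):
    """Uniformly sample n points inside the axis-aligned box given by
    the three coordinate intervals."""
    return [(rng.uniform(*x_range), rng.uniform(*y_range), rng.uniform(*z_range))
            for _ in range(n)]

rng = random.Random(42)  # fixed seed for reproducibility of the sketch
# Ranges taken from the embodiment: x, y, z for eye points; u, v, w for objects.
eye_points = sample_box(1000, (-400, -100), (100, 400), (100, 400), rng)
object_points = sample_box(1000, (0, 500), (0, 500), (0, 500), rng)
```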
(2) Generating a solution set of human eye-maintenance object initial visible line segments
Connect each human eye point with each maintenance object point to obtain the initialized visible line segments; judge whether each visible segment lies within the view cone whose apex is the human eye point; keep the segments inside the view cone and delete the rest.
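The view-cone filter can be sketched as an angle test between the eye-to-target direction and an assumed gaze direction (the patent does not specify the cone parameters; the half-angle and gaze vector below are illustrative):

```python
import math

def in_view_cone(eye, target, gaze, half_angle_deg):
    """True if the segment eye->target lies inside the view cone with apex
    at the eye, axis along `gaze`, and the given half-angle."""
    v = [target[i] - eye[i] for i in range(3)]
    nv = math.sqrt(sum(c * c for c in v))
    ng = math.sqrt(sum(c * c for c in gaze))
    if nv == 0.0 or ng == 0.0:             # degenerate segment or gaze vector
        return False
    cos_angle = sum(v[i] * gaze[i] for i in range(3)) / (nv * ng)
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```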
(3) Constructing a three-dimensional model of a scene
1) Fig. 4 shows intersection detection scene 1, which contains 6 cuboids representing the products in the scene.
2) In scene 1, each rectangular solid (bounding box) is represented by a center coordinate, an extent vector (half of the three side lengths), and local coordinate axes (the direction vectors of the three sides). The geometric characteristics of each object in scene 1 and scene 2 are shown in table 2.
Table 2 table of geometric characteristics of objects in scene 1
(4) Judgment of spatial relation between visual line segment and product
Perform intersection detection between the visible line segments of step (2) and the products in scene 1. If a segment intersects any cuboid, that segment is not visible.
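The filtering step above can be sketched as a loop that drops every candidate segment hitting any scene obstacle, with the primitive intersection test passed in as a callable (illustrative code; names are assumptions):

```python
def filter_visible(segments, obstacles, intersects):
    """Keep only the candidate segments that hit none of the obstacles.
    `intersects(segment, obstacle)` may be any of the primitive tests
    (sphere, cylinder, capsule, bounding box, triangle) described earlier."""
    return [seg for seg in segments
            if not any(intersects(seg, ob) for ob in obstacles)]

# Toy usage: a stand-in test that flags any segment ending at positive x.
segments = [((0, 0, 0), (1, 0, 0)), ((0, 0, 0), (0, 1, 0))]
def toy_hit(seg, obstacle):
    return seg[1][0] > 0

visible = filter_visible(segments, ["box"], toy_hit)
```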
(5) Quantifying and characterizing maintenance visibility and controlling simulation stop
When the human eye points are fixed (the human eye points serve as observed points and the maintenance object points as observation points), the algorithm converges when the number of maintenance object points iterates to 1753; the distribution of maintenance object point visibility in three-dimensional space is shown in fig. 5.
When the maintenance object points are fixed (the maintenance object points serve as observed points and the human eye points as observation points), the algorithm converges when the number of human eye points iterates to 1800; the distribution of human eye point visibility in three-dimensional space is shown in fig. 6.
The invention also provides a maintenance visual automatic judgment system, and fig. 7 is a schematic structural diagram of the maintenance visual automatic judgment system according to the embodiment of the invention.
Referring to fig. 7, the maintenance visual automation determination system includes:
a point set determining module 201, configured to determine a three-dimensional space observation point set and a three-dimensional space observed point set under the current iteration number; when the points in the three-dimensional space observation point set are human eye points, the points in the three-dimensional space observed point set are maintenance object points; and when the points in the three-dimensional space observation point set are maintenance object points, the points in the three-dimensional space observed point set are human eye points.
A visible line segment set determining module 202, configured to connect each eye point with any one of the maintenance object points, to obtain an initial visible line segment set in the current iteration number.
An initial solution set determining module 203, configured to delete an initial visible line segment located outside a visible cone with human eyes as vertices in the initial visible line segment set, so as to obtain an initial solution set in the current iteration number.
And the model building module 204 is used for building a three-dimensional model of the maintenance scene product.
And the intersection detection module 205 is configured to perform intersection detection on the initial solution set and the three-dimensional model of the maintenance scene product to obtain a visual solution set under the current iteration number.
And the metric value calculating module 206 is configured to construct a network model under the current iteration number from the visual solution set, and calculate a system complexity statistics metric value of the network model under the current iteration number.
The first determining module 207 is configured to determine whether an absolute value of a difference between the system complexity statistical metric of the network model in the current iteration and the system complexity statistical metric of the network model in the previous iteration is smaller than a set value, so as to obtain a first determining result.
And a visual distribution determining module 208, configured to determine, if the first determination result is yes, a visual distribution result of the observed point in the three-dimensional space according to the network model in the current iteration number.
An updating module 209, configured to, if the first determination result is negative, add a random discrete point to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, update the current iteration number, take the updated three-dimensional space observation point set as the three-dimensional space observation point set under the current iteration number, and return to the point set determining module 201.
As an optional implementation manner, the maintenance visual automatic determination system further includes:
the second judgment module is used for judging whether the visual solution set is empty to obtain a second judgment result; if the second judgment result is yes, a random discrete point is added to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, the current iteration number is updated, the updated three-dimensional space observation point set is taken as the three-dimensional space observation point set under the current iteration number, and processing returns to the point set determining module.
As an optional implementation manner, the visual distribution determining module specifically includes:
and the first calculating unit is used for calculating the degree of each observation point in the network model under the current iteration times.
The second calculating unit is used for calculating the ratio of the degree to the number of observed points in the three-dimensional space observed point set; the ratio represents the visual distribution result of observed points in the three-dimensional space.
As an optional implementation manner, the maintenance visual automatic determination system further includes:
the initial point set determining module is used for determining a three-dimensional space observation point set and a three-dimensional space observed point set under the initial iteration times; the initial point set determining module specifically includes:
a point moving range determining unit for determining an observation point moving range and an observed point moving range; the observation point moving range and the observed point moving range are both closed intervals;
and the space point random generation unit is used for randomly generating a plurality of three-dimensional space points in the observation point moving range in a uniformly distributed manner to form a three-dimensional space observation point set, and randomly generating a plurality of three-dimensional space points in the observed point moving range in a uniformly distributed manner to form a three-dimensional space observed point set.
The maintenance visual automatic judgment system of this embodiment improves judgment efficiency and accuracy, making the spatial position of the product as suitable for maintenance as possible, thereby reducing maintenance time and improving work efficiency.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and its core concept; meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the invention.

Claims (8)

1. A maintenance visual automatic judgment method is characterized by comprising the following steps:
determining a three-dimensional space observation point set and a three-dimensional space observed point set under the current iteration times; when the points in the three-dimensional space observation point set are human eye points, the points in the three-dimensional space observed point set are maintenance object points; when the points in the three-dimensional space observation point set are maintenance object points, the points in the three-dimensional space observed point set are human eye points;
connecting each human eye point with any one maintenance object point to obtain an initial visible line segment set under the current iteration times;
deleting the initial visible line segments positioned outside the visible cone taking human eyes as vertexes in the initial visible line segment set to obtain an initial solution set under the current iteration times;
constructing a three-dimensional model of a maintenance scene product;
performing intersection detection on the initial solution set and the three-dimensional model of the maintenance scene product to obtain a visual solution set under the current iteration times;
constructing a network model under the current iteration times by the visual solution set, and calculating a system complexity statistic metric value of the network model under the current iteration times;
judging whether the absolute value of the difference value between the system complexity statistic metric value of the network model under the current iteration number and the system complexity statistic metric value of the network model under the last iteration number is smaller than a set value or not to obtain a first judgment result;
if the first judgment result is yes, determining a visual distribution result of observed points in the three-dimensional space by the network model under the current iteration times;
and if the first judgment result is negative, adding a random discrete point to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, updating the current iteration times, taking the updated three-dimensional space observation point set as the three-dimensional space observation point set under the current iteration times, and returning to the step of determining the three-dimensional space observation point set and the three-dimensional space observed point set under the current iteration times.
2. The method of claim 1, wherein before constructing the network model at the current iteration from the set of visual solutions and calculating the system complexity statistics metric value of the network model at the current iteration, the method further comprises:
judging whether the visual solution set is empty or not to obtain a second judgment result;
if the second judgment result is yes, adding a random discrete point to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, updating the current iteration times, taking the updated three-dimensional space observation point set as the three-dimensional space observation point set under the current iteration times, and returning to the step of determining the three-dimensional space observation point set and the three-dimensional space observed point set under the current iteration times.
3. The method according to claim 1, wherein the determining a result of the visual distribution of observed points in a three-dimensional space from the network model at the current iteration number specifically includes:
calculating the degree of each observation point in the network model under the current iteration times;
calculating the ratio of the degree to the number of observed points in the three-dimensional space observed point set; the ratio represents the visual distribution result of observed points in the three-dimensional space.
4. The maintenance visual automatic judgment method according to claim 1, wherein the determination method of the three-dimensional space observation point set and the three-dimensional space observed point set under the initial iteration times comprises:
determining an observation point moving range and an observed point moving range; the observation point moving range and the observed point moving range are both closed intervals;
and randomly generating a plurality of three-dimensional space points in the observation point moving range according to a uniform distribution mode to form a three-dimensional space observation point set, and randomly generating a plurality of three-dimensional space points in the observed point moving range according to a uniform distribution mode to form a three-dimensional space observed point set.
5. A visual automated diagnostic system for maintenance, comprising:
the point set determining module is used for determining a three-dimensional space observation point set and a three-dimensional space observed point set under the current iteration times; when the points in the three-dimensional space observation point set are human eye points, the points in the three-dimensional space observed point set are maintenance object points; when the points in the three-dimensional space observation point set are maintenance object points, the points in the three-dimensional space observed point set are human eye points;
the visible line segment set determining module is used for connecting each eye point with any one maintenance object point to obtain an initial visible line segment set under the current iteration times;
an initial solution set determining module, configured to delete an initial visible line segment located outside a visible cone with human eyes as vertices in the initial visible line segment set, to obtain an initial solution set under a current iteration number;
the model building module is used for building a three-dimensional model of a maintenance scene product;
the intersection detection module is used for carrying out intersection detection on the initial solution set and the three-dimensional model of the maintenance scene product to obtain a visual solution set under the current iteration times;
the metric value calculation module is used for constructing a network model under the current iteration times from the visual solution set and calculating a system complexity statistic metric value of the network model under the current iteration times;
the first judgment module is used for judging whether the absolute value of the difference value between the system complexity statistic metric value of the network model under the current iteration times and the system complexity statistic metric value of the network model under the last iteration times is smaller than a set value or not to obtain a first judgment result;
a visual distribution determining module, configured to determine, if the first determination result is yes, a visual distribution result of observed points in the three-dimensional space by using the network model under the current iteration number;
and the updating module is used for, if the first judgment result is negative, adding a random discrete point to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, updating the current iteration times, taking the updated three-dimensional space observation point set as the three-dimensional space observation point set under the current iteration times, and returning to the point set determining module.
6. The visual automated repair decision system of claim 5, further comprising:
the second judgment module is used for judging whether the visual solution set is empty to obtain a second judgment result; if the second judgment result is yes, adding a random discrete point to the three-dimensional space observation point set to obtain an updated three-dimensional space observation point set, updating the current iteration times, taking the updated three-dimensional space observation point set as the three-dimensional space observation point set under the current iteration times, and returning to the point set determination module.
7. The visual automated judgment system for repair of claim 5, wherein the visual distribution determination module specifically comprises:
the first calculation unit is used for calculating the degree of each observation point in the network model under the current iteration times;
the second calculating unit is used for calculating the ratio of the degree to the number of observed points in the three-dimensional space observed point set; the ratio represents the visual distribution result of observed points in the three-dimensional space.
8. The visual automated repair decision system of claim 5, further comprising:
the initial point set determining module is used for determining a three-dimensional space observation point set and a three-dimensional space observed point set under the initial iteration times; the initial point set determining module specifically includes:
a point moving range determining unit for determining an observation point moving range and an observed point moving range; the observation point moving range and the observed point moving range are both closed intervals;
and the space point random generation unit is used for randomly generating a plurality of three-dimensional space points in the observation point moving range in a uniformly distributed manner to form a three-dimensional space observation point set, and randomly generating a plurality of three-dimensional space points in the observed point moving range in a uniformly distributed manner to form a three-dimensional space observed point set.
CN202010348353.6A 2020-04-28 2020-04-28 Maintenance visual automatic judgment method and system Active CN111538414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010348353.6A CN111538414B (en) 2020-04-28 2020-04-28 Maintenance visual automatic judgment method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010348353.6A CN111538414B (en) 2020-04-28 2020-04-28 Maintenance visual automatic judgment method and system

Publications (2)

Publication Number Publication Date
CN111538414A CN111538414A (en) 2020-08-14
CN111538414B true CN111538414B (en) 2021-06-11

Family

ID=71978921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010348353.6A Active CN111538414B (en) 2020-04-28 2020-04-28 Maintenance visual automatic judgment method and system

Country Status (1)

Country Link
CN (1) CN111538414B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013144455A1 (en) * 2012-03-29 2013-10-03 Elomatic Oy Facility maintenance system
CN108363491A (en) * 2018-03-02 2018-08-03 北京空间技术研制试验中心 Spacecraft maintainable technology on-orbit terrestrial virtual verifies system and method
CN109961518A (en) * 2017-12-14 2019-07-02 北京航空航天大学 A kind of Virtual Maintenance human body based on visual range is up to determination method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140095244A1 (en) * 2012-10-02 2014-04-03 Computer Sciences Corporation Facility visualization and monitoring
JP2018081570A (en) * 2016-11-17 2018-05-24 株式会社Nttファシリティーズ Information visualization system, information visualization method, and program
CN107578110A (en) * 2017-09-19 2018-01-12 北京枭龙科技有限公司 A kind of repair message method for visualizing based on augmented reality

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013144455A1 (en) * 2012-03-29 2013-10-03 Elomatic Oy Facility maintenance system
CN109961518A (en) * 2017-12-14 2019-07-02 北京航空航天大学 A kind of Virtual Maintenance human body based on visual range is up to determination method
CN108363491A (en) * 2018-03-02 2018-08-03 北京空间技术研制试验中心 Spacecraft maintainable technology on-orbit terrestrial virtual verifies system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Maintenance visual accessibility analysis based on rapid prototyping technology; Su Xujun, Wang Guangyan; Modern Manufacturing Engineering; 20150331 (No. 3); 116-120 *
Reachable-domain calculation and visualization method for maintenance tool use; Fang Xiongbing, Tian Zhengdong, Lin Rui, Li Taotao; Chinese Journal of Ship Research; 20161031; Vol. 11 (No. 5); 11-18, 41 *

Also Published As

Publication number Publication date
CN111538414A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
US6791549B2 (en) Systems and methods for simulating frames of complex virtual environments
US6809738B2 (en) Performing memory management operations to provide displays of complex virtual environments
CN107481311B (en) Three-dimensional city model rendering method and device
US20030117397A1 (en) Systems and methods for generating virtual reality (VR) file(s) for complex virtual environments
CN108776993A (en) The modeling method and buried cable work well modeling method of three-dimensional point cloud with hole
Panchetti et al. Towards recovery of complex shapes in meshes using digital images for reverse engineering applications
CN109872394A (en) Long-narrow triangular mesh grid optimization method based on least square method supporting vector machine
CN112755535A (en) Illumination rendering method and device, storage medium and computer equipment
JP4829885B2 (en) Method and system for identifying proximity regions between several digitally simulated geometric objects
US20030117398A1 (en) Systems and methods for rendering frames of complex virtual environments
Park et al. Reverse engineering with a structured light system
CN109961514A (en) A kind of cutting deformation emulating method, device, storage medium and terminal device
Bischoff et al. Snakes on triangle meshes
CN107016727A (en) A kind of three-dimensional scenic optimum management method of transmission line of electricity
CN111538414B (en) Maintenance visual automatic judgment method and system
CN116246069B (en) Method and device for self-adaptive terrain point cloud filtering, intelligent terminal and storage medium
CN108090965A (en) Support the 3D roaming collision checking methods of massive spatial data
JP3928016B2 (en) Triangular mesh generation method and program using maximum angle method
KR20240050839A (en) Construction Management: Development of 3D engine-based geometric optimization method for Building Information Model Visualization in Augmented Reality
Xiong Reconstructing and correcting 3d building models using roof topology graphs
CN113886950B (en) Airborne equipment quality characteristic simulation method
Johnson et al. Computing a heuristic solution to the watchman route problem by means of photon mapping within a 3D virtual environment testbed
CN112330804B (en) Local deformable three-dimensional model contact detection method
Iwasaki et al. GPU-based rendering of point-sampled water surfaces
Osipov et al. Optimization of the process of 3D visualization of the model of urban environment objects generated on the basis of the attributive information from a digital map

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220712

Address after: 450001 No.89, science Avenue, high tech Industrial Development Zone, Zhengzhou City, Henan Province

Patentee after: Zhengzhou Shuishi Information Technology Co.,Ltd.

Address before: 100191 No. 37, Haidian District, Beijing, Xueyuan Road

Patentee before: BEIHANG University