CN118038124A - Assembly visibility detection method and device - Google Patents
- Publication number
- CN118038124A (application number CN202410014599.8A)
- Authority
- CN
- China
- Prior art keywords
- model
- visibility
- assembly
- standard component
- standard
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V10/764 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F3/013 — Eye tracking input arrangements
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06V40/197 — Eye characteristics: matching; classification
Abstract
The application provides an assembly visibility detection method and device. The method comprises the following steps: importing a vehicle architecture model and a standard component model, and assembling the standard component model onto the vehicle architecture model; establishing a human body model on the same horizontal plane as the vehicle architecture model, and adjusting the pose of the human body model based on a visual range, where the visual range is the field of view that can be observed when an eye point on the human body model gazes at the vehicle architecture model; providing a view plane, and displaying the visible content of the visual range in the view plane; and determining an assembly visibility detection result for the standard component model based on the position of the standard component model in the view plane and its visible area in the view plane. By detecting the assembly visibility of the standard component model in advance, the assembly position can be adjusted before production, saving assembly time and production cost.
Description
Technical Field
The application relates to the field of vehicles, in particular to an assembly visibility detection method and device.
Background
Final assembly occupies an important place in the automotive industry. In the prior art, automobile manufacturers' factories almost universally use assembly lines, on which the individual assemblies are fitted together to form a complete automobile.
At present, assembly visibility is an important inspection item both at the research and development end and at the final assembly process end, and can be regarded as a field-of-view requirement. However, the test standard for assembly visibility is currently ambiguous, and there is no clear judgment criterion.
Existing vision-related patents check the driver's front and rear fields of view; there is no method for detecting the assembly visibility experienced by an assembly worker, so assembly visibility cannot be accurately judged. If the assembly worker's visibility is not checked in advance, parts are installed blindly; when a part later proves difficult to assemble or modify, the die must be changed or manufactured again, increasing assembly time and production cost.
Disclosure of Invention
Accordingly, the present application is directed to an assembly visibility detection method and device that overcome at least one of the above-mentioned drawbacks.
In a first aspect, an embodiment of the present application provides an assembly visibility detection method, including: importing a vehicle architecture model and a standard component model, and assembling the standard component model onto the vehicle architecture model; establishing a human body model on the same horizontal plane as the vehicle architecture model, and adjusting the pose of the human body model based on a preset area within a visual range, where the visual range is the field of view that can be observed when an eye point on the human body model gazes at the vehicle architecture model; providing a view plane, and displaying the visible content of the visual range in the view plane; and determining an assembly visibility detection result for the standard component model based on the position of the standard component model in the view plane and its visible area in the view plane.
In an alternative embodiment of the application, the pose of the human body model is adjusted in at least one of the following ways: adjusting the placement position of the human body model; adjusting the placement posture of the human body model; and adjusting the visual angle of the eye point of the human body model.
In an alternative embodiment of the application, the placement position of the human body model is adjusted by dragging the whole human body model to change the distance and direction between it and the vehicle architecture model; the placement posture is adjusted by dragging the shoulder joint points, crotch point, knee joint points and heel points of the human body model's trunk; and the visual angle of the eye point is adjusted by rotating the eye point in the horizontal and/or vertical direction.
In an alternative embodiment of the present application, the visual range includes a plurality of visual areas comprising a first preset area, a second preset area and a third preset area, ordered from high to low visibility level. Adjusting the pose of the human body model based on the visual range includes: displaying the preset areas of different visibility levels distinguishably within the visible range; and adjusting the pose of the human body model so that the standard component model falls in a preset area of high visibility level within the visible range.
In an alternative embodiment of the present application, determining the assembly visibility detection result of the standard component model based on its position and visible area in the view plane includes: determining the preset area in which the standard component model is located in the view plane; determining the ratio of the visible area of the standard component model in the view plane to the frontal area of the standard component model; and determining the assembly visibility detection result based on that preset area and that ratio.
In an optional embodiment of the present application, determining the assembly visibility detection result based on the preset area in which the standard component model is located in the view plane and the ratio includes: if the standard component model is located in the first preset area and the ratio is greater than a preset value, determining a first detection result; if it is located in the second preset area and the ratio is greater than the preset value, determining a second detection result; if it is located in the third preset area and the ratio is greater than the preset value, determining a third detection result; if it is located in the first preset area and the ratio is not greater than the preset value, determining a second detection result; if it is located in the second preset area and the ratio is not greater than the preset value, determining a third detection result; and if it is located in the third preset area and the ratio is not greater than the preset value, determining a third detection result.
In an alternative embodiment of the present application, the first detection result is that the fitting position of the standard component model on the vehicle architecture model does not need to be modified, the second detection result is that modification of the fitting position needs to be further confirmed, and the third detection result is that the fitting position needs to be modified.
In a second aspect, an embodiment of the present application further provides an assembly visibility detection device, including: an import module, configured to import a vehicle architecture model and a standard component model and assemble the standard component model onto the vehicle architecture model; a building module, configured to build a human body model on the same horizontal plane as the vehicle architecture model and adjust its pose based on a visual range, where the visual range is the field of view that can be observed when an eye point on the human body model gazes at the vehicle architecture model; a display module, configured to provide a view plane and display the visible content of the visual range in the view plane; and a determining module, configured to determine an assembly visibility detection result for the standard component model based on its position and visible area in the view plane.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the method as described above.
In a fourth aspect, embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as described above.
The embodiments of the present application provide an assembly visibility detection method and device. The pose of the human body model is adjusted based on the visual range of its eye point, and the position and visible area in the view plane of the standard component model to be assembled onto the vehicle architecture model are determined, yielding an assembly visibility detection result. This avoids the situation where assembly visibility is never judged and parts are installed blindly, after which a part that proves difficult to modify forces the die to be changed or remanufactured, increasing assembly time and production cost.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an assembly visibility detection method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of the human body model in the assembly visibility detection method according to an embodiment of the present application;
FIG. 3 is a schematic view of the field of view observed by the eye point of the human body model in the assembly visibility detection method according to an embodiment of the present application;
FIG. 4 is a schematic view of the plurality of visual areas of the visual range of the human body model in the assembly visibility detection method according to an embodiment of the present application;
FIG. 5 is a schematic view of a view plane in an assembly visibility detection method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the viewable area of a standard part model in a field of view provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an assembly visibility detecting device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. Based on the embodiments of the present application, every other embodiment obtained by a person skilled in the art without making any inventive effort falls within the scope of protection of the present application.
First, an application scenario to which the present application is applicable will be described. The application can be applied to the field of vehicles.
In the prior art, automobile manufacturers' factories almost universally use assembly lines, on which each assembly is fitted to form a complete automobile, yet whether assembly visibility is good cannot be accurately judged. If the assembly worker's visibility is not detected in advance, installation is done blindly, and if a part is then found difficult to assemble or modify, the die must be changed or manufactured again, increasing assembly time and production cost.
To address at least one of the above problems, the present application provides an assembly visibility detection method and device that determine the position of the standard component model in the view plane and its visible area in the view plane to obtain an assembly visibility detection result, accurately judging assembly visibility and saving assembly time and production cost.
Referring to fig. 1, fig. 1 is a flowchart of an assembly visibility detection method according to an embodiment of the present application. As shown in fig. 1, the assembly visibility detection method provided by the embodiment of the application includes:
s100, importing the vehicle architecture model and the standard component model, and assembling the standard component model on the vehicle architecture model.
In this step, the vehicle architecture model is a three-dimensional model in CATIA software representing a new, not-yet-assembled vehicle. The standard component model is a part that is fully standardized in structure, size, drawing and marking, a common part produced by a specialist factory such as a screw, key, pin or rolling bearing, and is the part that the assembly worker installs on the vehicle architecture model.
In one example, the vehicle architecture model and the standard component model may be created in advance and stored on a computer, the Internet or a cloud server, and the pre-stored models then imported into CATIA software.
Alternatively, the vehicle architecture model and the standard component model may be created directly in CATIA software, or the standard component model may be downloaded from the Internet or the cloud.
S200, building a human body model on the same horizontal plane as the vehicle architecture model, and adjusting the pose of the human body model based on a preset area within a visual range, where the visual range is the field of view that can be observed when an eye point on the human body model gazes at the vehicle architecture model.
Here, once the human body model is built, the visual range is triggered, and it changes in real time as the pose of the human body model is adjusted.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a mannequin in an assembly visibility detection method according to an embodiment of the present application.
As shown in fig. 2, the human body model contains an eye point 201, a shoulder joint point 202, a crotch point (H-point) 203, a knee joint point 204 and a heel point 205.
In an exemplary embodiment of the present application, the pose of the human body model may be adjusted in at least one of the following ways:
In the first embodiment, the placement position of the human body model is adjusted.
Specifically, the whole human body model is dragged to change the distance and direction between it and the vehicle architecture model, where the distance is the separation between the two models and the direction is the orientation of the human body model relative to the vehicle architecture model.
In the second embodiment, the placement posture of the human body model is adjusted.
Specifically, the shoulder joint points, crotch point, knee joint points and heel points of the human body model's trunk are dragged to set the placement posture, which may be standing, sitting or squatting.
In the third embodiment, the visual angle of the eye point of the human body model is adjusted.
Specifically, the visual angle is adjusted by rotating the eye point in the horizontal and/or vertical direction.
Referring to fig. 3, fig. 3 is a schematic view illustrating a visual field range of an eye point gaze of a human body model in an assembly visibility detection method according to an embodiment of the application.
As shown in fig. 3, the line of sight of the eye point can rotate vertically upward (301) by at most 60°, vertically downward (302) by at most 60°, horizontally to the right (303) by at most 30°, and horizontally to the left (304) by at most 30°. The eye point can be rotated manually, or the rotation angle and direction can be entered by clicking.
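The rotation limits above can be expressed as a simple clamp. This is a hypothetical sketch, not part of the patent; the function name and sign convention (positive = up / right, in degrees) are our own assumptions:

```python
def clamp_eyepoint_rotation(horizontal_deg, vertical_deg):
    """Clamp a requested eye-point rotation to the stated limits:
    at most 30 degrees left or right, at most 60 degrees up or down.
    Positive values mean rightward / upward (an assumed convention)."""
    h = max(-30.0, min(30.0, horizontal_deg))
    v = max(-60.0, min(60.0, vertical_deg))
    return h, v

# A request beyond the limits is pulled back to the boundary.
print(clamp_eyepoint_rotation(45, -90))   # clamped on both axes
print(clamp_eyepoint_rotation(-10, 20))   # within limits, unchanged
```

A manual drag and a typed-in angle can both pass through the same clamp, so the eye point can never be rotated past the field-of-view limits regardless of input method.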
Illustratively, the visual range includes a plurality of visual areas: a first preset area, a second preset area and a third preset area, ordered from high to low visibility level.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating a plurality of visual areas of a visual range of a manikin in an assembly visibility detection method according to an embodiment of the application.
As shown in fig. 4, the visual range is divided equally into 9 visual areas: the first preset area is the middle cell 401; the second preset area comprises the four cells above, below, left and right of it, 402, 403, 404 and 405; and the third preset area comprises the four corner cells, 406, 407, 408 and 409.
Here, adjusting the pose of the human body model based on the visual range includes displaying the preset areas of different visibility levels distinguishably within the visible range.
Illustratively, the first, second and third preset areas are shaded from dark to light to distinguish them.
Alternatively, the positions of the visual areas can be labelled, marking the first, second and third preset areas respectively.
The visual range is divided equally into preset areas by visibility level so that the pose of the human body model can be adjusted until the standard component model lies in a preset area of high visibility level.
While the pose is being adjusted, the visible content of the visual range can be seen in real time on the operation interface, and the pose is adjusted so that the standard component model lies in the highest-level preset area achievable: preferably the first preset area, then the second, and only if neither can be reached, the third.
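The 3x3 division described above assigns a visibility level purely by cell position: the centre cell is level 1, the four edge-adjacent cells are level 2, and the four corners are level 3. A hypothetical Python sketch (the function name and the normalized-coordinate convention are our own assumptions, not from the patent):

```python
def preset_area_level(x, y):
    """Map a normalized position (0 <= x, y < 1) in the view plane to the
    visibility level of its cell in the 3x3 grid: 1 for the centre cell,
    2 for the four edge-adjacent cells, 3 for the four corner cells."""
    col = min(int(x * 3), 2)  # column 0..2; guard against x == 1.0
    row = min(int(y * 3), 2)  # row 0..2
    centred = (col == 1) + (row == 1)  # how many axes fall in the middle band
    return {2: 1, 1: 2, 0: 3}[centred]

print(preset_area_level(0.5, 0.5))   # centre cell
print(preset_area_level(0.9, 0.05))  # a corner cell
```

A corner cell is off-centre on both axes, an edge cell on exactly one, and the centre cell on neither, which is why counting centred axes is enough to recover the level.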
S300, providing a visual field plane, and displaying the visual contents in the visual range in the visual field plane.
When the human body model is built and the visual range is triggered, the view plane is displayed on the other side of the operation interface. As the pose of the human body model is adjusted, the visual range changes in real time, and the content of the view plane stays consistent with the visible content of the visual range, also updating in real time.
The view plane displays the visible content of the visual range at a fixed scale; the scale can be 1:1 or 1:2, with 1:1 preferred.
S400, determining an assembly visibility detection result of the standard component model based on the position of the standard component model in the view plane and the visible area of the standard component model in the view plane.
First, the preset area in which the standard component model lies in the view plane is determined; the position of the standard component model in the view plane refers to this preset area.
Referring to fig. 5, fig. 5 is a schematic view of a view plane in the assembly visibility detection method according to an embodiment of the application.
As shown in fig. 5, the view plane shows that the standard component model 501 lies in the first preset area.
Next, the ratio of the visible area of the standard component model in the view plane to its frontal area is determined. The frontal area is calculated when the standard component model is imported; the visible area is calculated after the preset area is determined, and is the area of the standard component model that remains exposed after occlusion by other parts of the vehicle.
Finally, the assembly visibility detection result of the standard component model is determined based on the preset area in which it lies in the view plane and the ratio.
In this step, the above assembly visibility detection result may have the following six cases.
In the first case, the standard component model is located in the first preset area of the view plane and the ratio is greater than the preset value: the first detection result is determined;
In the second case, the standard component model is located in the second preset area and the ratio is greater than the preset value: the second detection result is determined;
In the third case, the standard component model is located in the third preset area and the ratio is greater than the preset value: the third detection result is determined;
In the fourth case, the standard component model is located in the first preset area and the ratio is not greater than the preset value: the second detection result is determined;
In the fifth case, the standard component model is located in the second preset area and the ratio is not greater than the preset value: the third detection result is determined;
In the sixth case, the standard component model is located in the third preset area and the ratio is not greater than the preset value: the third detection result is determined.
The first detection result means that the assembly position of the standard component model on the vehicle architecture model does not need to be modified; the second detection result means that modification of the assembly position needs to be further confirmed; the third detection result means that the assembly position needs to be modified.
The six cases of assembly visibility detection results are summarized in table 1 below, where the preset value is 1/3:

TABLE 1

| Preset area of standard component model | Ratio > 1/3 | Ratio ≤ 1/3 |
| --- | --- | --- |
| First preset area | First result (good, no modification) | Second result (needs further confirmation) |
| Second preset area | Second result (needs further confirmation) | Third result (poor, modify position) |
| Third preset area | Third result (poor, modify position) | Third result (poor, modify position) |
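The six cases collapse into a single rule: passing the ratio test keeps the result at the preset area's own level, while failing it demotes the result by one level, capped at the third. A hypothetical Python sketch of this decision table (the function name and numeric encoding are our own assumptions; the 1/3 default is the preset value from table 1):

```python
def assembly_visibility_result(preset_area, visible_ratio, preset_value=1/3):
    """Decision table for assembly visibility.

    preset_area: 1, 2 or 3, the visibility level of the area the standard
    component model occupies in the view plane.
    visible_ratio: visible area / frontal area of the standard component model.
    Returns 1 (no modification), 2 (needs further confirmation) or
    3 (assembly position must be modified)."""
    if visible_ratio > preset_value:
        return preset_area              # area level carries straight through
    return min(preset_area + 1, 3)      # a low ratio demotes the result one level

# First area and mostly visible: best result.
print(assembly_visibility_result(1, 0.5))
# First area but heavily occluded: demoted to "needs confirmation".
print(assembly_visibility_result(1, 0.2))
```

Encoding the results as the integers 1 to 3 makes the demotion a plain increment, which is why all six enumerated cases reduce to two lines of logic.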
Referring to fig. 6, fig. 6 is a schematic diagram of a visible area of a standard component model in a view plane according to an embodiment of the application.
As shown in fig. 6, fig. 6 shows several cases in which the standard component model is occluded in the view plane: the hatched portion is the part of the standard component model that is visible in the view plane, and the blank portion is the part of the standard component model that is occluded by components of the vehicle architecture model.
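The visible-area ratio used by the method could, for instance, be estimated from two rasterized boolean masks of the view plane, as in the following sketch. This is not from the patent; the function and parameter names (`frontal`, `occluded`) are assumptions, with `frontal` marking the pixels the standard part would cover if unobstructed and `occluded` marking the pixels hidden by other components.

```python
import numpy as np

def visible_area_ratio(frontal: np.ndarray, occluded: np.ndarray) -> float:
    """Return (visible pixels) / (frontal pixels) for the standard part.

    `frontal`  : boolean mask of the part's full frontal projection.
    `occluded` : boolean mask of pixels hidden by other components.
    """
    frontal_px = int(frontal.sum())
    if frontal_px == 0:
        # Part not projected into the view plane at all.
        return 0.0
    visible_px = int((frontal & ~occluded).sum())
    return visible_px / frontal_px
```

The resulting ratio can then be compared against the preset value (e.g. 1/3) together with the preset area in which the part lies.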
According to the application, the pose of the human body model is adjusted based on the visual range of the eye point of the human body model, and the position and visible area of the standard part model in the view plane are determined, so that the assembly visibility detection result of the standard part model can be determined and the assembly position adjusted accordingly, saving assembly time and production cost.
The application provides a CATIA-based assembly visibility checking method, which provides a simple model for creating the assembly field-of-view visibility and a judging standard for assembly visibility checking under that model.
Referring to fig. 7, an embodiment of the present application further provides an assembly visibility detection device 700 corresponding to the assembly visibility detection method. As shown in fig. 7, the assembly visibility detection device 700 includes:
an import module 701, configured to import a vehicle architecture model and a standard part model, and to assemble the standard part model onto the vehicle architecture model;
a building module 702, configured to build a human body model on the same horizontal plane as the vehicle architecture model, and to adjust the pose of the human body model based on a visual range, wherein the visual range refers to the field of view that can be observed when an eye point on the human body model gazes at the vehicle architecture model;
a display module 703, configured to provide a view plane and display the visual contents within the visual range in the view plane;
a determining module 704, configured to determine an assembly visibility detection result of the standard part model based on the position of the standard part model in the view plane and the visible area of the standard part model in the view plane.
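A minimal, CATIA-free sketch of how the four modules might be wired together is given below; all class and method names are assumptions, the geometric work is stubbed out as plain data, and module 704 is reduced to the ratio check alone (the full method also uses the preset area, as described above).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssemblyVisibilityDevice:
    steps: List[str] = field(default_factory=list)

    def import_models(self, vehicle_model: str, standard_part: str) -> None:
        """Module 701: import the models and assemble the part onto the vehicle."""
        self.steps.append(f"import: {standard_part} assembled onto {vehicle_model}")

    def build_manikin(self) -> None:
        """Module 702: build the human body model and adjust its pose."""
        self.steps.append("build: manikin pose adjusted to the visual range")

    def display_view_plane(self) -> None:
        """Module 703: provide a view plane showing the visible contents."""
        self.steps.append("display: view plane rendered")

    def determine_result(self, visible_ratio: float, preset: float = 1/3) -> str:
        """Module 704 (simplified): check the visible-area ratio against
        the preset value; 'review' means the position needs attention."""
        self.steps.append("determine: result computed")
        return "ok" if visible_ratio > preset else "review"
```

Calling the four methods in order mirrors the method steps; the recorded `steps` list stands in for the actual CATIA operations.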
Because the device in the embodiment of the present application solves the problem on a principle similar to that of the assembly visibility detection method in the embodiment of the present application, the implementation of the device may refer to the implementation of the method; repeated description is omitted here.
The embodiment of the application also provides electronic equipment. The electronic device includes: a processor, a memory, and a bus.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in fig. 8, the electronic device 800 includes a processor 810, a memory 820, and a bus 830.
The memory 820 stores machine-readable instructions executable by the processor 810. When the electronic device 800 is running, the processor 810 communicates with the memory 820 through the bus 830, and when the machine-readable instructions are executed by the processor 810, the steps of the assembly visibility detection method in the method embodiment described above may be executed. For the specific implementation, refer to the method embodiment; it is not described again here.
The embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the assembly visibility detection method in the method embodiment. For the specific implementation, refer to the method embodiment; it is not described again here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device, which may be a personal computer, a server, or a network device, to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the above embodiments are only specific embodiments of the present application, intended to illustrate the technical solutions of the present application rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes, or make equivalent substitutions of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.
Claims (10)
1. An assembly visibility detection method, characterized by comprising:
importing a vehicle architecture model and a standard component model, and assembling the standard component model onto the vehicle architecture model;
establishing a human body model on the same horizontal plane as the vehicle architecture model, and adjusting the pose of the human body model based on a preset area in a visual range, wherein the visual range refers to a visual field range which can be observed when an eye point on the human body model gazes at the vehicle architecture model;
providing a visual field plane, and displaying visual contents in the visual range in the visual field plane;
and determining an assembly visibility detection result of the standard component model based on the position of the standard component model in the view plane and the visible area of the standard component model in the view plane.
2. The assembly visibility detection method of claim 1, wherein the pose of the human body model is adjusted by at least one of:
adjusting the placement position of the human body model;
adjusting the placement posture of the human body model;
and adjusting the visual angle of the eye point of the human body model.
3. The assembly visibility detection method according to claim 2, characterized in that the placement position of the human body model is adjusted by:
adjusting the distance and direction between the human body model and the vehicle architecture model by dragging the whole human body model;
the placement posture of the human body model is adjusted by:
dragging the shoulder joint points, crotch points, knee joint points and heel points of the trunk of the human body model;
and the visual angle of the eye point of the human body model is adjusted by:
controlling the eye point to rotate in the horizontal direction and/or the vertical direction.
4. The assembly visibility detection method of claim 1, wherein the visual range includes a plurality of preset areas, including a first preset area, a second preset area and a third preset area ordered from high to low in visibility level,
wherein the adjusting the pose of the human body model based on the visual range includes:
displaying the preset areas of different visibility levels distinguishably within the visual range;
and adjusting the pose of the human body model so that the standard part model is located in a preset area with a high visibility level within the visual range.
5. The assembly visibility detection method of claim 4, wherein the determining the assembly visibility detection result of the standard component model based on the position of the standard component model in the view plane and the visible area of the standard component model in the view plane includes:
determining the preset area in which the standard component model is located in the view plane;
determining the ratio of the visible area of the standard component model in the view plane to the frontal area of the standard component model;
and determining the assembly visibility detection result of the standard component model based on the preset area in which the standard component model is located in the view plane and the ratio.
6. The assembly visibility detection method according to claim 5, wherein the determining the assembly visibility detection result of the standard component model based on the preset area in which the standard component model is located in the view plane and the ratio includes:
If the standard component model is located in a first preset area of the view plane and the ratio is larger than a preset value, determining a first detection result of assembly visibility of the standard component model;
If the standard component model is located in a second preset area of the view plane and the ratio is larger than a preset value, determining a second detection result of assembly visibility of the standard component model;
If the standard component model is positioned in a third preset area of the view plane and the ratio is larger than a preset value, determining a third detection result of the assembly visibility of the standard component model;
If the standard component model is located in a first preset area of the view plane and the ratio is not greater than a preset value, determining a second detection result of assembly visibility of the standard component model;
if the standard component model is located in a second preset area of the view plane and the ratio is not greater than a preset value, determining a third detection result of assembly visibility of the standard component model;
and if the standard component model is positioned in a third preset area of the view plane and the ratio is not greater than a preset value, determining a third detection result of the assembly visibility of the standard component model.
7. The assembly visibility detection method according to claim 6, wherein the first detection result is that the assembly position of the standard part model on the vehicle architecture model does not need to be modified, the second detection result is that modification of the assembly position of the standard part model on the vehicle architecture model needs further confirmation, and the third detection result is that the assembly position of the standard part model on the vehicle architecture model needs to be modified.
8. An assembly visibility detection device, said device comprising:
an import module importing a vehicle architecture model and a standard component model, and assembling the standard component model onto the vehicle architecture model;
The building module is used for building a human body model on the same horizontal plane as the vehicle architecture model, and adjusting the pose of the human body model based on a visual range, wherein the visual range refers to the field of view that can be observed when an eye point on the human body model gazes at the vehicle architecture model;
the display module is used for providing a visual field plane and displaying the visual contents in the visual range in the visual field plane;
And the determining module is used for determining an assembly visibility detection result of the standard component model based on the position of the standard component model in the view plane and the visible area of the standard component model in the view plane.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410014599.8A CN118038124A (en) | 2024-01-04 | 2024-01-04 | Assembly visibility detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118038124A true CN118038124A (en) | 2024-05-14 |
Family
ID=90986609
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||