CN116821990A - Situation unified modeling method based on human engineering, server and storage medium
- Publication number
- CN116821990A (application number CN202311107350.3A)
- Authority
- CN
- China
- Prior art keywords
- layer
- situation
- map
- display
- variables
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The application discloses a situation unified modeling method based on human engineering, a server and a storage medium, belonging to the field of command and control. The method comprises the following steps: step 1: constructing an interface layer, a map layer and a situation layer; step 2: constructing a ternary three-layer control model, specifically: adaptively adjusting and displaying the interface layer, the map layer and the situation layer according to human factors, hardware environment factors and external environment factors; step 3: carrying out real-time collaborative adjustment of the display parameters of the interface layer, the map layer and the situation layer according to the current task and human factors. The method comprehensively considers the human, machine and environment elements together with the interface layer, map layer and situation layer, builds a ternary three-layer ergonomic control model for the battlefield situation, is oriented to command centers and all kinds of equipment, runs the human-centered core concept of ergonomics through the battlefield situation visualization design process, comprehensively optimizes battlefield situation visualization, and improves the battlefield situation awareness of fighters.
Description
Technical Field
The application belongs to the field of command control, and particularly relates to a situation unified modeling method, a server and a storage medium based on human engineering.
Background
In key application fields such as aerospace, the nuclear industry and industrial design, human engineering emphasizes safety and comfort, and mainly provides systematic, comprehensive, human-centered design guidance for various systems.
Human engineering research in the field of command and control is currently weak; for battlefield situation display in particular, it is essentially a blank area. In the existing field of battlefield situation display, situation display and interaction inside equipment do not take field environment factors into account, so fighters cannot focus on key combat tasks in complex field environments; this weakens battlefield situation awareness and in turn affects military decision-making.
Therefore, a technical scheme is needed that builds battlefield situation visualization in combination with human engineering and uniformly solves the various problems of situation display for command centers and all kinds of equipment.
Disclosure of Invention
In order to solve the problems of situation display and interaction in command centers and equipment, the application provides a battlefield situation ternary three-layer modeling method based on human engineering. The three elements of human-machine-environment from human engineering are introduced, the three objects of battlefield situation visualization, namely the interface layer, the map layer and the situation layer, are proposed, and an overall modeling design is carried out for the cooperative control between the three elements and the three objects, so that the battlefield situation visualization content can be regulated as a whole and the battlefield situation visualization experience is optimized.
The technical effects to be achieved by the application are realized by the following scheme:
according to a first aspect of the application, a situation unified modeling method based on human engineering is provided, comprising the following steps:
step 1: constructing an interface layer, a map layer and a situation layer;
step 2: constructing a ternary three-layer control model, and specifically: according to human factors, hardware environment factors and external environment factors, adaptively adjusting and displaying an interface layer, a map layer and a situation layer;
step 3: and carrying out real-time collaborative adjustment on display parameters of the interface layer, the map layer and the situation layer according to the current task and human factors.
Preferably, in step 1, corresponding display variables are set for the interface layer, the map layer and the situation layer based on human factors, according to the application environment and the working content, wherein:
the display variables of the interface layer at least comprise position variables of the functional area and size variables of the buttons;
the display variables of the map layer at least comprise map visual variables, map color matching variables and map symbol level display variables;
the display variables of the situation layer at least comprise combat symbol visual variables.
Preferably, the map layer is a two-dimensional map or a three-dimensional digital earth, and the map visual variables at least comprise shape, size, color and direction; the map symbol level display variables at least comprise point symbols, line symbols, area symbols and text annotations, with blanking processing carried out at different scales;
in a map layer of the three-dimensional digital earth, performing terrain obstacle avoidance processing on the three-dimensional grid visualization.
Preferably, the combat symbol visual variables at least comprise symbol brightness and symbol transparency, and different symbol transparencies are set according to the importance of different targets so that key targets are highlighted.
Preferably, in step 2, the adaptive adjustment and display of the situation layer according to human factors is specifically: when the number of display targets is greater than a threshold value, clustering the display targets; highlighting display targets at least by flashing and sound reminders; and setting different visual characteristics for different display targets.
Preferably, the adaptive adjustment and display of the map layer and the situation layer according to hardware environment factors is specifically: automatically and adaptively adjusting the symbols and the map according to the computing power of the CPU and the GPU;
the interface layer, the map layer and the situation layer are adaptively adjusted and displayed according to external environment factors as follows: the display brightness of the interface layer, the map layer and the situation layer is automatically adjusted according to the external illumination intensity.
Preferably, in step 3, visual contrast is set according to the current task, specifically: when the situation layer is highlighted, the color brightness of the combat symbols in the situation layer is increased and the color brightness of the remote sensing image in the map layer is reduced; when the map layer is highlighted, the color brightness of the situation layer symbols is reduced and the color brightness of the remote sensing image in the map layer is increased; and the interface layer is hidden when either the situation layer or the map layer is highlighted.
Preferably, in step 3, the number of targets displayed in the situation layer is obtained in real time and clustering is performed; the display strategies of the map layer and the situation layer are dynamically adjusted according to the computing power of the CPU and the GPU, and the multi-core computing power of the CPU and the GPU is used to optimize the situation display.
According to a second aspect of the present application, there is provided a server comprising: a memory and at least one processor;
the memory stores a computer program, and the at least one processor executes the computer program stored in the memory to realize the above situation unified modeling method based on human engineering.
According to a third aspect of the present application, there is provided a computer-readable storage medium having stored therein a computer program which, when executed, implements the above-described ergonomic-based situation unified modeling method.
According to the embodiments of the application, the situation unified modeling method based on human engineering has the beneficial effects that the design concept of human engineering is introduced: when the battlefield situation is visualized, the human, machine and environment elements and the interface layer, map layer and situation layer are considered comprehensively, the cross influences among the elements are weighed, and an overall, balanced design is carried out, so that the human-centered core concept of human engineering runs through the battlefield situation visualization design process, battlefield situation visualization is comprehensively optimized for command centers and all kinds of equipment, and the battlefield situation awareness of fighters at all levels is improved.
Drawings
In order to more clearly illustrate the embodiments of the application or the prior art solutions, the drawings which are used in the description of the embodiments or the prior art will be briefly described below, it being obvious that the drawings in the description below are only some of the embodiments described in the present application, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flow chart of a unified modeling method based on human engineering situation in an embodiment of the application;
fig. 2 is a block diagram of a server according to an embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
As shown in fig. 1, the battlefield situation ternary three-layer modeling method based on the human engineering in an embodiment of the application comprises the following steps:
s110: constructing an interface layer, a map layer and a situation layer;
in this step, according to the application environment and the working content, corresponding display variables are set for the interface layer, the map layer and the situation layer based on human engineering, wherein:
the display variables of the interface layer at least comprise the position variables of the functional area and the size variables of the buttons; the interface layer mainly relates to a User Interface (UI), and refers to the overall design of man-machine interaction, operation logic, attractive interface and the like of software. The interface layer design is to continuously solicit user opinion in the whole development process of the system, and continuously iterate feedback optimization. The interface design of the system is to combine the working and application environment of the user, understand the requirement of the user on the system, and realize the adaptation of the human-computer interface. The interface layer should set functional areas such as menus, buttons and the like according to the human engineering concept, for example, on vehicle-mounted equipment, the buttons are ensured to be within a certain size range, and the accurate and rapid selection of fingers is ensured, so that interface operations such as map, situation and the like are also realized through the interface layer. Through layout optimization of the interface, cognition and decision accuracy and response speed of fighters can be improved.
The display variables of the map layer at least comprise map visual variables, map color matching variables and map symbol level display variables. The map layer is a two-dimensional map or a three-dimensional digital earth and can also include environmental information such as global geomagnetism and meteorological hydrology. Map visual variables are the basic elements that make up symbol graphics and at least comprise shape, size, color and direction. The map symbol level display variables at least comprise point symbols, line symbols, area symbols and text annotations, with blanking processing at different scales carried out according to the general display rules of cartography;
On the map layer of the three-dimensional digital earth, remote sensing images have the advantages of rapid availability and rich detail compared with the low efficiency of traditional vector map production, but they also cause a certain visual interference to other information superimposed on the map layer, such as the situation layer. In addition, the three-dimensional digital earth surface often uses elevation to represent terrain relief, and this relief can occlude symbols displayed on the surface, making the symbol display incomplete and affecting the commander's correct judgment of the situation. Likewise, three-dimensional grid display can be occluded by mountains and the like, so three-dimensional grid visualization also requires a terrain obstacle avoidance design. The curvature of the digital earth also creates processing difficulties for visualizations such as volume rendering, so the volume rendering algorithm is adapted to the digital earth. Specifically: by constructing a proxy ellipsoidal surface geometry, a data ellipsoidal-shell volume grid, the digital earth ellipsoid and a terrain-elevation ellipsoidal-shell volume grid, ray sampling points are skipped or ray sampling is terminated early (eliminating sampling points occluded by the terrain elevation), which greatly reduces the number of sampling points that actually need to be processed and thereby the subsequent complex calculations such as trilinear interpolation; at the same time, the arc-shaped data ellipsoidal-shell volume grid and the terrain-elevation ellipsoidal-shell volume grid are converted into cuboid form, and the efficient texture query capability of the GPU is used to perform trilinear interpolation of sampling points quickly, which improves the overall volume rendering performance on the digital earth.
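The sample-skipping idea described above can be illustrated with a much simplified sketch, not taken from the application: samples along a viewing ray that fall below the terrain elevation are discarded before any interpolation or compositing is paid for them. The regular step size, the flat terrain callback and the function names are assumptions for illustration; the actual method operates on ellipsoidal shell grids with GPU texture queries.

```python
# Simplified sketch of terrain-aware sample skipping for volume rendering (not the
# application's ellipsoidal-shell implementation). A ray is sampled at regular steps
# and samples whose height lies below the terrain elevation at that ground position
# are skipped, so trilinear interpolation and compositing are only paid for visible
# samples. terrain_height(x, y) is an assumed callback returning ground elevation.

import numpy as np

def visible_samples(origin, direction, num_steps, step, terrain_height):
    """Yield sample positions along a ray that are not occluded by terrain."""
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    origin = np.asarray(origin, dtype=float)
    for i in range(num_steps):
        p = origin + i * step * direction
        if p[2] >= terrain_height(p[0], p[1]):  # keep samples above the ground
            yield p
        # samples below the terrain are skipped (no interpolation cost incurred)

if __name__ == "__main__":
    flat_terrain = lambda x, y: 100.0  # illustrative constant terrain elevation
    kept = list(visible_samples((0.0, 0.0, 500.0), (1.0, 0.0, -1.0),
                                num_steps=64, step=10.0, terrain_height=flat_terrain))
    print(f"{len(kept)} of 64 samples survive terrain culling")
```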
The display variables of the situation layer at least comprise combat symbol visual variables. The situation layer presents the battlefield situation elements that the commander focuses on; the various elements are displayed as standardized symbols using the military's specific combat symbols, which differ somewhat from map symbolization. For example, the colors of combat symbols carry specific military meanings, specified in each country's combat symbol standards. Research on the visual variables of combat symbols is a key concern of the situation layer and one of the basic research topics of situation visualization. For example, visual variables of the combat symbols such as brightness and transparency are adjusted to optimize the display of the battlefield situation, as shown in fig. 2, where a) is before transparency is set and b) is after transparency is set. By combining the importance semantics of targets and setting different color transparencies, the confusion caused by overlapping display of large numbers of targets can be reduced to a certain extent, while ensuring that key targets in the battlefield situation are highlighted and that the commander grasps the battlefield situation quickly.
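As a minimal sketch of the transparency adjustment just described, the snippet below maps a target importance value to a combat symbol opacity so that key targets stay opaque while low-importance targets fade back; the 0-to-1 importance scale and the alpha range are illustrative assumptions rather than values from the application.

```python
# Minimal sketch: mapping target importance to combat-symbol opacity so that key
# targets stay opaque and low-importance targets fade back. The importance scale
# (0..1) and the alpha range are illustrative assumptions.

def symbol_alpha(importance, alpha_min=0.25, alpha_max=1.0):
    """Linearly map importance in [0, 1] to an opacity (alpha) value."""
    importance = max(0.0, min(1.0, importance))
    return alpha_min + (alpha_max - alpha_min) * importance

if __name__ == "__main__":
    targets = {"command post": 1.0, "supply convoy": 0.4, "background track": 0.1}
    for name, importance in targets.items():
        print(f"{name}: alpha = {symbol_alpha(importance):.2f}")
```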
S120: constructing a ternary three-layer control model, and specifically: according to human factors, hardware environment (machine) factors and external environment factors, adaptively adjusting and displaying an interface layer, a map layer and a situation layer;
in this step, the "ternary" of the ternary three layers refers to the three elements of human factors, hardware environment (machine) factors and external environment factors; the "three layers" refer to the interface layer, the map layer and the situation layer.
The beneficiary of battlefield situation visualization is the combat commander, and carrying out a comfortable, human-centered design according to human factors engineering theory is the soul of the visual design. The situation layer is adaptively adjusted and displayed according to human factors as follows: because a person's cognitive load is limited, the number of battlefield situation display elements should be kept within a certain range; when the number of display targets is greater than a threshold value, the display targets are clustered, for example with a clustering algorithm from machine learning such as K-MEANS, to reduce the number of displayed targets (a minimal clustering sketch is given below). This is the physiological constraint behind the situation clustering processing;
the physiological state of a person is also constantly changing, and when fatigue occurs, it is necessary to highlight by target blinking, sound reminding, or the like. Flicker accords with the cognitive characteristics of humans: the dynamic performance can improve the visual saliency of people, and the voice prompt can assist in enhancing the display by adding an auditory channel;
in addition, there are individual preferences; for example, preferences for colors differ somewhat, so the color configuration of interfaces and the like in the visualization should offer multiple styles where possible. Furthermore, wearable devices, eye trackers, EEG physiological monitors and the like can be employed to provide more personalized, more scientific, customized settings.
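As referenced above, the clustering step can be sketched as follows: when the number of targets exceeds a threshold, K-MEANS groups them and only cluster centroids with member counts are drawn. The threshold of 50, the choice of nine clusters and the use of scikit-learn's KMeans are illustrative assumptions, not values prescribed by the application.

```python
# Minimal sketch of the clustering step: when the number of targets exceeds an
# assumed threshold, K-MEANS groups them and only cluster centroids (with member
# counts) are drawn. Threshold, cluster count and centroid aggregation are
# illustrative choices, not values prescribed by the application.

import numpy as np
from sklearn.cluster import KMeans

def cluster_targets(positions, threshold=50, n_clusters=9):
    """Return (positions_to_draw, member_counts); clusters only above the threshold."""
    positions = np.asarray(positions, dtype=float)
    if len(positions) <= threshold:
        return positions, np.ones(len(positions), dtype=int)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(positions)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    return km.cluster_centers_, counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tracks = rng.uniform(0.0, 100.0, size=(300, 2))  # 300 simulated target positions
    centers, counts = cluster_targets(tracks)
    print(f"{len(tracks)} targets reduced to {len(centers)} cluster symbols")
```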
The map layer and the situation layer are adaptively adjusted and displayed according to hardware environment factors as follows: the symbols and the map are automatically and adaptively adjusted according to the computing power of the CPU and the GPU. The machine hardware environment includes the graphics card, handheld terminals, displays, the CPU/GPU and so on. The graphics card has a certain performance impact on the situation layer and the map layer; when its configuration is low, the symbols and the map can be displayed in a simplified manner to keep rendering smooth. On equipment with limited memory, the map is usually an embedded map, and the displayed map layer information is relatively simple. The CPU/GPU provides multi-core computing power and high-performance rendering capability, can effectively improve display smoothness and rendering effect, and is the essential basic hardware for acceleration in visualization algorithms;
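A minimal sketch of hardware-adaptive display follows: a coarse hardware capability reading selects a level of detail for the map and the symbol set. The memory thresholds, the capability score and the target caps are illustrative assumptions.

```python
# Minimal sketch of hardware-adaptive display: a coarse capability reading selects
# a level of detail for the map and the symbol set. The memory thresholds, the
# capability score and the target caps are illustrative assumptions.

def select_detail_level(gpu_memory_gb, gpu_score):
    """Pick a display strategy from rough hardware capability numbers (score in 0..1)."""
    if gpu_memory_gb >= 8 and gpu_score >= 0.8:
        return {"map": "full imagery + elevation", "symbols": "full symbol set", "max_targets": 2000}
    if gpu_memory_gb >= 4:
        return {"map": "reduced-resolution imagery", "symbols": "simplified symbols", "max_targets": 800}
    return {"map": "vector base map only", "symbols": "point symbols", "max_targets": 300}

if __name__ == "__main__":
    print(select_detail_level(gpu_memory_gb=2, gpu_score=0.3))
```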
the interface layer, the map layer and the situation layer are adaptively adjusted and displayed according to external environment factors as follows: the display brightness of the interface layer, the map layer and the situation layer is automatically adjusted according to the external illumination intensity. External environment factors refer to the external natural environment and the natural environment simulated by the map. The external natural environment includes illumination, weather and so on. When the external environment changes (for example at night, at dawn or dusk, or on a sunny day), a light sensor can automatically adjust the battlefield situation display, such as the brightness of the interface layer, the map layer, the situation layer and the screen, ensuring better environmental comfort for the user. The map and related data form the basic background of battlefield data visualization; elevation, imagery and the like have a certain influence on the visualization and deserve special attention in the visual design.
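A minimal sketch of illumination-adaptive brightness follows: an ambient light reading in lux, as would come from a light sensor, is mapped to a brightness factor applied to the interface, map and situation layers. The lux breakpoints and the output range are illustrative assumptions.

```python
# Minimal sketch of illumination-adaptive brightness: ambient illuminance in lux is
# mapped to a display brightness factor applied to the interface, map and situation
# layers. The lux breakpoints and output range are illustrative assumptions.

def brightness_from_lux(lux):
    """Map ambient illuminance (lux) to a brightness factor in [0.2, 1.0]."""
    if lux < 10:        # night
        return 0.2
    if lux < 1000:      # indoor / dawn / dusk: interpolate between 0.2 and 1.0
        return 0.2 + 0.8 * (lux - 10) / (1000 - 10)
    return 1.0          # bright daylight

if __name__ == "__main__":
    for lux in (1, 200, 5000):
        print(f"{lux:>5} lux -> brightness {brightness_from_lux(lux):.2f}")
```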
S130: and carrying out real-time collaborative adjustment on display parameters of the interface layer, the map layer and the situation layer according to the current task and human factors.
In this step, the relationships among the three elements, namely the human (human factors), the machine (hardware environment factors) and the environment (external environment factors), and the three object layers, namely the interface layer, the map layer and the situation layer, are divided into three kinds: among the three object layers; among the three elements; and between the three elements and the three object layers;
the cooperative control process for the three object layers includes: firstly, attention needs to be paid to the overall visual coordination, for example, the theme colors of all layers are uniformly arranged, so that the visual feeling consistency of the whole system where the situation display is can be well formed. Secondly, it should be noted that, according to the current task, a certain visual contrast is set, for example, when the foreground layer is highlighted, the color brightness of the label in the layer can be increased, and at the same time, the color brightness of the remote sensing image in the background map layer can be reduced. By the difference increasing treatment, the visual contrast between the two layers can be better improved; on the contrary, when the map layer is highlighted, the color brightness of the situation layer label can be reduced, and meanwhile, the color brightness of the remote sensing image in the background map is improved. The interface layer mainly provides interactive operation for users, and when the map layer or the situation layer is projected, the interface layer can be hidden so as to enlarge the display range of the map layer, expand the field of view of the battlefield situation of the battlefield, display more information of the situation layer and enhance the capability of the battlefield situation of the battlefield commander comprehensively.
The constraint limits and overall balancing of the three elements include: the human, the machine and the environment each have certain space, resource and other constraints and also constrain one another, which is an important concern of human engineering. These must be considered comprehensively in the visual design to achieve balance as a whole:
(a) A person's cognitive load is limited; for a screen display, the number of targets that the human eye can effectively resolve at the same time is currently generally accepted to be 7±2. The limited cognitive load of the human eye is an important physiological basis for the detailed design of display information in battlefield situation visualization. Therefore, the situation layer should cluster as much as possible to reduce the number of targets displayed on the screen;
(b) According to the hardware parameters of the machine's graphics card, an adaptive display strategy for the object layers can be configured: when the graphics card configuration is high, the map layer and the situation layer can be displayed in detail; otherwise the situation display can be simplified. In practical use, if heavy computation is required, the display detail of the map layer and the situation layer can be reduced appropriately and restored after the computation completes, balancing computation and display demands and improving the user experience;
(c) The multi-core computing power of the machine's CPU/GPU is used: parallel programming interfaces such as OpenMP for CPU multi-core and CUDA or OpenCL for GPU multi-core, together with multi-threaded asynchronous processing, optimize the computations involved in situation display, ensuring smooth display and improving the user experience.
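A minimal sketch of exploiting CPU multi-core computing power follows: symbol screen positions are computed in parallel chunks with a process pool. This Python stand-in only illustrates the idea behind the OpenMP/CUDA/OpenCL parallelism named above; the toy projection and the chunking scheme are assumptions.

```python
# Minimal sketch of using CPU multi-core computing power for situation display:
# symbol screen positions are computed in parallel chunks with a process pool.
# A toy projection stands in for the real map-to-screen transform; result order
# follows the interleaved chunking, which a real renderer would re-index.

from concurrent.futures import ProcessPoolExecutor
from itertools import chain

def project_chunk(chunk):
    """Toy map-to-screen projection for a chunk of (lon, lat) targets."""
    return [(lon * 10.0 + 960.0, lat * -10.0 + 540.0) for lon, lat in chunk]

def project_all(targets, workers=4):
    chunks = [targets[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(chain.from_iterable(pool.map(project_chunk, chunks)))

if __name__ == "__main__":
    pts = [(x * 0.01, x * 0.005) for x in range(10000)]
    print(len(project_all(pts)), "screen positions computed in parallel")
```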
The overall fusion processing between the three elements and the three object layers involves the human-machine-environment elements of human engineering and the situation layer, map layer and interface layer of battlefield situation visualization, among which complex cross influences exist. Therefore, when carrying out battlefield situation visualization, these factors are considered comprehensively, the cross influences among the elements are weighed, and an overall, balanced design is carried out so that the human-centered core concept of human engineering runs through the battlefield situation visualization design process; this is the overall, systematic design concept that the model emphasizes.
Through the above steps, this design model of three elements and three object layers for battlefield situation visualization introduces the human-machine-environment elements of human engineering, proposes the composition of the interface layer, the map layer and the situation layer as the three object layers of battlefield situation visualization, provides a method for cooperative control and overall system design of the three elements and the three object layers, and thus offers a new overall design guide for battlefield situation visualization. Based on the human-centered, human-machine-environment design concept, the application provides a modeling method for the overall visual design of equipment-level situations and has good prospects for practical engineering application.
As shown in fig. 2, a server according to an embodiment of the present application includes: a memory 201 and at least one processor 202;
the memory 201 stores a computer program, and the at least one processor 202 executes the computer program stored in the memory 201 to implement the above-described ergonomic-based battlefield situation ternary three-layer modeling method.
According to a third aspect of the present application, there is provided a computer readable storage medium having a computer program stored therein, the computer program when executed implementing the above-described battlefield situation ternary three-layer modeling method based on ergonomic engineering.
According to an embodiment of the application, the battlefield situation ternary three-layer modeling method based on human engineering has the beneficial effects that the design concept of human engineering is introduced: when the battlefield situation is visualized, the human, machine and environment elements and the interface layer, map layer and situation layer are considered comprehensively, the cross influences among the elements are weighed, an overall, balanced design is carried out, the human-centered core concept of human engineering runs through the battlefield situation visualization design process, battlefield situation visualization is comprehensively optimized, and the battlefield situation awareness of fighters is improved.
It should be noted that the foregoing detailed description is exemplary and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is intended to include the plural unless the context clearly indicates otherwise. Furthermore, it will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or groups thereof.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Spatially relative terms, such as "above … …," "above … …," "upper surface at … …," "above," and the like, may be used herein for ease of description to describe one device or feature's spatial location relative to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "above" or "over" other devices or structures would then be oriented "below" or "beneath" the other devices or structures. Thus, the exemplary term "above … …" may include both orientations of "above … …" and "below … …". The device may also be positioned in other different ways, such as rotated 90 degrees or at other orientations, and the spatially relative descriptors used herein interpreted accordingly.
In the above detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, like numerals typically identify like components unless context indicates otherwise. The illustrated embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein.
The above is only a preferred embodiment of the present application, and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (10)
1. A situation unified modeling method based on human engineering, characterized by comprising the following steps:
step 1: constructing an interface layer, a map layer and a situation layer;
step 2: constructing a ternary three-layer control model, and specifically: according to three elements, namely human factors, hardware environment factors and external environment factors, the interface layer, the map layer and the situation layer are adaptively adjusted and displayed;
step 3: and carrying out real-time collaborative adjustment on display parameters of the interface layer, the map layer and the situation layer according to the current task and human factors.
2. The situation unified modeling method based on human engineering according to claim 1, wherein in step 1, corresponding display variables are set for the interface layer, the map layer and the situation layer based on human factors, according to the application environment and the working content, wherein:
the display variables of the interface layer at least comprise position variables of the functional area and size variables of the buttons;
the display variables of the map layer at least comprise map visual variables, map color matching variables and map symbol level display variables;
the display variables of the situation layer at least comprise combat symbol visual variables.
3. The situation unified modeling method based on human engineering according to claim 2, wherein the map layer is a two-dimensional map or a three-dimensional digital earth, and the map visual variables at least comprise shape, size, color and direction; the map symbol level display variables at least comprise point symbols, line symbols, area symbols and text annotations, with blanking processing carried out at different scales;
in a map layer of the three-dimensional digital earth, performing terrain obstacle avoidance processing on the three-dimensional grid visualization.
4. The situation unified modeling method based on human engineering according to claim 2, wherein the combat symbol visual variables at least comprise symbol brightness and symbol transparency, and different symbol transparencies are set according to the importance of different targets so that key targets are highlighted.
5. The situation unified modeling method based on human engineering according to claim 1, wherein in step 2, the adaptive adjustment and display of the situation layer according to human factors is specifically: when the number of display targets is greater than a threshold value, clustering the display targets; highlighting display targets at least by flashing and sound reminders; and setting different visual characteristics for different display targets.
6. The situation unified modeling method based on human engineering according to claim 1, wherein the adaptive adjustment and display of the map layer and the situation layer according to hardware environment factors is specifically: automatically and adaptively adjusting the symbols and the map according to the computing power of the CPU and the GPU;
the interface layer, the map layer and the situation layer are adaptively adjusted and displayed according to external environment factors as follows: the display brightness of the interface layer, the map layer and the situation layer is automatically adjusted according to the external illumination intensity.
7. The situation unified modeling method based on human engineering according to claim 1, wherein in step 3, visual contrast is set according to the current task, specifically: when the situation layer is highlighted, the color brightness of the combat symbols in the situation layer is increased and the color brightness of the remote sensing image in the map layer is reduced; when the map layer is highlighted, the color brightness of the situation layer symbols is reduced and the color brightness of the remote sensing image in the map layer is increased; and the interface layer is hidden when either the situation layer or the map layer is highlighted.
8. The situation unified modeling method based on human engineering according to claim 7, wherein in step 3, the number of targets displayed in the situation layer is obtained in real time and clustering is performed; the display strategies of the map layer and the situation layer are dynamically adjusted according to the computing power of the CPU and the GPU, and the multi-core computing power of the CPU and the GPU is used to optimize the situation display.
9. A server, comprising: a memory and at least one processor;
the memory stores a computer program, and the at least one processor executes the computer program stored by the memory to implement the situation unified modeling method based on human engineering of any of claims 1-8.
10. A computer readable storage medium, characterized in that a computer program is stored in the computer readable storage medium, and the computer program, when executed, implements the situation unified modeling method based on human engineering of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311107350.3A CN116821990A (en) | 2023-08-31 | 2023-08-31 | Situation unified modeling method based on human engineering, server and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116821990A true CN116821990A (en) | 2023-09-29 |
Family
ID=88141435
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311107350.3A Pending CN116821990A (en) | 2023-08-31 | 2023-08-31 | Situation unified modeling method based on human engineering, server and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116821990A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104266652A (en) * | 2014-09-17 | 2015-01-07 | 沈阳美行科技有限公司 | Method for dynamically adjusting color matching of map according to weather situation of city where automobile is located |
CN111580728A (en) * | 2020-04-07 | 2020-08-25 | 深圳震有科技股份有限公司 | Method and device for dynamically plotting multiple military standard types based on state mode |
US11520947B1 (en) * | 2021-08-26 | 2022-12-06 | Vilnius Gediminas Technical University | System and method for adapting graphical user interfaces to real-time user metrics |
CN114610426A (en) * | 2022-03-04 | 2022-06-10 | 腾讯科技(深圳)有限公司 | Method, device and equipment for adjusting interface layout and storage medium |
CN116320305A (en) * | 2023-03-10 | 2023-06-23 | 中国电子科技集团公司第五十四研究所 | Full-dimension battlefield situation information presentation system |
CN116576545A (en) * | 2023-05-08 | 2023-08-11 | 珠海格力电器股份有限公司 | Control method and device of control interface, air conditioner controller and storage medium |
Non-Patent Citations (1)

Title |
---|
杨帆 等 (YANG Fan et al.): "面向军事群体的聚合及解聚可视化控制模型" (Aggregation and De-aggregation Visualization Control Model for Military Groups), 《计算机测量与控制》 (Computer Measurement & Control), page 1 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117744027A (en) * | 2024-02-20 | 2024-03-22 | 中国人民解放军国防大学联合作战学院 | Fusion method, server and storage medium based on large-scale polymorphic information |
CN117744027B (en) * | 2024-02-20 | 2024-05-07 | 中国人民解放军国防大学联合作战学院 | Fusion method, server and storage medium based on large-scale polymorphic information |
CN118036738A (en) * | 2024-03-05 | 2024-05-14 | 中国人民解放军国防大学联合作战学院 | Soldier chess situation display and control method, server and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11861061B2 (en) | Virtual sharing of physical notebook | |
US10467814B2 (en) | Mixed-reality architectural design environment | |
CN116821990A (en) | Situation unified modeling method based on human engineering, server and storage medium | |
US20200241730A1 (en) | Position-dependent Modification of Descriptive Content in a Virtual Reality Environment | |
US20190213792A1 (en) | Providing Body-Anchored Mixed-Reality Experiences | |
Fang et al. | Head-mounted display augmented reality in manufacturing: A systematic review | |
CN111156855A (en) | Electronic warfare equipment virtual training man-machine interaction system | |
Alshaal et al. | Enhancing virtual reality systems with smart wearable devices | |
Morris et al. | An xri mixed-reality internet-of-things architectural framework toward immersive and adaptive smart environments | |
CN118227017A (en) | Augmented reality information processing method and device, electronic equipment and storage medium | |
Goktan et al. | Augmented reality appendages for robots: Design considerations and recommendations for maximizing social and functional perception | |
KR100898991B1 (en) | Apparatus for shader providing and transformation of 3d graphic system | |
Tobisková et al. | Multimodal augmented reality and subtle guidance for industrial assembly–A survey and ideation method | |
Neuman et al. | Performalism: A manifesto for architectural performance | |
Schilling et al. | Environment-integrated human machine interface framework for multimodal system interaction on the shopfloor | |
Xiaocheng | Application research of virtual 3D animation technology in the design of human computer interface | |
CN114201042B (en) | Distributed comprehensive integrated seminar device, system, construction method and interaction method | |
Han | Research on multi-mode human-computer interaction design evaluation system based on virtual reality technology | |
CN111385489B (en) | Method, device and equipment for manufacturing short video cover and storage medium | |
Featherstone et al. | Human creativity in the data visualisation pipeline | |
Wang et al. | Research on virtual reality technology in landscape design | |
Shi et al. | Multimodal Usability of Human-Computer Interaction Based on Task-Human-Computer-Centered Design | |
Schürg | Development and evaluation of interaction concepts for mobile augmented and virtual reality applications considering external controllers | |
Hu et al. | Product Design Interaction and Experience Based on Virtual Reality Technology | |
Kimura et al. | Exploring Opportunities from the More-than-Human Perspective for Investigating Wicked Problem in Our Entangled World |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20230929 |
RJ01 | Rejection of invention patent application after publication |