CN116820379A - Equipment display control method based on human engineering, server and storage medium - Google Patents

Equipment display control method based on human engineering, server and storage medium

Info

Publication number
CN116820379A
Authority
CN
China
Prior art keywords
layer
display
situation
map
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311107375.3A
Other languages
Chinese (zh)
Inventor
张昶
王家润
杨胜利
李志强
王记坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 15 Research Institute
Original Assignee
CETC 15 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 15 Research Institute filed Critical CETC 15 Research Institute
Priority to CN202311107375.3A priority Critical patent/CN116820379A/en
Publication of CN116820379A publication Critical patent/CN116820379A/en
Pending legal-status Critical Current


Abstract

The application discloses an ergonomics-based equipment display control method, a server, and a storage medium, belonging to the field of command and control. The method comprises the following steps. Step 1: an input layer acquires external environment data and human data. Step 2: a control layer computes screen brightness parameters, display parameters, and physiological index parameters from the external environment data and human data, and parses voice instructions. Step 3: an output layer outputs the visual situation according to the screen brightness and display parameters, outputs prompt information according to the physiological index parameters, and adjusts the display parameters according to the parsed voice instructions. Grounded in ergonomic design principles, the method applies coordinated, systematic situation-display design for the special environment inside equipment across the map layer, situation layer, and interface layer, providing overall design guidance for equipment-level situation visualization. Automatic control reduces manual interactive operation and avoids misoperation in confined spaces.

Description

Equipment display control method based on human engineering, server and storage medium
Technical Field
The application belongs to the field of command and control, and particularly relates to an ergonomics-based equipment display control method, a server, and a storage medium.
Background
In its key fields of application, ergonomics (human factors engineering) has mainly focused on aerospace, the nuclear industry, industrial design, and similar domains, where it emphasizes safety and user comfort and provides systematic, human-centered design guidance for a wide range of systems. Ergonomic research in the field of command and control, however, remains weak, and for battlefield situation display in particular it is essentially an unexplored area. Ergonomic study of equipment-level situation display, under the space, resource, and external-environment constraints found inside various kinds of equipment, is likewise a blank.
Battlefield situation display inside equipment (tanks, warships, fighter aircraft, transport vehicles, and so on) involves several special design considerations: in field environments, strong external ambient light interferes to some degree with how operators perceive the displayed situation; strong vibration from rough terrain requires interface buttons to be sized appropriately so that finger selection remains accurate; and onboard computing resources (memory, graphics card, and the like) are limited, so the level of displayed detail must be controlled and an embedded Geographic Information System (GIS) should be chosen. In addition, the confined space inside the equipment makes manual operation inconvenient for the crew.
Therefore, an ergonomics-based equipment display control method is needed that frees the operator's hands so they can stay focused on critical combat tasks such as driving and shooting.
Disclosure of Invention
To address these deficiencies in the prior art, the application provides an ergonomics-based equipment display control method. The method performs integrated cooperative control based on the three elements of ergonomics (human, machine, environment) and the three layers of the battlefield situation, and provides an intelligent control model for equipment-level situation display that automatically controls the machine and the battlefield situation display. It thereby addresses the problems that in-equipment situation display is constrained by space and resources, strongly affected by the environment, degrades the operator's observation and judgment, and is inconvenient to operate.
The technical effects of the application are achieved by the following scheme:
According to a first aspect of the present application, there is provided an ergonomics-based equipment display control method, comprising the following steps:
step 1: an input layer acquires external environment data and human data;
step 2: a control layer computes screen brightness parameters, display parameters, and physiological index parameters from the external environment data and the human data, and parses voice instructions;
step 3: an output layer outputs the visual situation according to the screen brightness parameters and display parameters, outputs prompt information according to the physiological index parameters, and adjusts the display parameters according to the parsed voice instructions.
Preferably, the input layer includes at least a gyroscope, an optical sensor, a camera, a microphone, an oximeter, a temperature sensor, and a heart rate detector; the external environment data include at least light intensity data; and the human data include at least blood oxygen data, heart rate data, body temperature data, and voice instructions.
Preferably, a mapping from light intensity to screen brightness is computed from the light intensity data, the screen brightness parameter is then derived, and a hardware programming interface is invoked to send it to the output layer.
Preferably, the display parameters cover a situation layer, a map layer, and an interface layer, wherein:
the interface layer is the user interface and comprises the parameters of human-computer interaction and operation logic;
the map layer comprises at least a two-dimensional map, a three-dimensional digital earth, geomagnetic information, and meteorological and hydrological information, with different information organized into map layers whose display and hiding are controlled;
the situation layer presents battlefield situation elements, displayed as standardized symbols using battlefield markers.
Preferably, the display parameters are generated as follows:
the size of keys in the interface layer is set according to the detected vibration amplitude and frequency, and elements of the interface layer to hide are selected according to the current task;
from the light intensity data, a brightness mapping between the light intensity and the markers in the situation layer is computed in the HSV color space; a marker-drawing programming interface is called with the computed marker brightness, and the marker colors are set and sent to the output layer; tiered, detail-level display of the markers' supplementary text is set according to the equipment's power budget and graphics card parameters;
from the light intensity data, a brightness mapping between the light intensity and the visual variables of the geographic symbols in the map layer is computed in the HSV color space; a map-drawing programming interface is called with the computed map symbol brightness, and the map symbol colors are set and sent to the output layer; an embedded, equipment-oriented GIS is adopted according to the equipment's power budget and graphics card parameters.
Preferably, the physiological index parameters are computed as follows: from the operator's emotional and physical state as captured by the camera, microphone, oximeter, temperature sensor, and heart rate detector, it is judged whether the operator is in a poor state, and warning information is sent to the output layer as required.
Preferably, voice instruction parsing proceeds as follows: the voice information collected by the microphone is parsed into instruction information and sent to the output layer, wherein:
the instruction information for the map layer includes at least: zooming the two-dimensional map in and out, panning the map up, down, left, and right, showing and hiding map layers, and adjusting brightness;
the instruction information for the situation layer includes at least: adjusting marker brightness, adding, deleting, and viewing markers, and showing the marker layer;
the instruction information for the interface layer includes at least: contrast display of the map layer and situation layer, highlighting the map layer or the situation layer, and consistent display of the map layer and situation layer.
Preferably, the output layer carries out automated operations on the interface layer, map layer, and situation layer according to the computation and processing results of the control layer.
According to a second aspect of the present application, there is provided a server comprising: a memory and at least one processor;
the memory stores a computer program, and the at least one processor executes the computer program stored in the memory to implement the ergonomics-based equipment display control method described above.
According to a third aspect of the present application, there is provided a computer-readable storage medium having stored therein a computer program which, when executed, implements the ergonomics-based equipment display control method described above.
According to one embodiment of the application, the technical effect of the ergonomics-based equipment display control method is as follows: grounded in the human-machine-environment, user-centered design concept, the method carries out coordinated, systematic situation-display design for the special equipment environment across the map layer, situation layer, and interface layer, providing overall design guidance for equipment-level situation visualization;
it further proposes a new, automatic-control design concept for equipment-level battlefield situation display: a) automatic control minimizes unnecessary manual interaction (such as adjusting screen brightness or button sizes in the interface), avoiding misoperation in confined spaces; b) multimodal human-computer interaction substitutes voice instructions for manual operation in the confined space, keeping both hands focused on core combat tasks such as driving and shooting.
Drawings
For a clearer description of the embodiments of the application and of prior-art solutions, the drawings needed for that description are briefly introduced below. The drawings described below are plainly only some embodiments of the present application; a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of an ergonomic-based equipment display control method according to an embodiment of the present application;
FIG. 2 is a block diagram of a server according to an embodiment of the application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions are described fully below with reference to specific embodiments and the corresponding drawings. The described embodiments are plainly only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the application.
As shown in FIG. 1, the ergonomics-based equipment display control method in an embodiment of the application includes the following steps:
s110: the input layer acquires external environment data and human data;
in this step, the input layer includes at least a gyroscope, an optical sensor, a camera, a microphone, an oximeter, a temperature sensor, and a heart rate detector, where the gyroscope is used to detect data such as a horizontal angle, an angular velocity, etc. of a current device (such as a tank, a vehicle, etc.), the optical sensor is used to detect an illumination intensity of an environment in the device, the camera is used to detect facial expressions, gesture actions, limb positions, etc. of a person, the microphone is used to collect voice instructions of the person, and the oximeter, the heart rate detector, the temperature sensor are used to collect blood oxygen data, heart rate data, body temperature data, etc. of the person. An acceleration sensor, a vibration sensor, etc. may also be included for detecting acceleration as well as vibration data of the current device.
The external environment data include at least light intensity data; the human data include at least blood oxygen data, heart rate data, body temperature data, and voice instructions.
S120: the control layer computes screen brightness parameters, display parameters, and physiological index parameters from the external environment data and human data, and parses voice instructions;
the control layer is a computer, and comprises a CPU, a GPU, a memory, a plurality of interfaces and the like, and can receive data, perform calculation processing according to a program stored in the memory and output a calculation result through the interfaces.
Specifically, a mapping from light intensity to screen brightness is computed from the light intensity data, the screen brightness parameter is derived, and a hardware programming interface is invoked to send it to the output layer.
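As an illustrative sketch only (the patent does not specify the mapping), the intensity-to-brightness conversion could be a logarithmic interpolation like the following; the function name, lux bounds, and the 0-100 brightness scale are all assumptions for the example:

```python
import math

def lux_to_screen_brightness(lux: float,
                             min_lux: float = 1.0,
                             max_lux: float = 10000.0) -> int:
    """Map measured ambient light (lux) to a screen brightness level 0-100.

    Assumed log-scale mapping: perceived brightness is roughly logarithmic
    in luminance, so interpolate on log10(lux) between min_lux and max_lux.
    """
    lux = max(min_lux, min(lux, max_lux))
    ratio = (math.log10(lux) - math.log10(min_lux)) / (
        math.log10(max_lux) - math.log10(min_lux))
    return round(100 * ratio)

# Example: dim cabin interior at night vs. bright daylight through a hatch
print(lux_to_screen_brightness(5.0))     # low brightness for a dark cabin
print(lux_to_screen_brightness(8000.0))  # near-maximum for strong daylight
```

The resulting value would then be passed to whatever hardware programming interface the platform exposes for backlight control.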
The display parameters comprise display parameters for the situation layer, the map layer, and the interface layer, wherein:
the interface layer is the user interface (UI) and comprises the parameters of human-computer interaction and operation logic; it refers to the overall design of the software's human-computer interaction, operation logic, and interface aesthetics, and the system's interface design is combined with the user's working and application environment to achieve human-machine interface adaptation;
the map layer comprises at least a two-dimensional map, a three-dimensional digital earth, geomagnetic information, and meteorological and hydrological information, with different information organized into map layers whose display and hiding are controlled;
the situation layer presents the battlefield situation elements the commander focuses on, displayed as standardized symbols using the service's specific battlefield markers.
The interface layer, map layer, and situation layer require cooperative control. First, attention must be paid to the overall coordination of the situation display, producing a consistent visual impression. Second, a degree of contrast should be set according to the current task so that the layer of interest is highlighted. The three elements of ergonomics, namely human, machine, and environment, each carry resource limits: human vision has a bounded cognitive load, and computer resources such as the CPU, memory, graphics card, and display screen have physical limits. Situation-display design must therefore weigh these factors together and balance the design as a whole; adaptive display of the map layer and situation layer can be configured according to the capability of the machine's graphics card. Finally, the three elements and three layers are fused as a whole: when displaying the battlefield situation, the mutual influence among the three layers and the resource constraints of the human-machine-environment elements are considered together, so that the ergonomic design principles of safety, systematization, and user comfort pervade the battlefield situation display design.
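As a sketch of what such graphics-card-adaptive configuration might look like (the tiers, thresholds, and setting names below are assumptions for illustration, not values from the patent):

```python
# Hypothetical adaptive-detail table: graphics capability tier -> settings
# for the map and situation layers.
DETAIL_PROFILES = {
    "low":  {"map_engine": "embedded_gis", "symbol_text": "minimal",
             "three_d_earth": False},
    "mid":  {"map_engine": "embedded_gis", "symbol_text": "summary",
             "three_d_earth": False},
    "high": {"map_engine": "full_gis",     "symbol_text": "full",
             "three_d_earth": True},
}

def configure_display(gpu_memory_mb: int) -> dict:
    """Pick a map/situation-layer detail profile from available GPU memory.

    The tier thresholds are illustrative assumptions.
    """
    if gpu_memory_mb < 1024:
        tier = "low"
    elif gpu_memory_mb < 4096:
        tier = "mid"
    else:
        tier = "high"
    return DETAIL_PROFILES[tier]

print(configure_display(2048))  # -> the "mid" profile
```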
Situation display inside equipment is constrained by space, resources, and the like and is affected by the environment, giving it a certain specificity; external ambient light has an especially large influence. Based on integrated cooperative control of the three ergonomic elements and the three battlefield-situation layers, an intelligent control model for equipment-level situation display is provided. The model takes ambient light and similar signals as input and, through its internal processing, automatically controls the display of the machine and the three battlefield-situation layers.
Generating the display parameters requires full consideration of ergonomics, that is, the relationship among the human, machine, and environment elements. In terms of user-centered design, the beneficiary of battlefield situation visualization is the combat commander, and designing for the user according to the human-centered ergonomic design concept is the soul of the visualization design.
In terms of machine hardware design, the machine's hardware environment comprises the graphics card, display, CPU/GPU, and the like; the graphics card has a definite performance impact on the situation layer and the map layer;
in terms of environment design, the environment refers both to the external natural environment and to the natural environment simulated by the map. The external natural environment includes illumination, weather, and so on, which interfere to some degree with situation display; map-related data such as elevation and imagery also influence the display;
setting the size of keys in the interface layer according to the detected vibration amplitude and vibration frequency, and adjusting the positions of the keys in the screen according to the finger positions detected by the camera so as to facilitate operation; when the vibration amplitude is overlarge, automatically detecting the average coordinate of the position of the finger, matching the average coordinate with the coordinate of the key in the interface layer, automatically judging the key to be operated, and amplifying the key or directly executing the function corresponding to the key so as to facilitate the operation of an operator; selecting elements of the hidden interface layer according to the current task;
according to the roll angle of the current equipment detected by the gyroscope, rotation angles can be set for the text and keys in the interface layer and for the markers in the map and situation layers, so that an operator can still read them accurately in a tilted vehicle. This avoids the mismatch that arises when, at large tilt angles, the operator subconsciously keeps the head vertical and views the screen at an angle, impairing recognition of text, keys, and markers;
the display screen's base can also be built as an automatically rotating mount whose angle is adjusted according to the equipment tilt detected by the gyroscope, further easing viewing for the operator.
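A minimal sketch of the two adjustments above, assuming a simple linear vibration-severity model and a fixed roll threshold (all constants and names are illustrative, not taken from the patent):

```python
def adapt_key_size(base_px: int, amplitude_mm: float, freq_hz: float,
                   gain: float = 4.0, max_scale: float = 2.0) -> int:
    """Enlarge interface keys as vibration severity grows.

    Assumed model: severity = amplitude x frequency, scaled linearly and
    capped at max_scale times the base key size.
    """
    severity = amplitude_mm * freq_hz
    scale = min(1.0 + gain * severity / 100.0, max_scale)
    return round(base_px * scale)

def counter_rotation(roll_deg: float, threshold_deg: float = 10.0) -> float:
    """Rotation to apply to text, keys, and markers so they stay upright
    when the vehicle's roll (from the gyroscope) exceeds a threshold."""
    return -roll_deg if abs(roll_deg) > threshold_deg else 0.0

print(adapt_key_size(48, amplitude_mm=2.5, freq_hz=8.0))  # enlarged key, 86 px
print(counter_rotation(25.0))  # rotate UI elements by -25 degrees
```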
From the light intensity data, a brightness mapping between the light intensity and the markers in the situation layer is computed in the HSV color space; a marker-drawing programming interface is called with the computed marker brightness, and the marker colors are set and sent to the output layer. Tiered, detail-level display of the markers' supplementary text is set according to the equipment's power budget and graphics card parameters.
Likewise, a brightness mapping between the light intensity and the visual variables of the geographic symbols in the map layer is computed in the HSV color space; a map-drawing programming interface is called with the computed map symbol brightness, and the map symbol colors are set and sent to the output layer. An embedded, equipment-oriented GIS is adopted according to the equipment's power budget and graphics card parameters.
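The HSV-based brightness mapping could be sketched as follows: convert a symbol's nominal RGB color to HSV, drive the V (value) channel from the measured ambient light, and convert back. The linear lux-to-V mapping and its bounds are assumptions for illustration:

```python
import colorsys

def adjust_symbol_color(rgb, lux, min_v=0.35, max_v=1.0, max_lux=10000.0):
    """Re-derive a marker or map-symbol color for the current ambient light.

    Keeps hue and saturation, raises the HSV value channel with ambient
    light so symbols stay legible; min_v/max_v and the linear mapping
    are illustrative assumptions.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    v = min_v + (max_v - min_v) * min(lux, max_lux) / max_lux
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r2, g2, b2))

# A red battlefield marker under dim vs. bright ambient light
print(adjust_symbol_color((200, 30, 30), lux=50))    # darker rendering
print(adjust_symbol_color((200, 30, 30), lux=9000))  # near full brightness
```

The returned color would then be handed to the marker-drawing or map-drawing programming interface mentioned above.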
The light intensity data are collected inside the equipment, and the display is adjusted in real time to the light under which the crew operates. An optical sensor can also be mounted outside the equipment so that interior lighting is adjusted in real time to the external light level, keeping the operator in an optimal observation state; for example, interior lighting is dimmed at night so the operator can clearly observe the environment outside the equipment.
The physiological index parameters are computed as follows: from the information collected by the camera, microphone, oximeter, temperature sensor, and heart rate detector, the operator's emotions (joy, anger, grief, happiness, and so on) and physiological state are monitored comprehensively across modalities to judge whether the operator is fatigued. If the operator is in a poor state, a warning is issued and warning information is sent to the output layer. This improves safety and reduces the risk of accidents.
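A sketch of the threshold check on the vital signs (the limits below are illustrative assumptions, not clinical values from the patent; real multimodal fusion with facial expression and voice would sit on top of this):

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    spo2: float        # blood oxygen saturation, %
    heart_rate: float  # beats per minute
    body_temp: float   # degrees Celsius

def check_vitals(v: Vitals) -> list:
    """Return warning messages for out-of-range vital signs."""
    warnings = []
    if v.spo2 < 94.0:
        warnings.append(f"Low blood oxygen: {v.spo2:.0f}%")
    if not 50.0 <= v.heart_rate <= 120.0:
        warnings.append(f"Abnormal heart rate: {v.heart_rate:.0f} bpm")
    if not 36.0 <= v.body_temp <= 38.0:
        warnings.append(f"Abnormal body temperature: {v.body_temp:.1f} C")
    return warnings

# Any non-empty result would be forwarded to the output layer as a warning
print(check_vitals(Vitals(spo2=91.0, heart_rate=130.0, body_temp=36.6)))
```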
Voice instruction parsing proceeds as follows: from the information collected by the microphone, voice instructions are parsed by speech recognition, the various voice-based human-computer interactions are performed, and the instruction information is sent to the output layer (a dispatch sketch follows the list below), wherein:
the instruction information for the map layer includes at least: zooming the two-dimensional map in and out, panning the map up, down, left, and right, showing and hiding map layers, and adjusting brightness;
the instruction information for the situation layer includes at least: adjusting marker brightness, adding, deleting, and viewing markers, and showing the marker layer;
the instruction information for the interface layer includes at least: contrast display of the map layer and situation layer (for example, increasing the brightness contrast of the two layers in the HSV color space); highlighting the map layer or the situation layer, where highlighting one layer automatically hides the other two to improve the operator's concentration; and consistent display of the map layer and situation layer (for example, keeping the situation layer's brightness proportional to the map layer's in the HSV color space).
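A dispatch sketch for the parsed instructions (the phrases, layer names, and action identifiers below are invented for illustration; the speech-to-text step itself is out of scope here):

```python
# Hypothetical command table: recognized phrase -> (target layer, action).
COMMANDS = {
    "zoom in":             ("map",       "zoom_in"),
    "zoom out":            ("map",       "zoom_out"),
    "pan left":            ("map",       "pan_left"),
    "hide map layer":      ("map",       "hide_layer"),
    "brighter markers":    ("situation", "brightness_up"),
    "show marker layer":   ("situation", "show_layer"),
    "highlight map":       ("interface", "highlight_map"),
    "highlight situation": ("interface", "highlight_situation"),
}

def parse_voice_command(text: str):
    """Map a recognized utterance to (layer, action) instruction
    information, or None if nothing matches."""
    return COMMANDS.get(text.strip().lower())

print(parse_voice_command("Zoom In"))        # ('map', 'zoom_in')
print(parse_voice_command("highlight map"))  # ('interface', 'highlight_map')
```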
S130: the output layer outputs the visual situation according to the screen brightness parameters and display parameters, outputs prompt information according to the physiological index parameters, and adjusts the display parameters according to the parsed voice instructions.
In this step, the output layer carries out automated operations on the interface layer, map layer, and situation layer according to the computation and processing results of the control layer. The output layer is, for example, a CRT (cathode-ray tube) or LCD (liquid crystal) display. Its surface may be covered with a touch element so that touch operations can be performed; alternatively, physical or virtual keys may be arranged around the display screen. If physical keys are used, they may incorporate backlighting, a touch surface, and the like, and support user-defined key assignment so that operators can configure them to their own preference.
Through the human-machine-environment, user-centered design concept, coordinated and systematic situation-display design for the special equipment environment is carried out across the map layer, situation layer, and interface layer, providing an overall design scheme for equipment-level situation visualization with good prospects for practical engineering application. The model automatically and systematically processes multiple elements such as the external natural environment; the situation, map, and interface layers; screen brightness; the operator's physiological indices and facial expressions; and voice commands. It designs the interactive friendliness of equipment-level software as a whole, reflects systematic cooperative control of the three human-machine-environment elements and the three layers, and comprehensively applies human-centered ergonomic design to battlefield situation display.
As shown in fig. 2, a server according to an embodiment of the present application includes: a memory 201 and at least one processor 202;
the memory 201 stores a computer program, and the at least one processor 202 executes the computer program stored in the memory 201 to implement the ergonomic-based equipment display control method described above.
According to a third aspect of the present application, there is provided a computer-readable storage medium having a computer program stored therein, the computer program when executed implementing the ergonomic-based equipment display control method described above.
According to one embodiment of the application, the technical effect of the ergonomics-based equipment display control method is as follows: grounded in the human-machine-environment, user-centered design concept, the method carries out coordinated, systematic situation-display design for the special equipment environment across the map layer, situation layer, and interface layer, providing overall design guidance for equipment-level situation visualization;
it further proposes a new, automatic-control design concept for equipment-level battlefield situation display: a) automatic control minimizes unnecessary manual interaction (such as adjusting screen brightness or button sizes in the interface), avoiding misoperation in confined spaces; b) multimodal human-computer interaction substitutes voice instructions for manual operation in the confined space, keeping both hands focused on core combat tasks such as driving and shooting.
It should be noted that the foregoing detailed description is exemplary and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is intended to include the plural unless the context clearly indicates otherwise. Furthermore, it will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or groups thereof.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Spatially relative terms, such as "above," "over," "on the upper surface of," and the like, may be used herein for ease of description to describe one device's or feature's spatial relationship to another device or feature as illustrated in the figures. It will be understood that such terms are intended to encompass different orientations in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "above" or "over" other devices or structures would then be oriented "below" or "beneath" them; the exemplary term "above" thus encompasses both "above" and "below." The device may also be positioned in other ways, such as rotated 90 degrees or at other orientations, and the spatially relative terms used herein are interpreted accordingly.
In the above detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, like numerals typically identify like components unless context indicates otherwise. The illustrated embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein.
The above is only a preferred embodiment of the present application and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (10)

1. An ergonomics-based equipment display control method, characterized by comprising the following steps:
step 1: an input layer acquires external environment data and human data, wherein the human data comprise at least voice instructions;
step 2: a control layer computes screen brightness parameters, display parameters, and physiological index parameters from the external environment data and the human data, and parses the voice instructions;
step 3: an output layer outputs the visual situation according to the screen brightness parameters and display parameters, outputs prompt information according to the physiological index parameters, and adjusts the display parameters according to the parsed voice instructions.
2. The ergonomics-based equipment display control method of claim 1, wherein the input layer comprises at least a gyroscope, an optical sensor, a camera, a microphone, an oximeter, a temperature sensor, and a heart rate detector; the external environment data comprise at least light intensity data; and the human data further comprise blood oxygen data, heart rate data, and body temperature data.
3. The ergonomics-based equipment display control method of claim 2, wherein a mapping from light intensity to screen brightness is computed from the light intensity data, the screen brightness parameter is then derived, and a hardware programming interface is invoked to send it to the output layer.
4. The ergonomics-based equipment display control method of claim 2, wherein the display parameters cover a situation layer, a map layer, and an interface layer, wherein:
the interface layer is the user interface and comprises the parameters of human-computer interaction and operation logic;
the map layer comprises at least a two-dimensional map, a three-dimensional digital earth, geomagnetic information, and meteorological and hydrological information, with different information organized into map layers whose display and hiding are controlled;
the situation layer presents battlefield situation elements, displayed as standardized symbols using battlefield markers.
5. The ergonomics-based equipment display control method of claim 4, wherein the display parameters are generated as follows:
the size of keys in the interface layer is set according to the detected vibration amplitude and frequency, and elements of the interface layer to hide are selected according to the current task;
from the light intensity data, a brightness mapping between the light intensity and the markers in the situation layer is computed in the HSV color space; a marker-drawing programming interface is called with the computed marker brightness, and the marker colors are set and sent to the output layer; tiered, detail-level display of the markers' supplementary text is set according to the equipment's power budget and graphics card parameters;
from the light intensity data, a brightness mapping between the light intensity and the visual variables of the geographic symbols in the map layer is computed in the HSV color space; a map-drawing programming interface is called with the computed map symbol brightness, and the map symbol colors are set and sent to the output layer; an embedded, equipment-oriented GIS is adopted according to the equipment's power budget and graphics card parameters.
6. The ergonomics-based equipment display control method of claim 2, wherein the physiological index parameters are computed as follows: from the operator's emotional and physical state as captured by the camera, microphone, oximeter, temperature sensor, and heart rate detector, it is judged whether the operator is in a poor state, and warning information is sent to the output layer as required.
7. The ergonomics-based equipment display control method of claim 4, wherein voice instruction parsing comprises: parsing the voice information collected by the microphone into instruction information and sending it to the output layer, wherein:
the instruction information for the map layer includes at least: zooming the two-dimensional map in and out, panning the map up, down, left, and right, showing and hiding map layers, and adjusting brightness;
the instruction information for the situation layer includes at least: adjusting marker brightness, adding, deleting, and viewing markers, and showing the marker layer;
the instruction information for the interface layer includes at least: contrast display of the map layer and situation layer, highlighting the map layer or the situation layer, and consistent display of the map layer and situation layer.
8. The ergonomics-based equipment display control method of claim 4, wherein the output layer carries out automated operations on the interface layer, map layer, and situation layer according to the computation and processing results of the control layer.
9. A server, comprising: a memory and at least one processor;
the memory stores a computer program, and the at least one processor executes the computer program stored in the memory to implement the ergonomics-based equipment display control method of any of claims 1-8.
10. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium which, when executed, implements the ergonomics-based equipment display control method of any of claims 1-8.
CN202311107375.3A 2023-08-31 2023-08-31 Equipment display control method based on human engineering, server and storage medium Pending CN116820379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311107375.3A CN116820379A (en) 2023-08-31 2023-08-31 Equipment display control method based on human engineering, server and storage medium


Publications (1)

Publication Number Publication Date
CN116820379A true CN116820379A (en) 2023-09-29

Family

ID=88127900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311107375.3A Pending CN116820379A (en) 2023-08-31 2023-08-31 Equipment display control method based on human engineering, server and storage medium

Country Status (1)

Country Link
CN (1) CN116820379A (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102164209A (en) * 2011-03-23 2011-08-24 北京百纳威尔科技有限公司 Mobile terminal screen display control method and mobile terminal
US20200152197A1 (en) * 2011-04-22 2020-05-14 Emerging Automotive, Llc Methods and vehicles for capturing emotion of a human driver and customizing vehicle response
US20120306768A1 (en) * 2011-06-03 2012-12-06 Microsoft Corporation Motion effect reduction for displays and touch input
CN105100396A (en) * 2015-05-21 2015-11-25 努比亚技术有限公司 Mobile terminal and rotating display method and device thereof
CN107132958A (en) * 2016-02-29 2017-09-05 王博 A kind of the small screen shows the control method and mobile intelligent terminal of large scale content
CN107358572A (en) * 2017-07-12 2017-11-17 杭州字节信息技术有限公司 A kind of ambient light adaptive approach of modified based on tactical information terminal
CN107479706A (en) * 2017-08-14 2017-12-15 中国电子科技集团公司第二十八研究所 A kind of battlefield situation information based on HoloLens is built with interacting implementation method
CN111010476A (en) * 2019-11-13 2020-04-14 北京奇艺世纪科技有限公司 Image display method, device, terminal and readable storage medium
CN111290575A (en) * 2020-01-21 2020-06-16 中国人民解放军空军工程大学 Multichannel interactive control system of air defense anti-pilot weapon
CN112533049A (en) * 2020-11-26 2021-03-19 深圳创维-Rgb电子有限公司 Picture following screen rotation processing method and device, intelligent terminal and medium
CN112562267A (en) * 2020-11-27 2021-03-26 深圳腾视科技有限公司 Vehicle-mounted safety robot and safe driving assistance method
CN113778113A (en) * 2021-08-20 2021-12-10 北京科技大学 Pilot-assisted driving method and system based on multi-mode physiological signals
KR20230050535A (en) * 2021-10-07 2023-04-17 주식회사 이에스피 Display system and method for improving autonomous driving safety of electric bus
CN116320305A (en) * 2023-03-10 2023-06-23 中国电子科技集团公司第五十四研究所 Full-dimension battlefield situation information presentation system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
沈志辉; 袁再江: "Key technologies of information fusion in combat auxiliary decision-making systems," Journal of Liaoning Technical University (Natural Science Edition), no. 1, pp. 83-85.
贾Π; 徐晓刚; 郑文庭: "Research on battlefield visualization in shipborne command and control systems," Journal of System Simulation, no. 10, pp. 95-98.
黄银园; 田东雨; 田胜: "Application design of human factors engineering in shipborne situation generation systems," Modern Radar, no. 09, pp. 8-12.

Similar Documents

Publication Publication Date Title
US11507208B2 (en) External user interface for head worn computing
US11294180B2 (en) External user interface for head worn computing
US10268339B2 (en) Enhanced camera-based input
US6184847B1 (en) Intuitive control of portable data displays
EP3283938B1 (en) Gesture interface
CA2773636C (en) Method and apparatus for using generic software applications by ocular control and suitable methods of interaction
WO2016157936A1 (en) Information processing device, information processing method, and program
US20160041619A1 (en) Information processing apparatus, information processing method, and program
US10592000B2 (en) Gesture-based GUI for computing devices
JP2021528786A (en) Interface for augmented reality based on gaze
US20190385372A1 (en) Positioning a virtual reality passthrough region at a known distance
US10713488B2 (en) Inspection spot output apparatus, control method, and storage medium
JP2017004356A (en) Method of specifying position in virtual space, program, recording medium with program recorded therein, and device
US20210278932A1 (en) Self-learning digital interface
Fang et al. Head-mounted display augmented reality in manufacturing: A systematic review
JPH07271546A (en) Image display control method
CN108369451A (en) Information processing unit, information processing method and program
CN116820379A (en) Equipment display control method based on human engineering, server and storage medium
Cardenas et al. AARON: assistive augmented reality operations and navigation system for NASA’s exploration extravehicular mobility unit (xEMU)
CN113672158A (en) Human-computer interaction method and device for augmented reality
CN116185205B (en) Non-contact gesture interaction method and device
WO2023095519A1 (en) Display control device, display control method, and program
CN109847343B (en) Virtual reality interaction method and device, storage medium and electronic equipment
JP2018073310A (en) Display system and display program
Bacher Augmented User Interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination