CN114981769A - Information display method and device, medical equipment and storage medium


Info

Publication number
CN114981769A
Authority
CN
China
Prior art keywords
menu
target
position point
option content
option
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180009260.3A
Other languages
Chinese (zh)
Inventor
朱皓 (Zhu Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Healthcare Co Ltd
Original Assignee
Wuhan United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Healthcare Co Ltd filed Critical Wuhan United Imaging Healthcare Co Ltd
Publication of CN114981769A publication Critical patent/CN114981769A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are an information presentation method, an information presentation device, a medical apparatus and a storage medium. When a trigger instruction is detected in the current interface of a target device, and a selection task exists in the area where the trigger position point corresponding to the trigger instruction is located, the option content of the selection task is obtained (S101), and a first menu generated from the option content is displayed within a preset range around the area where the trigger position point is located in the current interface, the option content displayed in the first menu being used to select the target option content (S102).

Description

Information display method and device, medical equipment and storage medium
Technical Field
The present application relates to the field of human-computer interaction technologies, and in particular, to an information display method and apparatus, a medical device, and a storage medium.
Background
Human-computer interaction refers to the process in which a person and a computer exchange information, through a certain interaction mode and a certain dialogue language, in order to complete a given task.
The human-computer interaction function is mainly implemented by external devices capable of input and output, together with the corresponding software. For example, a medical device mainly takes input from the user through a physical operation panel and a touch display screen, and presents the corresponding content on the display screen as output to the user. In general, the input-to-output logic of the human-computer interaction function can be realized with different logic calculation methods.
Disclosure of Invention
The embodiments of the present application provide an information display method and apparatus, a medical device and a storage medium, which can simplify the operations involved in information display and improve the accuracy of information display.
In a first aspect, an embodiment of the present application provides an information display method, where the method includes:
if a trigger instruction is detected in the current interface of the target device and a selection task exists in the area where the trigger position point corresponding to the trigger instruction is located, acquiring the option content of the selection task, wherein no menu exists in the current interface and the trigger instruction is generated through a preset manipulation manner;
displaying a first menu generated according to the content of the option in a preset range around an area where a trigger position point in a current interface is located; the option contents displayed in the first menu are used for selecting the target option contents.
In one embodiment, the above-mentioned manipulation manner includes: a first operation acting continuously in the current interface and/or a second operation acting intermittently in the current interface.
In one embodiment, the trigger location point is an initial interaction location point; correspondingly, before acquiring the option content of the selection task, the method comprises the following steps:
determining whether a selection task exists in the area where the initial interaction position point is located according to the context of the area where the initial interaction position point is located; the context represents the operation flow logic of the current interface;
and if so, executing the step of acquiring the option content of the selection task.
In one embodiment, the displaying the first menu generated according to the content of the option in the preset range around the area where the trigger position point in the current interface is located includes:
and displaying, within a preset range around the area where the initial interaction position point is located in the current interface, a first menu generated according to the option content of the selection task.
In one embodiment, the method further comprises:
if it is detected that the trigger position point acted on by the manipulation manner moves from the initial interaction position point to a first target interaction position point in the first menu, determining the target option content according to task information of the first target interaction position point; the task information indicates either that a selection task exists or that no selection task exists.
In one embodiment, the determining the content of the target option according to the task information of the first target interaction location point includes:
and if the task information of the first target interaction position point indicates that no selection task exists and the first target interaction position point is in the range of any option content in the first menu, determining the option content of the first target interaction position point as the target option content.
In one embodiment, the determining the content of the target option according to the task information of the first target interaction location point includes:
if the task information of the first target interaction position point is that a selection task exists and the first target interaction position point is in the range of any option content in the first menu, acquiring sub-option content corresponding to the selection task of the option content where the first target interaction position point is located;
displaying a second menu generated according to sub-option content corresponding to the selection task of the option content of the first target interaction position point within a preset range around the area where the first menu is located; the sub option contents displayed in the second menu are used to select the target option contents.
In one embodiment, the method further comprises:
if it is detected that the trigger position point acted on by the manipulation manner moves from the first target interaction position point to a second target interaction position point in the second menu, determining the target option content according to task information of the second target interaction position point; the task information indicates either that a selection task exists or that no selection task exists.
In one embodiment, the determining the content of the target option according to the task information of the second target interaction location point includes:
if the triggering position point acted by the control mode is detected to move from the second target interaction position point to a third target interaction position point in other menus, and the third target interaction position point is located in the range of any option content in the other menus, determining the option content of the third target interaction position point as the target option content; the other menu is any menu other than the second menu.
In one embodiment, the method further comprises:
and if the control mode is detected to meet the preset control end condition, determining that the control of the target equipment is ended, and canceling all menus in the target equipment interface corresponding to the control end moment.
In one embodiment, the obtaining of the option content of the selection task includes:
and determining the option content of the selected task through a preset artificial intelligence algorithm.
In one embodiment, the displaying the menu generated according to the content of the option in the preset range around the area where the trigger position point in the current interface is located includes:
and if display information already exists within the preset range around the area where the trigger position point in the current interface is located, displaying the menu generated according to the option content in an area not covered by the existing display information.
In one embodiment, the method further comprises:
in the process of selecting the target option content in the menu, displaying the area corresponding to the target option content in a preset display mode; the preset display mode includes: highlighting the area corresponding to the target option content, or displaying the area corresponding to the target option content with an animated transition.
In one embodiment, the method further comprises:
acquiring attribute information of option content of a selection task;
and adjusting the appearance display information of the menu generated according to the option content according to the attribute information.
In one embodiment, the display form of the menu includes any one or a combination of a graphical interface, a table and text.
In one embodiment, the trigger location point is any location point in the current interface.
In one embodiment, the appearance display information of the menu is determined according to the context of the menu and the layout of the current interface; the context represents the operational flow logic of the current interface.
In a second aspect, an embodiment of the present application provides an information display apparatus, including:
the acquisition module is used for acquiring option content of a selection task if a trigger instruction is detected to exist in the current interface of the target device and the selection task exists in the area where the trigger position point corresponding to the trigger instruction is located;
wherein the trigger instruction is generated through a preset manipulation manner, and the manipulation manner includes a first operation acting continuously in the current interface;
the display module is used for displaying a menu generated according to the content of the options in a preset range around the area where the trigger position point in the current interface is located; the option contents displayed in the menu are used for selecting the target option contents.
In a third aspect, an embodiment of the present application provides a medical apparatus, including a display, a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the method steps performed in the foregoing first aspect, and the display displays information according to the execution result of the processor.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method steps performed in the embodiment of the first aspect.
The embodiments of the present application provide an information display method and apparatus, a medical device and a storage medium. When a trigger instruction exists in the current interface of a target device and a selection task exists in the area where the trigger position point corresponding to the trigger instruction is located, the option content of the selection task is acquired, and a first menu generated from the option content is displayed within a preset range around the area where the trigger position point is located in the current interface, the option content displayed in the first menu being used to select the target option content. In this method, the option content of a selection task is acquired in real time and displayed as a first menu only when a trigger instruction is detected and a selection task is determined to correspond to it. In other words, before the trigger instruction is issued, no menu generated from any selection task exists in the current interface, so no resident operation button for a selection task needs to occupy the system interface of the target device when the selection task does not need to be executed; the interface is therefore simpler, and more interface area is left for other main-line tasks. Furthermore, because the selection task at the trigger position point is acquired in real time when the generated first menu is displayed, the purpose of the operation that generated the trigger instruction can be identified accurately, which ensures the accuracy of the option content shown in the displayed first menu.
Drawings
FIG. 1 is a schematic diagram of an internal structure of a computer device according to an embodiment;
FIG. 2 is a flow diagram illustrating an information presentation method according to an embodiment;
FIG. 3 is a diagram illustrating a manner in which information is presented in an interface, according to an embodiment;
FIG. 4 is a schematic diagram of a manner in which information is presented in an interface in another embodiment;
FIG. 5 is a schematic diagram illustrating a manner in which information is presented in an interface in another embodiment;
FIG. 6 is a schematic diagram illustrating a manner in which information is presented in an interface in another embodiment;
FIG. 7 is a schematic diagram illustrating a manner in which information is presented in an interface in another embodiment;
FIG. 8 is a schematic diagram illustrating a manner in which information is presented in an interface in another embodiment;
FIG. 9 is a schematic diagram illustrating a manner in which information is presented in an interface in another embodiment;
FIG. 10 is a schematic diagram illustrating a manner in which information is presented in an interface in another embodiment;
FIG. 11 is a schematic diagram illustrating a manner in which information is presented in an interface in another embodiment;
FIG. 12 is a diagram illustrating how information is presented in an interface according to another embodiment;
FIG. 13 is a flow chart illustrating an information displaying method according to another embodiment;
FIG. 14 is a flowchart illustrating an information displaying method according to another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clearly understood, the embodiments of the present application are described in further detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the embodiments of the application and are not intended to limit the embodiments of the application.
In the related art, the devices available for human-computer interaction mainly include a keyboard, a mouse, various pattern recognition devices, and the like. The software corresponding to these devices is the part of the operating system that provides the human-computer interaction function.
Taking an ultrasound system in a medical device as an example, the related art offers several control modes for the ultrasound system: control through buttons on a physical panel, control through a touch screen, and control by voice. Most of the controlled content has corresponding icons, labels, menu items, images and information fields. Control through physical buttons has obvious drawbacks: ultrasound devices often have multiple, or even more than ten, physical buttons, which makes the devices complex and unattractive. Touch-screen control removes the drawbacks of a physical panel, but because it merely moves the physical buttons onto a graphical interface it still suffers from a large number of virtual buttons and complex operation. Compared with these two approaches, voice recognition simplifies manual operation but lacks accuracy, responds more slowly and is less efficient. On this basis, the embodiments of the present application provide an information display method and apparatus, a medical device and a storage medium, which can simplify the operations involved in information display and improve its accuracy.
The information presentation method provided by the embodiment of the present application may be applied to any software system, and the software system may be run in any field and any type of computer device, and the computer device includes, but is not limited to, various medical devices, personal computers, laptops, smartphones, tablet computers, portable wearable devices, and the like, where the medical devices may be medical X-ray machines, digital imaging devices, X-ray computed tomography devices, magnetic resonance imaging devices, ultrasound imaging devices, nuclear medicine imaging devices, and the like, and the embodiment of the present application does not limit this. For example, the information presentation method may be applied in an ultrasound system, which is operated in a medical device.
As shown in fig. 1, a schematic diagram of the internal structure of a computer device is provided, and a processor in the computer device is used for providing computing and control capabilities. The memory comprises a nonvolatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program, and a database; the internal memory provides an environment for the operating system and the computer program to run in the non-volatile storage medium. The database in the computer equipment is used for storing relevant data of the information display method process. A network interface in a computer device is used to communicate with other devices outside over a network connection. The computer program is executed by a processor to implement an information presentation method.
The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. It can be understood that an execution subject of the information presentation method provided in the embodiment of the present application may be a target device, or may be an information presentation apparatus, and the information presentation apparatus may be a part or all of the target device. The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application.
In an embodiment, an information presentation method is provided. This embodiment concerns the specific process in which, for the current interface of a target device, if a trigger instruction is detected in the current interface and a selection task exists in the area where the trigger position point corresponding to the trigger instruction is located, the option content of the selection task is acquired and a menu generated from the option content is displayed at a set position. As shown in fig. 2, the embodiment includes the following steps:
s101, if a trigger instruction is detected to exist in a current interface of target equipment, and a selection task exists in an area where a trigger position point corresponding to the trigger instruction is located, acquiring option content of the selection task; no menu exists in the current interface; the trigger instruction is generated in a preset control mode.
The target device may be any type of device in any field. For example, the target device may be a medical device, including but not limited to a medical X-ray machine, a digital imaging device, an X-ray computed tomography device, a magnetic resonance imaging device, an ultrasound imaging device, a nuclear medicine imaging device, and the like.
The current interface of the target device is the interface displayed by the target device at the current moment. Taking the ultrasound system of a medical device as an example, the current interface may be the interface shown after the medical device is started but before the ultrasound system is launched; or the main interface of the started ultrasound system; or the interface displayed after entering any application module of the started ultrasound system; and so on, which the embodiment of the present application does not limit.
In the embodiment of the present application, a menu does not exist in the current interface, where the menu refers to a menu that can be hidden, that is, before a trigger instruction is received, the menu is not displayed in the interface and is in a hidden state.
In one scenario, the menu may be understood as the menu of the selection task to be performed at this time. For example, if the selection task to be performed is the selection of an ultrasound probe, the menu refers to the menu of the probe selection task; if the selection task to be performed is the selection of ultrasound image display parameters, the menu refers to the menu of that task. Of course, if several selection tasks are to be performed, for example both the task of selecting the ultrasound probe and the task of selecting the ultrasound image display parameters, the menu refers generally to the menu of any one of these tasks.
The existence of the trigger instruction in the current interface indicates that the user executes a preset control mode in the current interface. The preset control mode can be preset and stored in the target equipment, and the target equipment can accurately identify the control mode conveniently in practical application.
If it is detected, on the basis of the preset manipulation manner, that a trigger instruction exists in the current interface of the target device and that a selection task exists at the trigger position point corresponding to the trigger instruction, the option content of the selection task is acquired.
That is to say, after detecting that a trigger instruction exists in the current interface of the target device, it needs to further detect whether a selection task exists at a trigger location point corresponding to the trigger instruction.
A selection task is the general name for any task that requires a selection operation to be performed; because different interfaces in a software system have different functions, different selection tasks may arise on different interfaces. Different selection tasks may correspond to different selectable option contents, and if an item of option content itself contains selectable items, that option content is also a selection task; in short, anything that requires a selection operation can be called a selection task. For example, in the main interface of the ultrasound system, different probes can be selected through the first-level interface, and after a probe is selected, different examination items can be selected in the second-level interface; or, in the ultrasound freeze interface of the ultrasound system, different measurement packages can be selected through the first-level interface and different measurement items through the second-level interface, and so on.
A plurality of selection tasks are pre-configured in the software system, and each selection task contains several selectable option contents. Each pre-configured selection task can be triggered at a preset position of the interface, so in use, if the trigger position point corresponding to the trigger instruction coincides with the preset position of a pre-configured selection task on the interface, it can be determined that a selection task exists at the trigger position point corresponding to the trigger instruction.
Once it is determined that a selection task exists at the trigger position point corresponding to the currently detected trigger instruction, the option content in that selection task is acquired.
For example, as shown in fig. 3, if the trigger position point corresponding to the trigger instruction detected in fig. 3 is point a and it is determined that the selection task X1 exists at the point a, the option contents N1, N2, N3, and N4 in the selection task X1 need to be acquired.
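To make the hit test concrete, the following is a minimal Python sketch (not part of the original disclosure; all names such as Rect, SelectionTask and find_selection_task are hypothetical) of checking whether a pre-configured selection task's preset region contains the trigger position point, as in the fig. 3 example above.

```python
# Minimal sketch: deciding whether a pre-configured selection task exists at the
# trigger position point by a rectangle hit test. Names and values are illustrative.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

@dataclass
class SelectionTask:
    name: str                 # e.g. "X1"
    region: Rect              # preset trigger region on the interface
    options: List[str]        # selectable option contents

def find_selection_task(tasks: List[SelectionTask], px: float, py: float) -> Optional[SelectionTask]:
    """Return the selection task whose preset region contains the trigger point, if any."""
    for task in tasks:
        if task.region.contains(px, py):
            return task
    return None

# Usage: if a task is found at point A, its options (e.g. N1..N4 in fig. 3) are
# fetched to build the first menu.
tasks = [SelectionTask("X1", Rect(100, 400, 200, 60), ["N1", "N2", "N3", "N4"])]
hit = find_selection_task(tasks, 150, 430)   # trigger position point A
print(hit.options if hit else "no selection task at this point")
```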
In one embodiment, the manner of obtaining the option content in the selection task is as follows: and determining according to the context of the area where the trigger position point corresponding to the trigger instruction is located. The meaning of the context is not the text content of the context, but is related to the operation flow logic of the area where the trigger position point is located, and may be the execution logic of the workflow, for example, if the interface in the login state triggers the login task, the context here refers to the login environment, and the content of the selectable login option in the login task can be determined according to the login environment. In the embodiment, the option content in the selection task is determined according to the context of the area where the trigger position point is located, which is equivalent to considering the flow logic of the current business operation of the selection task and the running environment of the business operation, so that the acquired option content in the selection task can better conform to the requirement of the current selection task, and the accuracy of the acquired option content is ensured.
In another embodiment, the option content in the selection task is obtained as follows: the option content of the selection task is determined through a preset artificial intelligence algorithm. In this embodiment, when the option content in the selection task at the trigger position point needs to be acquired, the available option content may be determined through a preset artificial intelligence algorithm. For example, the algorithm may take as decision factors the task type of the selection task itself, all the option content supported by that task type, and the option content the user has selected most often in the selection task, and determine the most suitable option content for this triggering of the selection task.
Take as an example determining the option content from the option the user has selected most often in the selection task. Optionally, in one scenario, suppose the option content of a selection task is configured as a multi-level menu and the options at each level are configured in advance according to the context, i.e. the options at each level are fixed. Under this assumption, when the user triggers the selection task, the preset artificial intelligence algorithm can learn over time that the option the user selects most often is option 2.4 in the second-level menu. Option 2.4 can then be displayed in the first-level menu, for example by creating a shortcut to option 2.4 and placing it in the first-level menu so the user can select it quickly; it can also be placed in the first-level menu in other ways, which the embodiment of the present application does not limit.
In the embodiment, the option content in the selection task is obtained through the artificial intelligence algorithm, and can be comprehensively considered from multiple aspects to determine the option content which best meets the requirements of the user, so that the accuracy of the obtained option content is ensured, and the intelligence of obtaining the option content in the selection task is improved.
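The following is a minimal Python sketch of the frequency-based shortcut idea described above; it is an illustrative assumption, not the patent's actual artificial intelligence algorithm, and the function and variable names are hypothetical.

```python
# Minimal sketch: promote the option a user selects most often in a selection task
# into the first-level menu as a shortcut.
from collections import Counter
from typing import List

def build_first_level_menu(base_options: List[str], selection_history: List[str],
                           max_shortcuts: int = 1) -> List[str]:
    """Prepend the most frequently chosen deeper-level options as shortcuts."""
    counts = Counter(selection_history)
    shortcuts = [opt for opt, _ in counts.most_common(max_shortcuts) if opt not in base_options]
    return shortcuts + base_options

# Example: "option 2.4" from the second-level menu is chosen most often, so a
# shortcut to it is placed in the first-level menu.
history = ["option 2.4", "option 2.4", "option 1.1", "option 2.4"]
print(build_first_level_menu(["option 1", "option 2", "option 3"], history))
# -> ['option 2.4', 'option 1', 'option 2', 'option 3']
```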
S102, displaying a first menu generated according to option contents in a preset range around an area where a trigger position point in a current interface is located; the option contents displayed in the first menu are used for selecting the target option contents.
After the option content of the selection task of the trigger position point is acquired, the option options in the selection task can be displayed. Specifically, during display, the display can be performed in a preset range around the area where the trigger position point is located in the current interface, and the display is performed according to the menu generated according to the acquired option content of the selection task, so that the target option content can be selected based on the option content displayed in the menu.
The preset range around the area where the trigger position point in the current interface is located may be any range of values, for example, 10cm range, or 20cm range, etc., which is not limited by the embodiment of the present application.
After the display position is determined according to the preset range around the area where the trigger position point is located, the first menu is drawn from the acquired option content and displayed at that position. For example, the first menu may be drawn according to a preset menu generation rule: the rule may specify that three option contents are laid out in a row, or that six option contents are arranged in a circle, and so on.
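As an illustration of such a menu generation rule, the following Python sketch (an assumption, not the disclosed implementation) lays out up to three option contents in a row and larger option sets in a circle around the trigger position point; coordinates, spacing and radius are placeholder values.

```python
# Minimal sketch of a menu generation rule: a row for small menus, a circle for larger ones.
import math
from typing import List, Tuple

def layout_first_menu(options: List[str], cx: float, cy: float,
                      spacing: float = 80.0, radius: float = 120.0) -> List[Tuple[str, float, float]]:
    """Return (option, x, y) positions around the trigger position point (cx, cy)."""
    if len(options) <= 3:
        # small menus: one row beside the trigger position point
        return [(opt, cx + (i + 1) * spacing, cy) for i, opt in enumerate(options)]
    # larger menus: place options evenly on a circle around the point
    step = 2 * math.pi / len(options)
    return [(opt, cx + radius * math.cos(i * step), cy + radius * math.sin(i * step))
            for i, opt in enumerate(options)]

for item in layout_first_menu(["option 1", "option 2", "option 3",
                               "option 4", "option 5", "option 6"], cx=300, cy=500):
    print(item)
```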
In one embodiment, the presentation form of the first menu includes any one or a combination of a graphical interface, a table and text.
The embodiment of the present application does not limit this: the first menu may be displayed as a graphical interface, or in table form, where the table includes but is not limited to a single vertical column, a single horizontal row, or multiple rows and columns; the first menu may also be displayed in text form; of course, any one or more of tables, text and graphical interfaces may be combined, and the presentation form of the first menu can be determined according to the actual situation. Providing several presentation forms for the first menu makes the displayed information clearer, more intuitive and more comprehensive.
In one embodiment, the generated appearance showing information of the first menu can be determined according to the context of the first menu and the layout of the current interface.
Likewise, the context of the first menu refers to the context of the option content in the first menu, i.e. the operation flow logic of the option content and the running environment of the business operation. The layout of the current interface determines the areas of the current interface in which information can be presented. By jointly considering the context of the first menu and the layout of the current interface, the shape and size of the control for each item of option content, the overall arrangement of those controls, and their positions in the first menu can be determined; this information determines the appearance of the generated first menu.
According to the information display method provided by the embodiment of the application, when a trigger instruction exists in the current interface of the target device and a selection task exists in the area where the trigger position point corresponding to the trigger instruction is located, the option content of the selection task is acquired, and a first menu generated from the option content is displayed within a preset range around the area where the trigger position point is located in the current interface, the option content displayed in the first menu being used to select the target option content. In this method, the option content of a selection task is acquired in real time and displayed as a first menu only when a trigger instruction is detected and a selection task is determined to correspond to it; that is, before the trigger instruction is issued, no menu generated from any selection task exists in the current interface, so no resident operation button for a selection task needs to occupy the system interface of the target device when the selection task does not need to be executed. The interface is therefore simpler, and more interface area is left for other main-line tasks. Furthermore, because the selection task at the trigger position point is acquired in real time when the generated first menu is displayed, the purpose of the operation that generated the trigger instruction can be identified accurately, which ensures the accuracy of the option content shown in the displayed first menu.
Based on the above embodiments, an implementation manner of the preset manipulation manner is described.
In one embodiment, the preset manipulation manner includes a first operation which is continuously acted in the current interface.
The first operation continuously acting in the current interface can be understood as that the user continuously executes the first operation in the current interface. For example, taking a touch screen as an example, the first operation continuously acting in the current interface may be that the user continuously slides in the touch screen by a finger, and the finger of the user does not leave the touch screen during the sliding process. It should be noted that the continuous action in the embodiment of the present application may be understood as continuously generating an influence on the target device, where the influence includes, but is not limited to, a physical influence, a program influence, an information influence, and the like.
In practical applications, the interface display screen of the target device may be a screen that is not touchable, or a screen that can sense and detect external information, or the like, in addition to a touchable screen.
For example, in a touchable screen, the first operation may be that the finger slides in the current interface of the target device, that is, the finger slides in the current interface of the target device may cause the trigger instruction to be generated in the current interface.
On a screen that cannot be touched, the first operation may be the movement of a physical input device that is connected to the target device but separate from the display; that is, moving such a device can generate the trigger instruction in the current interface. The physical input device separate from the display may be a mouse, a trackball, a joystick, and the like. At the same time, a corresponding indicator may be displayed in the current interface of the target device to show the position point that the mouse, trackball or joystick is currently manipulating, where the display style of the indicator includes but is not limited to a pointer, a palm, a finger, an arrow, a geometric shape, and the like.
In the screen capable of sensing and detecting the external information, the external information may be sensed and detected, including but not limited to detecting the user's body movement without touching the screen, sensing and detecting the user's voice, sensing and detecting the user's brain activity signal, etc., the first operation may be an operation of generating the above-mentioned sensible and detectable information, that is, the sensible and detectable information generated by the first operation may be implemented to act on the current interface of the target device, so that the above-mentioned trigger instruction is generated in the current interface.
For any of the first operations listed above, in the embodiment of the present application, during the period from the generation of the trigger instruction to the subsequent selection of the target option content, if the first operation is used as the manipulation manner that generates the trigger instruction, the first operation must act on the current interface continuously; there must be no moment at which the first operation is not acting on the current interface. For example, the finger must remain on the touch screen from the moment the trigger instruction is generated until the target option content has been selected, and must not leave the touch screen.
In practical application, the functions provided by the device and its software system are very rich, and completing one task in the software system may require several selections; each selection generates a trigger instruction, and the trigger position points corresponding to these trigger instructions may differ. Therefore, in one embodiment, the trigger position point corresponding to the trigger instruction may be any position point in the current interface.
When the manipulation manner is a first operation acting continuously on the current interface, a trigger position point corresponding to a trigger instruction can be defined as a position point at which the continuously acting first operation dwells for longer than a preset threshold. For example, while a finger slides continuously across the touch screen it acts continuously on the screen; whenever the finger's dwell time at some position point exceeds the preset threshold, that position point is a trigger position point that generates a trigger instruction. There may therefore be more than one such trigger position point during a single slide: every position point whose dwell time exceeds the preset threshold generates a trigger instruction.
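The dwell-based detection of trigger position points during a continuous slide could look roughly like the following Python sketch; the event model (x, y, timestamp samples), the threshold value and the movement tolerance are assumptions for illustration only.

```python
# Minimal sketch: during a continuous slide, any point where the contact dwells
# longer than a threshold is treated as a trigger position point.
from typing import Iterator, List, Tuple

DWELL_THRESHOLD = 1.0   # seconds; e.g. 1 s or 2 s as mentioned in the text
MOVE_TOLERANCE = 5.0    # pixels within which the contact counts as stationary

def trigger_points(events: List[Tuple[float, float, float]]) -> Iterator[Tuple[float, float]]:
    """Yield every position where the sliding contact stays still past the threshold."""
    anchor_x, anchor_y, anchor_t = events[0]
    for x, y, t in events[1:]:
        if abs(x - anchor_x) <= MOVE_TOLERANCE and abs(y - anchor_y) <= MOVE_TOLERANCE:
            if t - anchor_t >= DWELL_THRESHOLD:
                yield (anchor_x, anchor_y)
                anchor_t = t                 # avoid re-reporting the same dwell
        else:
            anchor_x, anchor_y, anchor_t = x, y, t

# Example: the finger pauses at (150, 430) for 1.2 s, producing one trigger point.
samples = [(150, 430, 0.0), (151, 431, 0.6), (150, 430, 1.2), (300, 200, 1.4)]
print(list(trigger_points(samples)))
```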
Therefore, when the target option content needs to be selected, the trigger is realized by adopting the first operation with continuous action, namely, in the whole information display process, the target option content can be selected by adopting the uniform operation logic, so that the option selection speed is increased, and for a user, the muscle memory can be formed by always adopting the uniform operation logic, so that the selection accuracy is improved.
In another embodiment, the preset manipulation manner includes a second operation of separating actions in the current interface.
The second operation acting intermittently in the current interface can be understood as the user performing the second operation intermittently in the current interface. For example, on a touch screen, the second operation acting intermittently may be the user clicking with a finger, lifting the finger, clicking again, and lifting again; each click generates a trigger instruction in the current interface. It should be noted that intermittent action in the embodiment of the present application can be understood as influencing the target device at intervals, where the influence includes but is not limited to physical influence, program influence, information influence, and the like.
Similarly, the touchable screen, the touchless screen, and the screen capable of sensing and detecting external information are used as examples to describe the realizable manner of the second operation.
For example, in a touchable screen, the second operation may be that the finger clicks intermittently on the current interface of the target device, and each time the finger clicks on the current interface of the target device, the trigger instruction may be generated in the current interface.
On a screen that cannot be touched, the second operation may be intermittent clicks, in the current interface of the target device, by a physical input device that is connected to the target device but separate from the display; a corresponding indicator is displayed in the current interface to show the position point being clicked, and each click of the device generates a trigger instruction.
The physical input device separate from the display may be a mouse, a trackball, a joystick, and the like, and the display style of the indicator includes but is not limited to a pointer, a palm, a finger, an arrow, a geometric shape, and the like; the embodiment of the present application does not list them all here, and they can be set according to the actual situation.
In the screen capable of sensing and detecting the external information, the external information may be sensed and detected, including but not limited to detecting the user's limb movement without touching the screen, sensing and detecting the user's voice information, sensing and detecting the user's brain activity signal, etc. Correspondingly, in this scenario, the second operation may be an operation of generating the sensible and detectable information, that is, the sensible and detectable information generated by the second operation implements the current interface acting on the target device so as to generate the trigger instruction in the current interface.
Based on the above description of the second operation, in any second operation, in the embodiment of the present application, during a period from the generation of the trigger instruction to the subsequent selection of the target option content, if the second operation is used as a control manner for generating the trigger instruction, the second operation acts on the current interface at intervals. For example, during the process from the generation of the trigger instruction to the subsequent selection of the target option content, each time an item of content needs to be selected, the finger needs to click once at a position point corresponding to the item of content in the touch screen. In practical application, the functions provided by the device and the software system are very rich, the selection of one task in the software system can be realized only by selecting the option content for multiple times, and the corresponding positions of each option content in the touch screen are different, so that when the second operation is adopted as the control mode for generating the trigger instruction, the trigger position point corresponding to the trigger instruction is also any position point in the current interface, and the embodiment of the application is not limited to the position point.
Alternatively, the second operation may combine any of the ways listed above, i.e. the intermittent action in the current interface may be achieved in several ways. For example, a trigger instruction is generated by a finger click on the current interface, the finger is lifted (the interval), another trigger instruction is generated by acting on the current interface by voice, then after another interval a further trigger instruction is generated in yet another way, and so on, until the target option content is selected and the task ends.
Therefore, when the target option content needs to be selected, the adoption of the second operation with the interval function can avoid the adoption of a single operation logic, so that the operation mode in the task selection process is richer and more flexible, and the diversity of task selection is improved.
In one embodiment, the manner of combining the first operation and the second operation is used as the manipulation manner. The manner of combining the first operation and the second operation is a combination of any of the above listed manners of operating the first operation and the second operation, and embodiments of the present application are not listed here.
In this embodiment, the trigger instruction is generated by a first operation acting continuously in the current interface as the preset manipulation manner. Because the current interface can be any interface of the target device, the selection task found at the trigger position point corresponding to the detected trigger instruction can be any selection task on any interface of the target device. In other words, as long as the preset first operation keeps acting on the current interface, the menu for the option content of any selection task on any interface of the target device can be generated and displayed, and the corresponding selection completed. Since the first operation is a continuously acting manipulation, a single operation is enough both to display the menu and to select the target option content: different option contents can be presented and different selection results obtained through the same operation. This strengthens the consistency of the selection-task operation, greatly reduces the learning cost across different selection tasks, makes selection tasks more convenient to operate, and improves the usability of human-computer interaction on the target device.
Based on the above embodiments, the following describes, with reference to a specific usage scenario, an example of generating the first menu and determining the target option content, and for clarity, the following embodiments will be described with the aid of the first operation in the above embodiments.
Take the ultrasound system of an ultrasound device with a touch screen as an example, and consider the task of selecting a convex probe to perform an abdominal blood vessel examination. A current interface without a menu is shown in fig. 4, which shows the first page of a simplified ultrasound system interface in its initial home-page state; the measurement items indicated in the interface are the routine abdominal measurement items. The interface includes an ultrasound image display area, an image parameter display area and a system parameter information display area; the probe entry in the system parameter information is used here as the example of a selection task.
The scene that the menu does not exist in the current interface indicates that the detected trigger instruction is an instruction sent to the initialization interface in the target device, so that the trigger position point corresponding to the trigger instruction in the scene is the initial interaction point.
In an embodiment, before acquiring the option content of the selection task, it may be determined, according to the context of the area where the initial interaction position point is located, whether a selection task exists in that area; if it does, the step of acquiring the option content of the selection task may be performed.
Through the preset manipulation manner, for example a finger sliding on the touch screen without leaving it, a trigger instruction is generated in the interface shown in fig. 4. The trigger position point corresponding to the trigger instruction (i.e. the initial interaction position point) may be the point a1 shown in fig. 6. It is then determined whether a selection task exists in the area where point a1 is located, specifically according to the context of that area, which is determined by the business operation logic of the area. If the result is that a selection task exists in the area where point a1 is located, the step of acquiring the option content of the selection task is performed. In this way the option content is acquired only when a selection task is known to exist in the area where the initial interaction position point is located, which avoids wasting resources on acquiring option content when no selection task exists there and thus saves processing resources.
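A minimal Python sketch of the context check described above is given below; the modelling of "context" as a (workflow step, interface region) lookup, and all task and option names, are assumptions for illustration rather than the disclosed implementation.

```python
# Minimal sketch: the context of the area containing the initial interaction point is
# modelled as the current workflow step plus the interface region; a lookup decides
# whether a selection task exists there and which option content to fetch.
from typing import Dict, List, Optional

# Assumed mapping from (workflow step, interface region) to a selection task name.
CONTEXT_TASKS: Dict[tuple, str] = {
    ("home", "system_parameters"): "probe_selection",
    ("frozen", "measurement_area"): "measurement_package_selection",
}

TASK_OPTIONS: Dict[str, List[str]] = {
    "probe_selection": ["convex array probe", "linear array probe", "3D-1", "S7-1"],
    "measurement_package_selection": ["package A", "package B"],
}

def options_for_initial_point(workflow_step: str, region: str) -> Optional[List[str]]:
    """Return option content only when the context says a selection task exists here."""
    task = CONTEXT_TASKS.get((workflow_step, region))
    return TASK_OPTIONS.get(task) if task else None

print(options_for_initial_point("home", "system_parameters"))
# -> ['convex array probe', 'linear array probe', '3D-1', 'S7-1']
```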
Correspondingly, in an embodiment, displaying the first menu generated according to the option content within a preset range around the area where the trigger position point in the current interface is located includes: displaying, within a preset range around the area where the initial interaction position point is located in the current interface, a first menu generated according to the option content of the selection task.
Since the acquired option content belongs to the selection task of the area where the initial interaction position point is located, the first menu generated from that option content is naturally displayed within the preset range around that area. Displaying it within the preset range of the initial interaction position point makes the information easier to view and improves the flexibility and intelligence of human-computer interaction.
As shown in fig. 5, it is assumed that the acquired option contents in the selection task of the area where the initial interaction position point a1 is located are option 1, option 2, option 3, option 4, option 5, and option 6 in fig. 5; and in a preset range around the point a1, a first menu shown in fig. 5 generated according to the option 1, the option 2, the option 3, the option 4, the option 5, and the option 6 is displayed.
For example, as shown in fig. 6, taking a sliding operation in which the finger does not leave the touch screen: the finger presses the touch screen at point a1 below the parameter information display area of the ultrasound system and stays still, which means a trigger instruction is generated at point a1. The context of the area where point a1 is located is then obtained to determine whether a selection task exists there; if so, the option content in the selection task (the convex array probe, the linear array probe, 3D-1 and S7-1) is acquired, and a first menu is generated from it.
From the operator's perspective, the finger presses the touch screen at point a1 in fig. 6 and remains stationary, and a first menu M1 of probe selections pops up within a preset range around point a1.
Based on this first-menu display process, when the first menu of the probe selection task needs to be shown, no resident operation button is required on the ultrasound system interface; the first menu is displayed simply by triggering an instruction at a preset position point (for example, point a1). This simplifies the ultrasound system's display interface and reduces the number of input buttons. Compared with keeping operation buttons resident on the display interface, the information display method provided by the embodiment of the application frees up more display space for the main content and gives the user a more immersive experience. Physical buttons can also be simplified and the key input buttons highlighted, so that the layout of the machine's interface is more concise and attractive.
Since a selection task in practice may require several levels of selection before the final target option content is determined, on the basis of fig. 5 and fig. 6 the target option content may be further selected in the first menu through the continuously acting first operation. Based on this, in an embodiment, the process of selecting the target option content in the first menu includes:
if it is detected that the trigger position point acted on by the manipulation manner moves from the initial interaction position point to a first target interaction position point in the first menu, determining the target option content according to task information of the first target interaction position point; the task information indicates either that a selection task exists or that no selection task exists.
Taking the first operation as an example: from the generation of the first menu at the initial interaction position until the target option content is selected in the first menu, the first operation must act continuously on the interface of the target device; for example, the finger must not leave the screen throughout this process. While the finger stays on the screen, a position point acted on by the finger becomes a trigger position point when the finger's dwell time there exceeds a preset threshold, for example 1 s or 2 s, which the embodiment of the present application does not limit.
For the first target interaction position point there are two cases: a selection task exists there, or it does not. That is, the first target interaction position point may itself have sub-option content, that sub-option content may in turn have its own sub-option content, and so on; selection tasks can thus be nested level by level, and in such a case selection must proceed level by level until the final target option content is chosen.
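The nesting described above can be illustrated with a small Python sketch (hypothetical data structure, not from the disclosure) in which an item of option content may itself carry a selection task, i.e. sub-option content:

```python
# Minimal sketch: an option content item may itself carry a selection task
# (sub-options), so menus can nest level by level until a leaf is chosen.
from dataclasses import dataclass, field
from typing import List

@dataclass
class OptionContent:
    label: str
    sub_options: List["OptionContent"] = field(default_factory=list)

    @property
    def has_selection_task(self) -> bool:
        # Task information of an interaction point: a selection task exists vs. does not exist.
        return bool(self.sub_options)

# Example: "convex array probe" opens a second menu of examination items;
# "abdominal blood vessel" is a leaf, i.e. a candidate target option content.
first_menu = [
    OptionContent("convex array probe", [OptionContent("abdominal blood vessel"),
                                         OptionContent("abdominal routine")]),
    OptionContent("linear array probe"),
]
print(first_menu[0].has_selection_task, first_menu[1].has_selection_task)   # True False
```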
Consider first the case where a certain item of option content in the first menu happens to be the target option content: after the finger slides from the initial interaction position point a1 to the first target interaction position point B1 on the screen, the option 2 to which B1 belongs is exactly the target option content.
In one embodiment, if the task information of the first target interaction location point indicates that no selection task exists and the first target interaction location point is within a range of any option content in the first menu, the option content where the first target interaction location point is located is determined as the target option content.
Under the condition that the first operation continuously acts on the interface of the target equipment, if it is detected that the trigger position point of the first operation action moves from the initial interaction position point to a first target interaction position point in the first menu and the first target interaction position point is in the range of any option content in the first menu, the option content where the first target interaction position point is located is determined to be target option content.
As shown in fig. 7, the initial interaction position point is point a1, the first target interaction position point B1 in the first menu falls in option 2, and the area bounded by lines L1 and L2 is the range to which option 2 belongs. It should be understood that the range to which each option in the first menu belongs excludes a preset region around a1; for example, the range of option 2 in fig. 7 is the area bounded by L1 and L2 minus the region around a1. The preset region may be determined according to actual situations, for example 1 mm or 5 mm, which is not limited in this embodiment.
With the finger never leaving the screen, the trigger point where the finger first acts on the screen is the initial interaction position point a1. When the trigger point of the finger slides from the initial interaction position point a1 to the first target interaction position point B1, and B1 lies within the range of option 2, option 2 is determined as the target option content selected this time.
Optionally, in determining the first target interaction position point B1, it may further be determined whether the connection line between the initial interaction position point a1 and B1 (a line that is not visible in the interface) continuously intersects a pixel within the range to which a certain item of option content in the first menu belongs, where continuous intersection may be understood as a dwell time greater than the preset threshold. If the time for which the operator's finger finally dwells at B1 exceeds the preset threshold, it is determined that B1 lies in option 2, the target option content selected this time.
In this embodiment, while the first operation acts continuously on the current interface, the sliding starts from the initial interaction position point a1, and the option content to which the final stopping position, i.e., the first target interaction position point B1, belongs is the target option content. The whole process requires no other operation: the target option content is selected merely by sliding and stopping, which increases the speed at which the operator completes the selection task, helps form muscle memory, and improves selection accuracy.
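The range check described above amounts to a hit test of the dwell point against each option's region; a minimal sketch follows, in which each region is simplified to a rectangle (an assumption — the embodiment only requires that B1 fall within the option's belonging range).

```typescript
// Sketch: decide which option of the first menu the dwell point B1 falls in.
// Each option's "belonging range" is modeled as an axis-aligned rectangle,
// an assumed simplification of the L1–L2 wedge in fig. 7.

interface Rect { left: number; top: number; right: number; bottom: number; }
interface MenuOption { label: string; region: Rect; }

function contains(r: Rect, x: number, y: number): boolean {
  return x >= r.left && x <= r.right && y >= r.top && y <= r.bottom;
}

function hitTest(menu: MenuOption[], x: number, y: number): MenuOption | null {
  return menu.find(o => contains(o.region, x, y)) ?? null;
}

const firstMenu: MenuOption[] = [
  { label: "option 1", region: { left: 0, top: 0,  right: 80, bottom: 40 } },
  { label: "option 2", region: { left: 0, top: 40, right: 80, bottom: 80 } },
];

// B1 = (30, 55) dwells inside option 2's range, so option 2 is the target.
console.log(hitTest(firstMenu, 30, 55)?.label);
```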
The following describes, taking a two-level selection task as an example, the process of sliding from the first menu into the option contents of a second, next-level menu.
In one embodiment, determining the target option content according to the task information of the first target interaction location point comprises: if the task information of the first target interaction position point is that a selection task exists and the first target interaction position point is in the range of any option content in the first menu, acquiring sub-option content corresponding to the selection task of the option content where the first target interaction position point is located; displaying a second menu generated according to sub-option content corresponding to the selection task of the option content of the first target interaction position point within a preset range around the area where the first menu is located; the sub option contents displayed in the second menu are used to select the target option contents.
Continuing with the sliding operation in which the finger does not leave the touch screen, refer to the first target interaction position point B1 in option 2 of the interface illustrated in fig. 8. Whether a selection task exists in the area where point B1 is located is determined according to the context of that area, the context being defined by the service operation logic of the area; if the determination result is that a selection task exists in the area where point B1 is located, the sub-option content corresponding to the selection task of the option content at the first target interaction position point is obtained. In this way, the corresponding sub-option content is obtained only when it is determined that a selection task exists in the area where B1 is located, which avoids the resource waste of obtaining sub-option content when no selection task exists there and thus saves processing resources.
After the sub-option content corresponding to the selection task of the option content at the first target interaction position point is obtained, a second menu generated according to that sub-option content is displayed within a preset range around the area where the first menu is located. Displaying within the preset range around the first menu makes the information easier to view and improves the flexibility and intelligence of the human-computer interaction. Of course, point B1 may instead be used as the positioning point and the generated second menu displayed within a preset range around B1, which is not limited in this embodiment.
As shown in fig. 8, it is assumed that the sub-option contents acquired in the selection task (i.e., option 2) of the area where the first target interaction location point is located are option 2.1, option 2.2, option 2.3, option 2.4, and option 2.5 in fig. 8; the second menu shown in fig. 8 generated according to the option 2.1, the option 2.2, the option 2.3, the option 2.4, and the option 2.5 may be presented within a preset range around the area where the first menu is located, or within a preset range around the point B1.
For example, as shown in fig. 9, again taking a sliding operation in which the finger does not leave the touch screen as an example, the finger slides from point a1 to point B1 of the convex array probe option in the first menu M1 and remains stationary, which represents selecting the convex array probe. A trigger instruction is thereby generated at the convex array probe option, and the target device determines, according to the context of the area where that option is located, whether a selection task exists there; if so, it obtains the sub-option content of that selection task: abdominal routine, abdominal kidney, abdominal blood vessels, abdominal bowel vessels and obstetric fetal heart, and generates a second menu M2 according to these options.
From the perspective of the operator, the operator's finger stays still at point B1 in fig. 9, indicating that the convex array probe is selected, and the second menu M2 corresponding to that probe selection pops up within the preset range around the area where the first menu is located.
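One natural way to hold the option content of such nested selection tasks is a tree, so that the sub-option content under B1 can be fetched directly; the sketch below is an assumed data layout, with example labels echoing figs. 8 and 9.

```typescript
// Sketch: option content held as a tree, so the sub-option content of the
// option under B1 can be fetched and used to build the second menu M2.
// The probe/preset names are only example data taken from the figures.

interface OptionNode {
  label: string;
  children?: OptionNode[]; // present => a further selection task exists here
}

const probeTask: OptionNode[] = [
  {
    label: "convex array probe",
    children: [
      { label: "abdominal routine" }, { label: "abdominal kidney" },
      { label: "abdominal blood vessels" }, { label: "obstetric fetal heart" },
    ],
  },
  { label: "linear array probe" },
];

function subOptions(menu: OptionNode[], selected: string): OptionNode[] | null {
  const node = menu.find(o => o.label === selected);
  return node?.children ?? null; // null => no selection task, selection ends here
}

const m2 = subOptions(probeTask, "convex array probe");
console.log(m2?.map(o => o.label)); // contents used to draw the second menu M2
```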
Regarding the above process of displaying the second menu on the basis of the first menu, it should be emphasized that the second menu is displayed within a preset range around the area where the first menu is located. The second menu M2 is the next-level menu of the first menu M1, and both M1 and M2 are menus generated dynamically on the premise that the first operation continues to act on the current interface. Therefore, this embodiment can dynamically generate multi-level menus that are closely connected; for example, the first-level menu (the first menu M1) is displayed within about 10 cm around the initial trigger point (the initial interaction position point a1), and the next-level menu (the second menu M2) is displayed within about 10 cm around the nearest pixel point (point B1) of the previous-level menu (the first menu M1). This avoids the problem of frequently having to operate buttons at different positions and improves efficiency.
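A rough sketch of this "next menu stays near the previous menu" placement is given below; the pixels-per-centimetre value is an assumption, since the text only speaks of roughly 10 cm.

```typescript
// Sketch: place the next-level menu near the point of the previous-level menu
// that triggered it, clamped to an assumed radius of that point.
// PX_PER_CM and the preferred offset are assumptions.

interface Point { x: number; y: number; }

const PX_PER_CM = 38;     // assumed screen density
const MAX_OFFSET_CM = 10; // "within about 10 cm" of the anchor point

function placeNextMenu(anchor: Point, preferredOffset: Point): Point {
  const maxPx = MAX_OFFSET_CM * PX_PER_CM;
  const dist = Math.hypot(preferredOffset.x, preferredOffset.y);
  const scale = dist > maxPx ? maxPx / dist : 1; // clamp to the allowed radius
  return { x: anchor.x + preferredOffset.x * scale, y: anchor.y + preferredOffset.y * scale };
}

// Anchor at B1; the second menu is drawn just to the right of it.
console.log(placeNextMenu({ x: 300, y: 220 }, { x: 500, y: 0 }));
```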
Further, continuing with the sub-option content included at B1 (option 2), the following describes the case in which any one of option 2.1, option 2.2, option 2.3, option 2.4 and option 2.5 is selected as the target option content. It will be appreciated that the process of this embodiment is essentially the same as the process described above for selecting option 2 from the option contents at point a1.
In one embodiment, if it is detected that a trigger position point acted by a control mode moves from the first target interaction position point to a second target interaction position point in a second menu, target option content is determined according to task information of the second target interaction position point; the task information includes a presence selection task and an absence selection task.
Specifically, still taking the first operation as an example, during the process from the generation of the second menu at the first target interaction position point B1 until the target option content is selected in the second menu, the first operation needs to act continuously on the interface of the target device; for example, the finger must not leave the screen during this process. In the case that the finger does not leave the screen, a trigger position point acting on the screen refers to a position point at which the dwell time of the finger exceeds a preset threshold; the preset threshold is not limited in this embodiment and may be, for example, 1 s or 2 s.
For the second target interaction position point, there are likewise two cases: a selection task exists at that point, or no selection task exists. That is, the second target interaction position point may still have sub-option content, that sub-option content may in turn have its own sub-option content, and so on; when the options are nested level by level in this way, several levels of selection tasks may exist, and in such a case the selection must proceed level by level until the final target option content is selected.
Consider first the case in which a certain item of sub-option content in the second menu happens to be the target option content, i.e., after the finger slides from the first target interaction position point B1 to the second target interaction position point on the screen, the sub-option content to which the second target interaction position point belongs is exactly the target option content.
In one embodiment, if the task information of the second target interaction position point indicates that no selection task exists, and the second target interaction position point is located within a range where any one of the sub-option contents in the second menu belongs, it is determined that the sub-option content where the second target interaction position point is located is the target option content.
Under the condition that the first operation continuously acts on the interface of the target device, if it is detected that the trigger position point of the first operation action is moved from the first target interaction position point B1 to a second target interaction position point in the second menu, and the second target interaction position point is within the range of any one of the sub-option contents in the second menu, the sub-option content where the second target interaction position point is located is determined as the target option content.
As shown in fig. 10, the initial interaction position point is point a1, the first target interaction position point is point B1, and the second target interaction position point C1 lies in the second menu. Taking option 2.4 in fig. 10 as an example of a sub-option in the second menu, the area bounded by lines L3 and L4 is the range to which option 2.4 belongs. It should be understood that the range to which a sub-option in the second menu belongs excludes a region around the first target interaction position point B1; for example, the range of option 2.4 in fig. 10 is the area bounded by L3 and L4 minus the region around B1.
With the finger still not leaving the screen, the trigger point where the finger currently acts on the screen is the first target interaction position point B1. When the trigger point of the finger slides from B1 to the second target interaction position point C1, and C1 lies within the range of option 2.4, option 2.4 is determined as the target option content selected this time.
It can be understood that in fig. 10, if the distance from point a1 to point B1 is denoted r1 and the distance from point a1 to point C1 is denoted r2, the operator moves the finger farther away from point a1 while sliding from B1 to C1, so the second target interaction position point C1 is reached only when the distance from a1 exceeds r1. The finger then stops at C1, indicating that option 2.4 at C1 is the target option content selected this time.
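The r1/r2 comparison can be read as a simple distance test against the initial interaction position point; a sketch follows, which is an interpretation of the figure rather than code from the embodiment.

```typescript
// Sketch: distinguish moving "forward" into the second menu from moving back
// toward a1 by comparing distances to the initial interaction position point.

interface Point { x: number; y: number; }

const dist = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y);

function isForwardMove(a1: Point, from: Point, to: Point): boolean {
  // Forward (toward deeper menus) when the finger ends up farther from a1
  // than it was before, i.e. r2 > r1.
  return dist(a1, to) > dist(a1, from);
}

const A1 = { x: 100, y: 100 };
const B1 = { x: 180, y: 100 }; // r1 = 80
const C1 = { x: 250, y: 120 }; // r2 ≈ 151
console.log(isForwardMove(A1, B1, C1)); // true => C1 selects within the second menu
```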
In this embodiment, while the first operation acts continuously on the current interface, the sliding continues from the first target interaction position point B1, and the sub-option content to which the final stopping position, i.e., the second target interaction position point C1, belongs is the target option content. The whole process requires no other operation: the target option content is selected merely by sliding and stopping, which increases the speed at which the operator completes the selection task, helps form muscle memory, and improves selection accuracy.
If the finger stays within the range of option 2.4 and thus selects option 2.4, and option 2.4 in turn contains a next level of sub-option content, the processing is the same as the process from option 2 to option 2.4, and so on; this will not be described again. In summary, this embodiment can integrate a multi-level menu so that the selection through the multi-level menu is completed in a single operation.
In some scenarios, the target option content the operator wants does not appear in the last-level menu reached, and the user needs to go back through the menus. Therefore, even after the finger has slid to a certain selectable content in the second menu M2, the operator can retrace the original path to select another option, or cancel the selection altogether.
For this process, an embodiment is provided that includes: if the triggering position point acted by the control mode is detected to move from the second target interaction position point to a third target interaction position point in other menus, and the third target interaction position point is located in the range of any option content in the other menus, determining the option content where the third target interaction position point is located as the target option content; the other menu is any menu other than the second menu.
The embodiment is suitable for a scene of returning to select other options from a certain level menu and a scene of continuing to select from a certain level menu to a lower level menu.
Referring to fig. 11, taking the first operation as an example, in the case that the finger does not leave the screen, the trigger point of the current finger acting on the screen is the second target interaction position point, the second target interaction position point is C1, and the third target interaction position point is D1.
In fig. 11, when the trigger point of the finger on the screen is detected to slide from C1 to D1, and D1 lies within the range of option 2, option 2 can be determined as the target option content. It should be noted that the returning scenario, i.e., the finger moving from C1 to D1, satisfies the condition of moving in reverse from C1 toward a point close to the initial interaction position point a1; in other words, the distance from C1 to a1 is greater than the distance from D1 to a1. When the finger slides to D1 and this condition is satisfied, the second menu M2 disappears, that is, M2 no longer exists in the current interface of the target device.
Similarly, if the finger moves all the way back to a1, reaching a1 means the operator has made no selection this time; the first menu M1 and the second menu M2 both disappear, and the whole interface returns to the initial state shown in fig. 4.
In this embodiment, when a selection task is executed in the software system of the target device, other target option contents can be selected, or the selection can be cancelled, by retracing the original path, which makes the manipulation of the selection task more flexible and intelligent. During the retracing or cancellation, the menus that are no longer selected disappear accordingly and the current interface of the target device is restored toward the original interface, freeing more area to display the main content and improving the interface utilization of the target device.
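A minimal sketch of this backtracking behaviour is shown below; the tolerance for "the finger is back at a1" and the plain distance comparison are assumptions.

```typescript
// Sketch: backtracking logic. Moving from C1 back toward a1 dismisses the
// second menu; reaching (approximately) a1 dismisses every menu and restores
// the initial interface. The 12 px "reached a1" radius is an assumption.

interface Point { x: number; y: number; }

const dist = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y);
const NEAR_A1_PX = 12; // assumed tolerance for "the finger is back at a1"

type MenuLevel = "M1" | "M2";
let openMenus: MenuLevel[] = ["M1", "M2"];

function onMove(a1: Point, prev: Point, cur: Point): void {
  if (dist(a1, cur) < dist(a1, prev) && openMenus.includes("M2")) {
    openMenus = openMenus.filter(m => m !== "M2"); // moving back: drop the second menu
  }
  if (dist(a1, cur) <= NEAR_A1_PX) {
    openMenus = [];                                // back at a1: nothing was selected
  }
  console.log("open menus:", openMenus);
}

const A1 = { x: 100, y: 100 };
onMove(A1, { x: 250, y: 120 }, { x: 170, y: 105 }); // C1 -> D1: M2 disappears
onMove(A1, { x: 170, y: 105 }, { x: 104, y: 102 }); // back to a1: all menus gone
```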
Whether option 2 is selected as the target option content, option 2.4 is selected as the target option content, or the path is finally retraced to point a1 to cancel the selection, the selection task performed by the operator on the target device is then finished. Therefore, in one embodiment, if it is detected that the manipulation manner meets a preset manipulation end condition, it is determined that the manipulation of the target device is ended, and the display of all menus in the interface of the target device at the manipulation end time is cancelled.
The preset manipulation end condition includes, but is not limited to: the manipulation manner no longer acts on any menu, and/or no option in any menu is selected, or a certain option in the last-level menu is acted on, and the like, which is not limited in this embodiment.
Specifically, once the target option content is selected or the selection is cancelled, it indicates that the manipulation manner is no longer acting on the current interface, for example, the finger no longer slides in the current interface, and has already left the current interface, which indicates that the manipulation on the target device is finished, and the selection task performed by the operator in the target device is finished.
Taking the manipulation manner as the first operation as an example, in one embodiment cancelling the selection may mean that the first operation no longer acts on any menu, which represents that the manipulation of the target device by the first operation is ended; in another embodiment, selecting the target option content may mean that the first operation acts on the range of a specific option of a certain level of the menu, or that the option acted on by the first operation has no lower-level menu, and so on. As long as the first operation has selected the target option content, the manipulation of the target device by the first operation is ended.
When the first operation finishes manipulating the target device, the display of all menus in the interface of the target device at the manipulation end time is cancelled. Of course, in practical applications, if the context logic at the triggered position point in the current interface offers no selectable items, no menu is displayed in the current interface at all.
After the selection task is completed, the target device cancels all menus in its interface at the manipulation end time; for example, once the finger leaves the current interface, the manipulation ends, the target device cancels all menus, and the current interface returns to its original initial state, so that more area is freed to display the main content, the interface/operation panel is simpler, and the interface utilization of the target device is improved.
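The end-of-manipulation handling can be sketched as follows; commitSelection and clearMenus are hypothetical hooks standing in for the application's own logic.

```typescript
// Sketch: lifting the finger ends the manipulation; whatever option the last
// dwell point selected is committed and every menu is removed, restoring the
// original interface. commitSelection / clearMenus are hypothetical hooks.

let pendingSelection: string | null = null;

function commitSelection(label: string): void {
  console.log("selected:", label);  // hand the target option content to the application
}

function clearMenus(): void {
  console.log("all menus dismissed, interface restored");
}

function onDwellSelect(label: string): void {
  pendingSelection = label;         // e.g. "option 2.4" while the finger still rests on it
}

function onFingerLift(): void {     // the preset manipulation end condition
  if (pendingSelection !== null) commitSelection(pendingSelection);
  clearMenus();
  pendingSelection = null;
}

onDwellSelect("option 2.4");
onFingerLift();
```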
In addition, in the foregoing embodiment, when the first menu generated according to the option content is displayed within the preset range around the area where the trigger position point in the current interface is located, the situation in which displayed contents cover one another should be avoided; this is described below.
It should be noted that, although the first menu is taken as an example in this embodiment, in practical applications the same applies to any menu, for example the second menu, the third menu, and so on.
For example, as shown in fig. 12, assume that the determined area where the trigger position point in the current interface is located is area S1, the preset range of area S1 is area S2, and the menu to be displayed is the first menu M1. According to the preset display manner, the first menu M1 should be displayed in area S2; however, if it is detected that other display information Y already exists in area S2, the first menu M1 may instead be displayed in an area not covered by the current display information Y, that is, M1 is moved to an area not covered by Y. The display area of the first menu M1 in fig. 12 is only an example; in practical applications, the area in which M1 can be displayed after avoiding the information Y must be determined comprehensively in combination with the layout of the actual software interface and other relevant factors, which is not limited in this embodiment.
In this embodiment, when display information already exists within the preset range around the area where the trigger position point in the current interface is located, the menu generated according to the option content may be displayed based on the area not covered by that existing display information. Existing display information is thus avoided, the clarity and intuitiveness of all information in the current interface of the target device are preserved, and the intelligence of the human-computer interaction is improved.
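A possible way to realise "display based on the uncovered area" is to test the menu rectangle against the occupied rectangles and shift it until it is clear; the sketch below uses a naive rightward shift as an assumed strategy.

```typescript
// Sketch: before drawing a menu in the preset range S2, check whether existing
// display information Y already occupies part of it and, if so, shift the menu
// into an uncovered area. Rectangles and the rightward shift are assumptions.

interface Rect { x: number; y: number; w: number; h: number; }

function overlaps(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w && a.y < b.y + b.h && b.y < a.y + a.h;
}

function placeAvoiding(menu: Rect, occupied: Rect[]): Rect {
  let placed = { ...menu };
  while (occupied.some(o => overlaps(placed, o))) {
    placed = { ...placed, x: placed.x + 10 }; // naive shift until no overlap remains
  }
  return placed;
}

const existingInfoY: Rect = { x: 200, y: 100, w: 120, h: 60 };
const firstMenuM1: Rect = { x: 210, y: 110, w: 100, h: 100 };
console.log(placeAvoiding(firstMenuM1, [existingInfoY])); // moved clear of Y
```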
In one embodiment, after any item of option content is selected, the menu in any of the above embodiments may display the selected option content in a preset display manner to indicate that this item has been selected from among the plurality of option contents. Optionally, the preset display manner includes: highlighting the area corresponding to the target option content; or displaying the area corresponding to the target option content with an animated transition.
The control area of the selected target option content may be highlighted, for example by an obvious change in color, size or state; or the area corresponding to the target option content may be displayed with an animated transition, for example, when the target option content is selected, the corresponding control area first changes to a first color, then gradually transitions to a second color, and finally settles on a third color. Other animation schemes are also possible, such as first gradually enlarging and then gradually shrinking back to the original size. The animation can be realized by time-based interpolation, which is not limited in this embodiment; of course, highlighting and animation can also be combined, which is likewise not limited here.
Therefore, the selected content is displayed with unique characteristics, so that the information display in the interface is richer, and the interface information display effect is improved.
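The time-based interpolation mentioned above could look like the following sketch; the concrete colors and the 300 ms duration are assumptions.

```typescript
// Sketch: time-based interpolation for the animated transition of a selected
// option's control area, e.g. first color -> second color -> third color.

type RGB = [number, number, number];

const lerp = (a: number, b: number, t: number) => a + (b - a) * t;

function mix(from: RGB, to: RGB, t: number): RGB {
  return [lerp(from[0], to[0], t), lerp(from[1], to[1], t), lerp(from[2], to[2], t)];
}

// Sample the two-stage transition at a given elapsed time.
function highlightColor(elapsedMs: number, durationMs = 300): RGB {
  const first: RGB = [255, 255, 255], second: RGB = [120, 180, 255], third: RGB = [0, 120, 215];
  const t = Math.min(elapsedMs / durationMs, 1);
  return t < 0.5 ? mix(first, second, t * 2) : mix(second, third, (t - 0.5) * 2);
}

console.log(highlightColor(0));    // first color
console.log(highlightColor(150));  // transitioning
console.log(highlightColor(300));  // settles on the third color
```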
In one embodiment, when any one of the hierarchical menus generated in any one of the embodiments is displayed, the display information of the menu may be adjusted, as shown in fig. 13, where the method further includes:
s201, acquiring attribute information of option content of the selection task.
And S202, adjusting the appearance display information of the menu generated according to the option content according to the attribute information.
Taking the first menu M1 as an example, and with reference to fig. 11, the attribute information of options 1 to 6 in the first menu M1 is obtained, where the attribute information includes, but is not limited to, the shape, size, number and appearance of the control areas.
Based on the attribute information, the target device adjusts the appearance display information (including the size and initial shape of the controls) of the menu generated from options 1 to 6. For example, the shape and size of a single option's control graphic can be adjusted according to the number of option contents at the upper level and the number at the lower level.
For example, if option 2 contains a large number of option contents, the control graphic of option 2 may be made larger or displayed with a dotted outline. Alternatively, when option 2 is selected in the first menu M1, the controls of the other, unselected options may be moved some distance to either side of option 2 to keep option 2 clear and complete.
For another example, if the number of option contents is less than six, say only five, the hexagon may be changed to a pentagon so that fewer positions are left vacant; if the number is more than six, the hexagon may instead be configured as a circle. This is only an example, and the central shape may also be chosen independently of the number of options, which is not limited in this embodiment.
In this embodiment, the appearance display information of the menu is adjusted according to the actual attribute information of the option content, so that the menu displayed in the current interface better suits the usage requirements, enriching the menu display manner and improving the diversity of menu display.
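A sketch of choosing the menu outline from the option count is given below; the cut-off values are assumptions, and as noted above the shape may also be chosen independently of the count.

```typescript
// Sketch: pick the menu's outline from the number of options it has to hold,
// echoing the pentagon/hexagon/circle discussion above.

type MenuShape = "pentagon" | "hexagon" | "circle";

function chooseShape(optionCount: number): MenuShape {
  if (optionCount <= 5) return "pentagon"; // fewer items: fewer sides, fewer gaps
  if (optionCount === 6) return "hexagon";
  return "circle";                         // many items: fall back to a ring layout
}

function optionAngleDeg(index: number, optionCount: number): number {
  return (360 / optionCount) * index;      // spread options evenly around the shape
}

console.log(chooseShape(5), chooseShape(6), chooseShape(9));
console.log(optionAngleDeg(2, 6)); // third option sits at 120°
```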
As shown in fig. 14, there is also provided an embodiment comprising the steps of:
and S1, pressing the arbitrary position A1 of the touch screen interface by a finger.
S2, whether the selection task is available at the touch screen interface position A1.
And S3, if yes, acquiring the selectable option content of the position A1.
S4, drawing a first menu M1 according to the selectable option content.
S5, wait for the finger to move to a certain selectable content 2 in the first menu M1.
S6, the selectable content of the selected content 2 is obtained.
S7, whether the selected content 2 has option content.
S8, drawing a second menu M2 according to the option content of the selected content 2.
And S9, whether the touch screen is continuously pressed.
S10, if yes, wait for the finger to move to a certain selectable content in the second menu M2.
S11, the finger is moved to the selected content 3 on the first menu M1.
S12, the selectable option content of the content 3 is acquired.
S13, whether the content 3 has option content.
S14, if yes, drawing a third menu M3 according to the content.
S15, wait for the finger to move to a certain selectable content in the third menu M3.
And S16, whether the touch screen is continuously pressed.
S17, if not, the selection is confirmed; a sketch of this overall flow is given below.
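The S1–S17 flow can be condensed into a small recursive loop, sketched below; nextDwellLabel is a hypothetical hook standing in for waiting on the finger, and the scripted path is only example data.

```typescript
// Sketch: the S1–S17 flow condensed into a recursive loop. Each dwell either
// ends the selection (no sub-options) or opens the next-level menu.

interface OptionNode { label: string; children?: OptionNode[]; }

function runSelection(
  level: OptionNode[],
  nextDwellLabel: (level: OptionNode[]) => string | null, // null => finger lifted
): string | null {
  const label = nextDwellLabel(level);        // wait for the finger to dwell on an option
  if (label === null) return null;            // touch released: confirm / end
  const node = level.find(o => o.label === label);
  if (!node) return null;
  if (!node.children) return node.label;      // no further selection task: target found
  return runSelection(node.children, nextDwellLabel) ?? node.label; // draw next menu, recurse
}

const tree: OptionNode[] = [
  { label: "convex array probe", children: [{ label: "abdominal routine" }, { label: "abdominal kidney" }] },
  { label: "linear array probe" },
];

// Scripted finger path: M1 -> convex array probe, M2 -> abdominal kidney.
const path = ["convex array probe", "abdominal kidney"];
console.log(runSelection(tree, () => path.shift() ?? null));
```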
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, an information display apparatus is further provided, including an acquisition module and a display module, wherein:
the acquisition module is used for acquiring option contents of a selection task if a trigger instruction is detected to exist in a current interface of the target equipment and the selection task exists in an area where a trigger position point corresponding to the trigger instruction is located; no menu exists in the current interface; the trigger instruction is generated in a preset control mode;
the display module is used for displaying a first menu generated according to the content of the option in a preset range around the area where the trigger position point in the current interface is located; the option contents displayed in the first menu are used for selecting the target option contents.
In one embodiment, the above manipulation manner includes: a first operation acting continuously on the current interface and/or a second operation acting separately on the current interface.
In one embodiment, the trigger position point is an initial interaction position point; accordingly, the apparatus comprises:
the first determining module is used for determining whether a selection task exists in the area where the initial interaction position point is located according to the context of the area where the initial interaction position point is located; the context represents the operation flow logic of the current interface;
and the acquisition module is used for executing the step of acquiring the option content of the selection task if the selection task exists.
In one embodiment, the display module is further configured to display the first menu generated according to the option content of the selection task within a preset range around the area where the initial interaction position point in the current interface is located.
In one embodiment, the apparatus further comprises:
the second determining module is used for determining target option content according to task information of the first target interaction position point if the triggering position point acted by the control mode is detected to move from the initial interaction position point to the first target interaction position point in the first menu; the task information includes a presence selection task and an absence selection task.
In one embodiment, the second determining module includes:
and the first determining unit is used for determining that the option content of the first target interaction position point is the target option content if the task information of the first target interaction position point indicates that no selection task exists and the first target interaction position point is in the range of any option content in the first menu.
In one embodiment, the second determining module includes:
the second determining unit is used for acquiring sub-option content corresponding to the selection task of the option content of the first target interaction position point if the task information of the first target interaction position point indicates that the selection task exists and the first target interaction position point is in the range of any option content in the first menu;
the display unit is used for displaying a second menu generated according to the sub-option content corresponding to the selection task of the option content of the first target interaction position point in a preset range around the area where the first menu is located; the sub option contents displayed in the second menu are used to select the target option contents.
In one embodiment, the apparatus further comprises:
the third determining module is used for determining target option content according to task information of a second target interaction position point if the triggering position point acted by the control mode is detected to move from the first target interaction position point to the second target interaction position point in the second menu; the task information includes the presence selection task and the absence selection task.
In one embodiment, the third determining module includes:
the third determining unit is used for determining that the option content of the third target interaction position point is the target option content if the triggering position point acted by the control mode is detected to move from the second target interaction position point to the third target interaction position point in other menus and the third target interaction position point is in the range of any option content in other menus; the other menu is any menu other than the second menu.
In one embodiment, the apparatus further comprises:
and the ending module is used for determining that the operation and control of the target equipment are ended and canceling all menus in the interface of the target equipment corresponding to the operation and control ending moment if the operation and control mode is detected to meet the preset operation and control ending condition.
In an embodiment, the obtaining module is further configured to determine, by using a preset artificial intelligence algorithm, the option content of the selected task.
In an embodiment, the display module is further configured to display a menu generated according to the content of the option based on an uncovered area of an area where the display information already exists if it is detected that the display information already exists within a preset range around the area where the trigger position point in the current interface is located.
In one embodiment, the apparatus further comprises:
the display module is used for displaying a corresponding area of the target option content according to a preset display mode in the process of selecting the target option content in the menu; the preset display mode comprises the following steps: highlighting a corresponding area of the target option content; or displaying the target option content corresponding area in an animation transition mode.
In one embodiment, the apparatus further comprises:
the information acquisition module is used for acquiring the attribute information of the option content of the selection task;
and the adjusting module is used for adjusting the appearance display information of the menu generated according to the option content according to the attribute information.
In one embodiment, the menu display form includes any one or more of a graphical interface, a table, and text.
In one embodiment, the trigger position point is any position point in the current interface.
In one embodiment, the appearance display information of the menu is determined according to the context of the menu and the layout of the current interface; the context represents the business operations logic of the current interface.
For specific limitations of the information display apparatus, reference may be made to the above limitations on the steps of the information display method, which are not repeated here. Each module in the information display apparatus may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in, or independent of, the target device in the form of hardware, or stored in a memory of the target device in the form of software, so that the target device can invoke them and execute the operations corresponding to each module.
In one embodiment, a medical apparatus is provided, which includes a display, a memory and a processor. The memory stores a computer program; when executing the computer program, the processor implements the following steps of the information display method described above, and the display presents information according to the execution result of the processor:
if the triggering instruction exists in the current interface of the target equipment and the selection task exists in the area where the triggering position point corresponding to the triggering instruction is located, acquiring option content of the selection task; no menu exists in the current interface; the trigger instruction is generated in a preset control mode;
displaying a first menu generated according to the content of the option in a preset range around an area where a trigger position point in a current interface is located; the option contents displayed in the first menu are used for selecting the target option contents.
When the above steps are implemented by the medical device provided by the above embodiment, the implementation principle and technical effect of the medical device are similar to the principle of the method steps executed by the above information display method, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the following steps of the information display method described above are implemented:
if the triggering instruction exists in the current interface of the target equipment and the selection task exists in the area where the triggering position point corresponding to the triggering instruction is located, acquiring option content of the selection task; no menu exists in the current interface; the trigger instruction is generated in a preset control mode;
displaying a first menu generated according to the content of the option in a preset range around the area where the trigger position point in the current interface is located; the option contents displayed in the first menu are used for selecting the target option contents.
When the above steps are implemented by the computer-readable storage medium provided by the above embodiment, the implementation principle and technical effect of the computer-readable storage medium are similar to the principle of the method steps executed by the above information display method, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express a few embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for those skilled in the art, variations and modifications can be made without departing from the concept of the embodiments of the present application, and these embodiments are within the scope of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the appended claims.

Claims (20)

  1. An information presentation method, the method comprising:
    if a trigger instruction exists in a current interface of target equipment and a selection task exists in an area where a trigger position point corresponding to the trigger instruction is located, acquiring option content of the selection task; no menu exists in the current interface; the trigger instruction is generated in a preset control mode;
    displaying a first menu generated according to the option content in a preset range around the area where the trigger position point in the current interface is located; the option content displayed in the first menu is used for selecting target option content.
  2. The method of claim 1, wherein the control mode comprises: a first operation acting continuously on the current interface and/or a second operation acting separately on the current interface.
  3. The method of claim 1 or 2, wherein the trigger location point is an initial interaction location point;
    correspondingly, before acquiring the option content of the selection task, the method comprises the following steps:
    determining whether the selection task exists in the area where the initial interaction position point is located according to the context of the area where the initial interaction position point is located; the context represents the operational flow logic of the current interface;
    and if so, executing the step of acquiring the option content of the selection task.
  4. The method according to claim 3, wherein the displaying the first menu generated according to the option content within a preset range around the area where the trigger position point in the current interface is located comprises:
    and displaying the first menu generated according to the option content of the selection task in a preset range around the area where the initial interaction position point is located in the current interface.
  5. The method of claim 4, further comprising:
    if the triggering position point acted by the control mode is detected to move from the initial interaction position point to a first target interaction position point in the first menu, determining the target option content according to the task information of the first target interaction position point; the task information includes the presence of a selection task and the absence of a selection task.
  6. The method according to claim 5, wherein the determining the target option content according to the task information of the first target interaction location point comprises:
    and if the task information of the first target interaction position point indicates that no selection task exists and the first target interaction position point is in the range of any option content in the first menu, determining the option content where the first target interaction position point is located as the target option content.
  7. The method of claim 5, wherein the determining the target option content according to the task information of the first target interaction location point comprises:
    if the task information of the first target interaction position point is that a selection task exists and the first target interaction position point is in the range of any option content in the first menu, acquiring sub-option content corresponding to the selection task of the option content where the first target interaction position point is located;
    displaying a second menu generated according to sub-option content corresponding to the selection task of the option content of the first target interaction position point in a preset range around the area where the first menu is located; the sub option content displayed in the second menu is used to select the target option content.
  8. The method of claim 7, further comprising:
    if the trigger position point acted by the control mode is detected to move from the first target interaction position point to a second target interaction position point in the second menu, determining the target option content according to task information of the second target interaction position point; the task information includes the presence of a selection task and the absence of a selection task.
  9. The method of claim 8, wherein the determining the target option content according to the task information of the second target interaction location point comprises:
    if the triggering position point acted by the control mode is detected to move from a second target interaction position point to a third target interaction position point in other menus, and the third target interaction position point is located in the range of any option content in the other menus, determining the option content where the third target interaction position point is located as the target option content; the other menu is any menu other than the second menu.
  10. The method according to claim 1 or 2, characterized in that the method further comprises:
    and if the control mode is detected to meet the preset control end condition, determining that the control of the target equipment is ended, and canceling all menus in the target equipment interface corresponding to the control end moment.
  11. The method according to claim 1 or 2, wherein the obtaining of the option content of the selection task comprises:
    and determining the option content of the selection task through a preset artificial intelligence algorithm.
  12. The method according to claim 1 or 2, wherein the displaying the menu generated according to the option content within a preset range around the area where the trigger position point in the current interface is located comprises:
    and if the display information exists in the preset range around the area where the trigger position point in the current interface is located, displaying a menu generated according to the option content based on the uncovered area of the area where the display information exists.
  13. The method according to claim 1 or 2, characterized in that the method further comprises:
    in the process of selecting the target option content in the menu, displaying the corresponding area of the target option content according to a preset display mode; the preset display mode comprises the following steps: highlighting the corresponding area of the target option content; or displaying the target option content corresponding area in an animation transition mode.
  14. The method according to claim 1 or 2, characterized in that the method further comprises:
    acquiring attribute information of option content of the selection task;
    and adjusting the appearance display information of the menu generated according to the option content according to the attribute information.
  15. The method according to claim 1 or 2, wherein the display form of the menu comprises any one or more of a graphical interface, a table and text.
  16. The method of claim 1 or 2, wherein the trigger location point is any location point in the current interface.
  17. The method according to claim 1 or 2, wherein the appearance display information of the menu is determined according to the context of the menu and the layout of the current interface; the context represents the operational flow logic of the current interface.
  18. An information presentation device, the device comprising:
    the acquisition module is used for acquiring option content of a selection task if a trigger instruction is detected to exist in a current interface of the target equipment and the selection task exists in an area where a trigger position point corresponding to the trigger instruction is located; no menu exists in the current interface; the trigger instruction is generated in a preset control mode;
    the display module is used for displaying a first menu generated according to the option content in a preset range around the area where the trigger position point in the current interface is located; the option content displayed in the first menu is used for selecting target option content.
  19. A medical device comprising a display, a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 17; and the display displays information according to the execution result of the processor.
  20. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 17.
CN202180009260.3A 2021-08-30 2021-08-30 Information display method and device, medical equipment and storage medium Pending CN114981769A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/115351 WO2022057604A1 (en) 2021-08-30 2021-08-30 Information display method, device thereof, medical equipment thereof, and storage medium thereof

Publications (1)

Publication Number Publication Date
CN114981769A true CN114981769A (en) 2022-08-30

Family

ID=80775881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180009260.3A Pending CN114981769A (en) 2021-08-30 2021-08-30 Information display method and device, medical equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114981769A (en)
WO (1) WO2022057604A1 (en)


Also Published As

Publication number Publication date
WO2022057604A1 (en) 2022-03-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination