WO2022057604A1 - Information display method, associated apparatus, associated medical device, and associated storage medium - Google Patents


Info

Publication number
WO2022057604A1
Authority
WO
WIPO (PCT)
Prior art keywords
menu
target
option
position point
content
Prior art date
Application number
PCT/CN2021/115351
Other languages
English (en)
Chinese (zh)
Inventor
朱皓
Original Assignee
武汉联影医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 武汉联影医疗科技有限公司
Priority to PCT/CN2021/115351 priority Critical patent/WO2022057604A1/fr
Priority to CN202180009260.3A priority patent/CN114981769A/zh
Publication of WO2022057604A1 publication Critical patent/WO2022057604A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus

Definitions

  • the present application relates to the technical field of human-computer interaction, and in particular, to an information display method, apparatus, medical device and storage medium.
  • Human-computer interaction refers to the process of information exchange between a person and a computer, carried out in a certain dialogue language and in a certain interactive manner, in order to complete specific tasks.
  • the human-computer interaction function is mainly completed by external devices that can input and output and corresponding software.
  • medical equipment mainly uses a physical operation panel and a touchable display screen as the user's input, and displays the corresponding content through the display screen as the output to the user.
  • the input-to-output logic computation of the human-computer interaction function can be realized by using different logic computation methods.
  • the embodiments of the present application provide an information display method, device, medical device, and storage medium, which can simplify the operation mode of information display and improve the accuracy of information display.
  • an embodiment of the present application provides an information display method, the method comprising:
  • the trigger instruction is generated through a preset manipulation manner;
  • the first menu generated according to the option content is displayed; the option content displayed in the first menu is used to select the target option content.
  • the above-mentioned manipulation manner includes: a first operation that continuously acts in the current interface, and/or a second operation that acts intermittently in the current interface.
  • the trigger location point is the initial interaction location point; accordingly, before acquiring the option content of the selection task, the method includes:
  • the above-mentioned first menu generated according to the content of the options is displayed within a preset range around the area where the trigger position point in the current interface is located, including:
  • a first menu generated according to the content of the options of the selected task is displayed.
  • the method further includes:
  • the content of the target option is determined according to the task information of the first target interaction position point; the task information indicates either that a selection task exists or that no selection task exists.
  • the above-mentioned determining the content of the target option according to the task information of the first target interaction position point includes:
  • if the task information of the first target interaction position point indicates that no selection task exists, and the first target interaction position point is within the range of any option content in the first menu, the option content at which the first target interaction position point is located is determined as the target option content.
  • the above-mentioned determining the content of the target option according to the task information of the first target interaction position point includes:
  • the option content at which the first target interaction position point is located is obtained.
  • a second menu generated according to the sub-option content corresponding to the selection task of the option content where the first target interaction position point is located is displayed; the sub-option content displayed in the second menu is used to select the target option content.
  • the method further includes:
  • the content of the target option is determined according to the task information of the second target interaction position point; the task information indicates either that a selection task exists or that no selection task exists.
  • the above-mentioned determining the content of the target option according to the task information of the second target interaction location point includes:
  • if the trigger position point of the manipulation manner moves from the second target interaction position point to a third target interaction position point in another menu, and the third target interaction position point is within the range of any option content in that menu, the option content at which the third target interaction position point is located is determined as the target option content; the other menu is any menu except the second menu.
  • the method further includes:
  • the above-mentioned acquisition of the option content of the selection task includes:
  • the content of the options of the selected task is determined by the preset artificial intelligence algorithm.
  • the above-mentioned menu generated according to the content of the options is displayed within a preset range around the area where the trigger position point in the current interface is located, including:
  • a menu generated according to the content of the options is displayed based on the uncovered area of the area where the existing display information is located.
  • the method further includes:
  • the corresponding area of the target option content is displayed according to the preset display mode; the preset display mode includes: highlighting the corresponding area of the target option content; or, displaying the corresponding area of the target option content in an animated transition.
  • the method further includes:
  • the appearance display information of the menu generated according to the option content is adjusted according to the attribute information.
  • the display form of the above-mentioned menu includes any one or a combination of a graphical interface, a table, and a text.
  • the above-mentioned trigger position point is any position point in the current interface.
  • the appearance display information of the menu is determined according to the context of the menu and the layout of the current interface; the context represents the operation flow logic of the current interface.
  • an information display device comprising:
  • an acquisition module configured to acquire the option content of the selected task if it is detected that there is a trigger instruction in the current interface of the target device, and there is a selection task in the area where the trigger position corresponding to the trigger instruction is located;
  • the trigger instruction is generated by a preset manipulation method, and the manipulation method includes a first operation that continues to act in the current interface;
  • the display module is used to display the menu generated according to the option content within a preset range around the area where the trigger position point in the current interface is located; the option content displayed in the menu is used to select the target option content.
  • the embodiments of the present application provide a medical device, including a display, a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method steps performed in the embodiments of the first aspect are implemented, and the display performs information display according to the execution result of the processor.
  • embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the method steps performed in the embodiments of the first aspect are implemented.
  • An information display method, apparatus, medical device, and storage medium provided by the embodiments of the present application.
  • a first menu generated according to the option content is displayed, wherein the option content displayed in the first menu is used to select the target option content. In this method, only when a trigger instruction is detected and it is determined that a selection task corresponds to the trigger instruction are the options in the selection task obtained in real time and displayed as the first menu; this means that, before the trigger instruction is issued, the current interface contains no menus generated for each selection task. Therefore, when there is no need to perform a selection task, the system interface of the target device does not need a resident display operation button for the selection task, which makes the interface more concise and leaves more interface area for other mainline tasks.
  • the option content in the first menu is obtained in real time based on the selection task at the trigger position point, so that the operation purpose behind generating the trigger instruction can be accurately identified, thereby ensuring the accuracy of the option content in the displayed first menu.
  • FIG. 1 is a schematic diagram of the internal structure of a computer device in an embodiment
  • FIG. 2 is a schematic flowchart of an information display method in an embodiment
  • FIG. 3 is a schematic diagram of an information display manner in an interface in an embodiment
  • FIG. 4 is a schematic diagram of an information display manner in an interface in another embodiment
  • FIG. 5 is a schematic diagram of an information display manner in an interface in another embodiment
  • FIG. 6 is a schematic diagram of an information display manner in an interface in another embodiment
  • FIG. 7 is a schematic diagram of an information display manner in an interface in another embodiment
  • FIG. 8 is a schematic diagram of an information display manner in an interface in another embodiment
  • FIG. 9 is a schematic diagram of an information display manner in an interface in another embodiment.
  • FIG. 10 is a schematic diagram of an information display manner in an interface in another embodiment
  • FIG. 11 is a schematic diagram of an information display manner in an interface in another embodiment
  • FIG. 12 is a schematic diagram of an information display manner in an interface in another embodiment
  • FIG. 13 is a schematic flowchart of an information display method in another embodiment
  • FIG. 14 is a schematic flowchart of an information display method in another embodiment.
  • the devices available for human-computer interaction mainly include keyboards, mice, various pattern recognition devices, and the like.
  • the software corresponding to these devices is the part of the operating system that provides the human-computer interaction function.
  • the embodiments of the present application provide an information display method, apparatus, medical device, and storage medium, which can simplify the operation mode of information display and improve the accuracy of information display.
  • the information display method provided by the embodiments of this application can be applied to any software system, and the software system can run on any type of computer equipment in any field, including but not limited to various medical equipment, personal computers, notebook computers, smart phones, tablet computers, portable wearable devices, and the like; the medical equipment can be a medical X-ray machine, digital imaging equipment, X-ray computed tomography equipment, magnetic resonance imaging equipment, ultrasound imaging equipment, nuclear medicine imaging equipment, and the like, which is not limited in the embodiments of the present application.
  • the information presentation method may be applied in an ultrasound system operating in a medical device.
  • a schematic diagram of the internal structure of a computer device is provided, and a processor in the computer device is used to provide computing and control capabilities.
  • the memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program and a database; the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium.
  • the database in the computer equipment is used to store data related to the process of the information presentation method.
  • a network interface in a computer device is used to communicate with other external devices through a network connection.
  • the computer program when executed by the processor, implements an information presentation method.
  • the execution subject of the information display method provided in the embodiments of the present application may be the target device or the information display device, and the information display device may be part or all of the target device.
  • the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are part of the embodiments of the present application, not all of the embodiments.
  • an information display method is provided. This embodiment relates to the specific process in which, if it is detected that a trigger instruction exists in the current interface of the target device and a selection task exists in the area where the trigger position point corresponding to the trigger instruction is located, the option content of the selection task is acquired and a menu generated from the option content is displayed at a set position. As shown in Figure 2, this embodiment includes the following method steps:
  • the target device generally refers to a certain device, which can be any type of device in any field.
  • the target device includes but is not limited to medical X-ray machines, digital imaging equipment, and X-ray computed tomography. equipment, magnetic resonance imaging equipment, ultrasound imaging equipment, nuclear medicine imaging equipment, etc.
  • the current interface of the target device is the interface displayed by the target device at the current moment.
  • for example, the current interface is the interface when the medical device has just been turned on and the ultrasound system has not yet been started; or, the current interface is the main interface of the opened ultrasound system of the medical device; or, the current interface is an interface displayed after entering any application section in the opened ultrasound system of the medical device, etc., which is not limited in the embodiments of the present application.
  • the menu here refers to a menu that can be hidden, that is, the menu is not displayed in the interface and is in a hidden state until a trigger instruction is received.
  • the menu can be understood as a menu of selection tasks to be performed this time.
  • the menu here refers to a menu for the probe selection task.
  • the selection task to be performed this time is to select an ultrasound image display parameter task
  • the menu here refers to a menu for selecting an ultrasound image display parameter task.
  • the menu here refers to a menu for either of the two tasks, i.e., the task of selecting an ultrasound probe or the task of selecting ultrasound image display parameters.
  • the presence of a trigger instruction in the current interface indicates that the user has performed a preset manipulation method in the current interface.
  • the preset manipulation mode can be preset and stored in the target device, so as to facilitate accurate identification of the manipulation mode by the target device in practical application.
  • the option content of the selection task is obtained.
  • Selection tasks are the collective name for tasks that require a selection operation. Different interfaces in the software system have completely different selection tasks because their interface functions differ. Different selection tasks can correspond to different selectable option contents, and if an option content itself also contains selectable items, then that option content also constitutes a selection task; in other words, anything that requires a selection operation can be called a selection task. For example, in the main interface of the ultrasound system, different probes can be selected through the first-level interface, and after a probe is selected, different detection items can be selected in the second-level interface; or, in the ultrasound freeze interface of the ultrasound system, different measurement packages can be selected through the first-level interface, different measurement items can be selected through the second-level interface, and so on.
  • each selection task includes multiple optional content.
  • the default configuration is that each selection task can be triggered at a preset position in the interface. In this way, during application, if the trigger position point corresponding to the above trigger instruction coincides with the preset position of a preconfigured selection task in the interface, it can be determined that a selection task exists at the trigger position point corresponding to the trigger instruction.
  • the option content in the selection task needs to be acquired.
  • the method of acquiring the option content in the selection task is: determining according to the context of the area where the trigger position corresponding to the trigger instruction is located.
  • the meaning of the context here is not the textual content of a passage; rather, it relates to the operation flow logic of the area where the trigger position point is located, which can be the execution logic of the workflow. For example, if the login task is triggered in an interface in the login state, the context refers to the login environment, and the login options selectable in the login task can be determined according to that login environment.
  • determining the option content of the selection task based on the context of the area where the trigger position point is located is equivalent to taking into account the process logic of the current business operation of the selection task and the operating environment of that business operation; the option content obtained in this way better matches the needs of the current selection task, ensuring the accuracy of the obtained option content.
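  • A minimal TypeScript sketch of this idea, under the assumption that the context-to-options mapping is kept in a simple registry; the names taskRegistry, resolveSelectionTask, and the example contexts are illustrative assumptions, not part of the disclosure:

```typescript
// Resolving a selection task's option content from the operational context
// of the region under the trigger position point.
type Context = "probe-selection" | "measurement-package" | "login";

interface SelectionTask {
  context: Context;
  options: string[];
}

// Hypothetical registry mapping each workflow context to its selectable options.
const taskRegistry: Record<Context, string[]> = {
  "probe-selection": ["Convex array probe", "Linear array probe", "3D-1", "S7-1"],
  "measurement-package": ["Abdominal routine", "Abdominal kidney", "Obstetric fetal heart"],
  "login": ["Operator login", "Administrator login"],
};

// Returns the selection task for a trigger point, or null if the region has no task.
function resolveSelectionTask(contextAtPoint: Context | null): SelectionTask | null {
  if (contextAtPoint === null) return null; // no selection task in this area
  return { context: contextAtPoint, options: taskRegistry[contextAtPoint] };
}

console.log(resolveSelectionTask("probe-selection"));
```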
  • the method of acquiring the option content in the selection task is: determining the option content of the selection task through a preset artificial intelligence algorithm.
  • the preset artificial intelligence algorithm can be used to determine the option content that the selection task can provide; for example, the artificial intelligence algorithm can take the type of the selection task itself, all option contents supported by that task type, and the option content the user has selected most often in the selection task as determining factors, so as to determine the optimal option content for this triggering of the selection task.
  • for example, suppose the option content included in the selection task is organized as multiple hierarchical menus, and the options in each hierarchical menu of the selection task are pre-configured depending on the context, i.e., the options in each hierarchical menu are fixed. Based on this assumption, when the user triggers the operation to execute the selection task, the preset artificial intelligence algorithm, through long-term learning, can determine that the option most frequently selected by the user is option 2.4 in the second-level menu.
  • in this case, option 2.4 of the second-level menu can be displayed in the first-level menu; for example, a shortcut to option 2.4 is created and placed in the first-level menu so that the user can select it quickly, or it can be placed in the first-level menu in other ways, which is not limited here.
  • obtaining the option content of the selection task through the artificial intelligence algorithm makes it possible to comprehensively determine, from multiple aspects, the option content that best meets the user's needs, which ensures the accuracy of the obtained option content and improves the intelligence of acquiring the option content of the selection task.
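  • A minimal sketch of the frequency-based shortcut promotion described above, assuming selection history is tracked as simple per-option counts; the names SelectionHistory, mostFrequent, and buildFirstLevelMenu are assumptions for illustration:

```typescript
// Promoting the historically most-selected second-level option (e.g. "option 2.4")
// into the first-level menu as a shortcut.
interface SelectionHistory {
  [optionId: string]: number; // how many times the user picked this option
}

function mostFrequent(history: SelectionHistory): string | null {
  let best: string | null = null;
  let bestCount = 0;
  for (const [optionId, count] of Object.entries(history)) {
    if (count > bestCount) {
      best = optionId;
      bestCount = count;
    }
  }
  return best;
}

// Build the first-level menu, prepending a shortcut to the most frequent sub-option.
function buildFirstLevelMenu(baseOptions: string[], history: SelectionHistory): string[] {
  const shortcut = mostFrequent(history);
  return shortcut ? [shortcut, ...baseOptions] : [...baseOptions];
}

const history: SelectionHistory = { "option 2.1": 3, "option 2.4": 17, "option 2.5": 6 };
console.log(buildFirstLevelMenu(["option 1", "option 2", "option 3"], history));
// -> ["option 2.4", "option 1", "option 2", "option 3"]
```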
  • S102: within a preset range around the area where the trigger position point in the current interface is located, display a first menu generated according to the option content; the option content displayed in the first menu is used to select the target option content.
  • after the option content of the selection task is acquired, it can be displayed. Specifically, it is displayed within a preset range around the area where the trigger position point in the current interface is located, and the displayed menu is generated according to the acquired option content of the selection task. In this way, the target option content can be selected from the option content displayed in the menu.
  • the preset range around the area where the trigger position point in the current interface is located may be any numerical range, for example, within a range of 10 cm, or within a range of 20 cm, etc., which is not limited in this embodiment of the present application.
  • the first menu is drawn according to the obtained options and displayed at the display position.
  • the first menu can be drawn according to the preset menu generation rules.
  • for example, the menu generation rule may specify that three options are arranged into a single-column menu, or that six options are arranged to form a circular menu, and so on.
  • the embodiment does not limit the content specified in the menu generation rule.
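  • As a minimal sketch of such a menu generation rule, assuming one hypothetical rule that small option sets become a single column and larger ones a circular menu; the names generateMenu, MenuSpec, and the threshold are assumptions:

```typescript
// Choosing a menu layout from the number of options, e.g. a single column
// for three options and a circular (ring) arrangement for six.
type Layout = "single-column" | "circular";

interface MenuSpec {
  layout: Layout;
  options: string[];
}

function generateMenu(options: string[]): MenuSpec {
  // Hypothetical rule: up to three options are listed in one column,
  // larger sets are arranged around the trigger point as a circular menu.
  const layout: Layout = options.length <= 3 ? "single-column" : "circular";
  return { layout, options };
}

console.log(generateMenu(["a", "b", "c"]));                 // single-column
console.log(generateMenu(["a", "b", "c", "d", "e", "f"]));  // circular
```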
  • the display form of the first menu includes any one or a combination of a graphical interface, a table, and a text.
  • the specific display form of the generated first menu is not limited in the embodiments of the present application; it can be displayed in a graphical interface or in the form of a table, where the table includes but is not limited to a single vertical column, multiple rows and columns, etc.; the first menu can also take the form of text, that is, be displayed as text; of course, any one or more of the table, text, and graphical interface forms can also be combined.
  • the display form of the first menu can be determined according to the actual situation. In this embodiment, multiple display forms are provided to display the first menu, which can make the display effect of the information in the first menu clearer, more intuitive and more comprehensive.
  • the generated appearance display information of the first menu may be determined according to the context of the first menu and the layout of the current interface.
  • the context of the first menu refers to the context of the option content in the first menu, that is, the operation process logic of the option content and the operating environment of the business operation.
  • the layout of the current interface determines the area in which information can be displayed in the current interface.
  • in summary, if it is detected that a trigger instruction exists in the current interface of the target device and a selection task exists in the area where the trigger position point corresponding to the trigger instruction is located, the option content of the selection task is acquired, and a first menu generated according to the option content is displayed within a preset range around the area where the trigger position point in the current interface is located, wherein the option content displayed in the first menu is used to select the target option content. In this method, only after a trigger instruction is detected and it is determined that a selection task corresponds to the trigger instruction are the options in the selection task obtained in real time and displayed as the first menu; this means that, before the trigger instruction is issued, the current interface contains no menus generated for each selection task.
  • therefore, when there is no need to perform a selection task, the system interface of the target device does not need a resident display operation button for the selection task, which makes the interface more concise and leaves more interface area for other mainline tasks.
  • in addition, when the generated first menu is displayed, the option content in the first menu is obtained in real time based on the selection task at the trigger position point, so that the operation purpose behind generating the trigger instruction can be accurately identified, thereby ensuring the accuracy of the option content in the displayed first menu.
  • the preset manipulation manner includes a first operation that continues to act in the current interface.
  • the first operation that takes effect continuously in the current interface can be understood as the user continuously performing the first operation in the current interface.
  • the first operation that is continuously active in the current interface may be that the user continuously slides the touch screen with a finger, and the user's finger does not leave the touch screen during the sliding process.
  • the continuous effect in the embodiments of the present application may be understood as continuously affecting the target device, and the impact includes but is not limited to physical impact, program impact, information impact, and the like.
  • the interface display screen of the target device may be a non-touchable screen in addition to a touchable screen, or a screen that can sense and detect external information.
  • the first operation may be the above-mentioned sliding of the finger in the current interface of the target device, that is, the above-mentioned trigger instruction can be generated in the current interface by sliding the finger in the current interface of the target device.
  • the first operation may also be to move a device that is connected to the target device and in which the physical input is separated from the display.
  • the device that separates the physical input and display can be a mouse, a trackball, a joystick, etc.
  • when such a device is moved, a corresponding indicator mark is displayed in the current interface of the target device to indicate the position point in the current interface that the mouse, trackball, or joystick is currently manipulating; the display styles of the indicator mark include but are not limited to a pointer style, palm style, finger style, arrow style, geometric-shape style, etc., which are not listed one by one in the embodiments of the present application and can be set according to the actual situation.
  • the external information that can be sensed and detected includes but is not limited to detecting physical movements of users who do not touch the screen, sensing and detecting users' voices, and sensing and detecting users' brain activity signals, etc.
  • for a screen that can sense and detect external information, the first operation may be an operation that generates the above-mentioned sensible and detectable information; that is, the sensible and detectable information generated by the first operation acts on the current interface of the target device, so that the above-mentioned trigger instruction is generated in the current interface.
  • in this embodiment of the present application, if the first operation is used as the manipulation manner for generating the trigger instruction, then from the start of generating the trigger instruction to the subsequent selection of the target option content, the first operation must continue to act on the current interface, and a situation in which it does not act on the current interface cannot occur. For example, from the start of generating the trigger instruction to the subsequent selection of the target option content, the finger needs to act on the touch screen continuously and cannot leave the touch screen.
  • the functions provided by the equipment and the software system are very rich.
  • a selection task may require multiple selections; each selection corresponds to a trigger instruction, and the trigger position points corresponding to the trigger instructions generated each time may also differ. Therefore, in an embodiment, the trigger position point corresponding to the trigger instruction is any position point in the current interface.
  • the trigger position corresponding to the trigger instruction in the current interface can be defined as the position where the dwell time is greater than the preset threshold during the period when the first operation continues to act on the current interface. For example, continuous sliding of the finger on the touch screen means that the finger continues to act on the touch screen. When the finger slides to a certain position of the touch screen and stays for longer than a preset threshold, the position is the trigger position for generating the trigger instruction.
  • therefore, in practical applications, the trigger position for generating the trigger instruction can be located in any area of the current interface, such as the upper, lower, left, or right areas; that is, when the finger slides on the touch screen, a corresponding trigger instruction can be generated at any point on the touch screen.
  • the first operation that continues to act is used to realize the trigger, which is equivalent to using a unified operation logic to realize the selection of the target option content in the entire information display process, thereby improving the speed of option selection.
  • the consistent use of unified operation logic can form muscle memory, thereby improving the accuracy of selection.
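  • A minimal sketch of the dwell-based trigger detection described above, assuming the continuous first operation is observed as a series of timestamped contact samples; the names Sample, findTriggerPoint, and the tolerance/threshold values are assumptions (the description only mentions a preset threshold such as 1 s or 2 s):

```typescript
// During a continuous slide, a point becomes the trigger position once the
// contact dwells there longer than a preset threshold.
interface Sample {
  x: number;
  y: number;
  t: number; // timestamp in ms
}

const DWELL_THRESHOLD_MS = 1000; // assumed value standing in for the preset threshold
const MOVE_TOLERANCE_PX = 5;     // assumed jitter tolerance while "staying still"

// Returns the first sample at which the contact has stayed (within tolerance)
// for longer than the threshold, or null if no dwell occurred.
function findTriggerPoint(samples: Sample[]): Sample | null {
  let anchor = samples[0];
  for (const s of samples) {
    const moved = Math.hypot(s.x - anchor.x, s.y - anchor.y) > MOVE_TOLERANCE_PX;
    if (moved) {
      anchor = s;                       // contact moved on; restart the dwell timer
    } else if (s.t - anchor.t >= DWELL_THRESHOLD_MS) {
      return anchor;                    // stayed long enough: this is the trigger point
    }
  }
  return null;
}

const trace: Sample[] = [
  { x: 10, y: 10, t: 0 },
  { x: 120, y: 80, t: 300 },
  { x: 121, y: 81, t: 800 },
  { x: 122, y: 80, t: 1400 },           // dwelled near (120, 80) for more than 1 s
];
console.log(findTriggerPoint(trace));   // -> { x: 120, y: 80, t: 300 }
```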
  • the above-mentioned preset manipulation manner includes a second operation that acts at intervals in the current interface.
  • the second operation that acts intermittently in the current interface can be understood as the user performing the second operation intermittently in the current interface.
  • the second operation that acts at intervals in the current interface may be a manner in which the user uses a finger to click, lift, click, and lift again on the touch screen. Each click is equivalent to generating a trigger command in the current interface.
  • the spacing effect in the embodiments of the present application can be understood as affecting the target device at intervals, and the effect includes but is not limited to physical effect, program effect, information effect, and the like.
  • an implementation manner of the second operation is described by taking a touchable screen, a non-touchable screen, and a screen capable of sensing and detecting external information as examples.
  • the second operation may be the above-mentioned finger clicking on the current interface of the target device at intervals, and each time the finger clicks on the current interface of the target device, the above-mentioned trigger instruction can be generated in the current interface.
  • the second operation may be to intermittently click on the device connected to the target device that is separated from the physical input and display in the current interface of the target device, and a corresponding indicator will be displayed in the current interface of the target device.
  • the identifier is used to indicate the point where the device clicks in the current interface, and each time the device clicks, the above trigger instruction can be generated.
  • the device that separates entity input from display may be a mouse, trackball, joystick, etc.
  • the display style of the indicator logo includes but is not limited to pointer style, palm style, finger style, arrow style, geometric shape style, etc. etc., the embodiments of the present application are not listed one by one here, and may be set according to actual conditions.
  • the external information that can be sensed and detected includes, but is not limited to, detecting physical movements of users who do not touch the screen, sensing and detecting voice information of users, and sensing and detecting brain activity signals of users.
  • for a screen that can sense and detect external information, the second operation may be an operation that generates the above-mentioned sensible and detectable information; that is, the sensible and detectable information generated by the second operation acts on the current interface of the target device, so that the above-mentioned trigger instruction is generated in the current interface.
  • the second operation acts on the current interface intermittently. For example, from the start of generating the trigger instruction to the subsequent selection to the target option content, each time an item of content needs to be selected, a finger needs to be clicked once on the touch screen at the position corresponding to the item of content. And in practical applications, the functions provided by the device and the software system are very rich. The selection of a task in the software system can only be realized by selecting the option content multiple times. The corresponding position of each option content on the touch screen is different. Therefore, when the second operation is used as the manipulation method for generating the trigger command, the trigger position point corresponding to the trigger command is also any position point in the current interface, which is not limited in this embodiment of the present application.
  • the second operation may also be a combination of any of the above-listed methods, that is, the interval action can be implemented in the current interface through multiple methods.
  • for example, a trigger instruction can be generated in the current interface by a finger click, after which the finger is lifted (the interval); another trigger instruction can then be generated by acting on the current interface by voice; after a further interval, other methods can act on the current interface to generate a new trigger instruction, and so on, until the target option content is selected and the task ends.
  • the second operation with interval function can be used to avoid using a single operation logic, so that the operation mode in the task selection process is more abundant and flexible, thereby improving the diversity of task selection.
  • the combination of the above-mentioned first operation and the second operation is used as the manipulation method.
  • the manner in which the first operation and the second operation are combined is a combination of any of the first and second operations listed above, and the embodiments of the present application will not list them one by one.
  • the trigger instruction is generated through a first operation in a preset manipulation manner that continues to act in the current interface.
  • the current interface can be the interface of the target device under any circumstances; that is, the selection task existing at the trigger position point corresponding to the detected trigger instruction can be any selection task in any interface of the target device. This means that, as long as the preset first operation continues to act on the current interface, a menu can be generated to display the option content of any selection task in any interface of the target device and the corresponding selection can be completed, provided the first operation continues to act.
  • the following describes the generation of the first menu and the determination of the target option content with specific usage scenarios as examples; for clarity, the following embodiments take the first operation in the above embodiment as the example used for illustration.
  • Figure 4 shows a simplified interface of the ultrasound system.
  • the interface is in the initial state of the home page, and the measurement item indicated in this interface is the abdominal routine measurement item; the interface includes an ultrasound image display area, an image parameter display area, and a system parameter information display area, and the probe is selected in the system parameter information display area.
  • the scene where the menu does not exist in the current interface indicates that the above-mentioned detected trigger instruction is an instruction issued to the initialization interface in the target device, so the trigger position point corresponding to the trigger instruction in this scene is the initial interaction point.
  • before acquiring the option content of the above-mentioned selection task, it can be determined, according to the context of the area where the initial interaction position point is located, whether a selection task exists in that area; if so, the above-mentioned step of acquiring the option content of the selection task can be executed.
  • for example, when a trigger instruction is generated in the interface shown in the above example at the initial interaction position point A1 shown in Figure 6, it is then determined whether a selection task exists in the area where point A1 is located; specifically, the determination is made according to the context of the area where point A1 is located, the context being the business operation logic of that area. If the result is that a selection task exists in the area where point A1 is located, the step of acquiring the option content of the selection task can be continued.
  • in this way, the option content of the selection task is acquired only when a selection task exists, which avoids the waste of resources caused by executing the step of acquiring the option content of the selection task when no selection task exists in the area where the initial interaction position point is located, thereby saving processing resources.
  • the above-mentioned displaying of the first menu generated according to the option content includes: displaying, within a preset range around the area where the initial interaction position point in the current interface is located, the first menu generated according to the option content of the selection task.
  • the first menu generated according to the option content needs to be displayed within a preset range around the area where the initial interaction point is located. In this way, displaying within the preset range of the initial interaction position point can achieve a better effect of viewing information and improve the flexibility and intelligence of human-computer interaction.
  • taking sliding control in which the finger does not leave the touch screen as an example, the finger can press the touch screen at point A1 in the lower area of the parameter information display area of the ultrasound system and remain still; at this moment, a trigger instruction is generated at point A1. Then, the context of the area where point A1 is located is obtained to determine whether a selection task exists in that area; if so, the option content in the selection task is obtained: convex array probe, linear array probe, 3D-1, S7-1, and the first menu is then generated from the convex array probe, linear array probe, 3D-1, and S7-1 options.
  • in this way, when the first menu of the probe selection task needs to be displayed, there is no need to set a resident display operation button on the interface of the ultrasound system; the first menu can be displayed simply by triggering the instruction at a preset position point (for example, point A1). This simplifies the display interface of the ultrasound system and reduces the number of input device buttons, frees up more display space for the main content, and brings an immersive experience to the user. It also allows physical buttons to be simplified and key input buttons to be highlighted, making the machine interface layout more concise and attractive.
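  • A minimal sketch of generating and placing the first menu around the trigger point, assuming a simple anchor-plus-radius placement; the names showFirstMenu and PRESET_RADIUS_PX, and the radius value standing in for "e.g. within 10 cm", are assumptions:

```typescript
// Once a trigger is raised at point A1 and a probe-selection task is found there,
// the first menu is built from the task's options and placed within a preset
// radius around A1.
interface Point { x: number; y: number; }

interface Menu {
  anchor: Point;        // the trigger point the menu is attached to
  radius: number;       // preset display range around the trigger point, in px
  options: string[];
}

const PRESET_RADIUS_PX = 200; // assumed value for the preset display range

function showFirstMenu(triggerPoint: Point, options: string[]): Menu {
  return { anchor: triggerPoint, radius: PRESET_RADIUS_PX, options };
}

const a1: Point = { x: 320, y: 560 };
const probeOptions = ["Convex array probe", "Linear array probe", "3D-1", "S7-1"];
console.log(showFirstMenu(a1, probeOptions));
```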
  • after the first menu is displayed, the above-mentioned continuous first operation can further select the target option content in the menu. Based on this, in one embodiment, the process of selecting the target option content in the first menu includes: determining the content of the target option according to the task information of the first target interaction position point, where the task information indicates either that a selection task exists or that no selection task exists.
  • the first operation needs to continue to act on the interface of the target device.
  • this process requires that the finger never leaves the screen; in the case where the finger does not leave the screen, the trigger position acting on the screen refers to the position where the dwell time of the finger on the screen exceeds a preset threshold, which is not limited in the embodiments of the present application, for example, 1 s, 2 s, and the like.
  • there are two cases for the first target interaction position point: a selection task exists at it, or no selection task exists at it; that is, the first target interaction position point may itself have sub-option content, and the sub-options of that sub-option content may in turn have further sub-options.
  • this embodiment first describes the case where option 2, to which the first target interaction position point B1 belongs, happens to be the target option content.
  • in this case, if the task information of the first target interaction position point indicates that no selection task exists and the first target interaction position point is within the range of any option content in the first menu, the option content at which the first target interaction position point is located is determined to be the target option content.
  • while the first operation continues to act on the interface of the target device, if it is detected that the trigger position of the first operation moves from the initial interaction position point to the first target interaction position point in the first menu, and the first target interaction position point is within the range to which any option content in the first menu belongs, the option content at which the first target interaction position point is located is determined as the target option content.
  • suppose the initial interaction position point is point A1, the first target interaction position point is point B1 in the first menu, and option 2 in FIG. 7 is taken as the example of option content in the first menu. The area formed between lines L1 and L2 is the range of option 2. More precisely, the range of each option in the first menu is the area formed between its boundary lines with the preset range around A1 removed; for example, the range of option 2 is the area formed between L1 and L2 minus the preset range around A1, where the preset range can be determined according to the actual situation, for example, 1 mm, 5 mm, etc., which is not limited in the embodiments of the present application.
  • suppose the trigger point acting on the screen at the beginning is the initial interaction position point A1, and the trigger point acting on the screen then slides from the initial interaction position point A1 to the first target interaction position point B1, with the first target interaction position point B1 required to be within the range of option 2. It can then be judged whether the line connecting the initial interaction position point A1 and the first target interaction position point B1 (this connecting line is not visible in the interface) continues to intersect the pixels within the range of an option content in the first menu; the continued intersection here can be understood as the above-mentioned dwell time being greater than the preset threshold. Thus, if the time the operator's finger finally stays at the first target interaction position point B1 is greater than the preset threshold, it is determined that the first target interaction position point B1 is located in option 2, which is the target option content selected this time.
  • in other words, while the first operation continues to act on the current interface, sliding starts from the initial interaction position point A1, and the option content to which the final staying position, the first target interaction position point B1, belongs is the target option content. No other operation is required in the whole process: simply sliding and staying achieves the selection of the target option content, which increases the speed at which the operator completes the selection task and allows muscle memory to form, improving the accuracy of selection.
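  • A minimal sketch of this hit test, assuming each option of the first menu owns an angular sector around A1 with a small dead zone around A1 removed; the names OptionSector, hitOption, selectTarget, and all geometry values are assumptions, and sector angles are assumed not to wrap around ±π:

```typescript
// The option under B1 becomes the target once the contact dwells there longer
// than the preset threshold.
interface Point { x: number; y: number; }

interface OptionSector {
  name: string;
  startAngle: number;   // radians, boundary corresponding to line L1
  endAngle: number;     // radians, boundary corresponding to line L2
  innerRadius: number;  // dead zone around A1 that belongs to no option
}

function hitOption(a1: Point, p: Point, sectors: OptionSector[]): string | null {
  const dx = p.x - a1.x;
  const dy = p.y - a1.y;
  const distance = Math.hypot(dx, dy);
  const angle = Math.atan2(dy, dx);
  for (const s of sectors) {
    if (distance > s.innerRadius && angle >= s.startAngle && angle <= s.endAngle) {
      return s.name;
    }
  }
  return null;
}

// The option under B1 is confirmed as the target only after the dwell threshold.
function selectTarget(a1: Point, b1: Point, dwellMs: number, sectors: OptionSector[],
                      thresholdMs = 1000): string | null {
  return dwellMs >= thresholdMs ? hitOption(a1, b1, sectors) : null;
}

const sectors: OptionSector[] = [
  { name: "option 1", startAngle: -Math.PI / 2, endAngle: -Math.PI / 6, innerRadius: 30 },
  { name: "option 2", startAngle: -Math.PI / 6, endAngle: Math.PI / 6, innerRadius: 30 },
];
console.log(selectTarget({ x: 0, y: 0 }, { x: 80, y: 10 }, 1500, sectors)); // -> "option 2"
```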
  • the following takes a two-level selection task as an example to describe the process of sliding from the first menu to the option content in the next-level second menu.
  • determining the content of the target option according to the task information of the first target interaction position point includes: if the task information of the first target interaction position point indicates that a selection task exists, and the first target interaction position point is within the range to which any option content in the first menu belongs, obtaining the sub-option content corresponding to the selection task of the option content where the first target interaction position point is located; and displaying, within a preset range around the area where the first menu is located, the second menu generated from that sub-option content; the sub-option content displayed in the second menu is used to select the target option content.
  • for example, when the first target interaction position point B1 is located in option 2 in the interface of the example in FIG. 8, it is then determined whether a selection task exists in the area where point B1 is located; specifically, the determination is made according to the context of the area where point B1 is located, the context being the business operation logic of that area. If the result is that a selection task exists in the area where point B1 is located, the sub-option content corresponding to the selection task of the option content where the first target interaction position point is located can then be obtained.
  • in this way, the corresponding sub-option content is acquired only when a selection task exists, which avoids the waste of resources caused by executing the step of acquiring the sub-option content when no selection task exists in the area where the first target interaction position point B1 is located, thereby saving processing resources.
  • for example, the finger slides from point A1 to point B1 of the convex array probe selected in the first menu M1; if the finger then remains still, this indicates that the convex array probe is selected.
  • at this time, a trigger instruction is generated on the convex array probe, and the target device then determines, according to the context of the area where the convex array probe is located, whether a selection task exists; if it exists, the sub-option contents in the selection task are obtained: abdominal routine, abdominal kidney, abdominal blood vessel, abdominal bowel, obstetric fetal heart, and the second menu M2 is generated from these options.
  • the second menu is displayed within a preset range around the area where the first menu is located.
  • the second menu M2 is a lower-level menu of the first menu M1, and both the first menu M1 and the second menu M2 are menus generated dynamically on the premise that the first operation continues to act on the current interface. Therefore, in the embodiments of the present application, a multi-level menu can be generated dynamically, with the levels of the menu closely connected.
  • for example, the first-level menu (the first menu M1) is displayed within 10 cm around the initial trigger point (the initial interaction position point A1), and each subsequent level of menu (the second menu M2) is displayed within 10 cm around the pixel point of the upper-level menu (the first menu M1) nearest to it (point B1), which removes the need to frequently operate buttons at different positions and improves efficiency.
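  • A minimal sketch of chaining the dynamically generated menu levels, assuming each level simply records its own anchor point and a link to its parent; the names MenuLevel and cascade, and the coordinate values, are assumptions:

```typescript
// When the option under B1 itself carries a selection task, a second menu of its
// sub-options is generated and chained to the first menu, anchored near B1.
interface Point { x: number; y: number; }

interface MenuLevel {
  level: number;
  anchor: Point;       // for level 1 this is A1; for level 2 the nearest point B1
  options: string[];
  parent?: MenuLevel;  // link back to the menu this one cascades from
}

function cascade(parent: MenuLevel, anchor: Point, subOptions: string[]): MenuLevel {
  return { level: parent.level + 1, anchor, options: subOptions, parent };
}

const m1: MenuLevel = {
  level: 1,
  anchor: { x: 320, y: 560 },  // A1
  options: ["Convex array probe", "Linear array probe", "3D-1", "S7-1"],
};
const m2 = cascade(m1, { x: 360, y: 480 } /* B1 */, [
  "Abdominal routine", "Abdominal kidney", "Abdominal blood vessel",
  "Abdominal bowel", "Obstetric fetal heart",
]);
console.log(m2.level, m2.options.length); // -> 2 5
```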
  • the following describes the process of selecting the target option content from the sub-option contents included in the above-mentioned B1 (option 2): option 2.1, option 2.2, option 2.3, option 2.4, option 2.5. It can be understood that this process is essentially the same as the above-mentioned process of selecting option 2 from the option contents included at point A1.
  • the content of the target option is determined according to the task information of the second target interaction position point; the task information indicates either that a selection task exists or that no selection task exists.
  • the first operation needs to continue to act on the interface of the target device
  • this process requires that the finger never leaves the screen; in the case where the finger does not leave the screen, the trigger position point acting on the screen refers to the position point where the finger stays on the screen for longer than a preset threshold, which is not limited in this embodiment of the present application, for example, 1 s, 2 s, and the like.
  • there are likewise two cases for the second target interaction position point: a selection task exists at it, or no selection task exists at it; that is, the second target interaction position point may itself have sub-option content, and the sub-options of that sub-option content may in turn have further sub-options.
  • this embodiment describes the case where the sub-option content to which the second target interaction position point belongs happens to be the target option content.
  • in this case, if the task information of the second target interaction position point indicates that no selection task exists and the second target interaction position point is within the range of any sub-option content in the second menu, the sub-option content at which the second target interaction position point is located is determined to be the target option content.
  • while the first operation continues to act on the interface of the target device, if it is detected that the trigger position of the first operation moves from the first target interaction position point B1 to the second target interaction position point in the second menu, and the second target interaction position point is within the range of any sub-option content in the second menu, the sub-option content at which the second target interaction position point is located is determined as the target option content.
  • suppose the initial interaction position point is point A1, the first target interaction position point is point B1, the second target interaction position point is point C1 in the second menu, and option 2.4 is taken as the example of sub-option content in the second menu in Figure 10. The area formed between lines L3 and L4 is the range of option 2.4. More precisely, the range of each sub-option in the second menu is the area outside the range of the first target interaction position point B1; for example, the range of option 2.4 is the area formed between L3 and L4 that lies outside the area of B1.
  • suppose the finger does not leave the screen while sliding, and the trigger point the finger currently acts on is the first target interaction position point B1. When the finger slides, the trigger point acting on the screen slides from B1 to the second target interaction position point C1; if the second target interaction position point C1 is within the range of option 2.4, then option 2.4 is determined to be the target option content selected this time.
  • in other words, while the first operation continues to act on the current interface, sliding continues from the first target interaction position point B1, and the sub-option content to which the final staying position, the second target interaction position point C1, belongs is the target option content.
  • the whole process does not require other operations, just slide and stay to achieve the target option content selection, thereby increasing the speed of the operator to complete the selection task, and forming muscle memory to improve the accuracy of selection.
  • multi-level menus can be integrated, so that the selection of multi-level menus can be completed in one operation.
  • in some cases, the target option content to be selected does not appear in the current menu and it is necessary to go back. Based on this, even if the finger has already slid onto a selectable content in the second menu M2, it can return along the original path to select another option, or cancel the selection.
  • an embodiment is provided.
  • the embodiment includes: if it is detected that the trigger position point of the manipulation manner moves from the second target interaction position point to a third target interaction position point in another menu, and the third target interaction position point is within the range of any option content in that menu, determining the option content at which the third target interaction position point is located as the target option content; the other menu is any menu except the second menu.
  • This embodiment is suitable for the scenario of returning from a certain level of menu to select another option, and also for the scenario of continuing from a certain level of menu to select in a lower-level menu, since the latter scenario can be understood with reference to the previous embodiment.
  • this embodiment will describe the scenario of returning to select other options from a certain hierarchical menu.
  • suppose the current trigger point of the finger acting on the screen is the second target interaction position point C1, and the third target interaction position point is D1.
  • when it is detected that the trigger point of the finger acting on the screen slides from C1 to D1, and since D1 is within the range of option 2, option 2 can be determined as the target option content. It should be noted that in the return scenario, i.e., when the finger moves from point C1 to point D1, the movement needs to satisfy the condition of reverse movement from C1 toward the initial interaction point A1, namely that the distance from C1 to the initial interaction point A1 is greater than the distance from D1 to A1. When the finger slides to D1 and the above condition is met, the second menu M2 disappears from display, that is, the second menu M2 no longer exists in the current interface of the target device.
  • in this way, when a selection task is executed in the software system of the target device, it is possible to return along the original path to select another target option content or to cancel the selection, making the manipulation of the selection task more flexible and intelligent.
  • the menus that have not been selected will disappear accordingly, and the current interface of the target device will be restored to the original interface, which frees up more area to display the main content and improves the utilization rate of the interface of the target device.
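  • A minimal sketch of the return-move test just described, assuming it is interpreted as a straightforward distance comparison against the initial point A1; the names isReturnMove and onMove, and the coordinates, are assumptions:

```typescript
// Moving the contact back from C1 toward the initial point A1 (so the new point D1
// is closer to A1 than C1 was) dismisses the lower-level menu and lets the user
// re-select in the parent menu.
interface Point { x: number; y: number; }

const dist = (p: Point, q: Point) => Math.hypot(p.x - q.x, p.y - q.y);

// True when the move from C1 to D1 heads back toward A1.
function isReturnMove(a1: Point, c1: Point, d1: Point): boolean {
  return dist(c1, a1) > dist(d1, a1);
}

function onMove(a1: Point, c1: Point, d1: Point, openMenus: string[]): string[] {
  if (isReturnMove(a1, c1, d1) && openMenus.length > 1) {
    return openMenus.slice(0, -1);      // the second menu M2 disappears from the interface
  }
  return openMenus;
}

const a1 = { x: 0, y: 0 }, c1 = { x: 180, y: 120 }, d1 = { x: 90, y: 60 };
console.log(onMove(a1, c1, d1, ["M1", "M2"]));  // -> ["M1"]
```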
  • option 2 is selected as the target option content
  • option 2.4 is selected as the option content
  • If the slide finally returns to point A1, the selection is canceled, which means that the selection task performed by the operator on the target device is over. Therefore, in one embodiment, if it is detected that the manipulation mode meets the preset manipulation ending condition, it is determined that the manipulation of the target device has ended, and the display of all menus in the interface of the target device at the manipulation ending time is canceled.
  • The preset manipulation ending conditions include, but are not limited to: no longer acting on any menu, and/or not selecting any option in any menu, or acting on an option in the last-level menu, etc. This embodiment of the present application does not limit this.
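  • A possible form of this check is the predicate sketched below; the argument names are illustrative, and the conditions simply restate the examples listed above:

```python
def manipulation_ended(touch_active: bool,
                       acting_on_menu: bool,
                       selected_leaf_option: bool) -> bool:
    """Hypothetical check of the preset manipulation ending conditions:
    the operation no longer acts on any menu, or an option with no
    lower-level menu (a last-level option) has been selected."""
    if not touch_active or not acting_on_menu:
        return True   # e.g. the finger has left the interface or left all menus
    return selected_leaf_option

# Example: the finger has left the current interface, so manipulation is over.
print(manipulation_ended(touch_active=False, acting_on_menu=False, selected_leaf_option=False))
```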
  • For example, if the manipulation mode no longer acts on the current interface (the finger no longer slides in the current interface and has left it), the manipulation of the target device is over, and the selection task performed by the operator on the target device is over.
  • In one embodiment, canceling the selection may mean that the first operation no longer acts on any menu, which indicates that the manipulation of the target device by the first operation has ended. In another embodiment, selecting the target option content may mean that the first operation acts within the range of a specific option in a certain menu level, or that the option acted on by the first operation has no lower-level menu, and so on; as long as the first operation has selected the target option content, the manipulation of the target device by the first operation ends.
  • At this point, the target device needs to cancel all menus in its interface at the time the manipulation ends. For example, after the finger leaves the current interface, the manipulation ends, the target device cancels all menus, and the current interface returns to the original initial interface, thus freeing more area to display the main content, making the interface/operation panel more concise, and improving the utilization of the target device's interface.
  • the situation where the displayed contents overlap each other is avoided.
  • a first menu generated according to the content of the options is displayed based on the uncovered area of the area where the existing display information is located.
  • Although the first menu is used as an example in this embodiment, in practical applications the description can be adapted to any menu, for example the second menu, the third menu, ..., the Nth menu.
  • the determined area where the trigger position point in the current interface is located is the S1 area
  • the preset range of the S1 area is the S2 area
  • the menu to be displayed is the first menu M1
  • The first menu M1 needs to be displayed in S2, but if it is detected that other display information Y already exists in the S2 area, the first menu M1 can be displayed based on the area not covered by the current display information Y, that is, the first menu M1 is moved to an area that does not cover Y for display.
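  • One way such placement could be computed is sketched below, assuming rectangular regions and a fixed set of candidate positions inside S2; the candidate offsets are assumptions, not the layout rules of the disclosed method:

```python
def overlaps(a, b):
    """True if two rectangles (x, y, w, h) overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_menu(menu_size, s2_area, existing_info):
    """Try candidate positions inside the preset range S2 and return the first
    rectangle that does not cover the existing display information Y."""
    mw, mh = menu_size
    sx, sy, sw, sh = s2_area
    candidates = [(sx, sy), (sx + sw - mw, sy),
                  (sx, sy + sh - mh), (sx + sw - mw, sy + sh - mh)]
    for cx, cy in candidates:
        rect = (cx, cy, mw, mh)
        if not overlaps(rect, existing_info):
            return rect
    return None  # fall back to device-specific layout rules

# Example: move the first menu M1 so that it does not cover Y.
print(place_menu((120, 90), (0, 0, 400, 300), (0, 0, 150, 150)))
```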
  • the display area of the first menu M1 in FIG. 12 is only an example. In practical applications, it is necessary to combine the layout of the actual software interface and other related factors, and comprehensively consider the area of the first menu M1 that can be displayed after avoiding the Y information. This embodiment of the present application does not limit this.
  • In this way, if display information already exists within the preset range around the area where the trigger position point in the current interface is located, a menu generated according to the option content can be displayed based on the area not covered by the existing display information.
  • Because the existing display information is avoided, the clarity and intuitiveness of all information in the current interface of the target device is ensured, and the intelligence of the human-computer interaction is improved.
  • the selected option content can be displayed in a preset display mode to indicate that the option content is selected among many options.
  • the preset display manner includes: highlighting the area corresponding to the content of the target option; or, displaying the area corresponding to the content of the target option in an animated transition.
  • The control area of the selected target option content may be highlighted, for example with an obvious change in color, size, state, and so on; or the area corresponding to the target option content may be displayed with an animated transition, for example the control corresponding to the target option content, when selected, first becomes a first color, then gradually transitions to a second color, and finally takes on a third color.
  • other animations are also possible, such as gradually getting bigger, then gradually getting smaller until it returns to its original size.
  • The animation may be implemented by time-based interpolation, and the two display manners may also be combined; this embodiment of the present application does not limit either.
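  • As an illustration of time-based interpolation (the colors, duration, and two-stage easing below are arbitrary assumptions), the selected control's color can be advanced frame by frame between key colors:

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b for t in [0, 1]."""
    return a + (b - a) * t

def lerp_color(c1, c2, t):
    """Interpolate two RGB colors component-wise."""
    return tuple(round(lerp(x, y, t)) for x, y in zip(c1, c2))

def color_at(elapsed, duration, first, second, third):
    """First color -> second color in the first half, second -> third in the rest."""
    t = min(max(elapsed / duration, 0.0), 1.0)
    if t < 0.5:
        return lerp_color(first, second, t / 0.5)
    return lerp_color(second, third, (t - 0.5) / 0.5)

# Sample the transition of a selected option's control area over 0.6 s.
for ms in (0, 150, 300, 450, 600):
    print(ms, color_at(ms / 1000, 0.6, (255, 255, 255), (255, 200, 0), (0, 160, 80)))
```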
  • the selected content is displayed with unique characteristics, which can enrich the information display in the interface and improve the display effect of the interface information.
  • the appearance display information of the menu can be adjusted.
  • the method further includes:
  • S201 Obtain attribute information of option content of a selection task.
  • The attribute information here includes, but is not limited to, the shape, size, quantity, and appearance of the control areas.
  • The target device then needs to adjust the appearance display information (including the size of the controls, the shape of the initial state, and so on) of the second menu M2 generated according to the option content: option 1 to option 6.
  • the shape and size of a single option control graph can be adjusted based on the amount of option content at the upper level and the amount of option content at the lower level.
  • For example, considering that option 2 contains a large number of option contents, the graphic of the option 2 control can be set larger, or displayed with dotted lines. Alternatively, when option 2 is selected in the first menu M1, the controls of the other, unselected options can move some distance toward the two sides of option 2, so as to keep option 2 clear and complete.
  • If the number of options is small, the hexagon can be set as a pentagon and there can be several vacant positions; if the number is more than six, the hexagon can be set as a circle. Of course, the circle is only an example here, and the central shape may also be unrelated to the number of options, which is not limited in this embodiment of the present application.
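  • A sketch of how the appearance display information could be derived from the attribute information is given below; the size thresholds and shape names are illustrative assumptions only:

```python
def menu_appearance(option_count: int, base_size: float = 48.0) -> dict:
    """Pick an initial shape and per-option control size from the option count."""
    if option_count < 6:
        shape = "pentagon"          # fewer options: leave vacant positions
    elif option_count == 6:
        shape = "hexagon"           # one option per side
    else:
        shape = "circle"            # many options: distribute around a circle
    # Crowded menus shrink the per-option control size, down to a floor.
    control_size = max(base_size * 6 / max(option_count, 6), 24.0)
    return {"shape": shape, "control_size": control_size}

print(menu_appearance(4))   # {'shape': 'pentagon', 'control_size': 48.0}
print(menu_appearance(9))   # {'shape': 'circle', 'control_size': 32.0}
```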
  • the actual attribute information of the option content is used to set and adjust the appearance display information of the menu, which can make the display of the menu in the current interface more suitable for use requirements, and also enrich the menu display mode and improve the diversity of the menu display.
  • an embodiment is also provided, and the embodiment includes the following steps:
  • an embodiment of the present application further provides an information display device, comprising: an acquisition module and a display module, wherein:
  • The acquisition module is used to acquire the option content of the selection task if it is detected that there is a trigger instruction in the current interface of the target device and there is a selection task in the area where the trigger position point corresponding to the trigger instruction is located; there is no menu in the current interface; the trigger instruction is generated by a preset manipulation manner;
  • the display module is used to display the first menu generated according to the option content within a preset range around the area where the trigger position point in the current interface is located; the option content displayed in the first menu is used to select the target option content.
  • the above-mentioned manipulation manner includes: a first operation that continuously acts in the current interface, and/or a second operation that acts intermittently in the current interface.
  • the above-mentioned trigger position point is the initial interaction position point; correspondingly, the device includes:
  • the first determination module is used to determine whether there is a selection task in the area where the initial interaction location point is located according to the context of the area where the initial interaction location point is located; the context represents the operation flow logic of the current interface;
  • the above obtaining module is configured to execute the step of obtaining the option content of the selected task if it exists.
  • The above-mentioned display module is further configured to display the first menu generated according to the option content of the selection task within a preset range around the area where the initial interaction position point in the current interface is located.
  • the device further includes:
  • The second determination module is configured to determine the target option content according to the task information of the first target interaction position point if it is detected that the trigger position point of the manipulation mode has moved from the initial interaction position point to the first target interaction position point in the first menu; the task information includes whether a selection task exists or not.
  • the above-mentioned second determining module includes:
  • The first determining unit is configured to determine, if the task information of the first target interaction position point indicates that there is no selection task and the first target interaction position point is within the range to which any option content in the first menu belongs, that the option content where the first target interaction position point is located is the target option content.
  • the above-mentioned second determining module includes:
  • The second determining unit is configured to obtain, if the task information of the first target interaction position point indicates that there is a selection task and the first target interaction position point is within the range of any option content in the first menu, the sub-option content corresponding to the selection task of the option content where the first target interaction position point is located;
  • The display unit is used to display, within a preset range around the area where the first menu is located, the second menu generated according to the sub-option content corresponding to the selection task of the option content where the first target interaction position point is located; the sub-option content displayed in the second menu is used to select the target option content.
  • the device further includes:
  • The third determination module is configured to determine the target option content according to the task information of the second target interaction position point if it is detected that the trigger position point of the manipulation mode has moved from the first target interaction position point to the second target interaction position point in the second menu; the task information includes whether a selection task exists or not.
  • the above-mentioned third determining module includes:
  • The third determining unit is configured to determine, if it is detected that the trigger position point of the manipulation mode has moved from the second target interaction position point to a third target interaction position point in another menu and the third target interaction position point is within the range to which any option content in that menu belongs, that the option content where the third target interaction position point is located is the target option content; the other menu is any menu except the second menu.
  • the device further includes:
  • the ending module is configured to determine that the manipulation of the target device is ended if it is detected that the manipulation mode satisfies the preset manipulation ending condition, and cancel the display of all menus in the interface of the target device corresponding to the manipulation ending time.
  • the above obtaining module is further configured to determine the option content of the selection task through a preset artificial intelligence algorithm.
  • The above-mentioned display module is further configured to, if it is detected that display information already exists within the preset range around the area where the trigger position point in the current interface is located, display the first menu generated according to the option content based on the area not covered by the existing display information.
  • the apparatus further includes:
  • The display module is used to display the area corresponding to the target option content according to a preset display manner during the process of selecting the target option content in the menu; the preset display manner includes: highlighting the area corresponding to the target option content, or displaying the area corresponding to the target option content with an animated transition.
  • the apparatus further includes:
  • The information acquisition module is used to acquire the attribute information of the option content of the selection task;
  • the adjustment module is used to adjust the appearance display information of the menu generated according to the content of the options according to the attribute information.
  • the display form of the above-mentioned menu includes any one or a combination of a graphical interface, a table, and a text.
  • the above-mentioned trigger position point is any position point in the current interface.
  • the appearance display information of the menu is determined according to the context of the menu and the layout of the current interface; the context represents the business operation logic of the current interface.
  • Each module in the above-mentioned information display device may be implemented in whole or in part by software, hardware, and combinations thereof.
  • the above modules can be embedded in or independent of the target device in the form of hardware, and can also be stored in the memory of the target device in the form of software, so that the target device can call and execute operations corresponding to the above modules.
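  • If the modules were realized in software, a minimal composition might look like the sketch below; the class and method names are assumptions chosen to mirror the module descriptions rather than an actual implementation of the device:

```python
class AcquisitionModule:
    def get_option_content(self, selection_task):
        # Query the selection task (or a preset AI algorithm) for its options.
        return selection_task.get("options", [])

class DisplayModule:
    def show_first_menu(self, option_content, trigger_point):
        # Render the first menu within a preset range around the trigger point.
        print(f"menu {option_content} shown near {trigger_point}")

class InformationDisplayDevice:
    """Glue object that wires the modules together as software on the target device."""

    def __init__(self):
        self.acquisition = AcquisitionModule()
        self.display = DisplayModule()

    def on_trigger(self, trigger_point, selection_task):
        if selection_task is None:
            return                      # no selection task at this position point
        options = self.acquisition.get_option_content(selection_task)
        self.display.show_first_menu(options, trigger_point)

InformationDisplayDevice().on_trigger((100, 200), {"options": ["option 1", "option 2"]})
```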
  • In one embodiment, a medical device is provided, including a display, a memory, and a processor, where a computer program is stored in the memory; when the processor executes the computer program, the processor implements the steps of the information display method described above, and the display displays information according to the execution result of the processor:
  • the trigger instruction is generated in a preset manipulation manner;
  • the first menu generated according to the option content is displayed; the option content displayed in the first menu is used to select the target option content.
  • a computer-readable storage medium on which a computer program is stored.
  • When the computer program is executed by a processor, the steps of the information display method described above are implemented:
  • the trigger instruction is generated in a preset manipulation manner;
  • the first menu generated according to the option content is displayed; the option content displayed in the first menu is used to select the target option content.
  • any reference to memory, storage, database, or other media used in the various embodiments provided by the embodiments of this application may include at least one of non-volatile memory and volatile memory.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are an information display method, an apparatus, a medical device, and a storage medium. When a trigger instruction is present in a current interface of a target device and a selection task exists in the region where the trigger position point corresponding to the trigger instruction is located, the option content of the selection task is obtained (S101). Within a preset range around the region where the trigger position point is located in the current interface, a first menu generated on the basis of the option content is displayed, the option content displayed in the first menu being used to select the target option content (S102). According to the method, the functional purpose of generating the trigger instruction can be accurately identified, so that the accuracy of the option content in the displayed first menu is ensured.
PCT/CN2021/115351 2021-08-30 2021-08-30 Procédé d'affichage d'informations, dispositif associé, équipement médical associé, et support de stockage associé WO2022057604A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/115351 WO2022057604A1 (fr) 2021-08-30 2021-08-30 Procédé d'affichage d'informations, dispositif associé, équipement médical associé, et support de stockage associé
CN202180009260.3A CN114981769A (zh) 2021-08-30 2021-08-30 信息展示方法、装置、医疗设备和存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/115351 WO2022057604A1 (fr) 2021-08-30 2021-08-30 Procédé d'affichage d'informations, dispositif associé, équipement médical associé, et support de stockage associé

Publications (1)

Publication Number Publication Date
WO2022057604A1 true WO2022057604A1 (fr) 2022-03-24

Family

ID=80775881

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115351 WO2022057604A1 (fr) 2021-08-30 2021-08-30 Procédé d'affichage d'informations, dispositif associé, équipement médical associé, et support de stockage associé

Country Status (2)

Country Link
CN (1) CN114981769A (fr)
WO (1) WO2022057604A1 (fr)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8352881B2 (en) * 2007-03-08 2013-01-08 International Business Machines Corporation Method, apparatus and program storage device for providing customizable, immediate and radiating menus for accessing applications and actions
CN104317487B (zh) * 2014-11-12 2018-05-18 北京国双科技有限公司 环形菜单的显示方法和显示装置
CN107491248A (zh) * 2017-08-24 2017-12-19 小草数语(北京)科技有限公司 菜单显示方法、装置、终端设备和计算机可读存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101390033A (zh) * 2006-03-17 2009-03-18 诺基亚公司 改进的移动通信终端及其方法
US20120278762A1 (en) * 2008-06-28 2012-11-01 Mouilleseaux Jean-Pierre M Radial menu selection
CN102207818A (zh) * 2010-02-19 2011-10-05 微软公司 使用屏幕上和屏幕外手势的页面操纵
CN102331932A (zh) * 2011-09-08 2012-01-25 北京像素软件科技股份有限公司 一种菜单界面实现方法
CN102799347A (zh) * 2012-06-05 2012-11-28 北京小米科技有限责任公司 应用于触屏设备的用户界面交互方法、装置及触屏设备
CN104781765A (zh) * 2012-09-13 2015-07-15 谷歌公司 与用于触屏的径向菜单交互
CN109564494A (zh) * 2016-08-15 2019-04-02 皮尔夫有限责任公司 使用径向图形用户界面控制设备

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114995670A (zh) * 2022-05-30 2022-09-02 南京麦澜德医疗科技股份有限公司 双屏幕超声医疗快捷指令菜单方法及装置
CN115328348A (zh) * 2022-08-31 2022-11-11 济南浪潮数据技术有限公司 微前端的首页操作管理方法、装置、设备及可读存储介质
CN115328348B (zh) * 2022-08-31 2024-03-15 济南浪潮数据技术有限公司 微前端的首页操作管理方法、装置、设备及可读存储介质

Also Published As

Publication number Publication date
CN114981769A (zh) 2022-08-30

Similar Documents

Publication Publication Date Title
JP6592496B2 (ja) ユーザインタフェースオブジェクトを選択するためのデバイス、方法及びグラフィカルユーザインタフェース
US11073980B2 (en) User interfaces for bi-manual control
US8473862B1 (en) Organizational tools on a multi-touch display device
WO2022057604A1 (fr) Procédé d'affichage d'informations, dispositif associé, équipement médical associé, et support de stockage associé
Guimbretiere et al. Benefits of merging command selection and direct manipulation
CN110476187B (zh) 缝纫机式多边形绘制方法
US11150797B2 (en) Method and device for gesture control and interaction based on touch-sensitive surface to display
CN104063128B (zh) 一种信息处理方法及电子设备
Besançon et al. Hybrid touch/tangible spatial 3D data selection
CN109891374A (zh) 与数字代理的基于力的交互
US11275500B1 (en) Graphics authoring application user interface control
Sluÿters et al. Quantumleap, a framework for engineering gestural user interfaces based on the leap motion controller
WO2022144649A1 (fr) Génération de vidéo à intervalles de temps par réexécution d'instructions d'utilisateur dans une application de vecteur graphique
JP2023531981A (ja) 医療用イメージングシステムのための適合型ユーザインターフェース
Schwarz et al. An architecture for generating interactive feedback in probabilistic user interfaces
CN109192282B (zh) 医用图像注释的编辑方法、装置、计算机设备及存储介质
Uddin Improving Multi-Touch Interactions Using Hands as Landmarks
US10019127B2 (en) Remote display area including input lenses each depicting a region of a graphical user interface
CN107850832B (zh) 一种医疗检测系统及其控制方法
CN105739816B (zh) 选择图形元素
KR102648748B1 (ko) 사용자의 의도가 반영된 기능 툴 제공 방법 및 이를 수행하는 컴퓨팅 장치
JPH04320579A (ja) 画像処理装置
Moon Prototyping Touchless User Interface for Interacting with a Website
Buschek et al. 10Building Adaptive Touch

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21868432

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21868432

Country of ref document: EP

Kind code of ref document: A1