WO2021169881A1 - Display control method and device for head-mounted device, and computer storage medium - Google Patents

Display control method and device for head-mounted device, and computer storage medium Download PDF

Info

Publication number
WO2021169881A1
Authority
WO
WIPO (PCT)
Prior art keywords
head
application
mobile device
event
mounted device
Prior art date
Application number
PCT/CN2021/077112
Other languages
French (fr)
Chinese (zh)
Inventor
张复尧
熊棋
徐毅
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to CN202180013084.0A (published as CN115053203A)
Publication of WO2021169881A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the embodiments of the present application relate to the field of vision enhancement technology, and in particular, to a display control method of a head-mounted device, a device, and a computer storage medium.
  • VR: Virtual Reality
  • AR: Augmented Reality
  • MR: Mixed Reality
  • the vision enhancement system includes the head-mounted device and the mobile device, and the head-mounted device and the mobile device can be connected through wired or wireless communication.
  • although a set of user interfaces is also defined for head-mounted devices, these user interfaces cannot adapt to different usage scenarios, resulting in an unfriendly display of the user interface, which is not conducive to user operations.
  • the embodiments of the present application provide a display control method, device, and computer storage medium of a head-mounted device, which can improve the interaction effect between the head-mounted device and the user, and improve the user-friendliness of the user interface.
  • an embodiment of the present application provides a display control method of a head-mounted device, which is applied to the head-mounted device, and the method includes: identifying the current application scenario; and controlling the display state of the head-mounted device based on the display control command corresponding to the current application scenario.
  • an embodiment of the present application provides a display control method of a head-mounted device, which is applied to a mobile device, and the method includes:
  • the application scenario of the mobile device is sent to the head-mounted device; wherein the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
  • an embodiment of the present application provides a head-mounted device, which includes a scene analysis service module, a sending module, and a display module; wherein,
  • the scenario analysis service module is configured to identify the current application scenario
  • the sending module is configured to send the display control command corresponding to the current application scene from the scene analysis service module to the display module;
  • the display module is configured to control the display state according to the display control command.
  • an embodiment of the present application provides a head-mounted device, which includes a first memory and a first processor; wherein,
  • the first memory is configured to store a computer program that can run on the first processor
  • the first processor is configured to execute the method described in the first aspect when the computer program is running.
  • an embodiment of the present application provides a mobile device, which includes a scenario analysis service module and a sending module; wherein,
  • the scenario analysis service module is configured to identify the application scenario of the mobile device
  • the sending module is configured to send the application scenario of the mobile device to the head-mounted device; wherein the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
  • an embodiment of the present application provides a mobile device, which includes a second memory and a second processor; wherein,
  • the second memory is configured to store a computer program that can run on the second processor
  • the second processor is configured to execute the method described in the second aspect when the computer program is running.
  • an embodiment of the present application provides a computer storage medium that stores a computer program that, when executed by a first processor, implements the method described in the first aspect, or, when executed by a second processor, implements the method described in the second aspect.
  • the embodiments of the present application provide a display control method, device, and computer storage medium of a head-mounted device, which are applied to the head-mounted device: the current application scenario is identified, and the display state of the head-mounted device is controlled based on the display control command corresponding to the current application scenario.
  • In this way, in different application scenarios, the head-mounted device can automatically change the display state of the user interface according to the user behavior or usage state, which reduces the workload of the user's manual adjustment; at the same time, it can also improve the interaction effect between the head-mounted device and the user, and improve the user-friendliness of the user interface.
  • FIG. 1 is a schematic diagram of the composition of a vision enhancement system provided by an embodiment of the application
  • FIG. 2 is a schematic flowchart of a display control method of a head-mounted device according to an embodiment of the application
  • FIG. 3 is a schematic flowchart of another display control method of a head-mounted device according to an embodiment of the application.
  • FIG. 4 is a detailed flowchart of a display control method of a head-mounted device according to an embodiment of the application;
  • FIG. 5 is a detailed flowchart of another display control method of a head-mounted device according to an embodiment of the application.
  • FIG. 6 is a detailed flowchart of another display control method of a head-mounted device according to an embodiment of the application.
  • FIG. 7 is a detailed flowchart of yet another display control method of a head-mounted device according to an embodiment of the application.
  • FIG. 8 is a schematic diagram of the composition structure of a head-mounted device provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of a specific hardware structure of a head-mounted device provided by an embodiment of the application.
  • FIG. 10 is a schematic diagram of the composition structure of a mobile device provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of a specific hardware structure of a mobile device provided by an embodiment of this application.
  • FIG. 12 is a schematic diagram of the architecture structure of a vision enhancement system provided by an embodiment of the application.
  • the visual enhancement system 10 may include a head-mounted device 110 and a mobile device 120. Wherein, the head-mounted device 110 and the mobile device 120 are in a wired or wireless communication connection.
  • the head-mounted device 110 may specifically refer to a monocular or binocular head-mounted display (Head-Mounted Display, HMD), such as AR glasses.
  • the head-mounted device 110 may include one or more display modules 111 placed near the position of the user's single eye or both eyes. Among them, through the display module 111 of the head-mounted device 110, the content displayed therein can be presented in front of the user's eyes, and the displayed content can fill or partially fill the user's field of vision.
  • the display module 111 may refer to one or more organic light-emitting diode (OLED) modules, liquid crystal display (LCD) modules, laser display modules, and the like.
  • the head-mounted device 110 may also include one or more sensors and one or more cameras.
  • the head-mounted device 110 may include one or more sensors such as an inertial measurement unit (IMU), an accelerometer, a gyroscope, a proximity sensor, and a depth camera.
  • the mobile device 120 may be wirelessly connected to the head-mounted device 110 according to one or more wireless communication protocols (for example, Bluetooth, WIFI, etc.). Alternatively, the mobile device 120 may also be wired to the head-mounted device 110 via a data cable according to one or more data transmission protocols such as Universal Serial Bus (USB).
  • the mobile device 120 may be implemented in various forms.
  • the mobile devices described in the embodiments of the present application may include smart phones, tablet computers, notebook computers, laptop computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), smart watches, and so on.
  • a user operating on the mobile device 120 can control the operations at the head-mounted device 110 via the mobile device 120.
  • the data collected by the sensors in the head-mounted device 110 may also be sent back to the mobile device 120 for further processing or storage.
  • the current HMD solution usually defines a set of unchanging user interfaces (User Interface, UI), but these user interfaces cannot adapt to different usage scenarios.
  • adaptive system behavior has been widely adopted by other devices and applications. Among them, some devices will sense the environment and adjust their functions; for example, noise-canceling headphones will detect the voice of people in the environment and adjust the level of noise reduction. Some applications will sense the user's behavior/posture and adjust functions accordingly; for example, a fitness application on a smart watch will start tracking the user's heartbeat when it detects that the user is exercising.
  • Some systems sense the environment and user behavior at the same time; for example, a car display system may detect the movement of the car and the Bluetooth connection of the user's phone, and in this case display a set of specific UI layouts and/or components.
  • However, although the head-mounted device also defines a set of user interfaces, these user interfaces cannot adapt to different usage scenarios, resulting in an unfriendly display of the user interface, which is not conducive to user operations.
  • the embodiment of the present application provides a display control method of a head-mounted device, which is applied to the head-mounted device.
  • the basic idea of the method is to identify the current application scenario and, based on the display control command corresponding to the current application scenario, control the display state of the head-mounted device.
  • the embodiment of the present application also provides a display control method of a head-mounted device, which is applied to a mobile device.
  • the basic idea of the method is to identify the application scenario of the mobile device and send the application scenario of the mobile device to the head-mounted device; wherein the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
  • In this way, in different application scenarios, the head-mounted device can automatically change the display state of the user interface according to the user behavior or usage state, which reduces the workload of the user's manual adjustment; at the same time, it can also improve the interaction effect between the head-mounted device and the user, and improve the user-friendliness of the user interface.
  • FIG. 2 shows a schematic flowchart of a display control method of a head-mounted device provided in an embodiment of the present application. As shown in Figure 2, the method may include:
  • the method of the embodiment of the present application is applied to a head-mounted device, and the head-mounted device includes a display module.
  • a user interface is usually defined on the display module, but these user interfaces are currently unable to adapt to different application scenarios.
  • the embodiments of the present application mainly provide a method of controlling the user interface according to related factors such as user behavior or usage status; more specifically, the method provided in the embodiment of the present application can enable the user interface to adapt to different application scenarios of the head-mounted device.
  • the head-mounted device includes a scene analysis service module, which can be used to identify the application scenario of the head-mounted device in a variety of ways; for example, the application scenario of the head-mounted device can be identified according to the user's posture information.
  • the application scenario of the head-mounted device can also be identified according to the foreground application event, and even the application scenario of the head-mounted device can be identified according to external events (such as external sensor events, application scenarios of the mobile device, etc.), which is not limited here.
  • the identifying the current application scenario may include: obtaining the user's posture information; and when the posture information matches the predefined posture, performing application scenario recognition according to the posture information to determine the current application scenario.
  • In this way, if the posture information matches the predefined posture, the current application scenario can be determined according to the matched posture information; if the posture information does not match the predefined posture, the method returns to the step of obtaining the user's posture information.
  • the predefined posture may include at least one preset posture.
  • the method may further include: if a preset posture matching the posture information is queried in the predefined posture, determining that the posture information matches the predefined posture.
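The posture-matching step described above can be sketched as a simple lookup. The posture labels and scenario names below are hypothetical, not taken from the application:

```python
from typing import Optional

# Predefined postures (illustrative labels, not from the application).
PREDEFINED_POSTURES = {"sitting", "standing", "walking", "driving"}


def match_posture(posture_info: str) -> bool:
    """True if the recognized posture is found among the predefined postures."""
    return posture_info in PREDEFINED_POSTURES


def identify_scenario(posture_info: str) -> Optional[str]:
    """Return the current application scenario for a matched posture;
    None signals 'return to the step of obtaining posture information'."""
    if not match_posture(posture_info):
        return None
    return posture_info + "_scene"
```

Here a non-match simply returns `None`, mirroring the loop back to re-acquiring the user's posture information.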
  • the head-mounted device may also include a user status monitoring service module, which is used to obtain user sensor data to determine the user's posture information. Therefore, in some embodiments, the obtaining the user's posture information may include:
  • the user sensor data is obtained through the user status monitoring service module, and the sensor data of the user is analyzed to determine the posture information of the user.
  • signal processing technology and/or machine learning technology can be used to determine the user's posture information from the user sensor data.
  • the machine learning technology can be Support Vector Machine (SVM) technology, Artificial Neural Network (ANN) technology, Deep Learning (DL) technology, etc., but there is no limitation here.
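As a stand-in for the signal-processing or machine-learning step, the sketch below classifies posture from the variance of accelerometer magnitudes over a window. The thresholds and labels are invented for illustration; a real system would use a trained SVM/ANN/deep-learning model as the text notes:

```python
import math


def posture_from_accel(samples):
    """Classify a coarse posture from (ax, ay, az) accelerometer samples.

    Thresholds are illustrative assumptions; a production system would
    replace this heuristic with a trained model (SVM, ANN, deep learning).
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    if var < 0.05:       # magnitude stays near gravity -> user at rest
        return "sitting"
    if var < 1.0:        # moderate periodic motion
        return "walking"
    return "running"     # large swings in acceleration
```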
  • the identifying the current application scenario may include: obtaining a foreground application event; and when the foreground application event matches the predefined foreground application event list, performing application scenario identification according to the foreground application event to determine the current application scenario.
  • In this way, if the foreground application event matches the predefined foreground application event list, the current application scenario can be determined according to the matched foreground application event; if the foreground application event does not match the predefined foreground application event list, the method returns to the step of obtaining the foreground application event.
  • the predefined foreground application event list may include at least one preset foreground application event.
  • the method may further include: if a preset foreground application event that matches the foreground application event is found in the predefined foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
  • the head-mounted device may also include a foreground application monitoring service module for acquiring foreground application events. Therefore, in some embodiments, the obtaining the foreground application event may include: obtaining the foreground application event through the foreground application monitoring service module of the head-mounted device.
  • the foreground application event may be an active application of the user running on the head-mounted device.
  • the foreground application monitoring service module can monitor the user's active applications running on the head-mounted device to obtain foreground application events.
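The event-to-scenario lookup described above can be sketched as a table. The application names and scenario labels here are hypothetical:

```python
from typing import Optional

# Predefined foreground application event list (illustrative entries).
PREDEFINED_FOREGROUND_EVENTS = {
    "navigation_app": "driving_scene",
    "video_player": "sitting_scene",
    "fitness_app": "walking_scene",
}


def scenario_from_foreground_event(event: str) -> Optional[str]:
    """Return the scenario for a matched foreground application event;
    None signals 'return to the step of obtaining the foreground event'."""
    return PREDEFINED_FOREGROUND_EVENTS.get(event)
```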
  • the identifying the current application scenario may include: obtaining an external sensor event; and performing application scenario identification according to the external sensor event to determine the current application scenario. That is, the embodiments of the present application may also identify the current application scenario according to external sensor events.
  • the external sensor event may be detected by an external sensor device (or called: externally extended sensor device).
  • the acquiring external sensor events may include:
  • the external sensor device includes a sensor monitoring service module.
  • the external sensor event can be obtained through the sensor monitoring service module, and then the external sensor device sends the external sensor event to the event receiver of the head-mounted device, so that in the head-mounted device, the external sensor event can be obtained from the event receiver.
  • the communication connection between the external sensor device and the event receiver of the head-mounted device may be a wired communication connection established through a data cable, or may be a wireless communication connection established according to a wireless communication protocol.
  • the wireless communication protocol may include at least one of the following: the Bluetooth protocol, the Wireless Fidelity (WIFI) protocol, the Infrared Data Association (IrDA) protocol, and the Near Field Communication (NFC) protocol.
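A minimal sketch of the event receiver on the head-mounted device side, with the transport layer (Bluetooth, WIFI, IrDA, NFC, or a data cable) abstracted into a direct method call; the class and method names are assumptions:

```python
from collections import deque


class EventReceiver:
    """Buffers external sensor events for the scene analysis service."""

    def __init__(self):
        self._events = deque()

    def on_event(self, event):
        """Called by the transport layer when an external device sends an event."""
        self._events.append(event)

    def next_event(self):
        """Called by the scene analysis service; None when no event is pending."""
        return self._events.popleft() if self._events else None
```

The scene analysis service module would poll `next_event()` and feed any pending event into scenario identification.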
  • the identifying the current application scenario may include: obtaining the application scenario of the mobile device; and performing application scenario identification according to the application scenario of the mobile device to determine the current application scenario.
  • the obtaining the application scenario of the mobile device may include: receiving, through the event receiver of the head-mounted device, the application scenario sent by the mobile device.
  • the performing application scenario identification according to the application scenario of the mobile device to determine the current application scenario may include:
  • the application scenario of the mobile device is determined as the current application scenario.
  • In the mobile device, user sensor data can be obtained through the user status monitoring service module inside the mobile device, or foreground application events can be obtained through the foreground application monitoring service module inside the mobile device; then, through the analysis of these user sensor data or foreground application events, the application scenario of the mobile device can be determined. In addition, the mobile device can even obtain external sensor events through external sensor devices to determine the application scenario of the mobile device.
  • Here, the manner in which the mobile device uses its internal user status monitoring service module or foreground application monitoring service module, or uses external sensor devices, to determine the application scenario of the mobile device is similar to the manner in which the head-mounted device determines its own application scenario, and will not be detailed here.
  • After the application scenario of the mobile device is determined, the mobile device can send it to the event receiver of the head-mounted device, so that in the head-mounted device, the application scenario of the mobile device can be obtained from the event receiver and the application scenario of the head-mounted device can then be determined. Since the mobile device and the head-mounted device are in the same visual enhancement system, in a specific example, the application scenario of the mobile device can be directly determined as the current application scenario of the head-mounted device.
  • the communication connection between the mobile device and the event receiver of the head-mounted device may be a wired communication connection established through a data cable, or may be a wireless communication connection established according to a wireless communication protocol.
  • the wireless communication protocol may include at least one of the following: Bluetooth protocol, WIFI protocol, IrDA protocol, and NFC protocol.
  • S102 Control the display state of the head-mounted device based on the display control command corresponding to the current application scenario.
  • the display control command corresponding to the current application scenario can be obtained.
  • the method may further include: when the current application scenario matches a predefined scene, determining the display control command corresponding to the current application scenario.
  • the predefined scenes may include at least one of the following: a driving scene, a walking scene, a cycling scene, a standing scene, a sitting scene, a holding-the-mobile-device scene, a putting-down-the-mobile-device scene, and a geographic location scene;
  • the display control commands may include at least one of the following: an open display command, a close display command, an open partial display command, a close partial display command, an adjust display area size command, and an adjust display element arrangement command.
  • In this way, if the current application scenario matches a predefined scene, the corresponding display control command can be determined according to the matched scene to control the display state of the display module inside the head-mounted device, such as changing the display, the UI layout, the arrangement of UI elements, the UI depth, the brightness, the power consumption in AR, and so on; if the current application scenario does not match the predefined scenes, the method returns to the step of identifying the current application scenario.
  • the predefined scene may include at least one preset scene, and each preset scene corresponds to a display control command with a different display state.
  • the method may further include: comparing the application scenario with the predefined scenes; and if a preset scene matching the application scenario is queried in the predefined scenes, determining that the application scenario matches the predefined scenes.
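The scene-to-command lookup in this step can be sketched as follows; the particular pairings of scenes and commands are illustrative, not specified by the application:

```python
from typing import Optional

# Illustrative mapping from predefined scenes to display control commands.
SCENE_TO_COMMAND = {
    "driving_scene": "close_partial_display",  # hide distracting UI while driving
    "walking_scene": "adjust_display_area_size",
    "sitting_scene": "open_display",
    "geographic_location_scene": "adjust_display_element_arrangement",
}


def display_command_for(scene: str) -> Optional[str]:
    """Return the display control command for a matched scene; None signals
    'return to the step of identifying the current application scenario'."""
    return SCENE_TO_COMMAND.get(scene)
```

The display module would then apply the returned command to change the display state (layout, brightness, element arrangement, and so on).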
  • This embodiment provides a display control method of a head-mounted device, which is applied to the head-mounted device: the current application scenario is identified, and the display state of the head-mounted device is controlled based on the display control command corresponding to the current application scenario. In this way, by determining the application scenario of the head-mounted device and then acting according to the display control command corresponding to that application scenario, the head-mounted device can automatically change the display state of the user interface according to the user behavior or usage state in different application scenarios, reducing the workload of manual adjustment by the user; at the same time, it can also improve the interaction effect between the head-mounted device and the user, and improve the user-friendliness of the user interface.
  • FIG. 3 shows a schematic flowchart of another display control method for a head-mounted device provided in an embodiment of the present application.
  • the method may include:
  • the method in the embodiment of the present application is applied to a mobile device.
  • the mobile device may also include a scene analysis service module to analyze the application scenario of the mobile device and then send the application scenario to the head-mounted device; the head-mounted device includes a display module, so that the display state of the display module can be adaptively controlled in different application scenarios.
  • Similarly, the embodiment of the application can identify the application scenario of the mobile device in multiple ways: the application scenario of the mobile device can be identified based on the user's posture information, or based on the foreground application event, or even based on external events (such as external sensor events), which is not limited here.
  • the identifying the application scenario of the mobile device may include: obtaining the user's posture information; and when the posture information matches the predefined posture, performing application scenario recognition according to the posture information to determine the application scenario of the mobile device.
  • In this way, if the posture information matches the predefined posture, the application scenario of the mobile device can be determined according to the matched posture information; if the posture information does not match the predefined posture, the method returns to the step of obtaining the user's posture information.
  • the predefined posture may include at least one preset posture.
  • the method may further include: if a preset posture matching the posture information is queried in the predefined posture, determining that the posture information matches the predefined posture.
  • the mobile device may also include a user status monitoring service module, which is used to obtain user sensor data to determine the user's posture information. Therefore, in some embodiments, the obtaining the user's posture information may include:
  • the sensor data of the user is analyzed to determine the posture information of the user.
  • signal processing technology and/or machine learning technology can be used to determine the user's posture information from the user sensor data.
  • the machine learning technology can be SVM technology, ANN technology, deep learning technology, etc., but it is not limited in any way.
  • the identifying the application scenario of the mobile device may include: obtaining a foreground application event; and when the foreground application event matches the predefined foreground application event list, performing application scenario identification according to the foreground application event to determine the application scenario of the mobile device.
  • In this way, if the foreground application event matches the predefined foreground application event list, the application scenario of the mobile device can be determined according to the matched foreground application event; if the foreground application event does not match the predefined foreground application event list, the method returns to the step of obtaining the foreground application event.
  • the predefined foreground application event list may include at least one preset foreground application event.
  • the method may further include: if a preset foreground application event that matches the foreground application event is found in the predefined foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
  • the mobile device may also include a foreground application monitoring service module for acquiring foreground application events. Therefore, in some embodiments, the obtaining the foreground application event may include: obtaining the foreground application event through the foreground application monitoring service module of the mobile device.
  • the foreground application event may be an active application of the user running on the mobile device.
  • the foreground application monitoring service module can monitor the user's active applications running on the mobile device to obtain foreground application events.
  • the identifying the application scenario of the mobile device may include: obtaining an external sensor event; and performing application scenario identification according to the external sensor event to determine the application scenario of the mobile device.
  • the embodiments of the present application may also identify application scenarios of the mobile device based on external sensor events.
  • the external sensor event may be detected by an external sensor device (or called: externally extended sensor device).
  • the acquiring external sensor events may include:
  • the external sensor device includes a sensor monitoring service module.
  • the external sensor event can be obtained through the sensor monitoring service module, and then the external sensor device sends the external sensor event to the event receiver of the mobile device, so that in the mobile device, the external sensor event can be obtained from the event receiver.
  • the communication connection between the external sensor device and the event receiver of the mobile device may be a wired communication connection established through a data cable, or may be a wireless communication connection established according to a wireless communication protocol.
  • the wireless communication protocol may include at least one of the following: Bluetooth protocol, WIFI protocol, IrDA protocol, and NFC protocol.
  • S202: Send the application scenario of the mobile device to the head-mounted device; wherein the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
  • the application scenario of the mobile device can be sent to the head-mounted device, so that the head-mounted device can determine the display control command and control the head-mounted device according to the display control command The display status.
  • the application scenario of the mobile device may include at least one of the following: a driving scenario, a walking scenario, a cycling scenario, a standing scenario, a sitting scenario, a holding-the-mobile-device scenario, a putting-down-the-mobile-device scenario, and a geographic location scenario;
  • the display control commands may include at least one of the following: an open display command, a close display command, an open partial display command, a close partial display command, an adjust display area size command, and an adjust display element arrangement command.
  • After the application scenario of the mobile device is determined, the mobile device can send it to the event receiver of the head-mounted device, so that the head-mounted device can subsequently obtain the application scenario of the mobile device from the event receiver and then determine its own application scenario. Since the mobile device and the head-mounted device are in the same visual enhancement system, in a specific example, the application scenario of the mobile device can be directly determined as the current application scenario of the head-mounted device.
• the head-mounted device can determine the corresponding display control command according to the application scenario of the mobile device to control the display state of the display module inside the head-mounted device, such as changing the display content, UI layout, UI element arrangement, UI depth, brightness, power consumption, and so on.
• This embodiment provides a display control method of a head-mounted device, applied to a mobile device: identify the application scenario of the mobile device, and send the application scenario of the mobile device to the head-mounted device; wherein the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
• after the head-mounted device receives the application scenario of the mobile device, it can determine the display control command corresponding to that application scenario, so that in different application scenarios the head-mounted device can automatically change the display state of the user interface according to user behavior or usage status. This reduces the workload of manual adjustment by the user; at the same time, it also improves the interaction effect between the head-mounted device and the user and the user-friendliness of the user interface.
  • FIG. 4 shows a detailed flowchart of a display control method for a head-mounted device provided in an embodiment of the present application. As shown in Figure 4, the method may include:
• in step S303, if the judgment result is yes, step S304 is executed; if the judgment result is no, the process returns to step S301.
• the head-mounted device may include a user status monitoring service module, a foreground application monitoring service module, and a scene analysis server;
• the mobile device may also include a user status monitoring service module, a foreground application monitoring service module, and a scene analysis server.
  • the monitoring of the user's posture information, foreground application events, etc. can be performed by a head-mounted device or a mobile device, and there is no limitation here.
• for step S301, the result may be obtained through monitoring by the user status monitoring service module; for steps S302-S304, the result may be obtained through analysis by the scene analysis server.
• in step S307, if the judgment result is yes, step S304 is executed; if the judgment result is no, the process returns to step S305.
• steps S301-S303 and steps S305-S307 can be executed in parallel, with no fixed order of execution, to determine the current application scenario.
• for step S305, the result may be obtained through monitoring by the foreground application monitoring service module; for steps S306-S307 and S304, the result may be obtained through analysis by the scene analysis server.
  • the vision enhancement system may also include external sensor devices, and the external sensor devices include a sensor monitoring service module.
• in step S308, a sensor monitoring service module may run on the external sensor device to detect the external sensor event.
• step S308, steps S301-S303, and steps S305-S307 can be executed in parallel, with no fixed order of execution; these represent three ways to identify the current application scenario, after each of which step S304 is performed.
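The three recognition paths (steps S301-S303, steps S305-S307, and step S308) can run concurrently and all feed the scene analysis of step S304. A minimal sketch of that structure, with threads and a shared queue standing in for the monitoring service modules (all names and event payloads are illustrative assumptions):

```python
import queue
import threading

def run_monitors(monitors, analyze, n_events):
    """Run several monitoring services in parallel; each puts events on a
    shared queue, and scene analysis (step S304) consumes them in any order."""
    events = queue.Queue()
    threads = [threading.Thread(target=m, args=(events,)) for m in monitors]
    for t in threads:
        t.start()
    results = [analyze(events.get()) for _ in range(n_events)]
    for t in threads:
        t.join()
    return results

# Stand-ins for the user status, foreground application, and external
# sensor monitoring service modules (event payloads are hypothetical).
def user_status_monitor(q): q.put(("posture", "looking_down"))
def foreground_app_monitor(q): q.put(("app_event", "video_call_started"))
def external_sensor_monitor(q): q.put(("sensor", "controller_released"))

def analyze(event):
    """Step S304 stand-in: derive a scene description from one event."""
    kind, value = event
    return f"scene from {kind}: {value}"
```

Because all three monitors write to the same queue, the analysis step is indifferent to which path produced an event, matching the "no fixed order of execution" described above.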
• in step S309, if the judgment result is yes, step S310 is executed; if the judgment result is no, the process returns to steps S308, S305, and S301.
• steps S309 and S310 may be performed by the head-mounted device.
  • the head-mounted device may also include a display module (also referred to as a “screen”).
• the scene analysis server in the head-mounted device can control the display state of the display module by sending a display control command to the display module.
• after step S310, the next stage of application scene recognition and its corresponding display control can be performed; that is, the process then returns to steps S308, S305, and S301.
• the matching scenario described in the embodiment of the present application may specifically refer to a preset scene that matches the current application scene.
  • the current application scene or the preset scene matching it may be referred to as "matching scene” for short.
  • the scene analysis server sends a display control command to the display module to control the display state of the display module.
  • the embodiments of the present application relate to a method for controlling a user interface based on factors related to user behavior and usage.
  • the embodiment of the present application relates to a user interface, which can adapt to application scenarios of a head-mounted device in different use cases.
  • the embodiment of the present application can autonomously sense the user's state and turn off the screen display of the head-mounted device or hide a part of UI components on the screen of the head-mounted device.
• the embodiments of the present application are mainly used to improve the interaction effect between the head-mounted device and the user. More specifically, the embodiments of the present application enable the head-mounted device to automatically change its UI according to the user's behavior, which reduces the workload of the user manually adjusting the UI. For example, for a pair of AR glasses working with a mobile device (such as a smart phone), when the system senses that the user is watching the screen of the mobile device, the AR glasses turn off their display module or avoid presenting any content, to reduce visual interference. In this way, in a specific example, the optical see-through (OST) display of the AR glasses is turned off so that the user can better view the real world. In another specific example, the system may resize and/or reorganize the UI components on the display (for example, when the user looks down, the UI may be rearranged to the upper half).
• for example, the user may use a wireless controller to control the head-mounted device. When the user's smart phone receives a text message or a call, the user puts down the wireless controller and picks up the smart phone. The embodiment of the present application detects this series of events, and at this time the screen of the head-mounted device is turned off so that the user can better view the screen of the smart phone.
• the detection of events can be based on one or more of the following: sensor data from the wireless controller (for example, whether the user holds the wireless controller), events from the smartphone (for example, the user picks up the smartphone and/or unlocks the smartphone to read a text message), and sensor data from the head-mounted device (for example, whether the device is pointing down), etc.
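A hedged sketch of how such events might be fused (the event names and the rule itself are assumptions for illustration): the display is turned off only when the controller, smartphone, and head-mounted-device signals all indicate that the user has turned to the phone.

```python
def should_turn_off_display(controller_held, phone_picked_up, device_pointing_down):
    """Fuse events from the wireless controller, the smartphone, and the
    head-mounted device's own sensors into one display on/off decision."""
    return (not controller_held) and phone_picked_up and device_pointing_down
```

Requiring agreement from several sources reduces false triggers from any single noisy sensor.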
  • FIG. 5 shows another possible implementation manner provided by an embodiment of the present application.
• a user behavior alone can cause the screen of the head-mounted device to turn off; for example, the user suddenly enters a running state from walking at a slow speed.
  • FIG. 5 shows a detailed flowchart of another display control method of a head-mounted device provided by an embodiment of the present application. As shown in Figure 5, the method may include:
• in step S403, if the judgment result is yes, step S404 is executed; if the judgment result is no, the process returns to step S401.
• after step S405, the next stage of application scene recognition and its corresponding display control can be performed; that is, the process then returns to step S401.
  • the predefined scene includes at least one preset scene, and each preset scene corresponds to a display control command with a different display state.
  • the current application scene matches the predefined scene
  • the current application scene or the preset scene matching it may be referred to as a “matching scene” for short.
  • the scene analysis server sends a display control command to the display module to control the display state of the display module.
  • FIG. 6 shows another possible implementation manner provided by an embodiment of the present application.
• a particular application event alone can cause the screen of the head-mounted device to turn off.
  • a smart phone receives an incoming video call, and when the user accepts the video call, the screen of the head-mounted device is turned off to facilitate the video call on the smart phone.
  • FIG. 6 shows a detailed flowchart of yet another display control method of a head-mounted device provided by an embodiment of the present application. As shown in Figure 6, the method may include:
• in step S503, if the judgment result is yes, step S504 is executed; if the judgment result is no, the process returns to step S501.
• after step S505, the next stage of application scene recognition and its corresponding display control can be performed; that is, the process then returns to step S501.
  • FIG. 7 shows another possible implementation manner provided by an embodiment of the present application.
  • a single external sensor event may cause the screen of the head-mounted device to turn off.
  • the method may include:
• in step S603, if the judgment result is yes, step S604 is executed; if the judgment result is no, the process returns to step S601.
• after step S604, the next stage of application scene recognition and its corresponding display control can be performed; that is, the process then returns to step S601.
  • the vision enhancement system may further include an external sensor device, and the external sensor event can be acquired through the external sensor device.
• the current application scenario can be identified; the current application scenario is then compared with the predefined scene, so that when the current application scene matches the predefined scene, the matching scene can be determined, and the scene analysis server in the head-mounted device sends the display control command to the display module so as to control its display state.
  • the predefined scene includes at least one preset scene, and each preset scene corresponds to a display control command with a different display state.
  • the current application scene matches the predefined scene
  • the current application scene or the preset scene matching it may be referred to as a “matching scene” for short.
  • the scene analysis server sends a display control command to the display module to control the display state of the display module.
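The matching-and-dispatch step above can be sketched as follows (the preset scene names and the use of a list as a stand-in display module are assumptions):

```python
# Each preset scene corresponds to a display control command with a
# different display state (illustrative names only).
PRESET_SCENES = {
    "driving": "close display",
    "video_call": "close display",
    "looking_down": "adjust display element arrangement",
}

def match_and_dispatch(current_scene, display_module):
    """If the current application scene matches a preset scene, send the
    corresponding display control command to the display module."""
    command = PRESET_SCENES.get(current_scene)
    if command is not None:
        display_module.append(command)  # stand-in for sending the command
    return command
```

When no preset scene matches, no command is dispatched and the display state is left unchanged, consistent with the flow returning to the recognition steps.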
  • This embodiment provides a display control method of a head-mounted device.
• the specific implementation of the foregoing embodiment is described in detail through this embodiment. It can be seen that the embodiments of the present application significantly improve the user experience of the head-mounted device. More specifically, the embodiments of the present application enable the head-mounted device to automatically change its UI according to user behavior, thereby reducing the workload of the user manually adjusting the UI. In this way, the embodiments of the present application make it possible to develop a head-mounted device product with a better user experience and make the head-mounted device product more intelligent.
  • FIG. 8 shows a schematic diagram of the composition structure of a head-mounted device 70 provided by an embodiment of the present application.
  • the head-mounted device 70 may include: a scene analysis service module 701, a sending module 702, and a display module 703; among them,
  • the scenario analysis service module 701 is configured to identify the current application scenario
  • the sending module 702 is configured to send the display control command corresponding to the current application scene from the scene analysis service module to the display module 703;
  • the display module 703 is configured to control the display state according to the display control command.
• the scene analysis service module 701 is specifically configured to obtain posture information of the user; compare the posture information with a predefined posture; and if the posture information matches the predefined posture, perform application scenario identification according to the posture information and determine the current application scenario.
  • the head-mounted device 70 may further include a user status monitoring service module 704;
  • the user status monitoring service module 704 is configured to obtain user sensor data; and analyze the user sensor data to determine the posture information of the user;
  • the sending module 702 is further configured to send the posture information of the user to the scene analysis service module 701.
• the scene analysis service module 701 is specifically configured to determine that the posture information matches the predefined posture if a preset posture matching the posture information is found among the predefined postures;
  • the predefined posture includes at least one preset posture.
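A minimal sketch of this posture query (the angle-based posture representation and the tolerance are assumptions; the embodiment does not specify how postures are encoded):

```python
# Hypothetical preset postures, encoded as (yaw, pitch) in degrees.
PREDEFINED_POSTURES = {
    "looking_down": (0.0, -60.0),
    "looking_ahead": (0.0, 0.0),
}

def match_posture(posture, tolerance=15.0):
    """Return the name of the preset posture closest to the measured one,
    or None if no preset posture lies within the tolerance."""
    yaw, pitch = posture
    best, best_dist = None, tolerance
    for name, (p_yaw, p_pitch) in PREDEFINED_POSTURES.items():
        dist = max(abs(yaw - p_yaw), abs(pitch - p_pitch))
        if dist <= best_dist:
            best, best_dist = name, dist
    return best
```

A None result corresponds to the "no match" branch, in which case scene recognition continues with the other monitoring paths.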
• the scene analysis service module 701 is specifically configured to obtain foreground application events; search for the foreground application events in a predefined foreground application event list; and if a foreground application event matches the predefined foreground application event list, perform application scenario identification according to the foreground application event and determine the current application scenario.
  • the head-mounted device 70 may further include a foreground application monitoring service module 705;
  • the foreground application monitoring service module 705 is configured to obtain the foreground application event
  • the sending module 702 is further configured to send the foreground application event to the scene analysis service module 701.
• the scene analysis service module 701 is specifically configured to determine that the foreground application event matches the predefined foreground application event list if a preset foreground application event matching the foreground application event is found in the predefined foreground application event list; wherein the predefined foreground application event list includes at least one preset foreground application event.
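The event-list lookup can be sketched as a simple membership test (the event names are hypothetical):

```python
# Hypothetical predefined foreground application event list.
PREDEFINED_FOREGROUND_APP_EVENTS = {
    "video_call_accepted",
    "navigation_started",
    "text_message_opened",
}

def recognize_scene_from_app_event(event):
    """If the foreground application event matches the predefined list,
    identify the application scenario from it; otherwise report no match."""
    if event in PREDEFINED_FOREGROUND_APP_EVENTS:
        return f"scene:{event}"
    return None
```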
  • the scene analysis service module 701 is specifically configured to acquire external sensor events; and perform application scene recognition according to the external sensor events, and determine the current application scene.
  • the head-mounted device 70 may further include a communication module 706 and an event receiver 707;
  • the communication module 706 is configured to establish a communication connection between the external sensor device and the event receiver 707;
  • the event receiver 707 is configured to receive the external sensor event sent by the external sensor device.
  • the scenario analysis service module 701 is further configured to obtain the application scenario of the mobile device; and determine the application scenario of the mobile device as the current application scenario.
  • the communication module 706 is further configured to establish a communication connection between the mobile device and the event receiver 707;
  • the event receiver 707 is further configured to receive the application scenario of the mobile device sent by the mobile device.
  • the scene analysis service module 701 is further configured to compare the current application scene with a predefined scene; and if the current application scene matches the predefined scene, determine the current The display control commands corresponding to the application scenarios.
  • the predefined scene includes at least one of the following: a driving scene, a walking scene, a riding scene, a standing scene, a sitting scene, a holding mobile device scene, a lowering mobile device scene, and a geographic location scene;
  • the display control command includes at least one of the following: a display opening command, a display closing command, a partial display opening command, a partial display closing command, a display area size adjustment command, and a display element arrangement adjustment command.
  • a "module” may be a part of a circuit, a part of a processor, a part of a program, or software, etc., of course, may also be a unit, or may also be non-modular.
  • the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be realized in the form of hardware or software function module.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer readable storage medium.
• the technical solution of this embodiment, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product.
• the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment.
• the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or other media that can store program codes.
• the embodiments of the present application provide a computer storage medium, which is applied to the head-mounted device 70; the computer storage medium stores a computer program that, when executed by a processor, implements the method described in any one of the foregoing embodiments.
  • FIG. 9 shows a schematic diagram of the specific hardware structure of the head-mounted device 70 provided by the embodiment of the present application.
  • it may include: a first communication interface 801, a first memory 802, and a first processor 803; various components are coupled together through a first bus system 804.
  • the first bus system 804 is used to implement connection and communication between these components.
  • the first bus system 804 also includes a power bus, a control bus, and a status signal bus.
• for clarity, the various buses are labeled as the first bus system 804 in FIG. 9.
  • the first communication interface 801 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
  • the first memory 802 is configured to store a computer program that can run on the first processor 803;
• the first processor 803 is configured to, when running the computer program, identify the current application scenario and control the display state of the head-mounted device according to the display control command corresponding to the current application scenario.
  • the first memory 802 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
• the non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
• by way of example but not limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct Rambus random access memory (Direct Rambus RAM, DRRAM).
  • the first processor 803 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the first processor 803 or instructions in the form of software.
• the aforementioned first processor 803 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
• the software module can be located in a mature storage medium in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the first memory 802, and the first processor 803 reads the information in the first memory 802, and completes the steps of the foregoing method in combination with its hardware.
  • the embodiments described in this application can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
• the processing unit can be implemented in one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or a combination thereof.
• the technology described in this application can be implemented through modules (for example, procedures, functions, and so on) that perform the functions described in this application. The software codes can be stored in the memory and executed by the processor.
  • the first processor 803 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
  • This embodiment provides a head-mounted device, which may include a scene analysis service module, a sending module, and a display module.
• in this way, the embodiments of the present application make it possible to develop a head-mounted device product with a better user experience and make the head-mounted device product more intelligent. This allows the head-mounted device to automatically change its UI according to user behavior, reducing the user's manual adjustment of the UI; at the same time, it also improves the interaction between the head-mounted device and the user and the user-friendliness of the user interface.
  • FIG. 10 shows a schematic diagram of the composition structure of a mobile device 90 provided by an embodiment of the present application.
  • the mobile device 90 may include: a scene analysis service module 901 and a sending module 902; among them,
  • the scenario analysis service module 901 is configured to identify the application scenario of the mobile device
• the sending module 902 is configured to send the application scenario of the mobile device to the head-mounted device; wherein the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
• the scene analysis service module 901 is specifically configured to obtain posture information of the user; compare the posture information with a predefined posture; and if the posture information matches the predefined posture, perform application scenario identification according to the posture information and determine the application scenario of the mobile device.
  • the mobile device 90 may further include a user status monitoring service module 903;
  • the user status monitoring service module 903 is configured to obtain user sensor data; and analyze the user sensor data to determine the posture information of the user;
  • the sending module 902 is further configured to send the posture information of the user to the scene analysis service module 901.
• the scene analysis service module 901 is specifically configured to determine that the posture information matches the predefined posture if a preset posture matching the posture information is found among the predefined postures;
  • the predefined posture includes at least one preset posture.
• the scene analysis service module 901 is specifically configured to obtain foreground application events; search for the foreground application events in a predefined foreground application event list; and if a foreground application event matches the predefined foreground application event list, perform application scenario identification according to the foreground application event and determine the application scenario of the mobile device.
  • the mobile device 90 may also include a foreground application monitoring service module 904;
  • the foreground application monitoring service module 904 is configured to obtain the foreground application event
  • the sending module 902 is further configured to send the foreground application event to the scene analysis service module 901.
• the scene analysis service module 901 is specifically configured to determine that the foreground application event matches the predefined foreground application event list if a preset foreground application event matching the foreground application event is found in the predefined foreground application event list; wherein the predefined foreground application event list includes at least one preset foreground application event.
  • the scene analysis service module 901 is specifically configured to acquire external sensor events; and perform application scene recognition according to the external sensor events, and determine the application scene of the mobile device.
  • the mobile device 90 may further include a communication module 905 and an event receiver 906;
  • the communication module 905 is configured to establish a communication connection between the external sensor device and the event receiver 906;
  • the event receiver 906 is configured to receive the external sensor event sent by the external sensor device.
  • a "unit” may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course, it may also be a module, or it may also be non-modular.
  • the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be realized in the form of hardware or software function module.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer readable storage medium.
  • this embodiment provides a computer storage medium applied to the mobile device 90.
• the computer storage medium stores a computer program that, when executed by the second processor, implements the method described in any one of the foregoing embodiments.
  • FIG. 11 shows a schematic diagram of a specific hardware structure of the mobile device 90 provided by an embodiment of the present application.
  • it may include: a second communication interface 1001, a second memory 1002, and a second processor 1003; various components are coupled together through a second bus system 1004.
  • the second bus system 1004 is used to implement connection and communication between these components.
  • the second bus system 1004 also includes a power bus, a control bus, and a status signal bus.
• for clarity, the various buses are labeled as the second bus system 1004 in FIG. 11.
  • the second communication interface 1001 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
  • the second memory 1002 is configured to store a computer program that can run on the second processor 1003;
• the second processor 1003 is configured to, when running the computer program, identify the application scenario of the mobile device and send the application scenario of the mobile device to the head-mounted device; wherein the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
  • the second processor 1003 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
  • This embodiment provides a mobile device, which may include a scene analysis service module and a sending module.
• after receiving the application scenario of the mobile device, the head-mounted device can determine the display control command corresponding to that application scenario, so that in different application scenarios the head-mounted device can automatically change its UI according to user behavior, reducing the workload of the user manually adjusting the UI; at the same time, this also improves the interaction effect between the head-mounted device and the user and the user-friendliness of the user interface.
• FIG. 12 shows a schematic structural diagram of a vision enhancement system provided by an embodiment of the present application.
  • the visual enhancement system may include a head-mounted device 1101, a mobile device 1102, and an external sensor device 1103.
  • the head-mounted device 1101 is the head-mounted device described in any one of the preceding embodiments
  • the mobile device 1102 is the mobile device described in any one of the preceding embodiments
• the external sensor device 1103 is the external sensor device described in any one of the preceding embodiments.
• the head-mounted device 1101 may include a user status monitoring service module 1, a scene analysis service module 2, an event receiver 3, a foreground application monitoring service module 4, and a display module 5 (that is, the modules filled with white in FIG. 12);
• the external sensor device 1103 may include a sensor monitoring service module 6 (that is, the module filled with black in FIG. 12);
  • the mobile device 1102 may include a user status monitoring service module 7, a scene analysis service module 8, and a foreground application monitoring service module 9 (That is, the module filled with gray in Figure 12).
  • the user status monitoring service module 1 monitors related sensor data and uses signal processing technology and/or machine learning technology to detect certain events (such as the user's posture information) from the sensor data. Then, the service module 1 sends the event to the scene analysis service module 2 on the head-mounted device 1101.
  • the event receiver 3 can receive external sensor events from the sensor monitoring service module 6 in the external sensor device 1103; that is, the sensor monitoring service module 6 runs on the external sensor device 1103 (such as an IoT device) to detect external sensor events.
  • the event receiver 3 may also receive user state events from the scene analysis service module 8 on the mobile device 1102, and then the event receiver 3 forwards the event to the scene analysis service module 2 on the head-mounted device 1101.
  • the foreground application monitoring service module 4 monitors the user's active applications running on the head-mounted device 1101.
  • for example, when the user starts or stops a particular application, the service module sends the foreground application event (also called a foreground task event) to the scene analysis service module 2.
  • the scene analysis service module 2 collects sensor data from internal sensors of the head-mounted device 1101, collects sensor events from external sensor devices, and collects foreground application events. The module then analyzes the data, determines a suitable display control command, and sends that command to the display module 5.
  • the display control commands are predefined for different scenarios.
  • the display control commands include, but are not limited to, turning the display on/off, turning part of the display on/off, controlling brightness, adjusting the display size, rearranging UI elements, and so on.
  • the user status monitoring service module 7 monitors real-time sensor data on the mobile device and detects events from the sensor data.
  • the user status monitoring service module 7 sends the sensor event to the scene analysis service module 8 on the mobile device; for example, an event is generated when the mobile device changes to landscape mode or portrait mode.
  • the foreground application monitoring service module 9 monitors the user's active applications running on the mobile device. For example, when the user starts or stops a specific application, the service sends the foreground application event to the scene analysis service module 8.
  • the mobile device 1102 may also include an event receiver (not shown in the figure), and the event receiver may also be used to receive external sensor events from the sensor monitoring service module 6 in the external sensor device 1103.
  • the scene analysis service module 8 collects sensor data from internal sensors of the mobile device, and collects foreground application events from the foreground application monitoring service module 9. Then, the scene analysis service module 8 analyzes the data, determines the usage scene on the mobile device, and sends the scene event of the mobile device to the event receiver 3 on the head-mounted device 1101 through a data cable or WiFi or other wireless communication signals.
  • the application scenarios that can be detected by the vision enhancement system of the embodiment of the present application include, but are not limited to, driving, walking, cycling, standing, sitting, holding a mobile device, putting down a mobile device, geographic location, and so on.
  • the system response includes but is not limited to changing the display, UI layout, UI depth, brightness, power consumption, etc. in AR.
  • the embodiments of the present application significantly improve the user experience of the head-mounted device. More specifically, they enable the head-mounted device to automatically change its UI according to user behavior, reducing the workload of manually adjusting the UI. In this way, head-mounted device products with a better user experience can be developed, making such products more intelligent.
  • the current application scenario is identified, and the display state of the head-mounted device is controlled based on the display control command corresponding to the current application scenario.
  • in this way, by determining the application scenario of the head-mounted device and applying the display control command corresponding to that scenario, the head-mounted device can automatically change the display state of the user interface according to user behavior or usage state in different application scenarios, which reduces the workload of manual adjustment by the user; at the same time, the interaction between the head-mounted device and the user is improved and the user interface becomes more user-friendly.
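The flow described above — a scenario is identified, the predefined display control command for that scenario is looked up, and the command is sent to the display module — can be sketched as a simple table lookup. All scenario names and command fields below are illustrative assumptions, not values taken from this application:

```python
# Minimal sketch of a scene analysis service mapping a detected
# application scenario to a predefined display control command.
# Scenario names and command fields are illustrative assumptions.
PREDEFINED_COMMANDS = {
    "driving": {"display": "off"},
    "walking": {"display": "partial", "brightness": "low"},
    "sitting": {"display": "on", "brightness": "normal"},
}

def determine_display_control_command(scenario):
    """Look up the display control command predefined for a scenario;
    leave the display unchanged for scenarios with no predefined entry."""
    return PREDEFINED_COMMANDS.get(scenario, {"display": "unchanged"})

print(determine_display_control_command("walking"))
```

In a real system the returned command would be forwarded to the display module, which applies it to the UI.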

Abstract

Disclosed are a display control method and device for a head-mounted device, and a computer storage medium, applied to a head-mounted device. The method comprises: identifying a current application scenario; and controlling a display state of the head-mounted device on the basis of a display control command corresponding to the current application scenario. Thus, by determining the application scenario of the head-mounted device and applying the display control command corresponding to that scenario, the head-mounted device can automatically change the display state of the user interface according to user behavior or usage state under different application scenarios, so that the workload of manual adjustment by the user is reduced. Moreover, the interaction between the head-mounted device and the user is improved, and the user interface becomes more user-friendly.

Description

Display Control Method and Device for Head-Mounted Device, and Computer Storage Medium
Cross-Reference to Related Applications
This application claims priority to prior U.S. provisional patent application No. 62/980,915, entitled "Method for Controlling Head-Mounted Display" and filed on February 24, 2020, the entire content of which is incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of vision enhancement technology, and in particular to a display control method of a head-mounted device, a device, and a computer storage medium.
Background
In recent years, with the development of visual enhancement technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), a virtual three-dimensional world can be simulated by a computer system, allowing users to interact with virtual scenes and giving them an immersive experience.
A visual enhancement system includes at least a head-mounted device and a mobile device, which can be connected through wired or wireless communication. At present, although head-mounted devices also define a set of user interfaces, these user interfaces cannot adapt to different usage scenarios, resulting in an unfriendly user interface display that is not conducive to user operations.
Summary
The embodiments of the present application provide a display control method, device, and computer storage medium for a head-mounted device, which can improve the interaction between the head-mounted device and the user and make the user interface more user-friendly.
The technical solutions of the embodiments of the present application can be implemented as follows:
In a first aspect, an embodiment of the present application provides a display control method of a head-mounted device, applied to a head-mounted device, the method including:
identifying a current application scenario;
controlling a display state of the head-mounted device based on a display control command corresponding to the current application scenario.
In a second aspect, an embodiment of the present application provides a display control method of a head-mounted device, applied to a mobile device, the method including:
identifying an application scenario of the mobile device;
sending the application scenario of the mobile device to a head-mounted device, where the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control a display state of the head-mounted device according to the display control command.
In a third aspect, an embodiment of the present application provides a head-mounted device, which includes a scene analysis service module, a sending module, and a display module; wherein,
the scene analysis service module is configured to identify a current application scenario;
the sending module is configured to send the display control command corresponding to the current application scenario from the scene analysis service module to the display module;
the display module is configured to control a display state according to the display control command.
In a fourth aspect, an embodiment of the present application provides a head-mounted device, which includes a first memory and a first processor; wherein,
the first memory is configured to store a computer program capable of running on the first processor;
the first processor is configured to execute the method described in the first aspect when running the computer program.
In a fifth aspect, an embodiment of the present application provides a mobile device, which includes a scene analysis service module and a sending module; wherein,
the scene analysis service module is configured to identify an application scenario of the mobile device;
the sending module is configured to send the application scenario of the mobile device to a head-mounted device, where the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control a display state of the head-mounted device according to the display control command.
In a sixth aspect, an embodiment of the present application provides a mobile device, which includes a second memory and a second processor; wherein,
the second memory is configured to store a computer program capable of running on the second processor;
the second processor is configured to execute the method described in the second aspect when running the computer program.
In a seventh aspect, an embodiment of the present application provides a computer storage medium storing a computer program that, when executed by a first processor, implements the method described in the first aspect, or, when executed by a second processor, implements the method described in the second aspect.
The embodiments of the present application provide a display control method, device, and computer storage medium for a head-mounted device, applied to the head-mounted device, in which the current application scenario is identified and the display state of the head-mounted device is controlled based on the display control command corresponding to the current application scenario. In this way, by determining the application scenario of the head-mounted device and applying the display control command corresponding to that scenario, the head-mounted device can automatically change the display state of the user interface according to user behavior or usage state in different application scenarios, reducing the workload of manual adjustment by the user; at the same time, the interaction between the head-mounted device and the user is improved, making the user interface more user-friendly.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the composition of a vision enhancement system provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a display control method of a head-mounted device provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of another display control method of a head-mounted device provided by an embodiment of the present application;
FIG. 4 is a detailed schematic flowchart of a display control method of a head-mounted device provided by an embodiment of the present application;
FIG. 5 is a detailed schematic flowchart of another display control method of a head-mounted device provided by an embodiment of the present application;
FIG. 6 is a detailed schematic flowchart of yet another display control method of a head-mounted device provided by an embodiment of the present application;
FIG. 7 is a detailed schematic flowchart of still another display control method of a head-mounted device provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of the composition structure of a head-mounted device provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a specific hardware structure of a head-mounted device provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of the composition structure of a mobile device provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a specific hardware structure of a mobile device provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of the architecture of a vision enhancement system provided by an embodiment of the present application.
Detailed Description
In order to provide a more detailed understanding of the characteristics and technical content of the embodiments of the present application, the implementation of the embodiments of the present application is described in detail below with reference to the accompanying drawings, which are for reference and explanation only and are not intended to limit the embodiments of the present application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terminology used herein is only for the purpose of describing the embodiments of the application and is not intended to limit the application.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. It should also be pointed out that the terms "first", "second", and "third" in the embodiments of the present application merely distinguish similar objects and do not represent a specific ordering of the objects; it should be understood that "first", "second", and "third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
Referring to FIG. 1, which shows a schematic diagram of the composition of a vision enhancement system provided by an embodiment of the present application, the visual enhancement system 10 may include a head-mounted device 110 and a mobile device 120, where the head-mounted device 110 and the mobile device 120 are connected through wired or wireless communication.
Here, the head-mounted device 110 may specifically refer to a monocular or binocular head-mounted display (Head-Mounted Display, HMD), such as AR glasses. In FIG. 1, the head-mounted device 110 may include one or more display modules 111 placed near one or both of the user's eyes. Through the display module 111 of the head-mounted device 110, the content displayed therein can be presented in front of the user's eyes, and the displayed content can fill or partially fill the user's field of vision. It should also be noted that the display module 111 may refer to one or more organic light-emitting diode (OLED) modules, liquid crystal display (LCD) modules, laser display modules, and the like.
In addition, in some embodiments, the head-mounted device 110 may also include one or more sensors and one or more cameras. For example, the head-mounted device 110 may include one or more sensors such as an inertial measurement unit (IMU), an accelerometer, a gyroscope, a proximity sensor, and a depth camera.
The mobile device 120 may be wirelessly connected to the head-mounted device 110 according to one or more wireless communication protocols (for example, Bluetooth, WiFi, etc.). Alternatively, the mobile device 120 may be connected to the head-mounted device 110 by wire via a data cable according to one or more data transmission protocols, such as Universal Serial Bus (USB). Here, the mobile device 120 may be implemented in various forms. For example, the mobile devices described in the embodiments of the present application may include smartphones, tablet computers, notebook computers, laptop computers, palmtop computers, personal digital assistants (PDA), smart watches, and so on.
In some embodiments, a user operating the mobile device 120 can control operations at the head-mounted device 110 via the mobile device 120. In addition, the data collected by the sensors in the head-mounted device 110 may also be sent back to the mobile device 120 for further processing or storage.
It should also be noted that current HMD solutions usually define a fixed set of user interfaces (UI), but these user interfaces cannot adapt to different usage scenarios. In contrast, adaptive system behavior has been widely adopted by other devices and applications. Some devices sense the environment and adjust their functions; for example, noise-canceling headphones detect the voices of people in the environment and adjust the level of noise reduction. Some applications sense the user's behavior or posture and adjust their functions accordingly; for example, a fitness application on a smart watch starts tracking the user's heartbeat when it detects that the user is exercising. Some systems sense both the environment and user behavior; for example, a car display system may detect the movement of the car and the Bluetooth connection of the user's phone, and in that case display a specific set of UI layouts and/or components. In other words, although head-mounted devices also define a set of user interfaces, these user interfaces cannot adapt to different usage scenarios, resulting in an unfriendly user interface display that is not conducive to user operations.
An embodiment of the present application provides a display control method of a head-mounted device, applied to the head-mounted device. The basic idea of the method is: identifying a current application scenario; and controlling a display state of the head-mounted device based on a display control command corresponding to the current application scenario.
An embodiment of the present application also provides a display control method of a head-mounted device, applied to a mobile device. The basic idea of the method is: identifying an application scenario of the mobile device; and sending the application scenario of the mobile device to the head-mounted device, where the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
In this way, by determining the application scenario of the head-mounted device and applying the display control instruction corresponding to that scenario, the head-mounted device can automatically change the display state of the user interface according to user behavior or usage state in different application scenarios, reducing the workload of manual adjustment by the user; at the same time, the interaction between the head-mounted device and the user is improved, making the user interface more user-friendly.
The embodiments of the present application are described in detail below with reference to the accompanying drawings.
In an embodiment of the present application, refer to FIG. 2, which shows a schematic flowchart of a display control method of a head-mounted device provided by an embodiment of the present application. As shown in FIG. 2, the method may include:
S101: Identify the current application scenario.
It should be noted that the method of the embodiment of the present application is applied to a head-mounted device, and the head-mounted device includes a display module. A user interface is usually defined on the display module, but at present these user interfaces cannot adapt to different application scenarios. The embodiments of the present application mainly provide a way to control the user interface according to related factors such as user behavior or usage state; more specifically, the method provided in the embodiments of the present application enables the user interface to adapt to different application scenarios of the head-mounted device.
In the embodiment of the present application, the head-mounted device includes a scene analysis service module, and the application scenario of the head-mounted device can be identified in a variety of ways. For example, the application scenario may be identified according to the user's posture information, according to a foreground application event, or even according to an external event (such as an external sensor event or an application scenario of the mobile device), which is not limited here.
In a possible implementation, identifying the current application scenario may include:
obtaining the user's posture information;
comparing the posture information with predefined postures;
if the posture information matches a predefined posture, performing application scenario recognition according to the posture information to determine the current application scenario.
It should be noted that, after the posture information is compared with the predefined postures, if the posture information matches a predefined posture, the current application scenario can be determined according to the matched posture information; if the posture information does not match any predefined posture, the method returns to the step of obtaining the user's posture information.
It should also be noted that the predefined postures may include at least one preset posture. As for the posture information matching the predefined postures, the method may further include: if a preset posture matching the posture information is found among the predefined postures, determining that the posture information matches the predefined postures.
In other words, a query can be made among the predefined postures; if a preset posture matching the posture information can be found, the posture information matches the predefined postures, and the current application scenario can then be determined according to the posture information (or the matched preset posture).
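The matching step above can be pictured as comparing measured posture information against each predefined posture and succeeding only when a match is found. Representing a posture as a (pitch, roll) head orientation, and the template values and tolerance below, are illustrative assumptions, not details from this application:

```python
import math

# Predefined postures as (pitch, roll) head-orientation templates in
# degrees; names and values are illustrative assumptions.
PREDEFINED_POSTURES = {
    "looking_ahead": (0.0, 0.0),
    "looking_down": (-45.0, 0.0),
    "lying_down": (0.0, 90.0),
}

def match_posture(pitch, roll, tolerance=15.0):
    """Return the predefined posture closest to the measured orientation,
    or None if nothing lies within the tolerance (the caller would then
    re-acquire the user's posture information)."""
    best, best_dist = None, tolerance
    for name, (p, r) in PREDEFINED_POSTURES.items():
        dist = math.hypot(pitch - p, roll - r)
        if dist <= best_dist:
            best, best_dist = name, dist
    return best

print(match_posture(-40.0, 5.0))  # looking_down
```

A matched posture name would then feed the scenario-recognition step described above.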
In addition, the head-mounted device may also include a user status monitoring service module for obtaining user sensor data to determine the user's posture information. Therefore, in some embodiments, obtaining the user's posture information may include:
obtaining user sensor data through the user status monitoring service module of the head-mounted device;
analyzing the user sensor data to determine the user's posture information.
In other words, after the user sensor data is obtained through the user status monitoring service module of the head-mounted device, signal processing technology and/or machine learning technology can be used to determine the user's posture information from the user sensor data.
Specifically, the machine learning technology may be Support Vector Machine (SVM) technology, Artificial Neural Network (ANN) technology, Deep Learning (DL) technology, etc., which is not limited here.
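As one possible illustration of this analysis step, a toy feature extractor and classifier over a window of accelerometer magnitudes is sketched below. In practice the trained SVM/ANN/deep-learning model named above would replace the hand-written rule; the feature choice and the 0.5 threshold are assumptions for illustration only:

```python
import statistics

def extract_features(accel_magnitudes):
    """Turn a window of accelerometer magnitudes (m/s^2) into a
    (mean, standard deviation) feature vector."""
    return (statistics.mean(accel_magnitudes),
            statistics.pstdev(accel_magnitudes))

def classify_posture(features):
    """Toy stand-in for a trained classifier: low variance around 1 g
    suggests the user is still, higher variance suggests walking.
    The 0.5 threshold is an illustrative assumption."""
    _mean, std = features
    return "still" if std < 0.5 else "walking"

window = [9.7, 9.9, 9.8, 9.8, 9.7]  # roughly constant ~1 g
print(classify_posture(extract_features(window)))  # still
```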
In another possible implementation, identifying the current application scenario may include:
obtaining a foreground application event;
searching for the foreground application event in a predefined foreground application event list;
if the foreground application event matches the predefined foreground application event list, performing application scenario recognition according to the foreground application event to determine the current application scenario.
It should be noted that, for the foreground application event, if it matches the predefined foreground application event list, the current application scenario can be determined according to the matched foreground application event; if the foreground application event does not match the predefined foreground application event list, the method returns to the step of obtaining a foreground application event.
It should also be noted that the predefined foreground application event list may include at least one preset foreground application event. As for the foreground application event matching the predefined foreground application event list, the method may further include: if a preset foreground application event matching the foreground application event is found in the preset foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
In other words, a search can be performed in the predefined foreground application event list; if a preset foreground application event matching the foreground application event can be found, the foreground application event matches the predefined foreground application event list, and the current application scenario can then be determined according to the foreground application event (or the matched preset foreground application event).
In addition, the head-mounted device may also include a foreground application monitoring service module for obtaining foreground application events. Therefore, in some embodiments, obtaining the foreground application event may include: obtaining the foreground application event through the foreground application monitoring service module of the head-mounted device.
It should be noted that the foreground application event may be an active application of the user running on the head-mounted device. In other words, the foreground application monitoring service module can monitor the user's active applications running on the head-mounted device to obtain foreground application events.
在又一种可能的实施方式中,所述识别当前的应用场景,可以包括:In another possible implementation manner, the identifying the current application scenario may include:
获取外部传感器事件;Obtain external sensor events;
根据所述外部传感器事件进行应用场景识别,确定出所述当前的应用场景。Perform application scenario recognition according to the external sensor event, and determine the current application scenario.
需要说明的是，本申请实施例还可以根据外部传感器事件来识别当前的应用场景。其中，对于外部传感器事件来说，可以是由外部传感器设备（或称为：外部扩展的传感器设备）检测得到的。It should be noted that the embodiment of the present application may also identify the current application scenario according to an external sensor event. Here, the external sensor event may be detected by an external sensor device (also referred to as an externally extended sensor device).
在一些实施例中,所述获取外部传感器事件,可以包括:In some embodiments, the acquiring external sensor events may include:
建立外部传感器设备与所述头戴式设备的事件接收器之间的通信连接;Establishing a communication connection between the external sensor device and the event receiver of the head-mounted device;
通过所述事件接收器,获取所述外部传感器设备发送的所述外部传感器事件。Obtain the external sensor event sent by the external sensor device through the event receiver.
需要说明的是，外部传感器设备包括有传感器监视服务模块，通过传感器监视服务模块可以获取外部传感器事件，然后外部传感器设备将该外部传感器事件发送给头戴式设备的事件接收器，从而在头戴式设备中，可以从事件接收器中获得该外部传感器事件。It should be noted that the external sensor device includes a sensor monitoring service module, through which the external sensor event can be obtained; the external sensor device then sends the external sensor event to the event receiver of the head-mounted device, so that in the head-mounted device, the external sensor event can be obtained from the event receiver.
还需要说明的是,外部传感器设备与头戴式设备的事件接收器之间的通信连接可以是通过数据电缆建立的有线通信连接,也可以是根据无线通信协议建立的无线通信连接。It should also be noted that the communication connection between the external sensor device and the event receiver of the head-mounted device may be a wired communication connection established through a data cable, or may be a wireless communication connection established according to a wireless communication protocol.
在本申请实施例中，无线通信协议至少可以包括下述之一：蓝牙（Bluetooth）协议、无线保真（Wireless Fidelity，WIFI）协议、红外数据（Infrared Data Association，IrDA）协议和近距离传输（Near Field Communication，NFC）协议。In the embodiment of the present application, the wireless communication protocol may include at least one of the following: the Bluetooth protocol, the Wireless Fidelity (WIFI) protocol, the Infrared Data Association (IrDA) protocol, and the Near Field Communication (NFC) protocol.
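The event receiver described above can be sketched as a small queue-based component. In this sketch the transport (data cable, Bluetooth, WIFI, IrDA or NFC) is abstracted away, and the class and method names are hypothetical rather than part of the disclosure.

```python
import queue

class EventReceiver:
    """Hypothetical event receiver on the head-mounted device. Once a
    communication connection is established, the external sensor
    device's sensor monitoring service delivers events here."""

    def __init__(self):
        self._events = queue.Queue()

    def deliver(self, sensor_event):
        # Called from the connected external sensor device.
        self._events.put(sensor_event)

    def next_event(self):
        # Called by the scene analysis side to obtain the next external
        # sensor event; blocks until one is available.
        return self._events.get()
```

An external heart-rate sensor, for instance, might call `receiver.deliver({"type": "heart_rate", "bpm": 150})`, which the scene analysis side later reads with `receiver.next_event()`.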
在又一种可能的实施方式中,所述识别当前的应用场景,可以包括:In another possible implementation manner, the identifying the current application scenario may include:
获取移动设备的应用场景;Obtain application scenarios of mobile devices;
根据所述移动设备的应用场景进行应用场景识别,确定出所述当前的应用场景。Perform application scenario identification according to the application scenario of the mobile device, and determine the current application scenario.
进一步地,在一些实施例中,所述获取移动设备的应用场景,可以包括:Further, in some embodiments, the obtaining the application scenario of the mobile device may include:
建立移动设备与所述头戴式设备的事件接收器之间的通信连接;Establishing a communication connection between the mobile device and the event receiver of the head-mounted device;
通过所述事件接收器,获取所述移动设备发送的所述移动设备的应用场景。Obtain the application scenario of the mobile device sent by the mobile device through the event receiver.
进一步地,在一些实施例中,所述根据所述移动设备的应用场景进行应用场景识别,确定出所述当前的应用场景,可以包括:Further, in some embodiments, the performing application scenario identification according to the application scenario of the mobile device to determine the current application scenario may include:
将所述移动设备的应用场景确定为所述当前的应用场景。The application scenario of the mobile device is determined as the current application scenario.
需要说明的是，在移动设备中，可以通过移动设备内部的用户状态监视服务模块来获取用户传感器数据（或者用户的姿态信息），也可以通过移动设备内部的前台应用监视服务模块来获取前台应用事件；然后通过对这些用户传感器数据或者前台应用事件进行分析，从而可以确定出移动设备的应用场景；此外，移动设备甚至还可以通过外部传感器设备获取外部传感器事件，用以确定出移动设备的应用场景。这里，需要注意的是，移动设备利用其内部的用户状态监视服务模块或前台应用监视服务模块、或者利用外部传感器设备来确定移动设备的应用场景，其实现方式与头戴式设备利用其内部的用户状态监视服务模块或前台应用监视服务模块、或者利用外部传感器设备来确定头戴式设备的应用场景类似，这里不再详述。It should be noted that, in the mobile device, user sensor data (or the user's posture information) can be obtained through the user status monitoring service module inside the mobile device, and foreground application events can be obtained through the foreground application monitoring service module inside the mobile device; the application scenario of the mobile device can then be determined by analyzing the user sensor data or the foreground application events. In addition, the mobile device can even obtain external sensor events through an external sensor device to determine its application scenario. Note that the way the mobile device determines its application scenario, using its internal user status monitoring service module or foreground application monitoring service module, or an external sensor device, is similar to the way the head-mounted device does so, and is not detailed again here.
还需要说明的是，移动设备在确定出移动设备的应用场景之后，可以将其发送给头戴式设备的事件接收器，从而在头戴式设备中，可以从事件接收器中获取到移动设备的应用场景，进而确定出头戴式设备的应用场景。由于移动设备和头戴式设备处于同一视觉增强系统中，在一种具体的示例中，可以将移动设备的应用场景直接确定为头戴式设备当前的应用场景。It should also be noted that, after determining its application scenario, the mobile device can send it to the event receiver of the head-mounted device, so that the head-mounted device can obtain the application scenario of the mobile device from the event receiver and then determine its own application scenario. Since the mobile device and the head-mounted device are in the same visual enhancement system, in a specific example, the application scenario of the mobile device can be directly determined as the current application scenario of the head-mounted device.
在本申请实施例中,移动设备与头戴式设备的事件接收器之间的通信连接可以是通过数据电缆建立的有线通信连接,也可以是根据无线通信协议建立的无线通信连接。这里,无线通信协议至少可以包括下述之一:Bluetooth协议、WIFI协议、IrDA协议和NFC协议。In the embodiment of the present application, the communication connection between the mobile device and the event receiver of the head-mounted device may be a wired communication connection established through a data cable, or may be a wireless communication connection established according to a wireless communication protocol. Here, the wireless communication protocol may include at least one of the following: Bluetooth protocol, WIFI protocol, IrDA protocol, and NFC protocol.
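Since both devices belong to the same visual enhancement system, adopting the mobile device's reported scenario as the headset's current scenario is direct. A hedged sketch follows; the function name and message format are invented, and `event_receiver` stands for any object exposing a `next_event()`-style accessor.

```python
def current_scenario_from_mobile(event_receiver):
    """Read the application-scenario message that the mobile device sent
    to the head-mounted device's event receiver, and take it directly as
    the headset's current application scenario."""
    message = event_receiver.next_event()  # e.g. {"scenario": "driving scene"}
    return message["scenario"]
```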
S102:基于所述当前的应用场景对应的显示控制命令,控制所述头戴式设备的显示状态。S102: Control the display state of the head-mounted device based on the display control command corresponding to the current application scenario.
需要说明的是,在确定出头戴式设备当前的应用场景之后,可以得到当前的应用场景对应的显示控制命令。具体地,在一些实施例中,在所述识别当前的应用场景之后,该方法还可以包括:It should be noted that after the current application scenario of the head-mounted device is determined, the display control command corresponding to the current application scenario can be obtained. Specifically, in some embodiments, after the identification of the current application scenario, the method may further include:
将所述当前的应用场景与预定义场景进行比较;Comparing the current application scenario with a predefined scenario;
若所述当前的应用场景与预定义场景相匹配,则确定出所述当前的应用场景对应的显示控制命令。If the current application scene matches the predefined scene, the display control command corresponding to the current application scene is determined.
在本申请实施例中，预定义场景至少可以包括下述其中一项：驾驶场景、步行场景、骑行场景、站立场景、坐下场景、握持移动设备场景、放下移动设备场景和地理位置场景；显示控制命令至少可以包括下述其中一项：开启显示命令、关闭显示命令、开启部分显示命令、关闭部分显示命令、调整显示区域尺寸命令和调整显示元素排列命令。In the embodiment of the present application, the predefined scenes may include at least one of the following: a driving scene, a walking scene, a cycling scene, a standing scene, a sitting scene, a holding-the-mobile-device scene, a putting-down-the-mobile-device scene, and a geographic location scene; the display control commands may include at least one of the following: an open-display command, a close-display command, an open-partial-display command, a close-partial-display command, an adjust-display-area-size command, and an adjust-display-element-arrangement command.
需要说明的是，针对头戴式设备当前的应用场景，如果当前的应用场景与预定义场景相匹配，那么这时候根据匹配场景可以确定出对应的显示控制命令，用以控制头戴式设备内部的显示模块的显示状态，比如改变显示、UI布局、重排UI元素、AR中的UI深度、亮度、功耗等；如果当前的应用场景与预定义场景不匹配，那么这时候将会返回执行识别当前的应用场景的步骤。It should be noted that, for the current application scenario of the head-mounted device, if the current application scenario matches a predefined scene, the corresponding display control command can be determined according to the matching scene, to control the display state of the display module inside the head-mounted device, such as changing the display, the UI layout, the arrangement of UI elements, the UI depth in AR, the brightness, the power consumption, etc.; if the current application scenario does not match the predefined scenes, the process returns to the step of identifying the current application scenario.
还需要说明的是，预定义场景可以包括有至少一个预设场景，每一个预设场景对应一个具有不同显示状态的显示控制命令。对于应用场景与预定义场景相匹配，该方法还可以包括：将应用场景与预定义场景比较；若在所述预定义场景中查询到与所述应用场景相匹配的预设场景，则确定所述应用场景与预定义场景相匹配。It should also be noted that the predefined scenes may include at least one preset scene, and each preset scene corresponds to a display control command for a different display state. As to the application scene matching the predefined scenes, the method may further include: comparing the application scene with the predefined scenes; if a preset scene matching the application scene is found among the predefined scenes, determining that the application scene matches the predefined scenes.
也就是说，可以在预定义场景中查询，如果能够查询到与该应用场景相匹配的预设场景，那么就可以说明该应用场景与预定义场景相匹配；这时候可以根据该应用场景（或者，相匹配的预设场景）来确定出对应的显示控制命令，用以控制头戴式设备内部的显示模块的显示状态。That is to say, a query can be performed among the predefined scenes. If a preset scene matching the application scene can be found, it indicates that the application scene matches the predefined scenes; the corresponding display control command can then be determined according to the application scene (or the matched preset scene), to control the display state of the display module inside the head-mounted device.
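The scene-to-command lookup just described might look like the following sketch. The scene and command names mirror the lists enumerated above, but this particular pairing of scenes to commands is invented for illustration only.

```python
# Hypothetical mapping from predefined scenes to display control
# commands; every preset scene corresponds to one command.
SCENE_TO_COMMAND = {
    "driving scene": "close display",
    "walking scene": "close partial display",
    "sitting scene": "open display",
    "holding mobile device scene": "close display",
}

def display_command_for(current_scene):
    """Compare the current application scene against the predefined
    scenes; if a matching preset scene is found, return its display
    control command, otherwise None (the caller re-runs scene
    identification)."""
    return SCENE_TO_COMMAND.get(current_scene)
```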
本实施例提供了一种头戴式设备的显示控制方法，应用于头戴式设备。识别当前的应用场景；基于所述当前的应用场景对应的显示控制命令，控制所述头戴式设备的显示状态。这样，通过确定头戴式设备的应用场景，然后根据该应用场景对应的显示控制指令，从而可以使得在不同的应用场景下，头戴式设备能够根据用户行为或者使用状态自动改变用户界面的显示状态，以减少用户手动调整的工作量；同时还可以改善头戴式设备与用户之间的交互效果，提高用户界面使用的友好性。This embodiment provides a display control method for a head-mounted device, applied to the head-mounted device: identify the current application scenario, and control the display state of the head-mounted device based on the display control command corresponding to the current application scenario. In this way, by determining the application scenario of the head-mounted device and then applying the display control command corresponding to that scenario, the head-mounted device can, in different application scenarios, automatically change the display state of the user interface according to the user's behavior or usage state, reducing the workload of manual adjustment by the user; at the same time, the interaction between the head-mounted device and the user is improved, making the user interface friendlier to use.
本申请的另一实施例中,参见图3,其示出了本申请实施例提供的另一种头戴式设备的显示控制方法的流程示意图。如图3所示,该方法可以包括:In another embodiment of the present application, refer to FIG. 3, which shows a schematic flowchart of another display control method for a head-mounted device provided in an embodiment of the present application. As shown in Figure 3, the method may include:
S201:识别移动设备的应用场景。S201: Identify the application scenario of the mobile device.
需要说明的是，本申请实施例的方法应用于移动设备。在本申请实施例中，移动设备内部也可以包括有场景分析服务模块，用以分析出移动设备的应用场景，然后将该应用场景发送给头戴式设备，而头戴式设备中包括有显示模块，以便在不同的应用场景下能够自适应控制显示模块的显示状态。It should be noted that the method in this embodiment of the present application is applied to a mobile device. In this embodiment, the mobile device may also include a scene analysis service module internally, used to analyze the application scenario of the mobile device and then send that application scenario to the head-mounted device; the head-mounted device includes a display module, so that the display state of the display module can be adaptively controlled in different application scenarios.
还需要说明的是，本申请实施例也可以有多种方式来识别移动设备的应用场景，例如，可以根据用户的姿态信息识别移动设备的应用场景，也可以根据前台应用事件识别移动设备的应用场景，甚至还可以根据外部事件（如外部传感器事件等）识别移动设备的应用场景，这里不作任何限定。It should also be noted that the embodiment of the present application may identify the application scenario of the mobile device in multiple ways. For example, the application scenario of the mobile device can be identified according to the user's posture information, according to foreground application events, or even according to external events (such as external sensor events), which is not limited here.
在一种可能的实施方式中,所述识别移动设备的应用场景,可以包括:In a possible implementation manner, the application scenario of identifying a mobile device may include:
获取用户的姿态信息;Obtain the user's posture information;
将所述姿态信息与预定义姿态进行比较;Comparing the posture information with a predefined posture;
若所述姿态信息与预定义姿态相匹配,则根据所述姿态信息进行应用场景识别,确定出所述移动设备的应用场景。If the posture information matches the predefined posture, application scenario recognition is performed according to the posture information, and the application scenario of the mobile device is determined.
需要说明的是，在将姿态信息与预定义姿态进行比较之后，如果姿态信息与预定义姿态相匹配，那么可以根据匹配的姿态信息，确定出移动设备的应用场景；如果姿态信息与预定义姿态不匹配，那么将会返回执行获取用户的姿态信息的步骤。It should be noted that, after the posture information is compared with the predefined postures, if the posture information matches a predefined posture, the application scenario of the mobile device can be determined according to the matched posture information; if the posture information does not match the predefined postures, the process returns to the step of obtaining the user's posture information.
还需要说明的是，在移动设备中，预定义姿态可以包括至少一个预设姿态。对于姿态信息与预定义姿态相匹配，该方法还可以包括：若在所述预定义姿态中查询到与所述姿态信息相匹配的预设姿态，则确定所述姿态信息与预定义姿态相匹配。It should also be noted that, in the mobile device, the predefined postures may include at least one preset posture. As to the posture information matching the predefined postures, the method may further include: if a preset posture matching the posture information is found among the predefined postures, determining that the posture information matches the predefined postures.
也就是说，可以在预定义姿态中查询，如果能够查询到与姿态信息相匹配的预设姿态，那么就可以说明姿态信息与预定义姿态相匹配；这时候可以根据该姿态信息（或者，相匹配的预设姿态）来确定出移动设备的应用场景。That is to say, a query can be performed among the predefined postures. If a preset posture matching the posture information can be found, it indicates that the posture information matches the predefined postures; the application scenario of the mobile device can then be determined according to the posture information (or the matched preset posture).
另外,移动设备还可以包括用户状态监视服务模块,用于获取用户传感器数据,以确定出用户的姿态信息。因此,在一些实施例中,所述获取用户的姿态信息,可以包括:In addition, the mobile device may also include a user status monitoring service module, which is used to obtain user sensor data to determine the user's posture information. Therefore, in some embodiments, the obtaining the user's posture information may include:
通过所述移动设备的用户状态监视服务模块,获取用户传感器数据;Obtain user sensor data through the user status monitoring service module of the mobile device;
对所述用户传感器数据进行分析,确定出所述用户的姿态信息。The sensor data of the user is analyzed to determine the posture information of the user.
也就是说，在通过移动设备的用户状态监视服务模块获取到用户传感器数据后，可以利用信号处理技术和/或机器学习技术从用户传感器数据中确定出用户的姿态信息。例如，机器学习技术可以是SVM技术、ANN技术、深度学习技术等，但是不作任何限定。That is to say, after the user sensor data is obtained through the user status monitoring service module of the mobile device, signal processing technology and/or machine learning technology can be used to determine the user's posture information from the user sensor data. For example, the machine learning technology may be SVM technology, ANN technology, deep learning technology, etc., but this is not limited in any way.
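As a toy stand-in for the signal-processing and machine-learning analysis mentioned above (SVM, ANN, deep learning), a coarse posture could be estimated from a window of accelerometer magnitudes with simple variance thresholds. The thresholds and labels below are invented for illustration only and are not part of the disclosure.

```python
def estimate_posture(accel_magnitudes):
    """Classify a window of accelerometer magnitude samples (m/s^2) into
    a coarse posture. A nearly constant signal suggests the user is
    sitting still; moderate variation suggests walking; large variation
    suggests more vigorous motion such as cycling."""
    n = len(accel_magnitudes)
    mean = sum(accel_magnitudes) / n
    variance = sum((m - mean) ** 2 for m in accel_magnitudes) / n
    if variance < 0.05:       # nearly static
        return "sitting"
    if variance < 5.0:        # moderate motion
        return "walking"
    return "cycling"          # vigorous motion
```

A real system would replace these hand-set thresholds with a trained classifier, as the text suggests.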
在另一种可能的实施方式中,所述识别移动设备的应用场景,可以包括:In another possible implementation manner, the application scenario of identifying a mobile device may include:
获取前台应用事件;Get the foreground application event;
将所述前台应用事件在预定义前台应用事件列表中进行搜索;Search the foreground application event in a predefined foreground application event list;
若所述前台应用事件与所述预定义前台应用事件列表相匹配,则根据所述前台应用事件进行应用场景识别,确定出所述移动设备的应用场景。If the foreground application event matches the predefined foreground application event list, application scenario identification is performed according to the foreground application event, and the application scenario of the mobile device is determined.
需要说明的是，对于前台应用事件，如果前台应用事件与预定义前台应用事件列表相匹配，那么可以根据匹配的前台应用事件，确定出移动设备的应用场景；如果前台应用事件与预定义前台应用事件列表不匹配，那么将会返回执行获取前台应用事件的步骤。It should be noted that, for a foreground application event, if the foreground application event matches the predefined foreground application event list, the application scenario of the mobile device can be determined according to the matched foreground application event; if the foreground application event does not match the predefined foreground application event list, the process returns to the step of obtaining a foreground application event.
还需要说明的是，在移动设备中，预定义前台应用事件列表可以包括至少一个预设前台应用事件。对于前台应用事件与预定义前台应用事件列表相匹配，该方法还可以包括：若在所述预设前台应用事件列表中搜索到与所述前台应用事件相匹配的预设前台应用事件，则确定所述前台应用事件与所述预定义前台应用事件列表相匹配。It should also be noted that, in the mobile device, the predefined foreground application event list may include at least one preset foreground application event. As to the foreground application event matching the predefined foreground application event list, the method may further include: if a preset foreground application event matching the foreground application event is found in the preset foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
也就是说，可以在预定义前台应用事件列表中进行搜索，如果能够搜索到与前台应用事件相匹配的预设前台应用事件，那么就可以说明前台应用事件与预定义前台应用事件列表相匹配；这时候可以根据该前台应用事件（或者，相匹配的预设前台应用事件）来确定出移动设备的应用场景。In other words, a search can be performed in the predefined foreground application event list. If a preset foreground application event matching the foreground application event can be found, it indicates that the foreground application event matches the predefined foreground application event list; the application scenario of the mobile device can then be determined according to the foreground application event (or the matched preset foreground application event).
另外,移动设备还可以包括前台应用监视服务模块,用于获取前台应用事件。因此,在一些实施例中,所述获取前台应用事件,可以包括:通过所述移动设备的前台应用监视服务模块,获取所述前台应用事件。In addition, the mobile device may also include a foreground application monitoring service module for acquiring foreground application events. Therefore, in some embodiments, the obtaining the foreground application event may include: obtaining the foreground application event through the foreground application monitoring service module of the mobile device.
需要说明的是,前台应用事件可以是移动设备上正在运行的用户的活动应用。换言之,前台应用监视服务模块可以监视正在移动设备上运行的用户的活动应用,以得到前台应用事件。It should be noted that the foreground application event may be an active application of the user running on the mobile device. In other words, the foreground application monitoring service module can monitor the user's active applications running on the mobile device to obtain foreground application events.
在又一种可能的实施方式中,所述识别移动设备的应用场景,可以包括:In another possible implementation manner, the application scenario of identifying a mobile device may include:
获取外部传感器事件;Obtain external sensor events;
根据所述外部传感器事件进行应用场景识别,确定出所述移动设备的应用场景。The application scenario identification is performed according to the external sensor event, and the application scenario of the mobile device is determined.
需要说明的是，本申请实施例还可以根据外部传感器事件来识别移动设备的应用场景。其中，对于外部传感器事件来说，可以是由外部传感器设备（或称为：外部扩展的传感器设备）检测得到的。It should be noted that the embodiment of the present application may also identify the application scenario of the mobile device according to an external sensor event. Here, the external sensor event may be detected by an external sensor device (also referred to as an externally extended sensor device).
在一些实施例中,所述获取外部传感器事件,可以包括:In some embodiments, the acquiring external sensor events may include:
建立外部传感器设备与所述移动设备的事件接收器之间的通信连接;Establishing a communication connection between the external sensor device and the event receiver of the mobile device;
通过所述事件接收器,获取所述外部传感器设备发送的所述外部传感器事件。Obtain the external sensor event sent by the external sensor device through the event receiver.
需要说明的是，外部传感器设备包括有传感器监视服务模块，通过传感器监视服务模块可以获取外部传感器事件，然后外部传感器设备将该外部传感器事件发送给移动设备的事件接收器，从而在移动设备中，可以从事件接收器中获得该外部传感器事件。It should be noted that the external sensor device includes a sensor monitoring service module, through which the external sensor event can be obtained; the external sensor device then sends the external sensor event to the event receiver of the mobile device, so that in the mobile device, the external sensor event can be obtained from the event receiver.
还需要说明的是,外部传感器设备与移动设备的事件接收器之间的通信连接可以是通过数据电缆建立的有线通信连接,也可以是根据无线通信协议建立的无线通信连接。这里,无线通信协议至少可以包括下述之一:Bluetooth协议、WIFI协议、IrDA协议和NFC协议。It should also be noted that the communication connection between the external sensor device and the event receiver of the mobile device may be a wired communication connection established through a data cable, or may be a wireless communication connection established according to a wireless communication protocol. Here, the wireless communication protocol may include at least one of the following: Bluetooth protocol, WIFI protocol, IrDA protocol, and NFC protocol.
S202:将所述移动设备的应用场景发送给头戴式设备；其中，所述移动设备的应用场景用于指示所述头戴式设备确定显示控制命令并根据所述显示控制命令对所述头戴式设备的显示状态进行控制。S202: Send the application scenario of the mobile device to the head-mounted device, where the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
需要说明的是，在确定出移动设备的应用场景之后，可以将移动设备的应用场景发送给头戴式设备，以便头戴式设备确定出显示控制命令并根据该显示控制命令控制头戴式设备的显示状态。It should be noted that, after the application scenario of the mobile device is determined, it can be sent to the head-mounted device, so that the head-mounted device can determine the display control command and control the display state of the head-mounted device according to that display control command.
在本申请实施例中，移动设备的应用场景至少可以包括下述其中一项：驾驶场景、步行场景、骑行场景、站立场景、坐下场景、握持移动设备场景、放下移动设备场景和地理位置场景；显示控制命令至少可以包括下述其中一项：开启显示命令、关闭显示命令、开启部分显示命令、关闭部分显示命令、调整显示区域尺寸命令和调整显示元素排列命令。In the embodiment of the present application, the application scenario of the mobile device may include at least one of the following: a driving scene, a walking scene, a cycling scene, a standing scene, a sitting scene, a holding-the-mobile-device scene, a putting-down-the-mobile-device scene, and a geographic location scene; the display control commands may include at least one of the following: an open-display command, a close-display command, an open-partial-display command, a close-partial-display command, an adjust-display-area-size command, and an adjust-display-element-arrangement command.
具体来讲，在确定出移动设备的应用场景之后，移动设备可以将其发送给头戴式设备的事件接收器，从而后续在头戴式设备中，可以从事件接收器中获取到移动设备的应用场景，进而确定出头戴式设备的应用场景。由于移动设备和头戴式设备处于同一视觉增强系统中，在一种具体的示例中，可以将移动设备的应用场景直接确定为头戴式设备当前的应用场景。Specifically, after determining its application scenario, the mobile device can send it to the event receiver of the head-mounted device, so that the head-mounted device can subsequently obtain the application scenario of the mobile device from the event receiver and then determine its own application scenario. Since the mobile device and the head-mounted device are in the same visual enhancement system, in a specific example, the application scenario of the mobile device can be directly determined as the current application scenario of the head-mounted device.
这样，头戴设备能够根据移动设备的应用场景确定出对应的显示控制命令，用以控制头戴式设备内部的显示模块的显示状态，比如改变显示、UI布局、重排UI元素、AR中的UI深度、亮度、功耗等。In this way, the head-mounted device can determine the corresponding display control command according to the application scenario of the mobile device, to control the display state of the display module inside the head-mounted device, such as changing the display, the UI layout, the arrangement of UI elements, the UI depth in AR, the brightness, the power consumption, etc.
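On the mobile-device side, S201/S202 reduce to identifying a scenario and forwarding it. A hedged sketch follows; the function name and message format are invented, and `headset_receiver` stands for any object exposing a `deliver()` method over whatever transport (cable, Bluetooth, WIFI, etc.) is in use.

```python
def report_scenario_to_headset(mobile_scenario, headset_receiver):
    """S202 sketch: after the mobile device's scene analysis service has
    identified the mobile device's application scenario (S201), send it
    to the head-mounted device's event receiver, so that the headset can
    derive and apply the corresponding display control command."""
    headset_receiver.deliver({
        "source": "mobile_device",
        "scenario": mobile_scenario,
    })
```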
本实施例提供了一种头戴式设备的显示控制方法，应用于移动设备。识别移动设备的应用场景；将所述移动设备的应用场景发送给头戴式设备；其中，所述移动设备的应用场景用于指示所述头戴式设备确定显示控制命令并根据所述显示控制命令对所述头戴式设备的显示状态进行控制。这样，在头戴式设备接收到移动设备的应用场景后，可以根据该应用场景对应的显示控制指令，从而使得在不同的应用场景下，头戴式设备能够根据用户行为或者使用状态自动改变用户界面的显示状态，以减少用户手动调整的工作量；同时还可以改善头戴式设备与用户之间的交互效果，提高用户界面使用的友好性。This embodiment provides a display control method for a head-mounted device, applied to a mobile device: identify the application scenario of the mobile device, and send the application scenario of the mobile device to the head-mounted device, where the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command. In this way, after the head-mounted device receives the application scenario of the mobile device, it can apply the display control command corresponding to that scenario, so that in different application scenarios the head-mounted device can automatically change the display state of the user interface according to the user's behavior or usage state, reducing the workload of manual adjustment by the user; at the same time, the interaction between the head-mounted device and the user is improved, making the user interface friendlier to use.
本申请的又一实施例中,参见图4,其示出了本申请实施例提供的一种头戴式设备的显示控制方法的详细流程示意图。如图4所示,该方法可以包括:In another embodiment of the present application, refer to FIG. 4, which shows a detailed flowchart of a display control method for a head-mounted device provided in an embodiment of the present application. As shown in Figure 4, the method may include:
S301:监视用户的姿态信息;S301: monitor the user's posture information;
S302:与预定义姿态进行比较;S302: Compare with a predefined posture;
S303:判断是否匹配;S303: Judge whether it matches;
S304:若判断结果为是,则确定当前的应用场景,并与预定义场景进行比较;S304: If the judgment result is yes, determine the current application scenario and compare it with the predefined scenario;
需要说明的是，对于步骤S303来说，如果判断结果为是，则执行步骤S304；如果判断结果为否，则返回执行步骤S301。It should be noted that, for step S303, if the judgment result is yes, step S304 is executed; if the judgment result is no, the process returns to step S301.
还需要说明的是，头戴式设备可以包括有用户状态监视服务模块、前台应用监视服务模块和场景分析服务器，而移动设备也可以包括有用户状态监视服务模块、前台应用监视服务模块和场景分析服务器。这样，针对用户的姿态信息、前台应用事件的监视等，可以是由头戴式设备执行，也可以是由移动设备执行，这里不作任何限定。It should also be noted that the head-mounted device may include a user status monitoring service module, a foreground application monitoring service module, and a scene analysis server, and the mobile device may likewise include a user status monitoring service module, a foreground application monitoring service module, and a scene analysis server. In this way, the monitoring of the user's posture information, foreground application events, etc. can be performed either by the head-mounted device or by the mobile device, which is not limited here.
在本申请实施例中，对于步骤S301来说，可以是由用户状态监视服务模块进行监视得到的；对于步骤S302-S304来说，可以是由场景分析服务器进行分析得到的。In the embodiment of the present application, step S301 may be performed through monitoring by the user status monitoring service module, and steps S302-S304 may be performed through analysis by the scene analysis server.
S305:监视前台应用事件;S305: monitor foreground application events;
S306:在预定义前台应用事件列表中进行搜索;S306: Search in the predefined foreground application event list;
S307:判断是否匹配;S307: Judge whether it matches;
需要说明的是，对于步骤S307来说，如果判断结果为是，则执行步骤S304；如果判断结果为否，则返回执行步骤S305。It should be noted that, for step S307, if the judgment result is yes, step S304 is executed; if the judgment result is no, the process returns to step S305.
还需要说明的是,步骤S301-S303与步骤S305-S307可以并行执行,不存在执行的先后顺序,用以确定出当前的应用场景。It should also be noted that steps S301-S303 and steps S305-S307 can be executed in parallel, and there is no order of execution to determine the current application scenario.
另外，对于步骤S305来说，可以是由前台应用监视服务模块进行监视得到的；对于步骤S306-S307和S304来说，可以是由场景分析服务器进行分析得到的。In addition, step S305 may be performed through monitoring by the foreground application monitoring service module, and steps S306-S307 and S304 may be performed through analysis by the scene analysis server.
S308:获取外部传感器事件,并根据外部传感器事件执行S304;S308: Obtain external sensor events, and execute S304 according to the external sensor events;
需要说明的是,在视觉增强系统中,除包括移动设备和头戴式设备之外,还可以包括有外部传感器设备,而外部传感器设备包括有传感器监视服务模块。这里,步骤S308可以是在外部传感器设备上运行传感器监视服务模块以检测得到该外部传感器事件。It should be noted that, in addition to mobile devices and head-mounted devices, the vision enhancement system may also include external sensor devices, and the external sensor devices include a sensor monitoring service module. Here, step S308 may be to run a sensor monitoring service module on the external sensor device to detect the external sensor event.
还需要说明的是，步骤S308、步骤S301-S303、步骤S305-S307可以并行执行，不存在执行的先后顺序；而且这也表示了三种用于识别当前的应用场景的方式，在其之后，均执行步骤S304。It should also be noted that step S308, steps S301-S303, and steps S305-S307 can be executed in parallel, without any order of execution; they represent three ways of identifying the current application scenario, after each of which step S304 is executed.
S309:判断是否匹配;S309: Determine whether it matches;
S310:若判断结果为是,则基于匹配场景发送显示控制命令。S310: If the judgment result is yes, send a display control command based on the matching scene.
需要说明的是，对于步骤S309来说，如果判断结果为是，则执行步骤S310；如果判断结果为否，则返回执行步骤S308、S305和S301。It should be noted that, for step S309, if the judgment result is yes, step S310 is executed; if the judgment result is no, the process returns to steps S308, S305, and S301.
还需要说明的是,对于S309和S310而言,可以是由头戴式设备执行。头戴式设备还可以包括显示模块(也可以称为“屏幕”),这样,头戴式设备中的场景分析服务器在将显示控制命令发送给显示模块后,可以控制该显示模块的显示状态。It should also be noted that S309 and S310 may be performed by a head-mounted device. The head-mounted device may also include a display module (also referred to as a “screen”). In this way, the scene analysis server in the head-mounted device can control the display state of the display module after sending a display control command to the display module.
进一步地,在步骤S310之后,可以进行下一阶段的应用场景识别及其对应的显示控制命令,即这时候也需要返回步骤S308、S305和S301。Further, after step S310, the next stage of application scene recognition and its corresponding display control command can be performed, that is, it is also necessary to return to steps S308, S305, and S301 at this time.
另外,本申请实施例所述的匹配场景,具体可以是指与当前的应用场景相匹配的预设场景(或者说,当前的应用场景)。这里,在判断结果为是的情况下,当前的应用场景或者与其相匹配的预设场景可以简称为“匹配场景”。这时候,在头戴式设备内部,由场景分析服务器向显示模块发送显示控制命令,用以控制该显示模块的显示状态。In addition, the matching scenario described in the embodiment of the present application may specifically refer to a preset scenario (or, the current application scenario) that matches the current application scenario. Here, in the case where the judgment result is yes, the current application scene or the preset scene matching it may be referred to as "matching scene" for short. At this time, inside the head-mounted device, the scene analysis server sends a display control command to the display module to control the display state of the display module.
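One pass through the flow of Fig. 4 can be condensed as follows: the three monitoring paths feed candidate scenes into a single analysis step (S304/S309), which either emits a display control command (S310) or signals a return to monitoring. All names and data below are invented for illustration.

```python
def scene_analysis_step(candidate_scenes, predefined_scenes):
    """S304/S309/S310 sketch: `candidate_scenes` holds whatever the
    posture path (S301-S303), the foreground application path
    (S305-S307) and the external sensor path (S308) produced in this
    iteration; `predefined_scenes` maps each preset scene to its display
    control command. Return the command of the first matching scene, or
    None to go back to monitoring (S301/S305/S308)."""
    for scene in candidate_scenes:
        command = predefined_scenes.get(scene)
        if command is not None:
            return command  # S310: send the command based on the matching scene
    return None             # no match: return to monitoring
```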
In short, the embodiments of the present application relate to a method for controlling a user interface based on factors related to user behavior and usage. Specifically, the embodiments relate to a user interface that can adapt to the application scenarios of a head-mounted device under different use cases. For example, the embodiments of the present application can autonomously sense the user's state and turn off the screen of the head-mounted device, or hide some of the UI components shown on that screen.
It should be noted that the embodiments of the present application are mainly intended to improve the interaction between the head-mounted device and the user. More specifically, they enable the head-mounted device to change its UI automatically according to the user's behavior, which reduces the effort the user must spend adjusting the UI manually. For example, for a pair of AR glasses working together with a mobile device (such as a smartphone), when the system senses that the user is looking at the screen of the mobile device, the AR glasses turn off their display module, or refrain from rendering any content, so as to reduce visual interference. Thus, in one specific example, the optical see-through (Optical See Through, OST) display of the AR glasses is turned off so that the user can view the real world better. In another specific example, the system may resize and/or reorganize the UI components on the display (for example, when the user looks down, the UI may be rearranged into the upper half).
As an example, when the user is playing a video game or watching a movie on the head-mounted device, the user may control the device with a wireless controller. When the user's smartphone receives a text message or an incoming call, the user puts down the wireless controller and picks up the smartphone. The embodiments of the present application detect this series of events and then turn off the screen of the head-mounted device so that the user can view the smartphone screen more easily. The detection of these events may be based on one or more of the following: sensor data from the wireless controller (for example, whether the user is holding it), events from the smartphone (for example, the user picking up the smartphone and/or unlocking it to read the message), and sensor data from the head-mounted device (for example, whether the device is pointing downward). FIG. 4, described above, shows one possible implementation provided by an embodiment of the present application.
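One simple way to combine the three cue sources listed above is a majority vote. Both the two-out-of-three rule and the boolean cue names below are assumptions made for illustration; the application itself does not fix a particular fusion rule.

```python
# Illustrative fusion of the three event sources: wireless controller,
# smartphone, and head-mounted device. The 2-of-3 rule is an assumption.
def user_switched_to_phone(controller_held, phone_picked_up, headset_pointing_down):
    """Return True when at least two cues suggest the user is on the phone."""
    cues = (not controller_held, phone_picked_up, headset_pointing_down)
    return sum(cues) >= 2
```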
In addition, FIG. 5 shows another possible implementation provided by an embodiment of the present application, in which user behavior alone can cause the screen of the head-mounted device to turn off, for example, when the user suddenly breaks from a slow walk into a run. Referring to FIG. 5, it shows a detailed flowchart of another display control method for a head-mounted device provided by an embodiment of the present application. As shown in FIG. 5, the method may include the following steps.
S401: Monitor the user's posture information.
S402: Compare the posture information with the predefined postures.
S403: Determine whether there is a match.
S404: If the judgment result is yes, determine the current application scenario and compare it with the predefined scenes.
S405: When the current application scenario matches a predefined scene, send a display control command based on the matching scene.
It should be noted that, for step S403, if the judgment result is yes, step S404 is executed; if the judgment result is no, the method returns to step S401.
Further, after step S405, the next stage of application scenario recognition and its corresponding display control command can be carried out; that is, at this point the method also returns to step S401.
In the embodiments of the present application, the predefined scenes include at least one preset scene, and each preset scene corresponds to a display control command with a different display state. Here, when the current application scenario matches a predefined scene, the current application scenario, or the preset scene matching it, may be referred to as the "matching scene" for short. At this point, inside the head-mounted device, the scene analysis server sends the display control command to the display module to control the display state of the display module.
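Steps S401–S405 can be condensed into a small decision function. The posture names and the posture-to-scene mapping below are invented for the sketch and are not specified by the application.

```python
# Hedged sketch of the posture-driven path (S401-S405).
PREDEFINED_POSTURES = {"running", "looking_down"}
POSTURE_TO_SCENE = {"running": "running_scene", "looking_down": "reading_scene"}
PREDEFINED_SCENES = {"running_scene": "TURN_OFF_DISPLAY"}  # scene -> command

def handle_posture(posture):
    """Return a display control command, or None to loop back to S401."""
    if posture not in PREDEFINED_POSTURES:        # S402/S403: no posture match
        return None
    scene = POSTURE_TO_SCENE.get(posture)         # S404: identify the scenario
    return PREDEFINED_SCENES.get(scene)           # S405: command if scene matches
```

Note that a matched posture can still fail the scene comparison in S404, in which case the sketch likewise returns `None`.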
FIG. 6 shows yet another possible implementation provided by an embodiment of the present application, in which a special application event alone can cause the screen of the head-mounted device to turn off. For example, the smartphone receives an incoming video call, and when the user accepts the call, the screen of the head-mounted device is turned off to make the video call on the smartphone easier. Referring to FIG. 6, it shows a detailed flowchart of yet another display control method for a head-mounted device provided by an embodiment of the present application. As shown in FIG. 6, the method may include the following steps.
S501: Monitor foreground application events.
S502: Search for the event in the predefined foreground application event list.
S503: Determine whether there is a match.
S504: If the judgment result is yes, determine the current application scenario and compare it with the predefined scenes.
S505: When the current application scenario matches a predefined scene, send a display control command based on the matching scene.
It should be noted that, for step S503, if the judgment result is yes, step S504 is executed; if the judgment result is no, the method returns to step S501.
Further, after step S505, the next stage of application scenario recognition and its corresponding display control command can be carried out; that is, at this point the method also returns to step S501.
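The foreground-application path (S501–S505) mirrors the posture path above. The event names and mappings are again assumptions made for illustration only.

```python
# Hedged sketch of the foreground-application path (S501-S505).
PREDEFINED_FOREGROUND_EVENTS = {"video_call_accepted", "navigation_started"}
EVENT_TO_SCENE = {"video_call_accepted": "video_call_scene"}
PREDEFINED_SCENES = {"video_call_scene": "TURN_OFF_DISPLAY"}

def handle_foreground_event(event):
    """Return a display control command, or None to loop back to S501."""
    if event not in PREDEFINED_FOREGROUND_EVENTS:  # S502/S503: no match
        return None
    scene = EVENT_TO_SCENE.get(event)              # S504: identify the scenario
    return PREDEFINED_SCENES.get(scene)            # S505: command if scene matches
```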
FIG. 7 shows yet another possible implementation provided by an embodiment of the present application, in which a single external sensor event can cause the screen of the head-mounted device to turn off, for example, when the user approaches a smart door lock and starts typing the password on its keypad. Referring to FIG. 7, it shows a detailed flowchart of yet another display control method for a head-mounted device provided by an embodiment of the present application. As shown in FIG. 7, the method may include the following steps.
S601: Acquire an external sensor event.
S602: Compare it with the predefined scenes.
S603: Determine whether there is a match.
S604: If the judgment result is yes, that is, the current application scenario matches a predefined scene, send a display control command based on the matching scene.
It should be noted that, for step S603, if the judgment result is yes, step S604 is executed; if the judgment result is no, the method returns to step S601.
Further, after step S604, the next stage of application scenario recognition and its corresponding display control command can be carried out; that is, at this point the method also returns to step S601.
In the embodiments of the present application, the vision enhancement system may further include an external sensor device, through which external sensor events can be acquired. After an external sensor event is acquired, the current application scenario can be identified; the current application scenario is then compared with the predefined scenes. When the current application scenario matches a predefined scene, the scene analysis server in the head-mounted device can, based on the matching scene, send the display control command to the display module so as to control its display state.
In the embodiments of the present application, the predefined scenes include at least one preset scene, and each preset scene corresponds to a display control command with a different display state. Here, when the current application scenario matches a predefined scene, the current application scenario, or the preset scene matching it, may be referred to as the "matching scene" for short. At this point, inside the head-mounted device, the scene analysis server sends the display control command to the display module to control the display state of the display module.
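The external-sensor path (S601–S604) can be sketched end to end using the smart-lock example from the text; the event name, scene name, and `send_command` callable are invented for the illustration.

```python
# Hedged sketch of the external-sensor path (S601-S604), smart-lock example.
EVENT_TO_SCENE = {"smart_lock_keypad_active": "typing_on_smart_lock"}
PREDEFINED_SCENES = {"typing_on_smart_lock": "TURN_OFF_DISPLAY"}

def on_external_sensor_event(event, send_command):
    """Event receiver -> scene analysis server -> display module."""
    scene = EVENT_TO_SCENE.get(event)        # S601/S602: event implies a scene
    command = PREDEFINED_SCENES.get(scene)   # S603: match against the presets
    if command is not None:
        send_command(command)                # S604: control the display state
    return command
```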
This embodiment provides a display control method for a head-mounted device, and the specific implementations of the foregoing embodiments are described in detail through this embodiment. It can be seen from the above that the embodiments of the present application markedly improve the user experience of the head-mounted device. More specifically, the embodiments of the present application enable the head-mounted device to change its UI automatically according to user behavior, reducing the effort the user spends adjusting the UI manually. In this way, the embodiments of the present application make it possible to develop head-mounted device products with a better user experience and greater intelligence.
In yet another embodiment of the present application, based on the same inventive concept as the foregoing embodiments, refer to FIG. 8, which shows a schematic diagram of the composition of a head-mounted device 70 provided by an embodiment of the present application. As shown in FIG. 8, the head-mounted device 70 may include a scene analysis service module 701, a sending module 702, and a display module 703, wherein:
the scene analysis service module 701 is configured to identify the current application scenario;
the sending module 702 is configured to send the display control command corresponding to the current application scenario from the scene analysis service module to the display module 703; and
the display module 703 is configured to control the display state according to the display control command.
In some embodiments, the scene analysis service module 701 is specifically configured to acquire the user's posture information, compare the posture information with the predefined postures, and, if the posture information matches a predefined posture, perform application scenario recognition according to the posture information to determine the current application scenario.
In some embodiments, referring to FIG. 8, the head-mounted device 70 may further include a user status monitoring service module 704.
The user status monitoring service module 704 is configured to acquire user sensor data and to analyze the user sensor data to determine the user's posture information.
The sending module 702 is further configured to send the user's posture information to the scene analysis service module 701.
Further, the scene analysis service module 701 is specifically configured to determine that the posture information matches the predefined postures if a preset posture matching the posture information is found among the predefined postures, where the predefined postures include at least one preset posture.
In some embodiments, the scene analysis service module 701 is specifically configured to acquire a foreground application event, search for the foreground application event in the predefined foreground application event list, and, if the foreground application event matches the predefined foreground application event list, perform application scenario recognition according to the foreground application event to determine the current application scenario.
In some embodiments, referring to FIG. 8, the head-mounted device 70 may further include a foreground application monitoring service module 705.
The foreground application monitoring service module 705 is configured to acquire the foreground application event.
The sending module 702 is further configured to send the foreground application event to the scene analysis service module 701.
Further, the scene analysis service module 701 is specifically configured to determine that the foreground application event matches the predefined foreground application event list if a preset foreground application event matching the foreground application event is found in that list, where the predefined foreground application event list includes at least one preset foreground application event.
In some embodiments, the scene analysis service module 701 is specifically configured to acquire an external sensor event and to perform application scenario recognition according to the external sensor event to determine the current application scenario.
In some embodiments, referring to FIG. 8, the head-mounted device 70 may further include a communication module 706 and an event receiver 707.
The communication module 706 is configured to establish a communication connection between an external sensor device and the event receiver 707.
The event receiver 707 is configured to receive the external sensor event sent by the external sensor device.
In some embodiments, the scene analysis service module 701 is further configured to acquire the application scenario of a mobile device and to determine the application scenario of the mobile device as the current application scenario.
In some embodiments, the communication module 706 is further configured to establish a communication connection between the mobile device and the event receiver 707.
The event receiver 707 is further configured to receive the application scenario of the mobile device sent by the mobile device.
In some embodiments, the scene analysis service module 701 is further configured to compare the current application scenario with the predefined scenes and, if the current application scenario matches a predefined scene, determine the display control command corresponding to the current application scenario.
In some embodiments, the predefined scenes include at least one of the following: a driving scene, a walking scene, a riding scene, a standing scene, a sitting scene, a holding-a-mobile-device scene, a putting-down-a-mobile-device scene, and a geographic location scene.
The display control command includes at least one of the following: a command to turn on the display, a command to turn off the display, a command to turn on part of the display, a command to turn off part of the display, a command to resize the display area, and a command to rearrange the display elements.
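The scene and command vocabularies listed above can be modeled as an enumeration plus a lookup table. The particular pairing of scenes to commands below is an assumption made for the example, since the application leaves the mapping open.

```python
from enum import Enum, auto

# The six command kinds named in the text, modeled as an enumeration.
class DisplayCommand(Enum):
    TURN_ON = auto()
    TURN_OFF = auto()
    TURN_ON_PARTIAL = auto()
    TURN_OFF_PARTIAL = auto()
    RESIZE_AREA = auto()
    REARRANGE_ELEMENTS = auto()

# One possible assignment of commands to the eight predefined scenes;
# the pairing is illustrative, not specified by the application.
SCENE_COMMANDS = {
    "driving": DisplayCommand.TURN_OFF,
    "walking": DisplayCommand.TURN_OFF_PARTIAL,
    "riding": DisplayCommand.TURN_OFF,
    "standing": DisplayCommand.TURN_ON,
    "sitting": DisplayCommand.TURN_ON,
    "holding_mobile_device": DisplayCommand.TURN_OFF,
    "putting_down_mobile_device": DisplayCommand.TURN_ON,
    "geographic_location": DisplayCommand.REARRANGE_ELEMENTS,
}
```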
It can be understood that, in the embodiments of the present application, a "module" may be part of a circuit, part of a processor, part of a program, or software, and so on; it may, of course, also be a unit, and it may also be non-modular. Moreover, the components in this embodiment may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional module.
If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
Accordingly, the embodiments of the present application provide a computer storage medium applied to the head-mounted device 70. The computer storage medium stores a computer program that, when executed by a processor, implements the method described in any one of the foregoing embodiments.
Based on the composition of the head-mounted device 70 and the computer storage medium described above, refer to FIG. 9, which shows a schematic diagram of a specific hardware structure of the head-mounted device 70 provided by an embodiment of the present application. As shown in FIG. 9, it may include a first communication interface 801, a first memory 802, and a first processor 803, with the components coupled together through a first bus system 804. It can be understood that the first bus system 804 is used to implement connection and communication between these components. In addition to a data bus, the first bus system 804 also includes a power bus, a control bus, and a status signal bus. However, for clarity of description, the various buses are all labeled as the first bus system 804 in FIG. 9. Specifically:
the first communication interface 801 is used for receiving and sending signals in the course of exchanging information with other external network elements;
the first memory 802 is used for storing a computer program that can run on the first processor 803; and
the first processor 803 is used for performing the following when running the computer program:
identifying the current application scenario; and
controlling the display state of the head-mounted device based on the display control command corresponding to the current application scenario.
It can be understood that the first memory 802 in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (Static RAM, SRAM), dynamic RAM (Dynamic RAM, DRAM), synchronous dynamic RAM (Synchronous DRAM, SDRAM), double data rate synchronous dynamic RAM (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic RAM (Enhanced SDRAM, ESDRAM), synchlink dynamic RAM (Synchlink DRAM, SLDRAM), and direct Rambus RAM (Direct Rambus RAM, DRRAM). The first memory 802 of the systems and methods described in this application is intended to include, without being limited to, these and any other suitable types of memory.
The first processor 803 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the first processor 803 or by instructions in the form of software. The first processor 803 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or register. The storage medium is located in the first memory 802, and the first processor 803 reads the information in the first memory 802 and completes the steps of the foregoing method in combination with its hardware.
It can be understood that the embodiments described in this application may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or a combination thereof. For a software implementation, the techniques described in this application may be implemented through modules (for example, procedures or functions) that perform the functions described in this application. The software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or external to the processor.
Optionally, as another embodiment, the first processor 803 is further configured to execute the method described in any one of the foregoing embodiments when running the computer program.
This embodiment provides a head-mounted device, which may include a scene analysis service module, a sending module, and a display module. In this way, the embodiments of the present application make it possible to develop head-mounted device products with a better user experience and greater intelligence. The head-mounted device can thus change its UI automatically according to user behavior, reducing the effort the user spends adjusting the UI manually; at the same time, the interaction between the head-mounted device and the user is improved, and the user interface becomes friendlier to use.
In yet another embodiment of the present application, based on the same inventive concept as the foregoing embodiments, refer to FIG. 10, which shows a schematic diagram of the composition of a mobile device 90 provided by an embodiment of the present application. As shown in FIG. 10, the mobile device 90 may include a scene analysis service module 901 and a sending module 902, wherein:
the scene analysis service module 901 is configured to identify the application scenario of the mobile device; and
the sending module 902 is configured to send the application scenario of the mobile device to a head-mounted device, where the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
In some embodiments, the scene analysis service module 901 is specifically configured to acquire the user's posture information, compare the posture information with the predefined postures, and, if the posture information matches a predefined posture, perform application scenario recognition according to the posture information to determine the application scenario of the mobile device.
In some embodiments, referring to FIG. 10, the mobile device 90 may further include a user status monitoring service module 903.
The user status monitoring service module 903 is configured to acquire user sensor data and to analyze the user sensor data to determine the user's posture information.
The sending module 902 is further configured to send the user's posture information to the scene analysis service module 901.
Further, the scene analysis service module 901 is specifically configured to determine that the posture information matches the predefined postures if a preset posture matching the posture information is found among the predefined postures, where the predefined postures include at least one preset posture.
In some embodiments, the scene analysis service module 901 is specifically configured to obtain a foreground application event; search for the foreground application event in a predefined foreground application event list; and, if the foreground application event matches the predefined foreground application event list, perform application scenario recognition according to the foreground application event to determine the application scenario of the mobile device.
In some embodiments, referring to FIG. 10, the mobile device 90 may further include a foreground application monitoring service module 904;
The foreground application monitoring service module 904 is configured to obtain the foreground application event;
The sending module 902 is further configured to send the foreground application event to the scene analysis service module 901.
Further, the scene analysis service module 901 is specifically configured to determine that the foreground application event matches the predefined foreground application event list if a preset foreground application event matching the foreground application event is found in the preset foreground application event list; where the predefined foreground application event list includes at least one preset foreground application event.
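The foreground-application lookup above amounts to a membership search in the predefined event list. A minimal sketch, assuming event names and scenario labels of our own choosing (the actual predefined list and event format are implementation-defined):

```python
# Minimal sketch: the event names and scenario labels below are
# hypothetical, chosen only to illustrate the list search.
PREDEFINED_FOREGROUND_EVENTS = {
    "navigation_app_started": "driving",    # hypothetical mapping
    "video_app_started": "watching_video",  # hypothetical mapping
}

def recognize_scenario_from_app_event(event: str):
    """Search for the foreground application event in the predefined
    foreground application event list; a match yields the associated
    application scenario, otherwise None."""
    return PREDEFINED_FOREGROUND_EVENTS.get(event)
```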
In some embodiments, the scene analysis service module 901 is specifically configured to obtain an external sensor event and perform application scenario recognition according to the external sensor event to determine the application scenario of the mobile device.
In some embodiments, referring to FIG. 10, the mobile device 90 may further include a communication module 905 and an event receiver 906;
The communication module 905 is configured to establish a communication connection between an external sensor device and the event receiver 906;
The event receiver 906 is configured to receive the external sensor event sent by the external sensor device.
It can be understood that, in this embodiment, a "unit" may be part of a circuit, part of a processor, part of a program or software, and so on; it may of course also be a module, or it may be non-modular. Moreover, the components in this embodiment may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented either in the form of hardware or in the form of a software function module.
If the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, this embodiment provides a computer storage medium applied to the mobile device 90. The computer storage medium stores a computer program which, when executed by a second processor, implements the method described in any one of the foregoing embodiments.
Based on the above composition of the mobile device 90 and the computer storage medium, refer to FIG. 11, which shows a schematic diagram of a specific hardware structure of the mobile device 90 provided by an embodiment of the present application. As shown in FIG. 11, it may include a second communication interface 1001, a second memory 1002, and a second processor 1003; the components are coupled together through a second bus system 1004. It can be understood that the second bus system 1004 is used to implement connection and communication between these components. In addition to a data bus, the second bus system 1004 also includes a power bus, a control bus, and a status signal bus. However, for clarity of description, the various buses are all labeled as the second bus system 1004 in FIG. 11. Specifically:
The second communication interface 1001 is used for receiving and sending signals in the process of exchanging information with other external network elements;
The second memory 1002 is configured to store a computer program capable of running on the second processor 1003;
The second processor 1003 is configured to, when running the computer program, execute the following:
identifying an application scenario of the mobile device;
sending the application scenario of the mobile device to a head-mounted device; where the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control the display state of the head-mounted device according to the display control command.
Optionally, as another embodiment, the second processor 1003 is further configured to execute the method described in any one of the foregoing embodiments when running the computer program.
It can be understood that the hardware functions of the second memory 1002 are similar to those of the first memory 802, and the hardware functions of the second processor 1003 are similar to those of the first processor 803; details are not repeated here.
This embodiment provides a mobile device, which may include a scene analysis service module and a sending module. In this way, after the head-mounted device receives the application scenario of the mobile device, it can determine the display control command corresponding to that application scenario, so that in different application scenarios the head-mounted device can automatically change its UI according to user behavior, reducing the workload of manually adjusting the UI; at the same time, it can improve the interaction between the head-mounted device and the user and make the user interface friendlier to use.
In yet another embodiment of the present application, refer to FIG. 12, which shows a schematic structural diagram of a vision enhancement system provided by an embodiment of the present application. As shown in FIG. 12, the vision enhancement system may include a head-mounted device 1101, a mobile device 1102, and an external sensor device 1103. The head-mounted device 1101 is the head-mounted device described in any one of the foregoing embodiments, the mobile device 1102 is the mobile device described in any one of the foregoing embodiments, and the external sensor device 1103 is the external sensor device described in any one of the foregoing embodiments.
In the embodiment of the present application, the head-mounted device 1101 may include a user status monitoring service module 1, a scene analysis service module 2, an event receiver 3, a foreground application monitoring service module 4, and a display module 5 (the modules filled in white in FIG. 12); the external sensor device 1103 may include a sensor monitoring service module 6 (the module filled in black in FIG. 12); and the mobile device 1102 may include a user status monitoring service module 7, a scene analysis service module 8, and a foreground application monitoring service module 9 (the modules filled in gray in FIG. 12).
Based on the above architectural example, on the head-mounted device 1101: the user status monitoring service module 1 monitors related sensor data and uses signal processing techniques and/or machine learning techniques to detect certain events from the sensor data (such as the user's posture information). The service module 1 then sends the event to the scene analysis service module 2 on the head-mounted device 1101. In addition, the event receiver 3 may receive external sensor events from the sensor monitoring service module 6 in the external sensor device 1103; that is, the sensor monitoring service module 6 runs on the external sensor device 1103 (such as an IoT device) to detect external sensor events. The event receiver 3 may also receive user state events from the scene analysis service module 8 on the mobile device 1102 and forward them to the scene analysis service module 2 on the head-mounted device 1101.
The foreground application monitoring service module 4 monitors the user's active applications running on the head-mounted device 1101 and sends foreground application events (also called foreground task events) to the scene analysis service module 2, for example, when the user starts or stops a particular application.
The scene analysis service module 2 collects sensor data from the internal sensors of the head-mounted device 1101, sensor events from the external sensor device, and foreground application events. The module then analyzes these data and determines a suitable display control command, which is sent to the display module 5.
Display control commands are predefined for different scenarios. Here, the display control commands include, but are not limited to, turning the display on or off, turning part of the display on or off, controlling brightness, adjusting the display size, rearranging UI elements, and so on.
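The predefined scenario-to-command association described above can be sketched as a plain lookup table. The scenario names and command identifiers below are assumptions chosen only to mirror the examples in the text, not a disclosed command set:

```python
# Hedged sketch of the predefined scenario -> display-control-command
# table; all names are illustrative, not part of the disclosure.
DISPLAY_COMMANDS = {
    "driving": ["turn_off_partial_display", "adjust_brightness"],
    "walking": ["rearrange_ui_elements", "adjust_display_size"],
    "sitting": ["turn_on_display"],
}

def commands_for_scenario(scenario: str) -> list:
    """Look up the predefined display control commands for a recognized
    scenario; an unrecognized scenario yields no command, leaving the
    display state unchanged."""
    return DISPLAY_COMMANDS.get(scenario, [])
```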
On the mobile device 1102: the user status monitoring service module 7 monitors real-time sensor data on the mobile device and detects events from the sensor data, for example, the mobile device changing to landscape mode or portrait mode. The user status monitoring service module 7 sends the sensor events to the scene analysis service module 8 on the mobile device. The foreground application monitoring service module 9 monitors the user's active applications running on the mobile device; for example, when the user starts or stops a particular application, the service sends a foreground application event to the scene analysis service module 8. It should be noted that the mobile device 1102 may also include an event receiver (not shown in the figure), which may likewise be used to receive external sensor events from the sensor monitoring service module 6 in the external sensor device 1103.
Specifically, the scene analysis service module 8 collects sensor data from the internal sensors of the mobile device and foreground application events from the foreground application monitoring service module 9. The scene analysis service module 8 then analyzes these data, determines the usage scenario on the mobile device, and sends the mobile device scene event to the event receiver 3 on the head-mounted device 1101 through a data cable, WiFi, or other wireless communication signals.
The application scenarios that can be detected by the vision enhancement system of the embodiment of the present application include, but are not limited to, driving, walking, cycling, standing, sitting, holding the mobile device, putting down the mobile device, geographic location, and so on. System responses include, but are not limited to, changing the display, the UI layout, the UI depth in AR, the brightness, the power consumption, and so on.
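The mobile-side hand-off above — packaging the recognized scenario as a scene event and shipping it to the event receiver on the head-mounted device — can be sketched as follows. The JSON wire format and field names are assumptions for illustration; the transport (data cable, WiFi, or other wireless link) is abstracted behind a `send` callable:

```python
# Sketch of the mobile-side hand-off. The event schema below is
# hypothetical; the disclosure specifies only that a scene event is
# sent over a data cable, WiFi, or another wireless link.
import json
import time

def make_scene_event(scenario: str) -> bytes:
    event = {
        "type": "mobile_scene_event",  # hypothetical event-type tag
        "scenario": scenario,          # e.g. "driving", "landscape_mode"
        "timestamp": time.time(),
    }
    return json.dumps(event).encode("utf-8")

def forward_to_headset(scenario: str, send) -> None:
    """`send` is any transport callable, e.g. a socket's sendall
    over WiFi or a USB data-cable link."""
    send(make_scene_event(scenario))
```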
In this way, the embodiments of the present application significantly improve the user experience of the head-mounted device. More specifically, the embodiments of the present application enable the head-mounted device to automatically change its UI according to user behavior, reducing the workload of manually adjusting the UI. The embodiments of the present application thus make it possible to develop head-mounted device products with a better user experience, and to make such products more intelligent.
It should be noted that, in this application, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "including a..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The serial numbers of the foregoing embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
The methods disclosed in the several method embodiments provided in this application can be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments provided in this application can be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or device embodiments provided in this application can be combined arbitrarily without conflict to obtain new method embodiments or device embodiments.
The above are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in this application, and such changes or substitutions should all be covered within the protection scope of this application. Therefore, the protection scope of this application should be subject to the protection scope of the claims.
Industrial Applicability
In the embodiments of the present application, the current application scenario is identified, and the display state of the head-mounted device is controlled based on the display control command corresponding to the current application scenario. In this way, by determining the application scenario of the head-mounted device and then acting on the display control command corresponding to that application scenario, the head-mounted device can, in different application scenarios, automatically change the display state of the user interface according to the user's behavior or usage state, reducing the workload of manual adjustment by the user; at the same time, it can improve the interaction between the head-mounted device and the user and make the user interface friendlier to use.

Claims (27)

  1. A display control method of a head-mounted device, applied to a head-mounted device, the method comprising:
    identifying a current application scenario;
    controlling a display state of the head-mounted device based on a display control command corresponding to the current application scenario.
  2. The method according to claim 1, wherein the identifying a current application scenario comprises:
    obtaining posture information of a user;
    comparing the posture information with a predefined posture;
    if the posture information matches the predefined posture, performing application scenario recognition according to the posture information to determine the current application scenario.
  3. The method according to claim 2, wherein the obtaining posture information of a user comprises:
    obtaining user sensor data through a user status monitoring service module of the head-mounted device;
    analyzing the user sensor data to determine the posture information of the user.
  4. The method according to claim 2, wherein the predefined posture comprises at least one preset posture, and the method further comprises:
    if a preset posture matching the posture information is found among the predefined postures, determining that the posture information matches the predefined posture.
  5. The method according to claim 1, wherein the identifying a current application scenario comprises:
    obtaining a foreground application event;
    searching for the foreground application event in a predefined foreground application event list;
    if the foreground application event matches the predefined foreground application event list, performing application scenario recognition according to the foreground application event to determine the current application scenario.
  6. The method according to claim 5, wherein the obtaining a foreground application event comprises:
    obtaining the foreground application event through a foreground application monitoring service module of the head-mounted device.
  7. The method according to claim 5, wherein the predefined foreground application event list comprises at least one preset foreground application event, and the method further comprises:
    if a preset foreground application event matching the foreground application event is found in the preset foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
  8. The method according to claim 1, wherein the identifying a current application scenario comprises:
    obtaining an external sensor event;
    performing application scenario recognition according to the external sensor event to determine the current application scenario.
  9. The method according to claim 8, wherein the obtaining an external sensor event comprises:
    establishing a communication connection between an external sensor device and an event receiver of the head-mounted device;
    obtaining, through the event receiver, the external sensor event sent by the external sensor device.
  10. The method according to claim 1, wherein the identifying a current application scenario comprises:
    obtaining an application scenario of a mobile device;
    determining the application scenario of the mobile device as the current application scenario.
  11. The method according to claim 10, wherein the obtaining an application scenario of a mobile device comprises:
    establishing a communication connection between the mobile device and an event receiver of the head-mounted device;
    obtaining, through the event receiver, the application scenario of the mobile device sent by the mobile device.
  12. The method according to any one of claims 1 to 11, wherein, after the identifying a current application scenario, the method further comprises:
    comparing the current application scenario with a predefined scenario;
    if the current application scenario matches the predefined scenario, determining the display control command corresponding to the current application scenario.
  13. The method according to claim 12, wherein the predefined scenario comprises at least one of the following: a driving scenario, a walking scenario, a cycling scenario, a standing scenario, a sitting scenario, a holding-the-mobile-device scenario, a putting-down-the-mobile-device scenario, and a geographic location scenario;
    and the display control command comprises at least one of the following: a command to turn on the display, a command to turn off the display, a command to turn on part of the display, a command to turn off part of the display, a command to adjust the size of the display area, and a command to adjust the arrangement of display elements.
  14. A display control method of a head-mounted device, applied to a mobile device, the method comprising:
    identifying an application scenario of the mobile device;
    sending the application scenario of the mobile device to a head-mounted device; wherein the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control a display state of the head-mounted device according to the display control command.
  15. The method according to claim 14, wherein the identifying an application scenario of the mobile device comprises:
    obtaining posture information of a user;
    comparing the posture information with a predefined posture;
    if the posture information matches the predefined posture, performing application scenario recognition according to the posture information to determine the application scenario of the mobile device.
  16. The method according to claim 15, wherein the obtaining posture information of a user comprises:
    obtaining user sensor data through a user status monitoring service module of the mobile device;
    analyzing the user sensor data to determine the posture information of the user.
  17. The method according to claim 15, wherein the predefined posture comprises at least one preset posture, and the method further comprises:
    if a preset posture matching the posture information is found among the predefined postures, determining that the posture information matches the predefined posture.
  18. The method according to claim 14, wherein the identifying an application scenario of the mobile device comprises:
    obtaining a foreground application event;
    searching for the foreground application event in a predefined foreground application event list;
    if the foreground application event matches the predefined foreground application event list, performing application scenario recognition according to the foreground application event to determine the application scenario of the mobile device.
  19. The method according to claim 18, wherein the obtaining a foreground application event comprises:
    obtaining the foreground application event through a foreground application monitoring service module of the mobile device.
  20. The method according to claim 18, wherein the predefined foreground application event list comprises at least one preset foreground application event, and the method further comprises:
    if a preset foreground application event matching the foreground application event is found in the preset foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
  21. The method according to claim 14, wherein the identifying an application scenario of the mobile device comprises:
    obtaining an external sensor event;
    performing application scenario recognition according to the external sensor event to determine the application scenario of the mobile device.
  22. The method according to claim 21, wherein the obtaining an external sensor event comprises:
    establishing a communication connection between an external sensor device and an event receiver of the mobile device;
    obtaining, through the event receiver, the external sensor event sent by the external sensor device.
  23. A head-mounted device, wherein the head-mounted device comprises a scene analysis service module, a sending module, and a display module;
    the scene analysis service module is configured to identify a current application scenario;
    the sending module is configured to send the display control command corresponding to the current application scenario from the scene analysis service module to the display module;
    the display module is configured to control a display state according to the display control command.
  24. A head-mounted device, wherein the head-mounted device comprises a first memory and a first processor;
    the first memory is configured to store a computer program capable of running on the first processor;
    the first processor is configured to execute the method according to any one of claims 1 to 13 when running the computer program.
  25. A mobile device, wherein the mobile device comprises a scene analysis service module and a sending module;
    the scene analysis service module is configured to identify an application scenario of the mobile device;
    the sending module is configured to send the application scenario of the mobile device to a head-mounted device; wherein the application scenario of the mobile device is used to instruct the head-mounted device to determine a display control command and to control a display state of the head-mounted device according to the display control command.
  26. A mobile device, wherein the mobile device comprises a second memory and a second processor;
    the second memory is configured to store a computer program capable of running on the second processor;
    the second processor is configured to execute the method according to any one of claims 14 to 22 when running the computer program.
  27. A computer storage medium, wherein the computer storage medium stores a computer program which, when executed by a first processor, implements the method according to any one of claims 1 to 13, or, when executed by a second processor, implements the method according to any one of claims 14 to 22.
PCT/CN2021/077112 2020-02-24 2021-02-20 Display control method and device for head-mounted device, and computer storage medium WO2021169881A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202180013084.0A CN115053203A (en) 2020-02-24 2021-02-20 Display control method and device for head-mounted device, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202062980915P 2020-02-24 2020-02-24
US62/980,915 2020-02-24

Publications (1)

Publication Number Publication Date
WO2021169881A1 true WO2021169881A1 (en) 2021-09-02

Family

ID=77490680

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/077112 WO2021169881A1 (en) 2020-02-24 2021-02-20 Display control method and device for head-mounted device, and computer storage medium

Country Status (2)

Country Link
CN (1) CN115053203A (en)
WO (1) WO2021169881A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120050141A1 (en) * 2010-08-25 2012-03-01 Border John N Switchable head-mounted display
CN103838536A (en) * 2012-11-27 2014-06-04 联想(北京)有限公司 Displaying mode switching method, electronic equipment control method and electronic equipment
CN105324738A (en) * 2013-06-07 2016-02-10 索尼电脑娱乐公司 Switching mode of operation in a head mounted display
CN106200884A (en) * 2015-04-30 2016-12-07 成都理想境界科技有限公司 head-mounted display apparatus and control method thereof
CN107820599A (en) * 2016-12-09 2018-03-20 深圳市柔宇科技有限公司 The method of adjustment of user interface, adjustment system and wear display device
CN109189225A (en) * 2018-08-30 2019-01-11 Oppo广东移动通信有限公司 Display interface method of adjustment, device, wearable device and storage medium
CN109478101A (en) * 2016-07-22 2019-03-15 谷歌有限责任公司 Detecting user range of motion for virtual reality user interfaces

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6798106B2 (en) * 2015-12-28 2020-12-09 ソニー株式会社 Information processing equipment, information processing methods, and programs
CN106572417B (en) * 2016-10-27 2019-11-05 腾讯科技(深圳)有限公司 Sound effect control method and apparatus
US11287967B2 (en) * 2016-11-03 2022-03-29 Microsoft Technology Licensing, Llc Graphical user interface list content density adjustment
CN107765953B (en) * 2017-11-08 2023-08-22 网易(杭州)网络有限公司 Information display method and device, processor and head-mounted display equipment
US10824235B2 (en) * 2018-01-08 2020-11-03 Facebook Technologies, Llc Methods, devices, and systems for displaying a user interface on a user and detecting touch gestures

Also Published As

Publication number Publication date
CN115053203A (en) 2022-09-13

Similar Documents

Publication Publication Date Title
US10554807B2 (en) Mobile terminal and method of operating the same
US9075429B1 (en) Distortion correction for device display
US20170256096A1 (en) Intelligent object sizing and placement in a augmented / virtual reality environment
US10360876B1 (en) Displaying instances of visual content on a curved display
US10950205B2 (en) Electronic device, augmented reality device for providing augmented reality service, and method of operating same
US20160321843A1 (en) Display control device, display control method, and program
US20180060333A1 (en) System and method for placement of virtual characters in an augmented/virtual reality environment
WO2021109958A1 (en) Application program sharing method and electronic device
CN103105926A (en) Multi-sensor posture recognition
JP2016507815A (en) Image processing method, image processing device, terminal device, program, and recording medium
US20180174363A1 (en) Systems and methods for presenting indication(s) of whether virtual object presented at first device is also presented at second device
US20170054839A1 (en) Communication control device, method of controlling communication, and program
WO2021115103A1 (en) Display control method and terminal device
EP2807534B1 (en) Methods and devices to determine a preferred electronic device
JP2018032440A (en) Controllable headset computer displays
US11915671B2 (en) Eye gaze control of magnification user interface
KR102591413B1 (en) Mobile terminal and method for controlling the same
CN106454499A (en) Mobile terminal and method for controlling the same
EP3979620A1 (en) Photographing method and terminal
WO2021104162A1 (en) Display method and electronic device
WO2020156121A1 (en) Display control method and mobile terminal
WO2021169881A1 (en) Display control method and device for head-mounted device, and computer storage medium
WO2022199597A1 (en) Method, apparatus and system for cropping image by vr/ar device
WO2020220993A1 (en) Message display method and mobile terminal
KR101661974B1 (en) Mobile terminal and operation method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21760784

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21760784

Country of ref document: EP

Kind code of ref document: A1