CN115053203A - Display control method and device for head-mounted device, and computer storage medium - Google Patents


Info

Publication number
CN115053203A
Authority
CN
China
Prior art keywords
scene
application
head
event
mobile device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180013084.0A
Other languages
Chinese (zh)
Inventor
张复尧
熊棋
徐毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN115053203A publication Critical patent/CN115053203A/en


Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer

Abstract

The embodiments of the present application disclose a display control method and device for a head-mounted device, and a computer storage medium, applied to the head-mounted device. The method includes: identifying a current application scene; and controlling a display state of the head-mounted device based on a display control command corresponding to the current application scene. In this way, by determining the application scene of the head-mounted device and then applying the display control command corresponding to that scene, the head-mounted device can automatically change the display state of its user interface according to the user's behavior or usage state in different application scenes, reducing the user's manual-adjustment workload; at the same time, the interaction between the head-mounted device and the user, and the user-friendliness of the user interface, can be improved.

Description

Display control method and device for head-mounted device, and computer storage medium
Cross Reference to Related Applications
The present application claims priority to U.S. provisional patent application No. 62/980,915, entitled "Method for Controlling Head-Mounted Display", filed on February 24, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the technical field of visual enhancement, and in particular to a display control method and device for a head-mounted device and a computer storage medium.
Background
In recent years, with the development of visual enhancement technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), a virtual three-dimensional world can be simulated by a computer system, allowing a user to interact with a virtual scene and feel immersed in it.
A vision enhancement system includes at least a head-mounted device and a mobile device, which can be connected by wired or wireless communication. At present, although the head-mounted device defines a set of user interfaces, these user interfaces cannot adapt to different usage scenarios, so the user interface display is unfriendly and inconvenient for the user to operate.
Disclosure of Invention
The embodiments of the present application provide a display control method and device for a head-mounted device, and a computer storage medium, which can improve the interaction between the head-mounted device and the user and improve the user-friendliness of the user interface.
The technical solutions of the embodiments of the present application can be implemented as follows:
In a first aspect, an embodiment of the present application provides a display control method for a head-mounted device, which is applied to the head-mounted device, and the method includes:
identifying a current application scene;
and controlling a display state of the head-mounted device based on a display control command corresponding to the current application scene.
In a second aspect, an embodiment of the present application provides a display control method for a head-mounted device, which is applied to a mobile device, and the method includes:
identifying an application scene of a mobile device;
sending the application scene of the mobile device to a head-mounted device; the application scene of the mobile device is used for instructing the head-mounted device to determine a display control command and controlling the display state of the head-mounted device according to the display control command.
In a third aspect, an embodiment of the present application provides a head-mounted device, where the head-mounted device includes a scene analysis service module, a sending module, and a display module; wherein:
the scene analysis service module is configured to identify a current application scene;
the sending module is configured to send the display control command corresponding to the current application scene from the scene analysis service module to the display module;
the display module is configured to control a display state according to the display control command.
In a fourth aspect, an embodiment of the present application provides a head-mounted device, which includes a first memory and a first processor; wherein:
the first memory is configured to store a computer program executable on the first processor;
the first processor is configured to perform the method according to the first aspect when running the computer program.
In a fifth aspect, an embodiment of the present application provides a mobile device, where the mobile device includes a scene analysis service module and a sending module; wherein:
the scene analysis service module is configured to identify an application scene of the mobile device;
the sending module is configured to send the application scene of the mobile device to a head-mounted device; the application scene of the mobile device is used for instructing the head-mounted device to determine a display control command and controlling the display state of the head-mounted device according to the display control command.
In a sixth aspect, an embodiment of the present application provides a mobile device, which includes a second memory and a second processor; wherein:
the second memory is configured to store a computer program executable on the second processor;
the second processor is configured to perform the method according to the second aspect when running the computer program.
In a seventh aspect, an embodiment of the present application provides a computer storage medium storing a computer program, where the computer program implements the method according to the first aspect when executed by a first processor, or implements the method according to the second aspect when executed by a second processor.
The embodiments of the present application provide a display control method and device for a head-mounted device, and a computer storage medium, applied to the head-mounted device: identifying a current application scene; and controlling a display state of the head-mounted device based on a display control command corresponding to the current application scene. In this way, by determining the application scene of the head-mounted device and then applying the display control command corresponding to that scene, the head-mounted device can automatically change the display state of its user interface according to the user's behavior or usage state in different application scenes, reducing the user's manual-adjustment workload; at the same time, the interaction between the head-mounted device and the user, and the user-friendliness of the user interface, can be improved.
Drawings
fig. 1 is a schematic diagram of a vision enhancement system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a display control method of a head-mounted device according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another display control method of a head-mounted device according to an embodiment of the present application;
fig. 4 is a detailed flowchart of a display control method of a head-mounted device according to an embodiment of the present application;
fig. 5 is a detailed flowchart of another display control method of a head-mounted device according to an embodiment of the present application;
fig. 6 is a detailed flowchart of yet another display control method of a head-mounted device according to an embodiment of the present application;
fig. 7 is a detailed flowchart of still another display control method of a head-mounted device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a head-mounted device according to an embodiment of the present application;
fig. 9 is a schematic hardware structure diagram of a head-mounted device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a mobile device according to an embodiment of the present application;
fig. 11 is a schematic hardware structure diagram of a mobile device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a vision enhancement system according to an embodiment of the present application.
Detailed Description
So that the manner in which the features and elements of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. It should also be noted that the terms "first/second/third" in the embodiments of the present application are used simply to distinguish similar objects and do not represent a specific ordering of the objects; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments described herein can be implemented in an order other than that shown or described.
Referring to fig. 1, a schematic diagram of a vision enhancement system provided in an embodiment of the present application is shown. As shown in fig. 1, the vision enhancement system 10 may include a head-mounted device 110 and a mobile device 120, where the head-mounted device 110 and the mobile device 120 are communicatively connected by wire or wirelessly.
Here, the head-mounted device 110 may specifically be a monocular or binocular Head-Mounted Display (HMD), such as AR glasses. In fig. 1, the head-mounted device 110 may include one or more display modules 111 positioned near one or both of the user's eyes. Content displayed in the display module 111 is presented in front of the user's eyes and can fill or partially fill the user's field of view. It should be further noted that the display module 111 may be one or more Organic Light-Emitting Diode (OLED) modules, Liquid Crystal Display (LCD) modules, laser display modules, and the like.
Additionally, in some embodiments, the head mounted device 110 may also include one or more sensors and one or more cameras. For example, the head-mounted device 110 may include one or more sensors such as an Inertial Measurement Unit (IMU), an accelerometer, a gyroscope, a proximity sensor, and a depth camera.
The mobile device 120 may connect to the head-mounted device 110 wirelessly according to one or more wireless communication protocols (e.g., Bluetooth, WIFI, etc.). Alternatively, the mobile device 120 may be wired to the head-mounted device 110 via a data cable according to one or more data transfer protocols, such as Universal Serial Bus (USB). Here, the mobile device 120 may be implemented in various forms. For example, the mobile devices described in the embodiments of the present application may include a smartphone, a tablet computer, a notebook computer, a laptop computer, a palmtop computer, a Personal Digital Assistant (PDA), a smart watch, and the like.
In some embodiments, a user operating on the mobile device 120 may control operations at the head mounted device 110 via the mobile device 120. Additionally, data collected by sensors in the head mounted device 110 may also be sent back to the mobile device 120 for further processing or storage.
It should be noted that current HMD schemes usually define a fixed set of User Interfaces (UIs), but these user interfaces cannot adapt to different usage scenarios. By contrast, adaptive system behavior has been widely adopted by other devices and applications. Some of these devices sense the environment and adjust their functions; for example, noise-cancelling headphones detect human voices in the environment and adjust the noise-reduction level. Some applications sense the user's behavior or posture and adjust their functions accordingly; for example, a fitness application on a smart watch begins tracking the user's heartbeat when it detects that the user is in motion. Some systems sense both the environment and user behavior; for example, an automotive display system may detect the motion of the automobile and the Bluetooth connection of the user's phone, and in such a case display a particular set of UI layouts and/or components. That is, although the head-mounted device may also define a set of user interfaces, these user interfaces cannot adapt to different usage scenarios, so the user interface display is unfriendly and inconvenient for the user to operate.
The embodiments of the present application provide a display control method for a head-mounted device, applied to the head-mounted device, whose basic idea is: identifying a current application scene; and controlling a display state of the head-mounted device based on a display control command corresponding to the current application scene.
The embodiments of the present application further provide a display control method for a head-mounted device, applied to the mobile device, whose basic idea is: identifying an application scene of the mobile device; and sending the application scene of the mobile device to the head-mounted device, where the application scene is used to instruct the head-mounted device to determine a display control command and control its display state according to that command.
In this way, by determining the application scene of the head-mounted device and then applying the display control command corresponding to that scene, the head-mounted device can automatically change the display state of its user interface according to the user's behavior or usage state in different application scenes, reducing the user's manual-adjustment workload; at the same time, the interaction between the head-mounted device and the user, and the user-friendliness of the user interface, can be improved.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
In an embodiment of the present application, referring to fig. 2, a flowchart of a display control method of a head-mounted device provided in an embodiment of the present application is shown. As shown in fig. 2, the method may include:
S101: a current application scene is identified.
It should be noted that the method of this embodiment is applied to a head-mounted device, and the head-mounted device includes a display module. User interfaces are usually defined on the display module, but at present these user interfaces cannot adapt to different application scenes; the embodiments of the present application mainly provide a method for controlling the user interface according to relevant factors such as user behavior or usage state. More specifically, the method provided by the embodiments of the present application enables the user interface to adapt to different application scenes of the head-mounted device.
In the embodiments of the present application, the head-mounted device includes a scene analysis service module, and the application scene of the head-mounted device may be identified in multiple ways; for example, it may be identified according to the user's posture information, according to a foreground application event, or even according to an external event (e.g., an external sensor event or the application scene of the mobile device), which is not limited here.
In a possible implementation, the identifying the current application scene may include:
acquiring posture information of a user;
comparing the posture information with predefined postures;
and if the posture information matches a predefined posture, performing application scene identification according to the posture information to determine the current application scene.
It should be noted that, after comparing the posture information with the predefined postures, if the posture information matches a predefined posture, the current application scene may be determined according to the matched posture information; if the posture information does not match any predefined posture, the process returns to the step of acquiring the posture information of the user.
It should also be noted that the predefined postures may include at least one preset posture. For the posture information matching a predefined posture, the method may further include: if a preset posture matching the posture information is found among the predefined postures, determining that the posture information matches the predefined postures.
That is, a query can be performed among the predefined postures, and if a preset posture matching the posture information can be found, the posture information can be said to match the predefined postures; at this time, the current application scene can be determined according to the posture information (or the matched preset posture).
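Purely for illustration (this sketch is not part of the claimed method), the match-or-retry loop just described may be expressed in Python as follows, where the get_posture callback and the posture-to-scene mapping are hypothetical stand-ins:

```python
# A minimal sketch of the match-or-retry loop described above. The posture
# names, get_posture() callback, and posture-to-scene mapping are hypothetical.
def identify_scene_from_posture(get_posture, posture_to_scene):
    """Loop until the user's posture matches a predefined posture, then
    return the application scene determined from the matched posture."""
    while True:
        posture = get_posture()                 # posture information of the user
        scene = posture_to_scene.get(posture)   # query among the predefined postures
        if scene is not None:
            return scene                        # matched: current scene determined
        # no match: return to acquiring the user's posture information


# Example usage with hypothetical values:
# scene = identify_scene_from_posture(
#     get_posture=lambda: "walking",
#     posture_to_scene={"walking": "walking scene", "sitting": "sitting scene"},
# )
```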
In addition, the head-mounted device may further include a user status monitoring service module for acquiring user sensor data to determine the posture information of the user. Therefore, in some embodiments, the acquiring the posture information of the user may include:
acquiring user sensor data through the user status monitoring service module of the head-mounted device;
and analyzing the user sensor data to determine the posture information of the user.
That is, after user sensor data is acquired by the user status monitoring service module of the head-mounted device, signal processing techniques and/or machine learning techniques may be used to determine the user's posture information from the user sensor data.
Specifically, the machine learning technique may be a Support Vector Machine (SVM) technique, an Artificial Neural Network (ANN) technique, a Deep Learning (DL) technique, or the like, but is not limited thereto.
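As a hedged illustration of one such technique, the Python sketch below classifies a window of IMU statistics into a posture label with an SVM; the feature layout, labels, and training data are invented for illustration and are not taken from the patent:

```python
# A sketch of posture classification from user sensor data with an SVM,
# one of the machine learning techniques named above. The feature layout
# (mean/variance of 3-axis accelerometer and gyroscope readings) and the
# tiny training set are hypothetical.
import numpy as np
from sklearn.svm import SVC

X_train = np.array([
    [0.02, 0.01, 9.80, 0.001, 0.001, 0.002],   # hypothetical "sitting" window
    [0.15, 0.12, 9.60, 0.050, 0.040, 0.060],   # hypothetical "walking" window
    [0.90, 0.85, 9.10, 0.400, 0.350, 0.500],   # hypothetical "running" window
])
y_train = ["sitting", "walking", "running"]

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

def classify_posture(imu_window):
    """Return a posture label for one window of IMU features."""
    return clf.predict(np.asarray(imu_window).reshape(1, -1))[0]
```

In practice a real deployment would train on many labeled windows per posture; the sketch only shows the shape of the pipeline from sensor data to a posture label.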
In another possible implementation, the identifying the current application scene may include:
acquiring a foreground application event;
searching for the foreground application event in a predefined foreground application event list;
and if the foreground application event matches the predefined foreground application event list, performing application scene identification according to the foreground application event to determine the current application scene.
It should be noted that, for a foreground application event, if the foreground application event matches the predefined foreground application event list, the current application scene may be determined according to the matched foreground application event; if the foreground application event does not match the predefined foreground application event list, the process returns to the step of acquiring the foreground application event.
It should further be noted that the predefined foreground application event list may include at least one preset foreground application event. For a foreground application event matching the predefined foreground application event list, the method may further include: if a preset foreground application event matching the foreground application event is found in the predefined foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
That is, a search may be performed in the predefined foreground application event list, and if a preset foreground application event matching the foreground application event can be found, the foreground application event can be said to match the predefined foreground application event list; at this time, the current application scene may be determined according to the foreground application event (or the matched preset foreground application event).
In addition, the head-mounted device may further include a foreground application monitoring service module for acquiring foreground application events. Thus, in some embodiments, the acquiring the foreground application event may include: acquiring the foreground application event through the foreground application monitoring service module of the head-mounted device.
It should be noted that the foreground application event may correspond to an active application the user is running on the head-mounted device. In other words, the foreground application monitoring service module may monitor the user's active applications running on the head-mounted device to obtain foreground application events.
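For illustration only, this list lookup may be sketched as follows; the package names and scene names are hypothetical examples, not values from the patent:

```python
# A sketch of matching a foreground application event against a predefined
# foreground application event list. All names here are hypothetical.
PREDEFINED_FOREGROUND_EVENTS = {
    "com.example.videoplayer": "movie scene",
    "com.example.navigation": "driving scene",
}

def scene_for_foreground_event(package_name):
    """Return the scene for a matched foreground application event, or None
    so the caller returns to acquiring foreground application events."""
    return PREDEFINED_FOREGROUND_EVENTS.get(package_name)
```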
In yet another possible implementation, the identifying the current application scene may include:
acquiring an external sensor event;
and performing application scene identification according to the external sensor event to determine the current application scene.
It should be noted that, in the embodiments of the present application, the current application scene may also be identified according to an external sensor event, where the external sensor event can be detected by an external sensor device (i.e., an externally extended sensor device).
In some embodiments, the acquiring an external sensor event may include:
establishing a communication connection between an external sensor device and an event receiver of the head-mounted device;
and acquiring, through the event receiver, the external sensor event sent by the external sensor device.
It should be noted that the external sensor device includes a sensor monitoring service module, through which the external sensor event can be acquired; the external sensor device then sends the external sensor event to the event receiver of the head-mounted device, so that the head-mounted device can acquire the external sensor event from its event receiver.
It should also be noted that the communication connection between the external sensor device and the event receiver of the head-mounted device may be a wired communication connection established through a data cable, or a wireless communication connection established according to a wireless communication protocol.
In the embodiments of the present application, the wireless communication protocol may include at least one of: the Bluetooth protocol, the Wireless Fidelity (WIFI) protocol, the Infrared Data Association (IrDA) protocol, and the Near Field Communication (NFC) protocol.
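The embodiments do not specify a wire format for this connection; purely as an illustrative assumption, the event receiver below uses a plain TCP socket and newline-delimited JSON as a stand-in for the transports listed above:

```python
# A sketch of an event receiver on the head-mounted device. The TCP socket
# and JSON framing are assumptions for illustration; the patent only names
# Bluetooth, WIFI, IrDA, and NFC as possible transports.
import json
import socket

def receive_external_sensor_events(host="0.0.0.0", port=9000):
    """Accept one sensor-device connection and yield decoded events."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((host, port))
        server.listen(1)
        conn, _addr = server.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:
                yield json.loads(line)  # e.g. {"event": "door_lock_keypad_active"}
```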
In yet another possible implementation, the identifying the current application scene may include:
acquiring an application scene of the mobile device;
and performing application scene identification according to the application scene of the mobile device to determine the current application scene.
Further, in some embodiments, the acquiring the application scene of the mobile device may include:
establishing a communication connection between a mobile device and an event receiver of the head-mounted device;
and acquiring, through the event receiver, the application scene sent by the mobile device.
Further, in some embodiments, the performing application scene identification according to the application scene of the mobile device to determine the current application scene may include:
determining the application scene of the mobile device as the current application scene.
It should be noted that the mobile device may use its internal user status monitoring service module to acquire user sensor data (or the user's posture information), or use its internal foreground application monitoring service module to acquire a foreground application event; the application scene of the mobile device can then be determined by analyzing the user sensor data or the foreground application event. In addition, the mobile device can even acquire an external sensor event through an external sensor device to determine its application scene. The way the mobile device determines its application scene, whether through its internal user status monitoring service module or foreground application monitoring service module, or through an external sensor device, is similar to the way the head-mounted device determines its own application scene, and is not detailed here.
It should further be noted that, after determining its application scene, the mobile device may send the scene to the event receiver of the head-mounted device, so that the head-mounted device can acquire the application scene of the mobile device from the event receiver and then determine its own application scene. Since the mobile device and the head-mounted device are in the same vision enhancement system, in a specific example, the application scene of the mobile device can be directly determined as the current application scene of the head-mounted device.
In the embodiments of the present application, the communication connection between the mobile device and the event receiver of the head-mounted device may be a wired communication connection established through a data cable, or a wireless communication connection established according to a wireless communication protocol. Here, the wireless communication protocol may include at least one of: the Bluetooth protocol, the WIFI protocol, the IrDA protocol, and the NFC protocol.
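As a small sketch of the head-mounted device adopting the forwarded scene directly, reusing the hypothetical JSON event stream from the receiver sketch above:

```python
# A sketch of adopting the mobile device's application scene as the
# head-mounted device's current scene. The {"type": "scene", ...} message
# shape is an assumption carried over from the receiver sketch above.
def current_scene_from_mobile(events):
    """Return the first scene event forwarded by the mobile device."""
    for event in events:
        if event.get("type") == "scene":
            return event["value"]  # adopted directly as the current scene
```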
S102: controlling the display state of the head-mounted device based on the display control command corresponding to the current application scene.
It should be noted that after the current application scene of the head-mounted device is determined, the display control command corresponding to the current application scene may be obtained. Specifically, in some embodiments, after the identifying the current application scene, the method may further include:
comparing the current application scene with predefined scenes;
and if the current application scene matches a predefined scene, determining the display control command corresponding to the current application scene.
In the embodiments of the present application, the predefined scenes may include at least one of the following: a driving scene, a walking scene, a riding scene, a standing scene, a sitting scene, a scene of holding the mobile device, a scene of putting down the mobile device, and a geographic-position scene. The display control commands may include at least one of: a command to turn on the display, a command to turn off the display, a command to turn on part of the display, a command to turn off part of the display, a command to adjust the size of the display area, and a command to adjust the arrangement of display elements.
It should be noted that, for the current application scene of the head-mounted device, if the current application scene matches a predefined scene, a corresponding display control command may be determined according to the matched scene to control the display state of the display module inside the head-mounted device, such as changing the display, the UI layout, the arrangement of UI elements, the UI depth in AR, the brightness, the power consumption, and the like; if the current application scene does not match any predefined scene, the process returns to the step of identifying the current application scene.
It should further be noted that the predefined scenes may include at least one preset scene, and each preset scene corresponds to a display control command with a different display state. For an application scene matching the predefined scenes, the method may further include: comparing the application scene with the predefined scenes; and if a preset scene matching the application scene is found among the predefined scenes, determining that the application scene matches the predefined scenes.
That is, a query may be made among the predefined scenes, and if a preset scene matching the application scene can be found, the application scene can be said to match the predefined scenes; at this time, the corresponding display control command may be determined according to the application scene (or the matched preset scene) to control the display state of the display module inside the head-mounted device.
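As a hedged sketch of this scene-to-command step, the table below maps a few of the predefined scenes named above to illustrative display control commands; the command strings and the display object's apply() interface are assumptions:

```python
# A sketch of determining and dispatching the display control command for a
# matched scene. Scene names follow the predefined-scene list above; the
# command strings and the display object's apply() method are hypothetical.
DISPLAY_COMMAND_FOR_SCENE = {
    "driving scene": "close display",
    "walking scene": "open partial display",
    "holding mobile device scene": "close display",
    "sitting scene": "open display",
}

def control_display_state(current_scene, display):
    """Send the command for a matched scene to the display module; return
    False when there is no match so the caller re-identifies the scene."""
    command = DISPLAY_COMMAND_FOR_SCENE.get(current_scene)
    if command is None:
        return False          # no match: return to identifying the scene
    display.apply(command)    # control the display state of the head-mounted device
    return True
```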
This embodiment provides a display control method for a head-mounted device, applied to the head-mounted device: identifying a current application scene, and controlling a display state of the head-mounted device based on the display control command corresponding to the current application scene. In this way, by determining the application scene of the head-mounted device and then applying the display control command corresponding to that scene, the head-mounted device can automatically change the display state of its user interface according to the user's behavior or usage state in different application scenes, reducing the user's manual-adjustment workload; at the same time, the interaction between the head-mounted device and the user, and the user-friendliness of the user interface, can be improved.
In another embodiment of the present application, referring to fig. 3, a flowchart of another display control method of a head-mounted device provided in an embodiment of the present application is shown. As shown in fig. 3, the method may include:
S201: an application scene of a mobile device is identified.
It should be noted that the method of this embodiment is applied to a mobile device. In the embodiments of the present application, the mobile device may also include a scene analysis service module to identify the application scene of the mobile device and then send that scene to the head-mounted device; the head-mounted device includes a display module, so the display state of the display module can be adaptively controlled in different application scenes.
It should further be noted that, in the embodiments of the present application, the application scene of the mobile device may also be identified in multiple ways; for example, it may be identified according to the user's posture information, according to a foreground application event, or even according to an external event (such as an external sensor event), which is not limited here.
In a possible implementation, the identifying the application scene of the mobile device may include:
acquiring posture information of a user;
comparing the posture information with predefined postures;
and if the posture information matches a predefined posture, performing application scene identification according to the posture information to determine the application scene of the mobile device.
It should be noted that, after comparing the posture information with the predefined postures, if the posture information matches a predefined posture, the application scene of the mobile device may be determined according to the matched posture information; if the posture information does not match any predefined posture, the process returns to the step of acquiring the posture information of the user.
It should also be noted that, in the mobile device, the predefined postures may include at least one preset posture. For the posture information matching a predefined posture, the method may further include: if a preset posture matching the posture information is found among the predefined postures, determining that the posture information matches the predefined postures.
That is, a query can be performed among the predefined postures, and if a preset posture matching the posture information can be found, the posture information can be said to match the predefined postures; at this time, the application scene of the mobile device can be determined according to the posture information (or the matched preset posture).
In addition, the mobile device may further include a user status monitoring service module to acquire user sensor data to determine the posture information of the user. Therefore, in some embodiments, the acquiring the posture information of the user may include:
acquiring user sensor data through the user status monitoring service module of the mobile device;
and analyzing the user sensor data to determine the posture information of the user.
That is, after user sensor data is acquired by the user status monitoring service module of the mobile device, signal processing techniques and/or machine learning techniques may be used to determine the user's posture information from the user sensor data. For example, the machine learning technique may be, but is not limited to, an SVM technique, an ANN technique, a deep learning technique, and the like.
In another possible implementation, the identifying the application scene of the mobile device may include:
acquiring a foreground application event;
searching for the foreground application event in a predefined foreground application event list;
and if the foreground application event matches the predefined foreground application event list, performing application scene identification according to the foreground application event to determine the application scene of the mobile device.
It should be noted that, for a foreground application event, if the foreground application event matches the predefined foreground application event list, the application scene of the mobile device may be determined according to the matched foreground application event; if the foreground application event does not match the predefined foreground application event list, the process returns to the step of acquiring the foreground application event.
It should also be noted that, in the mobile device, the predefined foreground application event list may include at least one preset foreground application event. For a foreground application event matching the predefined foreground application event list, the method may further include: if a preset foreground application event matching the foreground application event is found in the predefined foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
That is, a search can be performed in the predefined foreground application event list, and if a preset foreground application event matching the foreground application event can be found, the foreground application event can be said to match the predefined foreground application event list; at this time, the application scene of the mobile device may be determined according to the foreground application event (or the matched preset foreground application event).
In addition, the mobile device may further include a foreground application monitoring service module to acquire foreground application events. Thus, in some embodiments, the acquiring the foreground application event may include: acquiring the foreground application event through the foreground application monitoring service module of the mobile device.
It should be noted that the foreground application event may correspond to an active application the user is running on the mobile device. In other words, the foreground application monitoring service module may monitor the user's active applications running on the mobile device to obtain foreground application events.
In yet another possible implementation, the identifying the application scene of the mobile device may include:
acquiring an external sensor event;
and performing application scene identification according to the external sensor event to determine the application scene of the mobile device.
It should be noted that the embodiments of the present application may also identify the application scene of the mobile device according to an external sensor event, where the external sensor event can be detected by an external sensor device (i.e., an externally extended sensor device).
In some embodiments, the acquiring an external sensor event may include:
establishing a communication connection between an external sensor device and an event receiver of the mobile device;
and acquiring, through the event receiver, the external sensor event sent by the external sensor device.
It should be noted that the external sensor device includes a sensor monitoring service module, through which the external sensor event can be acquired; the external sensor device then sends the external sensor event to the event receiver of the mobile device, so that the mobile device can acquire the external sensor event from its event receiver.
It should also be noted that the communication connection between the external sensor device and the event receiver of the mobile device may be a wired communication connection established through a data cable, or a wireless communication connection established according to a wireless communication protocol. Here, the wireless communication protocol may include at least one of: the Bluetooth protocol, the WIFI protocol, the IrDA protocol, and the NFC protocol.
S202: sending the application scene of the mobile device to a head-mounted device; the application scene of the mobile device is used for instructing the head-mounted device to determine a display control command and controlling the display state of the head-mounted device according to the display control command.
It should be noted that after the application scene of the mobile device is determined, it may be sent to the head-mounted device, so that the head-mounted device determines the display control command and controls its display state according to that command.
In the embodiments of the present application, the application scene of the mobile device may include at least one of the following: a driving scene, a walking scene, a riding scene, a standing scene, a sitting scene, a scene of holding the mobile device, a scene of putting down the mobile device, and a geographic-position scene. The display control command may include at least one of: a command to turn on the display, a command to turn off the display, a command to turn on part of the display, a command to turn off part of the display, a command to adjust the size of the display area, and a command to adjust the arrangement of display elements.
Specifically, after determining its application scene, the mobile device may send it to the event receiver of the head-mounted device, so that the head-mounted device can subsequently acquire the application scene of the mobile device from the event receiver and then determine its own application scene. Since the mobile device and the head-mounted device are in the same vision enhancement system, in a specific example, the application scene of the mobile device can be directly determined as the current application scene of the head-mounted device.
In this way, the head-mounted device can determine the corresponding display control command according to the application scene of the mobile device, so as to control the display state of its internal display module, such as changing the display, the UI layout, the arrangement of UI elements, the UI depth in AR, the brightness, the power consumption, and the like.
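A minimal sketch of the mobile-device side of this exchange, reusing the hypothetical JSON-over-TCP transport assumed in the earlier receiver sketch:

```python
# A sketch of the mobile device sending its identified application scene to
# the head-mounted device's event receiver. The address and the JSON-over-TCP
# wire format are assumptions carried over from the receiver sketch.
import json
import socket

def send_scene_to_headset(scene, headset_address=("192.168.0.2", 9000)):
    message = json.dumps({"type": "scene", "value": scene}) + "\n"
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect(headset_address)
        sock.sendall(message.encode("utf-8"))

# Example: send_scene_to_headset("driving scene")
```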
This embodiment provides a display control method for a head-mounted device, applied to a mobile device: identifying an application scene of the mobile device, and sending the application scene to the head-mounted device, where the application scene is used to instruct the head-mounted device to determine a display control command and control its display state according to that command. In this way, after the head-mounted device receives the application scene of the mobile device, it can apply the display control command corresponding to that scene, so that the head-mounted device automatically changes the display state of its user interface according to the user's behavior or usage state in different application scenes, reducing the user's manual-adjustment workload; at the same time, the interaction between the head-mounted device and the user, and the user-friendliness of the user interface, can be improved.
In another embodiment of the present application, referring to fig. 4, a detailed flowchart of a display control method of a head-mounted device provided in an embodiment of the present application is shown. As shown in fig. 4, the method may include:
S301: monitoring posture information of a user;
S302: comparing the posture information with predefined postures;
S303: determining whether there is a match;
S304: if the determination is yes, determining the current application scene and comparing it with the predefined scenes;
It should be noted that, for step S303, if the determination is yes, step S304 is executed; if the determination is no, the process returns to step S301.
It should be further noted that the head-mounted device may include a user status monitoring service module, a foreground application monitoring service module, and a scene analysis server, and the mobile device may also include these modules. In this way, the monitoring of the user's posture information, foreground application events, and the like may be performed by either the head-mounted device or the mobile device, which is not limited here.
In this embodiment of the present application, step S301 may be performed by the user status monitoring service module, and steps S302 to S304 may be performed by the scene analysis server.
S305: monitoring foreground application events;
s306: searching in a predefined foreground application event list;
s307: judging whether the matching is carried out;
it should be noted that, for step S307, if the determination result is yes, step S304 is executed; if the judgment result is no, the step S305 is executed.
It should be further noted that steps S301 to S303 and steps S305 to S307 may be executed in parallel, and there is no execution sequence, so as to determine the current application scenario.
In addition, step S305 may be monitored by the foreground application monitoring service module; the steps S306 to S307 and S304 may be analyzed by the scene analysis server.
S308: acquiring an external sensor event, and executing S304 according to the external sensor event;
it should be noted that, in the vision enhancement system, in addition to the mobile device and the head-mounted device, an external sensor device may be included, and the external sensor device includes a sensor monitoring service module. Here, step S308 may be to run a sensor monitoring service module on the external sensor device to detect the external sensor event.
It should be further noted that step S308, steps S301 to S303, and steps S305 to S307 may be executed in parallel, and there is no execution sequence; this also represents three ways for identifying the current application scenario, after which step S304 is performed.
S309: judging whether the matching is carried out;
s310: and if so, sending a display control command based on the matching scene.
It should be noted that, for the step S309, if the determination result is yes, the step S310 is executed; if the judgment result is no, the steps S308, S305 and S301 are executed in a returning way.
It should be noted that, for S309 and S310, the method may be performed by a head-mounted device. The head-mounted device may further include a display module (also referred to as a "screen"), such that the scene analysis server in the head-mounted device may control the display state of the display module after sending the display control command to the display module.
Further, after step S310, the application scene recognition and its corresponding display control command of the next stage may be performed, that is, it is necessary to return to steps S308, S305, and S301 at this time.
In addition, the matching scenario described in the embodiment of the present application may specifically refer to a preset scenario (or a current application scenario) that matches the current application scenario. Here, in the case where the determination result is yes, the current application scene or the preset scene matched therewith may be simply referred to as a "matching scene". At this time, inside the head mounted device, the scene analysis server sends a display control command to the display module to control the display state of the display module.
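The parallel structure of fig. 4 could loosely be sketched as three monitor threads feeding one scene-analysis loop; the queue payloads and the classify/command callbacks below are illustrative assumptions:

```python
# A sketch of fig. 4's structure: posture monitoring (S301), foreground
# application monitoring (S305), and external sensor events (S308) run in
# parallel and feed one scene-analysis loop (S304, S309, S310). The callback
# signatures are hypothetical.
import queue
import threading

events = queue.Queue()

def monitor(source, poll):
    """Generic monitor thread body: poll a source and enqueue observations."""
    while True:
        events.put((source, poll()))

def scene_analysis_loop(classify_scene, command_for_scene, display):
    while True:
        source, observation = events.get()
        scene = classify_scene(source, observation)  # S304: compare with predefined scenes
        if scene is None:
            continue                                 # S309 no match: keep monitoring
        display.apply(command_for_scene(scene))      # S310: send display control command

# Monitors would be started as daemon threads, e.g.:
# threading.Thread(target=monitor, args=("posture", read_imu), daemon=True).start()
```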
Briefly, the embodiments of the present application relate to a method for controlling a user interface based on factors related to user behavior and usage state. In particular, the embodiments relate to a user interface that can adapt to the application scenes of a head-mounted device under different usage conditions. For example, the embodiments can autonomously sense the user's state and turn off the screen display of the head-mounted device or hide a portion of the UI components on its screen.
It should be noted that the embodiments of the present application are mainly intended to improve the interaction between the head-mounted device and the user. More specifically, they enable the head-mounted device to automatically change its UI according to the user's behavior, which can reduce the amount of work required for the user to manually adjust the UI. For example, for a pair of AR glasses working in conjunction with a mobile device (such as a smartphone), when the system senses that the user is looking at the screen of the mobile device, the AR glasses turn off their display module or avoid presenting any content, to reduce visual interference. Thus, in one specific example, the optical see-through (OST) display of the AR glasses is turned off so that the user can better view the real world. In another specific example, the system may resize and/or reorganize UI components on the display (e.g., the UI may be rearranged into the top half of the display when the user looks down).
For example, when a user plays a video game or watches a movie using a head-mounted device, the user may control the head-mounted device with a wireless controller. When the user's smartphone receives a text message or an incoming call, the user puts down the wireless controller and picks up the smartphone. The embodiments of the present application detect this series of events, and the screen of the head-mounted device is then turned off so that the user can better view the smartphone's screen. The detection of these events may be based on one or more of: sensor data from the wireless controller (e.g., whether the user is holding it), events from the smartphone (e.g., whether the user picks up and/or unlocks the smartphone to read a message), and sensor data from the head-mounted device (e.g., whether the device is pointing down). Fig. 4 shows one possible implementation provided by the embodiments of the present application; a sketch of the event fusion follows.
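A hedged sketch of fusing the three event sources named in this example; the boolean field names are invented for illustration:

```python
# A sketch of detecting the "put down controller, pick up phone" series of
# events from the example above. The field names are hypothetical.
def should_close_headset_screen(events):
    """Close the headset display when the controller is released, the phone
    is picked up or unlocked, and the headset is pointing down."""
    return (
        not events.get("controller_held", True)         # wireless controller sensor data
        and events.get("phone_picked_up", False)        # event from the smartphone
        and events.get("headset_pointing_down", False)  # headset sensor data
    )
```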
In addition, fig. 5 shows another possible implementation provided by the embodiments of the present application, in which user behavior alone may cause the screen of the head-mounted device to be turned off; for example, the user suddenly breaks from a slow walk into a run. Referring to fig. 5, a detailed flowchart of another display control method for a head-mounted device provided in an embodiment of the present application is shown. As shown in fig. 5, the method may include:
S401: monitoring posture information of a user;
S402: comparing the posture information with predefined postures;
S403: determining whether there is a match;
S404: if the determination is yes, determining the current application scene and comparing it with the predefined scenes;
S405: in the case that the current application scene matches a predefined scene, sending a display control command based on the matched scene.
It should be noted that, for step S403, if the determination is yes, step S404 is executed; if the determination is no, the process returns to step S401.
Further, after step S405, the next stage of application scene identification and its corresponding display control command may be performed; that is, the process also needs to return to step S401.
In the embodiments of the present application, the predefined scenes include at least one preset scene, and each preset scene corresponds to a display control command with a different display state. Here, in the case that the current application scene matches a predefined scene, the current application scene, or the preset scene matching it, may be referred to simply as the "matched scene". At this point, inside the head-mounted device, the scene analysis server sends the display control command to the display module to control its display state.
Fig. 6 illustrates yet another possible implementation provided by the embodiments of the present application, in which application-specific events alone may cause the screen of the head-mounted device to be turned off. For example, a smartphone receives an incoming video call, and when the user accepts the call, the screen of the head-mounted device is turned off to facilitate the video call on the smartphone. Referring to fig. 6, a detailed flowchart of yet another display control method for a head-mounted device provided in an embodiment of the present application is shown. As shown in fig. 6, the method may include:
S501: monitoring foreground application events;
S502: searching in a predefined foreground application event list;
S503: determining whether there is a match;
S504: if the determination is yes, determining the current application scene and comparing it with the predefined scenes;
S505: in the case that the current application scene matches a predefined scene, sending a display control command based on the matched scene.
It should be noted that, for step S503, if the determination is yes, step S504 is executed; if the determination is no, the process returns to step S501.
Further, after step S505, the next stage of application scene identification and its corresponding display control command may be performed; that is, the process also needs to return to step S501.
Fig. 7 illustrates still another possible implementation provided by the embodiments of the present application, in which a single external sensor event may cause the screen of the head-mounted device to be turned off; for example, when the user approaches a smart door lock and begins to enter a password on the lock's dial. Referring to fig. 7, a detailed flowchart of still another display control method for a head-mounted device provided in an embodiment of the present application is shown. As shown in fig. 7, the method may include:
S601: acquiring an external sensor event;
S602: comparing with the predefined scenes;
S603: determining whether there is a match;
S604: if the determination is yes, in the case that the current application scene matches a predefined scene, sending a display control command based on the matched scene.
It should be noted that, for step S603, if the determination is yes, step S604 is executed; if the determination is no, the process returns to step S601.
Further, after step S604, the next stage of application scene identification and its corresponding display control command may be performed; that is, the process also needs to return to step S601.
In the embodiments of the present application, the vision enhancement system may further include an external sensor device, and the external sensor event may be acquired through the external sensor device. After an external sensor event is acquired, the current application scene can be identified; the current application scene is then compared with the predefined scenes, so that, in the case of a match, the scene analysis server in the head-mounted device can send a display control command to the display module based on the matched scene, controlling the display state of the display module.
As before, the predefined scenes include at least one preset scene, each corresponding to a display control command with a different display state; in the case of a match, the current application scene, or the preset scene matching it, may be referred to simply as the "matched scene", and the scene analysis server inside the head-mounted device sends the display control command to the display module to control its display state.
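A small sketch of this single-event flow using the smart door lock example; the event, scene, and command names are hypothetical:

```python
# A sketch of fig. 7's flow: a single external sensor event is compared with
# the predefined scenes (S602/S603) and, on a match, the display control
# command is sent (S604). All names here are hypothetical.
SCENE_FOR_SENSOR_EVENT = {"door_lock_keypad_active": "entering-password scene"}
COMMAND_FOR_SCENE = {"entering-password scene": "close display"}

def handle_external_sensor_event(event, display):
    scene = SCENE_FOR_SENSOR_EVENT.get(event)  # S602: compare with predefined scenes
    if scene is None:
        return                                 # S603 no: return to S601
    display.apply(COMMAND_FOR_SCENE[scene])    # S604: send display control command
```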
This embodiment provides a display control method for a head-mounted device and explains the specific implementation of the foregoing embodiments in detail. It can be seen that the embodiments of the present application significantly improve the user experience of the head-mounted device. More specifically, they enable the head-mounted device to automatically change its UI according to user behavior, which reduces the workload of manually adjusting the UI. Head-mounted device products with better user experience and greater intelligence can thus be developed.
In yet another embodiment of the present application, based on the same inventive concept as the previous embodiment, refer to fig. 8, which shows a schematic structural diagram of a head-mounted device 70 provided in an embodiment of the present application. As shown in fig. 8, the head-mounted device 70 may include: a scene analysis service module 701, a sending module 702, and a display module 703; wherein:
a scene analysis service module 701 configured to identify a current application scene;
a sending module 702 configured to send the display control command corresponding to the current application scene from the scene analysis service module 701 to the display module 703;
and a display module 703 configured to control a display state according to the display control command.
In some embodiments, the scene analysis service module 701 is specifically configured to obtain posture information of the user; compare the posture information with a predefined posture; and, if the posture information matches the predefined posture, perform application scene recognition according to the posture information to determine the current application scene.
In some embodiments, referring to fig. 8, the head mounted device 70 may also include a user status monitoring service module 704;
a user status monitoring service module 704 configured to obtain user sensor data and analyze the user sensor data to determine the posture information of the user;
the sending module 702 is further configured to send the posture information of the user to the scene analysis service module 701.
Further, the scene analysis service module 701 is specifically configured to determine that the posture information matches the predefined posture if a preset posture matching the posture information is found in the predefined posture; wherein the predefined posture comprises at least one preset posture.
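Under very simplified assumptions, the posture matching performed by the scene analysis service module 701 might look like the sketch below; the Posture fields, the preset values, and the tolerances are all illustrative, and a real implementation could instead rely on the signal processing or machine learning techniques mentioned later in this application:

```python
from dataclasses import dataclass

@dataclass
class Posture:
    pitch_deg: float  # head pitch derived from user sensor data
    speed_mps: float  # movement speed derived from user sensor data

# Hypothetical predefined postures; each entry is one "preset posture".
PREDEFINED_POSTURES = {
    "walking": Posture(pitch_deg=0.0, speed_mps=1.4),
    "sitting": Posture(pitch_deg=10.0, speed_mps=0.0),
}

def match_posture(observed, pitch_tol=15.0, speed_tol=0.5):
    """Return the name of the preset posture matching the observed posture
    information, or None if nothing in the predefined set matches."""
    for name, preset in PREDEFINED_POSTURES.items():
        if (abs(observed.pitch_deg - preset.pitch_deg) <= pitch_tol
                and abs(observed.speed_mps - preset.speed_mps) <= speed_tol):
            return name
    return None

print(match_posture(Posture(pitch_deg=2.0, speed_mps=1.2)))  # -> "walking"
```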
In some embodiments, the scene analysis service module 701 is specifically configured to obtain a foreground application event; search for the foreground application event in a predefined foreground application event list; and, if the foreground application event matches the predefined foreground application event list, perform application scene recognition according to the foreground application event to determine the current application scene.
In some embodiments, referring to fig. 8, the head mounted device 70 may also include a foreground application monitoring service module 705;
a foreground application monitoring service module 705 configured to obtain the foreground application event;
the sending module 702 is further configured to send the foreground application event to the scene analysis service module 701.
Further, the scene analysis service module 701 is specifically configured to determine that the foreground application event matches the predefined foreground application event list if a preset foreground application event matching the foreground application event is found in the predefined foreground application event list; wherein the predefined foreground application event list includes at least one preset foreground application event.
In some embodiments, the scene analysis service module 701 is specifically configured to obtain an external sensor event, and perform application scene recognition according to the external sensor event to determine the current application scene.
In some embodiments, referring to fig. 8, the head-mounted device 70 may further include a communication module 706 and an event receiver 707;
a communication module 706 configured to establish a communication connection between an external sensor device and the event receiver 707;
an event receiver 707 configured to receive the external sensor event transmitted by the external sensor device.
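A minimal sketch of how event receiver 707 might decouple the communication module from the scene analysis service module is shown below; the queue-based design and the method names are assumptions, and the actual transport (a data cable, Wi-Fi, or another wireless link, as described later) is abstracted away:

```python
import queue

class EventReceiver:
    """Stand-in for event receiver 707: connected devices push events via the
    communication module; the scene analysis service module pops them."""

    def __init__(self):
        self._events = queue.Queue()

    def on_event(self, source, payload):
        # Called by the communication module when a connected device
        # (external sensor device or mobile device) sends an event.
        self._events.put((source, payload))

    def next_event(self, timeout=1.0):
        # Called by the scene analysis service module; returns None on timeout.
        try:
            return self._events.get(timeout=timeout)
        except queue.Empty:
            return None

receiver = EventReceiver()
receiver.on_event("external_sensor", "door_lock_keypad_active")
print(receiver.next_event())  # -> ('external_sensor', 'door_lock_keypad_active')
```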
In some embodiments, the scene analysis service module 701 is further configured to obtain an application scene of the mobile device, and determine the application scene of the mobile device as the current application scene.
In some embodiments, the communication module 706 is further configured to establish a communication connection between the mobile device and the event receiver 707;
the event receiver 707 is further configured to receive an application scene of the mobile device transmitted by the mobile device.
In some embodiments, the scene analysis service module 701 is further configured to compare the current application scene with a predefined scene, and, if the current application scene matches the predefined scene, determine the display control command corresponding to the current application scene.
In some embodiments, the predefined scene includes at least one of: a driving scene, a walking scene, a riding scene, a standing scene, a sitting scene, a holding mobile device scene, a putting down mobile device scene, and a geographic position scene;
the display control command includes at least one of: an open display command, a close display command, an open portion display command, a close portion display command, an adjust display area size command, and an adjust display element arrangement command.
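The two lists above translate naturally into enumerations, as in the sketch below; the scene-to-command policy at the end is a hypothetical product choice rather than anything fixed by this application:

```python
from enum import Enum, auto

class Scene(Enum):  # the predefined scenes listed above
    DRIVING = auto()
    WALKING = auto()
    RIDING = auto()
    STANDING = auto()
    SITTING = auto()
    HOLDING_MOBILE_DEVICE = auto()
    PUTTING_DOWN_MOBILE_DEVICE = auto()
    GEOGRAPHIC_POSITION = auto()

class DisplayCommand(Enum):  # the display control commands listed above
    OPEN_DISPLAY = auto()
    CLOSE_DISPLAY = auto()
    OPEN_PART_DISPLAY = auto()
    CLOSE_PART_DISPLAY = auto()
    ADJUST_DISPLAY_AREA_SIZE = auto()
    ADJUST_DISPLAY_ELEMENT_ARRANGEMENT = auto()

# One hypothetical policy; each preset scene corresponds to one command.
SCENE_POLICY = {
    Scene.DRIVING: DisplayCommand.CLOSE_PART_DISPLAY,
    Scene.PUTTING_DOWN_MOBILE_DEVICE: DisplayCommand.OPEN_DISPLAY,
    Scene.HOLDING_MOBILE_DEVICE: DisplayCommand.CLOSE_DISPLAY,
}
```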
It is understood that, in the embodiments of the present application, a "module" may be a part of a circuit, a part of a processor, or a part of a program or software; it may also be a unit, or it may be non-modular. Moreover, the components in an embodiment may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
Based on such understanding, the technical solution of this embodiment, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Accordingly, the present embodiment provides a computer storage medium applied to the head-mounted device 70. The computer storage medium stores a computer program which, when executed by a processor, implements the method described in any one of the foregoing embodiments.
Based on the above-mentioned components of the head-mounted device 70 and the computer storage medium, refer to fig. 9, which shows a specific hardware structure diagram of the head-mounted device 70 provided in the embodiment of the present application. As shown in fig. 9, the head-mounted device 70 may include: a first communication interface 801, a first memory 802, and a first processor 803; the various components are coupled together by a first bus system 804. It is understood that the first bus system 804 is used to enable connection and communication between these components. The first bus system 804 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as the first bus system 804 in fig. 9. Wherein:
a first communication interface 801, configured to receive and transmit signals during information transmission and reception with other external network elements;
a first memory 802 for storing a computer program capable of running on the first processor 803;
a first processor 803, configured to, when running the computer program, perform:
identifying a current application scenario;
and controlling the display state of the head-mounted equipment based on the display control command corresponding to the current application scene.
It will be appreciated that the first memory 802 in this embodiment can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory can be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced Synchronous SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The first memory 802 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The first processor 803 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the first processor 803. The first processor 803 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a RAM, a flash memory, a ROM, a PROM, an EPROM, a register, or another storage medium well known in the art. The storage medium is located in the first memory 802, and the first processor 803 reads the information in the first memory 802 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof. For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the first processor 803 is further configured to execute the method of any one of the previous embodiments when running the computer program.
This embodiment provides a head-mounted device, which may include a scene analysis service module, a sending module, and a display module. In this way, the head-mounted device can automatically change its UI according to user behavior, reducing the workload of manually adjusting the UI; meanwhile, the interaction between the head-mounted device and the user and the user-friendliness of the user interface can be improved. Head-mounted device products with better user experience and greater intelligence can thus be developed.
In yet another embodiment of the present application, based on the same inventive concept as the previous embodiment, refer to fig. 10, which shows a schematic structural diagram of a mobile device 90 provided in an embodiment of the present application. As shown in fig. 10, the mobile device 90 may include: a scene analysis service module 901 and a sending module 902; wherein:
a scene analysis service module 901 configured to identify an application scene of the mobile device;
a sending module 902 configured to send an application scene of the mobile device to a head-mounted device; the application scene of the mobile device is used for instructing the head-mounted device to determine a display control command and controlling the display state of the head-mounted device according to the display control command.
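On the mobile device side, sending module 902 could be sketched as below; the address, port, and JSON wire format are assumptions made for this example (the actual link may be a data cable, Wi-Fi, or another wireless connection, as described in the system embodiment later):

```python
import json
import socket

def send_scene_to_headset(scene_name, headset_addr=("192.168.0.20", 9000)):
    """Sketch of sending module 902: serialize the recognized application scene
    and push it to the event receiver on the head-mounted device. The address,
    port, and message format are illustrative assumptions."""
    message = json.dumps({"type": "mobile_scene_event", "scene": scene_name})
    with socket.create_connection(headset_addr, timeout=2.0) as conn:
        conn.sendall(message.encode("utf-8"))
```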
In some embodiments, the scene analysis service module 901 is specifically configured to obtain posture information of the user; compare the posture information with a predefined posture; and, if the posture information matches the predefined posture, perform application scene recognition according to the posture information to determine the application scene of the mobile device.
In some embodiments, referring to fig. 10, the mobile device 90 may also include a user status monitoring service module 903;
a user status monitoring service module 903 configured to obtain user sensor data and analyze the user sensor data to determine the posture information of the user;
the sending module 902 is further configured to send the posture information of the user to the scene analysis service module 901.
Further, the scene analysis service module 901 is specifically configured to determine that the posture information matches the predefined posture if a preset posture matching the posture information is found in the predefined posture; wherein the predefined posture comprises at least one preset posture.
In some embodiments, the scene analysis service module 901 is specifically configured to obtain a foreground application event; search for the foreground application event in a predefined foreground application event list; and, if the foreground application event matches the predefined foreground application event list, perform application scene recognition according to the foreground application event to determine the application scene of the mobile device.
In some embodiments, referring to fig. 10, the mobile device 90 may also include a foreground application monitoring service module 904;
a foreground application monitoring service module 904 configured to obtain the foreground application event;
the sending module 902 is further configured to send the foreground application event to the scene analysis service module 901.
Further, the scene analysis service module 901 is specifically configured to determine that the foreground application event matches the predefined foreground application event list if a preset foreground application event matching the foreground application event is found in the predefined foreground application event list; wherein the predefined foreground application event list includes at least one preset foreground application event.
In some embodiments, the scene analysis service module 901 is specifically configured to obtain an external sensor event, and perform application scene recognition according to the external sensor event to determine the application scene of the mobile device.
In some embodiments, referring to fig. 10, mobile device 90 may also include a communications module 905 and an event receiver 906;
a communication module 905 configured to establish a communication connection between an external sensor device and the event receiver 906;
an event receiver 906 configured to receive the external sensor event transmitted by the external sensor device.
It is understood that, in this embodiment, a "unit" may be a part of a circuit, a part of a processor, or a part of a program or software; it may also be a module, or it may be non-modular. Moreover, the components in this embodiment may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
The integrated unit, if implemented in the form of a software functional module and not sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the present embodiment provides a computer storage medium applied to the mobile device 90, which stores a computer program that implements the method of any one of the foregoing embodiments when executed by the second processor.
Based on the above-mentioned components of the mobile device 90 and the computer storage medium, refer to fig. 11, which shows a specific hardware structure diagram of the mobile device 90 provided in the embodiment of the present application. As shown in fig. 11, the mobile device 90 may include: a second communication interface 1001, a second memory 1002, and a second processor 1003; the various components are coupled together by a second bus system 1004. It is understood that the second bus system 1004 is used to enable connection and communication between these components. The second bus system 1004 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as the second bus system 1004 in fig. 11. Wherein:
a second communication interface 1001, which is used for receiving and sending signals during the process of receiving and sending information with other external network elements;
a second memory 1002 for storing a computer program capable of running on the second processor 1003;
a second processor 1003 configured to, when running the computer program, perform:
identifying an application scene of a mobile device;
sending the application scene of the mobile device to a head-mounted device; the application scene of the mobile device is used for instructing the head-mounted device to determine a display control command and controlling the display state of the head-mounted device according to the display control command.
Optionally, as another embodiment, the second processor 1003 is further configured to execute the method in any one of the foregoing embodiments when running the computer program.
It is to be understood that the second memory 1002 is similar in hardware functionality to the first memory 802, and the second processor 1003 is similar in hardware functionality to the first processor 803; and will not be described in detail herein.
This embodiment provides a mobile device, which may include a scene analysis service module and a sending module. After the head-mounted device receives the application scene of the mobile device, it can use the display control command corresponding to that application scene, so that its UI is automatically changed according to user behavior in different application scenes and the workload of manually adjusting the UI is reduced; meanwhile, the interaction between the head-mounted device and the user and the user-friendliness of the user interface can be improved.
In yet another embodiment of the present application, refer to fig. 12, which shows an architectural schematic diagram of a visual enhancement system provided in an embodiment of the present application. As shown in fig. 12, the vision enhancement system may include a head-mounted device 1101, a mobile device 1102, and an external sensor device 1103. The head-mounted device 1101 is the head-mounted device according to any one of the preceding embodiments, the mobile device 1102 is the mobile device according to any one of the preceding embodiments, and the external sensor device 1103 is the external sensor device according to any one of the preceding embodiments.
In this embodiment, the head-mounted device 1101 may include a user status monitoring service module 1, a scene analysis service module 2, an event receiver 3, a foreground application monitoring service module 4, and a display module 5 (i.e., white-filled module in fig. 12); the external sensor device 1103 may include a sensor monitoring service module 6 (i.e., a module filled with black in fig. 12); the mobile device 1102 may include a user status monitoring service module 7, a scene analysis service module 8, and a foreground application monitoring service module 9 (i.e., the modules filled with gray in FIG. 12).
Based on the above architecture example, on the head mounted device 1101: the user status monitoring service module 1 monitors relevant Sensor Data (Sensor Data) and detects certain events (such as posture information of the user) from the Sensor Data using signal processing techniques and/or machine learning techniques. Then, the service module 1 transmits the event to the scene analysis service module 2 on the head mounted device 1101. In addition, the Event receiver 3 may receive an external Sensor Event (Sensor Event) from the Sensor monitoring service module 6 within the external Sensor device 1103, i.e., the Sensor monitoring service module 6 runs on the external Sensor device 1103 (e.g., IoT device) to detect the external Sensor Event. The event receiver 3 may also receive user state events from the scenario analysis service module 8 on the mobile device 1102, and the event receiver 3 then forwards the events to the scenario analysis service module 2 on the head-mounted device 1101.
The foreground application monitoring service module 4 monitors the user's active applications running on the head-mounted device 1101 and sends foreground application events (or foreground task events) to the scene analysis service module 2, for example, when the user starts or stops a particular application.
The scene analysis service module 2 collects sensor data from internal sensors of the head-mounted device 1101, sensor events from external sensor devices, and foreground application events. The module then analyzes the data and determines the appropriate display control command, which is then sent to the display module 5.
The display control commands are predefined for different scenes. Here, the display control command includes, but is not limited to, turning on/off display, turning on/off part display, brightness control, resizing display, rearranging UI elements, and the like.
On the mobile device 1102: the user status monitoring service module 7 monitors real-time sensor data on the mobile device and detects events from the sensor data, for example, the mobile device changing to landscape mode or portrait mode; it then sends the sensor events to the scene analysis service module 8 on the mobile device. The foreground application monitoring service module 9 monitors the user's active applications running on the mobile device; for example, when the user starts or stops a particular application, it may send a foreground application event to the scene analysis service module 8. It is noted that the mobile device 1102 may also include an event receiver (not shown), which may likewise receive external sensor events from the sensor monitoring service module 6 within the external sensor device 1103.
Specifically, the scene analysis service module 8 collects sensor data from internal sensors of the mobile device and foreground application events from the foreground application monitoring service module 9. The scene analysis service module 8 then analyzes the data, determines the usage scene on the mobile device, and sends mobile device scene events to the event receiver 3 on the head-mounted device 1101 through a data cable, Wi-Fi, or another wireless communication link.
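Putting the two sides together, the head-mounted device's handling of a mobile device scene event might look like the sketch below; it mirrors the hypothetical JSON format of the sender sketch above, and the name-keyed policy table is likewise an assumption:

```python
import json

# Hypothetical policy for scene events arriving from the mobile device.
MOBILE_SCENE_POLICY = {
    "holding_mobile_device": "CLOSE_PART_DISPLAY",
    "putting_down_mobile_device": "OPEN_DISPLAY",
}

def handle_mobile_scene_event(raw_bytes, send_display_command):
    """Head-mounted side: event receiver 3 decodes the event, and scene analysis
    service module 2 selects and sends the display control command."""
    event = json.loads(raw_bytes.decode("utf-8"))
    if event.get("type") != "mobile_scene_event":
        return  # not a mobile device scene event; ignore
    command = MOBILE_SCENE_POLICY.get(event.get("scene"))
    if command is not None:
        send_display_command(command)

# Example: a "holding mobile device" scene event closes part of the headset display.
payload = json.dumps({"type": "mobile_scene_event",
                      "scene": "holding_mobile_device"}).encode("utf-8")
handle_mobile_scene_event(payload, lambda cmd: print(f"display <- {cmd}"))
```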
Application scenes that may be detected by the vision enhancement system of the embodiments of the present application include, but are not limited to, driving, walking, riding, standing, sitting, holding a mobile device in hand, putting down a mobile device, a geographic location, and the like. System responses include, but are not limited to, changing the display, the UI layout, the UI depth in AR, the brightness, the power consumption, and the like.
Thus, the user experience of the head-mounted device is significantly improved. More specifically, the embodiments of the present application enable the head-mounted device to automatically change its UI according to user behavior, reducing the workload of manually adjusting the UI. Head-mounted device products with better user experience and greater intelligence can thus be developed.
It should be noted that, in the present application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element identified by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments presented in this application can be combined arbitrarily, without conflict, to arrive at new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art can easily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Industrial applicability
In the embodiment of the application, the current application scene is identified; and controlling the display state of the head-mounted equipment based on the display control command corresponding to the current application scene. In this way, by determining the application scene of the head-mounted device and then according to the display control instruction corresponding to the application scene, the head-mounted device can automatically change the display state of the user interface according to the user behavior or the use state in different application scenes, and the workload of manual adjustment of the user can be reduced; meanwhile, the interaction effect between the head-mounted equipment and the user can be improved, and the user-friendly property of the user interface is improved.

Claims (27)

  1. A display control method of a head-mounted device, applied to the head-mounted device, the method comprising the following steps:
    identifying a current application scene;
    and controlling the display state of the head-mounted device based on the display control command corresponding to the current application scene.
  2. The method of claim 1, wherein the identifying a current application scene comprises:
    acquiring posture information of a user;
    comparing the posture information with a predefined posture;
    and if the posture information matches the predefined posture, performing application scene recognition according to the posture information to determine the current application scene.
  3. The method of claim 2, wherein the acquiring of the posture information of the user comprises:
    acquiring user sensor data through a user state monitoring service module of the head-mounted device;
    and analyzing the user sensor data to determine the posture information of the user.
  4. The method of claim 2, wherein the predefined posture comprises at least one preset posture, the method further comprising:
    and if a preset posture matching the posture information is found in the predefined posture, determining that the posture information matches the predefined posture.
  5. The method of claim 1, wherein the identifying a current application scene comprises:
    acquiring a foreground application event;
    searching for the foreground application event in a predefined foreground application event list;
    and if the foreground application event matches the predefined foreground application event list, performing application scene recognition according to the foreground application event to determine the current application scene.
  6. The method of claim 5, wherein the acquiring a foreground application event comprises:
    and acquiring the foreground application event through a foreground application monitoring service module of the head-mounted device.
  7. The method of claim 5, wherein the predefined foreground application event list includes at least one preset foreground application event, the method further comprising:
    and if a preset foreground application event matching the foreground application event is found in the predefined foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
  8. The method of claim 1, wherein the identifying a current application scene comprises:
    acquiring an external sensor event;
    and performing application scene recognition according to the external sensor event to determine the current application scene.
  9. The method of claim 8, wherein the acquiring an external sensor event comprises:
    establishing a communication connection between an external sensor device and an event receiver of the head-mounted device;
    and acquiring, through the event receiver, the external sensor event sent by the external sensor device.
  10. The method of claim 1, wherein the identifying a current application scene comprises:
    acquiring an application scene of the mobile device;
    determining the application scene of the mobile device as the current application scene.
  11. The method of claim 10, wherein the acquiring an application scene of the mobile device comprises:
    establishing a communication connection between a mobile device and an event receiver of the head-mounted device;
    and acquiring, through the event receiver, the application scene of the mobile device sent by the mobile device.
  12. The method of any of claims 1 to 11, wherein after the identifying a current application scene, the method further comprises:
    comparing the current application scene with a predefined scene;
    and if the current application scene matches the predefined scene, determining the display control command corresponding to the current application scene.
  13. The method of claim 12, wherein the predefined scene includes at least one of: a driving scene, a walking scene, a riding scene, a standing scene, a sitting scene, a holding mobile device scene, a putting down mobile device scene and a geographic position scene;
    the display control command includes at least one of: an open display command, a close display command, an open portion display command, a close portion display command, an adjust display area size command, and an adjust display element arrangement command.
  14. A display control method of a head-mounted device, applied to a mobile device, the method comprising the following steps:
    identifying an application scene of a mobile device;
    sending the application scene of the mobile device to a head-mounted device; the application scene of the mobile device is used for instructing the head-mounted device to determine a display control command and controlling the display state of the head-mounted device according to the display control command.
  15. The method of claim 14, wherein the identifying an application scene of a mobile device comprises:
    acquiring posture information of a user;
    comparing the posture information with a predefined posture;
    and if the posture information matches the predefined posture, performing application scene recognition according to the posture information to determine the application scene of the mobile device.
  16. The method of claim 15, wherein the acquiring of the posture information of the user comprises:
    acquiring user sensor data through a user state monitoring service module of the mobile device;
    and analyzing the user sensor data to determine the posture information of the user.
  17. The method of claim 15, wherein the predefined posture comprises at least one preset posture, the method further comprising:
    and if a preset posture matching the posture information is found in the predefined posture, determining that the posture information matches the predefined posture.
  18. The method of claim 14, wherein the identifying an application scene of a mobile device comprises:
    acquiring a foreground application event;
    searching for the foreground application event in a predefined foreground application event list;
    and if the foreground application event matches the predefined foreground application event list, performing application scene recognition according to the foreground application event to determine the application scene of the mobile device.
  19. The method of claim 18, wherein the acquiring a foreground application event comprises:
    and acquiring the foreground application event through a foreground application monitoring service module of the mobile device.
  20. The method of claim 18, wherein the predefined foreground application event list includes at least one preset foreground application event, the method further comprising:
    and if a preset foreground application event matching the foreground application event is found in the predefined foreground application event list, determining that the foreground application event matches the predefined foreground application event list.
  21. The method of claim 14, wherein the identifying an application scene of a mobile device comprises:
    acquiring an external sensor event;
    and performing application scene recognition according to the external sensor event to determine the application scene of the mobile device.
  22. The method of claim 21, wherein the acquiring an external sensor event comprises:
    establishing a communication connection between an external sensor device and an event receiver of the mobile device;
    and acquiring, through the event receiver, the external sensor event sent by the external sensor device.
  23. A head-mounted device, wherein the head-mounted device comprises a scene analysis service module, a sending module, and a display module;
    the scene analysis service module is configured to identify a current application scene;
    the sending module is configured to send the display control command corresponding to the current application scene from the scene analysis service module to the display module;
    the display module is configured to control a display state according to the display control command.
  24. A head-mounted device, wherein the head-mounted device comprises a first memory and a first processor;
    the first memory for storing a computer program operable on the first processor;
    the first processor, when executing the computer program, is configured to perform the method of any of claims 1 to 13.
  25. A mobile device, wherein the mobile device comprises a scene analysis service module and a sending module;
    the scene analysis service module is configured to identify an application scene of the mobile device;
    the sending module is configured to send the application scene of the mobile device to a head-mounted device; the application scene of the mobile device is used for instructing the head-mounted device to determine a display control command and controlling the display state of the head-mounted device according to the display control command.
  26. A mobile device, wherein the mobile device comprises a second memory and a second processor;
    the second memory for storing a computer program operable on the second processor;
    the second processor, when running the computer program, is configured to perform the method of any of claims 14 to 22.
  27. A computer storage medium, wherein the computer storage medium stores a computer program which, when executed by a first processor, implements the method of any of claims 1 to 13, or which, when executed by a second processor, implements the method of any of claims 14 to 22.
CN202180013084.0A 2020-02-24 2021-02-20 Display control method and device for head-mounted device, and computer storage medium Pending CN115053203A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202062980915P 2020-02-24 2020-02-24
US62/980,915 2020-02-24
PCT/CN2021/077112 WO2021169881A1 (en) 2020-02-24 2021-02-20 Display control method and device for head-mounted device, and computer storage medium

Publications (1)

Publication Number Publication Date
CN115053203A true CN115053203A (en) 2022-09-13

Family

ID=77490680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180013084.0A Pending CN115053203A (en) 2020-02-24 2021-02-20 Display control method and device for head-mounted device, and computer storage medium

Country Status (2)

Country Link
CN (1) CN115053203A (en)
WO (1) WO2021169881A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103838536A (en) * 2012-11-27 2014-06-04 联想(北京)有限公司 Displaying mode switching method, electronic equipment control method and electronic equipment
CN105324738A (en) * 2013-06-07 2016-02-10 索尼电脑娱乐公司 Switching mode of operation in a head mounted display
CN106572417A (en) * 2016-10-27 2017-04-19 腾讯科技(深圳)有限公司 Sound effect control method and sound effect control device
CN107765953A (en) * 2017-11-08 2018-03-06 网易(杭州)网络有限公司 Methods of exhibiting, device, processor and the head-mounted display apparatus of information
CN107820599A (en) * 2016-12-09 2018-03-20 深圳市柔宇科技有限公司 The method of adjustment of user interface, adjustment system and wear display device
US20180121047A1 (en) * 2016-11-03 2018-05-03 Microsoft Technology Licensing, Llc Graphical user interface list content density adjustment
CN108431667A (en) * 2015-12-28 2018-08-21 索尼公司 Information processing unit, information processing method and program
CN109478101A (en) * 2016-07-22 2019-03-15 谷歌有限责任公司 For virtual reality user Interface detection user movement range
US20190212823A1 (en) * 2018-01-08 2019-07-11 Facebook Technologies, Llc Methods, devices, and systems for displaying a user interface on a user and detecting touch gestures

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8780014B2 (en) * 2010-08-25 2014-07-15 Eastman Kodak Company Switchable head-mounted display
CN106200884A (en) * 2015-04-30 2016-12-07 成都理想境界科技有限公司 head-mounted display apparatus and control method thereof
CN109189225A (en) * 2018-08-30 2019-01-11 Oppo广东移动通信有限公司 Display interface method of adjustment, device, wearable device and storage medium

Also Published As

Publication number Publication date
WO2021169881A1 (en) 2021-09-02

Similar Documents

Publication Publication Date Title
US9035874B1 (en) Providing user input to a computing device with an eye closure
US9547418B2 (en) Electronic device and method of adjusting user interface thereof
KR101636723B1 (en) Mobile terminal and operation method thereof
KR20180096183A (en) Method for controlling an intelligent system that performs multilingual processing
CN103419729B (en) Vehicle part control method
KR101646356B1 (en) Apparatus and Method for Controlling of Vehicle Using Wearable Device
WO2015073880A1 (en) Head-tracking based selection technique for head mounted displays (hmd)
KR101641424B1 (en) Terminal and operating method thereof
KR20160062585A (en) Mobile terminal and control method for the mobile terminal
KR20160071263A (en) Mobile terminal and method for controlling the same
KR20160019760A (en) Mobile terminal and control method for the mobile terminal
KR20160008372A (en) Mobile terminal and control method for the mobile terminal
WO2022156598A1 (en) Bluetooth connection method and apparatus, and electronic device
EP2807534B1 (en) Methods and devices to determine a preferred electronic device
JP2018032440A (en) Controllable headset computer displays
KR20220115102A (en) Application sharing method, first electronic device and computer-readable storage medium
US10321008B2 (en) Presentation control device for controlling presentation corresponding to recognized target
US20130166789A1 (en) Controlling device setting based on device setting guide information
US9350918B1 (en) Gesture control for managing an image view display
CN115053203A (en) Display control method and device for head-mounted device, and computer storage medium
KR102086348B1 (en) Mobile terminal and method for controlling the same
CN116048243B (en) Display method and electronic equipment
KR101661974B1 (en) Mobile terminal and operation method thereof
KR20120057256A (en) Mobile terminal and operation method thereof
KR101612864B1 (en) Mobile terminal and method for controlling the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination