CN117130471A - Man-machine interaction method, electronic equipment and system - Google Patents

Man-machine interaction method, electronic equipment and system

Info

Publication number
CN117130471A
Authority
CN
China
Prior art keywords
screen
application
gesture
virtual space
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310378606.8A
Other languages
Chinese (zh)
Inventor
贺壮杰
包啸君
顾平平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310378606.8A
Publication of CN117130471A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text

Abstract

The application discloses a man-machine interaction method, an electronic device, and a system, and relates to the field of terminal technologies. A first application for managing the virtual space of a head-mounted display device is installed on the electronic device. After the first application is started, the electronic device receives a screen capture gesture, a screen recording gesture, or a return-to-previous-level gesture on the display interface of the first application. Because the first application runs in the foreground, system gestures such as the screen capture gesture, the screen recording gesture, or the return-to-previous-level gesture act on the virtual space of the head-mounted display device. Therefore, while viewing the picture in the virtual space, the user only needs to make a gesture on the touch screen of the electronic device to take a screenshot of, record, or return to the previous level of the display interface of the virtual space of the head-mounted display device. The user's line of sight does not need to leave the virtual space, which avoids the line of sight jumping between the virtual space and the screen of the electronic device. Moreover, the gesture operation is convenient and quick, the operation cost for the user is low, and the experience is good.

Description

Man-machine interaction method, electronic equipment and system
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a human-computer interaction method, an electronic device, and a system.
Background
With the development of computer graphics technology, extended reality (XR) technologies such as augmented reality (AR), virtual reality (VR), and mixed reality (MR) are gradually being applied in people's lives. In the man-machine interaction mode of existing XR devices, an electronic device such as a mobile phone is usually used as a remote controller. How to make it easier to operate an XR device through an electronic device is a problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a man-machine interaction method, an electronic device, and a system, which can provide a convenient man-machine interaction mode, make it easy for a user to operate the virtual space of an XR device through an electronic device such as a mobile phone, and improve the user experience.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
In a first aspect, a man-machine interaction method is provided. The method is applied to an electronic device connected to a head-mounted display device, and includes: after the electronic device starts a first application, receiving a first gesture of a user on the display interface of the first application, where the first application is used for managing the virtual space of the head-mounted display device, and the first gesture includes a screen capture gesture, a screen recording gesture, or a return-to-previous-level gesture. Because the first application runs in the foreground, system gestures such as the screen capture gesture, the screen recording gesture, or the return-to-previous-level gesture act on the head-mounted display device. In response to the first gesture, the electronic device performs the action corresponding to the first gesture with respect to the virtual space of the head-mounted display device.
When the first gesture is the screen capture gesture, performing the action corresponding to the first gesture for the display interface of the virtual space includes: taking a screenshot of the display interface of the virtual space. When the first gesture is the screen recording gesture, performing the action corresponding to the first gesture for the display interface of the virtual space includes: recording the display interface of the virtual space. When the first gesture is the return-to-previous-level gesture, performing the action corresponding to the first gesture for the display interface of the virtual space includes: causing the virtual space to display the previous-level display interface of the current display interface.
In this method, the user is supported in performing a system gesture (a screen capture gesture, a screen recording gesture, or a return-to-previous-level gesture) on the touch screen of the electronic device to take a screenshot of, record, or return to the previous level of the display interface of the virtual space of the head-mounted display device. Therefore, while viewing the picture in the virtual space, the user only needs to make a system gesture on the touch screen of the electronic device to perform operations such as screen capture, screen recording, and returning to the previous level on the display interface of the virtual space. The user's line of sight does not need to leave the virtual space, which avoids the line of sight jumping between the virtual space and the screen of the electronic device. Moreover, the gesture operation is convenient and quick, the operation cost for the user is low, and the experience is good.
With reference to the first aspect, in one implementation, if the first application is closed or moved to the background, the electronic device stops displaying the display interface of the first application and displays the application interface of a second application. The second application may be, for example, a desktop application, a system application, or a third-party application. The electronic device receives a first gesture of the user on the display interface of the second application; in response to the first gesture on the display interface of the second application, the electronic device performs the action corresponding to the first gesture with respect to the display interface of the second application. When the first gesture is the screen capture gesture, performing the action corresponding to the first gesture for the display interface of the second application includes: taking a screenshot of the display interface of the second application. When the first gesture is the screen recording gesture, performing the action corresponding to the first gesture for the display interface of the second application includes: recording the display interface of the second application. When the first gesture is the return-to-previous-level gesture, performing the action corresponding to the first gesture for the display interface of the second application includes: the electronic device displays the previous-level display interface of the display interface of the second application.
In this method, after the electronic device receives a system gesture, different cases are distinguished: either the electronic device is triggered to perform the action corresponding to the system gesture, or the head-mounted display device is triggered to perform the action corresponding to the system gesture. If the first application is not running in the foreground, the electronic device itself performs the action corresponding to the system gesture (that is, the action corresponding to the first gesture is performed for the display interface of the second application). Operations such as screen capture, screen recording, and returning to the previous level can thus also be performed on the electronic device through gestures.
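A minimal Kotlin sketch of this routing logic follows; the type names (SystemGesture, GestureTarget, GestureRouter) and the foreground check are illustrative assumptions, not APIs named in the patent.

```kotlin
// Hypothetical gesture-routing sketch: route system gestures either to the
// head-mounted display's virtual space or to the phone's own interface, depending on
// whether the first (virtual-space management) application is in the foreground.
enum class SystemGesture { SCREENSHOT, SCREEN_RECORD, BACK }

interface GestureTarget {
    fun screenshot()
    fun startScreenRecording()
    fun back()
}

class GestureRouter(
    private val virtualSpace: GestureTarget,   // acts on the HMD's virtual space
    private val localScreen: GestureTarget,    // acts on the phone's own display
    private val isFirstAppForeground: () -> Boolean
) {
    fun dispatch(gesture: SystemGesture) {
        // If the first application runs in the foreground, the gesture acts on
        // the virtual space; otherwise it acts on the phone's current interface.
        val target = if (isFirstAppForeground()) virtualSpace else localScreen
        when (gesture) {
            SystemGesture.SCREENSHOT -> target.screenshot()
            SystemGesture.SCREEN_RECORD -> target.startScreenRecording()
            SystemGesture.BACK -> target.back()
        }
    }
}
```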
With reference to the first aspect, in one embodiment, the screen capture gesture includes any one of: a multi-finger slide down on the screen, a multi-finger slide up on the screen, a single-knuckle double tap on the screen, or a double-knuckle double tap on the screen. The screen recording gesture includes any one of: a multi-finger slide down on the screen, a multi-finger slide up on the screen, a single-knuckle double tap on the screen, or a double-knuckle double tap on the screen; the screen recording gesture is different from the screen capture gesture. The return-to-previous-level gesture includes any one of: a single finger sliding rightward from the left edge of the screen, a single finger sliding leftward from the right edge of the screen, or fingers sliding inward from both edges of the screen.
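As a rough illustration of how such gestures might be distinguished from raw touch input, the sketch below classifies a completed touch sequence. The thresholds, the knuckle flag, and the particular gesture-to-action mapping (multi-finger slide down as screenshot, double-knuckle double tap as screen recording, edge swipe as back) are assumptions chosen for the example, not values specified by the patent; SystemGesture is the enum from the previous sketch.

```kotlin
// Hypothetical summary of a finished touch sequence on the phone's touch screen.
data class TouchSummary(
    val fingerCount: Int,        // number of fingers in the gesture
    val isKnuckle: Boolean,      // whether the touches were knuckle touches
    val tapCount: Int,           // 0 for slides, 2 for a double tap
    val startXFraction: Float,   // horizontal start position, 0.0 (left edge) .. 1.0 (right edge)
    val deltaX: Float,           // horizontal displacement in pixels
    val deltaY: Float            // vertical displacement in pixels (positive = downward)
)

fun classify(t: TouchSummary): SystemGesture? = when {
    // Example mapping: multi-finger slide down -> screenshot
    t.fingerCount >= 3 && !t.isKnuckle && t.deltaY > 200f -> SystemGesture.SCREENSHOT
    // Example mapping: double-knuckle double tap -> screen recording
    t.isKnuckle && t.fingerCount == 2 && t.tapCount == 2 -> SystemGesture.SCREEN_RECORD
    // Single finger swiping inward from the left or right edge -> return to previous level
    t.fingerCount == 1 && t.startXFraction < 0.05f && t.deltaX > 150f -> SystemGesture.BACK
    t.fingerCount == 1 && t.startXFraction > 0.95f && t.deltaX < -150f -> SystemGesture.BACK
    else -> null  // not a recognized system gesture
}
```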
With reference to the first aspect, in one implementation, the electronic device generates the display interface of the virtual space and sends it to the head-mounted display device. That is, the electronic device generates the display data and the display interface for the head-mounted display device and provides the display data to it; the head-mounted display device serves as a display of the electronic device.
In this implementation, the electronic device locally holds the information of the display interface of the head-mounted display device, and performs screen capture or screen recording, or generates the previous-level display interface of the virtual space, based on the locally available display interface of the head-mounted display device. In this way, when taking a screenshot or recording the screen, the electronic device does not need to pull a large amount of display-interface data of the virtual space from the head-mounted display device, which reduces the data interaction between the two devices and avoids being constrained by the communication rate between them.
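A minimal sketch of the local-capture idea, under the assumption that the phone keeps the most recently rendered virtual-space frame in memory; the Frame type, the VirtualSpaceRenderer, and its lastFrame field are hypothetical and only illustrate that no frame data has to be fetched from the headset.

```kotlin
import java.io.File

// Hypothetical frame type: raw RGBA pixels of the rendered virtual-space interface.
data class Frame(val width: Int, val height: Int, val rgba: ByteArray)

// Hypothetical renderer: the phone generates the virtual-space display data itself,
// so the latest frame is already available locally.
class VirtualSpaceRenderer {
    @Volatile var lastFrame: Frame? = null
}

class LocalVirtualSpaceCapture(private val renderer: VirtualSpaceRenderer) {
    // Screenshot: persist the locally held frame; nothing is requested from the HMD.
    fun screenshot(out: File): Boolean {
        val frame = renderer.lastFrame ?: return false
        out.writeBytes(frame.rgba)   // a real implementation would encode PNG/JPEG here
        return true
    }
}
```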
With reference to the first aspect, in one implementation, the electronic device triggers the head-mounted display device to take a screenshot, record the screen, or generate the previous-level display interface of the virtual space. Further, the head-mounted display device may send the screenshot image generated by the screen capture, or the screen-recording file generated by the screen recording, to the electronic device.
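A sketch of this alternative, device-side path as a simple command/response exchange; the message types and the Transport interface are assumptions introduced for illustration, since the patent does not specify a wire format.

```kotlin
// Hypothetical control messages exchanged between the phone and the headset.
sealed interface HmdCommand
object CaptureScreenshot : HmdCommand
object StartScreenRecording : HmdCommand
object NavigateBack : HmdCommand

sealed interface HmdResponse
data class ScreenshotReady(val pngBytes: ByteArray) : HmdResponse
data class RecordingReady(val videoBytes: ByteArray) : HmdResponse
object Done : HmdResponse

// Hypothetical transport over the wired or wireless link (USB, Wi-Fi, Bluetooth, ...).
interface Transport {
    fun send(command: HmdCommand)
    fun receive(): HmdResponse
}

// Phone-side helper: trigger the capture on the headset, then collect the resulting file.
fun captureOnHeadset(link: Transport): ByteArray? {
    link.send(CaptureScreenshot)
    return (link.receive() as? ScreenshotReady)?.pngBytes
}
```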
In a second aspect, a man-machine interaction method is provided. The method is applied to an electronic device connected to a head-mounted display device, and includes: after the electronic device starts a first application for managing the virtual space of the head-mounted display device, receiving a first operation of a user on the display interface of the first application; in response to the first operation, the electronic device triggers the virtual space of the head-mounted display device to display at least one of a screen capture virtual key, a screen recording virtual key, a return-to-previous-level virtual key, and a return-to-home virtual key. The screen capture virtual key is used for taking a screenshot of the display interface of the virtual space; the screen recording virtual key is used for recording the display interface of the virtual space; the return-to-previous-level virtual key is used for triggering the virtual space to display the previous-level display interface of the current display interface; and the return-to-home virtual key is used for triggering the virtual space to display the desktop interface.
In this method, in response to a first operation input by the user on the application interface of the first application on the electronic device, the virtual space of the head-mounted display device pops up virtual keys such as screen capture, screen recording, return to previous level, and return to home. These virtual keys are independent of the current display interface of the virtual space and can pop up on any display interface of the virtual space. Therefore, when the user needs to take a screenshot, record the screen, return to the previous level, or return to the home page, the user's line of sight does not need to leave the virtual space, and the user does not need to exit the application currently displayed in the virtual space; a simple gesture is enough. The operation is simple, convenient, and quick, and the user experience is good.
With reference to the second aspect, in one embodiment, the electronic device triggering the virtual space of the head-mounted display device to display at least one of the screen capture virtual key, the screen recording virtual key, the return-to-previous-level virtual key, and the return-to-home virtual key includes: the electronic device triggers the virtual space of the head-mounted display device to display a shortcut operation panel, on which at least one of the screen capture virtual key, the screen recording virtual key, the return-to-previous-level virtual key, and the return-to-home virtual key is arranged. A small data-model sketch of such a panel is given after the placement variations below.
The shape, size and position of the shortcut operation panel can be set according to specific situations.
In one embodiment, the shortcut operation panel is superimposed over an application interface of the second application displayed in the virtual space.
In another embodiment, the shortcut operation panel is displayed in a blank area of the virtual space display interface.
With reference to the second aspect, in one embodiment, at least one of the screen capturing virtual key, the screen recording virtual key, the return to superior virtual key, and the return to home virtual key is superimposed on an application interface of the second application displayed in the virtual space.
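A small data-model sketch of such a shortcut panel and its placement options, assuming the panel is composed as an overlay layer on the virtual-space interface; all class and field names here are illustrative, not taken from the patent.

```kotlin
// Hypothetical model of the shortcut operation panel shown in the virtual space.
enum class VirtualKey { SCREENSHOT, SCREEN_RECORD, BACK, HOME }

data class PanelPlacement(
    val overlayOnApp: Boolean,   // true: superimposed over the second application's interface
    val x: Float, val y: Float,  // position in the virtual-space display, implementation-defined
    val width: Float, val height: Float
)

data class ShortcutPanel(
    val keys: List<VirtualKey>,  // any subset of the four virtual keys
    val placement: PanelPlacement
)

// Example: a panel with all four keys, drawn over the current application interface.
val panel = ShortcutPanel(
    keys = listOf(VirtualKey.SCREENSHOT, VirtualKey.SCREEN_RECORD, VirtualKey.BACK, VirtualKey.HOME),
    placement = PanelPlacement(overlayOnApp = true, x = 0.1f, y = 0.8f, width = 0.8f, height = 0.1f)
)
```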
With reference to the second aspect, in one embodiment, the first operation includes: any one of a single finger tap, a multi-finger tap, a knuckle tap, or a single finger stroke.
With reference to the second aspect, in one embodiment, in response to the first operation, the electronic device triggering the virtual space of the head-mounted display device to display at least one of the screen capture virtual key, the screen recording virtual key, the return-to-previous-level virtual key, and the return-to-home virtual key includes: in response to the first operation, the electronic device generates first display data and sends the first display data to the head-mounted display device; the first display data is used by the head-mounted display device to display at least one of the screen capture virtual key, the screen recording virtual key, the return-to-previous-level virtual key, and the return-to-home virtual key in the virtual space.
In this implementation, the electronic device generates the display data and the display interface for the head-mounted display device and provides the display data to it; the head-mounted display device serves as a display of the electronic device.
With reference to the second aspect, in one embodiment, in response to the first operation, the electronic device triggering the virtual space of the head-mounted display device to display at least one of the screen capture virtual key, the screen recording virtual key, the return-to-previous-level virtual key, and the return-to-home virtual key includes: in response to the first operation, the electronic device sends a first event corresponding to the first operation to the head-mounted display device; the first event is used to trigger the head-mounted display device to generate first display data, and the first display data is used to display at least one of the screen capture virtual key, the screen recording virtual key, the return-to-previous-level virtual key, and the return-to-home virtual key in the virtual space.
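The two implementations above differ mainly in where the panel's display data is produced: on the phone (the first display data is generated and sent) or on the headset (only the first event is forwarded). A hedged sketch of that design choice follows, reusing the ShortcutPanel type from the earlier example; RenderedLayer, HmdLink, and both presenter classes are hypothetical.

```kotlin
// Hypothetical rendered overlay layer (e.g. an encoded image of the panel).
data class RenderedLayer(val bytes: ByteArray)

interface HmdLink {
    fun sendDisplayData(layer: RenderedLayer)   // path 1: phone renders, headset only displays
    fun sendEvent(eventName: String)            // path 2: headset renders after receiving the event
}

interface PanelPresenter { fun show(panel: ShortcutPanel) }

// Path 1: the phone generates the first display data itself and pushes it to the headset.
class PhoneRenderedPresenter(
    private val link: HmdLink,
    private val render: (ShortcutPanel) -> RenderedLayer
) : PanelPresenter {
    override fun show(panel: ShortcutPanel) = link.sendDisplayData(render(panel))
}

// Path 2: the phone only forwards the first event; the headset decides how to draw the panel.
class EventForwardingPresenter(private val link: HmdLink) : PanelPresenter {
    override fun show(panel: ShortcutPanel) = link.sendEvent("SHOW_SHORTCUT_PANEL")
}
```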
In a third aspect, an electronic device is provided, which has the functionality to implement the method of the first or second aspect. The functionality may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In a fourth aspect, an electronic device is provided, including: a processor, a touch screen, and a memory; the memory is configured to store computer-executable instructions that, when executed by the electronic device, cause the electronic device to perform the method of any one of the first or second aspects described above.
In a fifth aspect, there is provided an electronic device comprising: a processor; the processor is configured to, after being coupled to the memory and reading the instructions in the memory, perform the method according to any one of the first or second aspects described above in accordance with the instructions.
In a sixth aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of the first or second aspects above.
In a seventh aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the first or second aspects above.
In an eighth aspect, there is provided an apparatus (e.g. the apparatus may be a system-on-a-chip) comprising a processor for supporting an electronic device to implement the functions referred to in the first or second aspect above. In one possible design, the apparatus further includes a memory for storing program instructions and data necessary for the electronic device. When the device is a chip system, the device can be formed by a chip, and can also comprise the chip and other discrete devices.
For the technical effects brought about by any design of the third aspect to the eighth aspect, reference may be made to the technical effects brought about by the corresponding designs of the first aspect or the second aspect; details are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a system architecture to which the man-machine interaction method provided by an embodiment of the present application is applicable;
Fig. 2 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application;
Fig. 3A is a schematic diagram of the optical configuration of a head-mounted display device according to an embodiment of the present application;
Fig. 3B is a schematic diagram of the hardware structure of a head-mounted display device according to an embodiment of the present application;
Fig. 4A is a schematic diagram of an example scenario in which a user interacts with a head-mounted display device;
Fig. 4B is a schematic diagram of another example scenario in which a user interacts with a head-mounted display device;
Fig. 5 is a schematic diagram of an example scenario in which a screen capture gesture is received while an application interface of an AR application is displayed on a mobile phone, according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an example scenario in which a screen capture gesture is received while an application interface of an AR application is not displayed on a mobile phone, according to an embodiment of the present application;
Fig. 7 is a schematic diagram of an example scenario in which a screen recording gesture is received while an application interface of an AR application is displayed on a mobile phone, according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an example scenario in which a screen recording gesture is received while an application interface of an AR application is not displayed on a mobile phone, according to an embodiment of the present application;
Fig. 9 is a schematic diagram of an example scenario in which a return-to-previous-level gesture is received while an application interface of an AR application is displayed on a mobile phone, according to an embodiment of the present application;
Fig. 10 is a schematic diagram of an example scenario in which a return-to-previous-level gesture is received while an application interface of an AR application is not displayed on a mobile phone, according to an embodiment of the present application;
Fig. 11 is a schematic flowchart of a man-machine interaction method according to an embodiment of the present application;
Fig. 12 and Fig. 13 are schematic diagrams of the detailed flow of a man-machine interaction method according to an embodiment of the present application;
Fig. 14 is a schematic diagram of an example scenario in which a mobile phone receives a first operation, according to an embodiment of the present application;
Fig. 15A to Fig. 15D are schematic diagrams of example scenarios of man-machine interaction of a user in a virtual space, according to an embodiment of the present application;
Fig. 16A to Fig. 16C are schematic diagrams of examples of virtual space display interfaces according to embodiments of the present application;
Fig. 17 is a schematic flowchart of another man-machine interaction method according to an embodiment of the present application;
Fig. 18 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 19 is a schematic structural diagram of a system-on-chip according to an embodiment of the present application.
Detailed Description
In the description of embodiments of the present application, the terminology used in the embodiments below is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include plural forms such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, or two or more (including two). The term "and/or" is used to describe an association relationship of associated objects, meaning that there may be three relationships; for example, A and/or B may represent: A alone, A and B together, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless stated otherwise. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The man-machine interaction method provided by the embodiment of the application can be applied to the system shown in fig. 1. The system 10 may include: an electronic device 100 and a head mounted display device 200.
The electronic device 100 and the head-mounted display device 200 may be connected through a wired or wireless connection. The wired connection may include a wired connection that communicates through a universal serial bus (universal serial bus, USB) interface, a high definition multimedia interface (high definition multimedia interface, HDMI) interface, or the like. The wireless connection may include one or more of wireless connections that communicate via Bluetooth, wireless fidelity (wireless fidelity, Wi-Fi) direct (e.g., Wi-Fi P2P), Wi-Fi softAP, Wi-Fi LAN, radio frequency, or other technologies. The embodiments of the present application do not limit the connection manner between the two.
The electronic device 100 may be a mobile phone, a tablet computer, or another terminal device having a touch-sensitive surface or touch panel, such as a laptop computer (Laptop) or a desktop computer. The electronic device 100 may run a particular application program, such as a video application, a gaming application, a music application, a desktop application, or a mirror-casting application, to provide content for transmission to the head-mounted display device 200 for display.
The head-mounted display device 200 may be implemented as, for example, a helmet, glasses, or a headset that can be worn on the user's head. The head-mounted display device 200 displays images in a virtual space by using technologies such as AR, VR, and MR, so that the user can perceive a 3D scene, providing the user with an AR/VR/MR experience. The 3D scene may include 3D images, 3D video, audio, and the like. It should be understood that the virtual space shown in Fig. 1 is drawn as a plane; in actual use, the virtual space may be a curved surface with curvature.
The head-mounted display device 200 may be worn on the user's head and serves as an extended display of the electronic device 100. The electronic device 100 provides display data for the head-mounted display device 200.
The electronic device 100 may also act as an input device that receives user operations such as taps and slides, and may cast a ray into the AR/VR/MR virtual space (AR space, VR space, or MR space) to simulate a mouse action, facilitating the user's control over the content displayed by the head-mounted display device 200.
When the electronic device 100 is used as an input device, user input may be received through the various sensors configured on it, such as a touch sensor, an acceleration sensor, a gyroscope sensor, a magnetic sensor, and a pressure sensor. The acceleration sensor and the gyroscope sensor may be used to detect the user moving the electronic device 100, which may be used to change the direction of the ray; the touch sensor, the pressure sensor, and the like may be used to detect touch operations of the user on a touch panel such as a touch screen, for example, a sliding operation, a clicking operation, a short-press operation, or a long-press operation.
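As an illustration of how motion-sensor input could steer the ray cast into the virtual space, the sketch below rotates a forward vector by the phone's orientation. The quaternion source, the assumed forward axis, and the Vec3/Quat types are assumptions introduced for this example; the patent only states that the acceleration and gyroscope sensors are used to change the ray direction.

```kotlin
import kotlin.math.sqrt

// Simple 3D vector and unit quaternion used only for this illustration.
data class Vec3(val x: Float, val y: Float, val z: Float)
data class Quat(val w: Float, val x: Float, val y: Float, val z: Float)

// Rotate vector v by unit quaternion q, using the expanded form
// v' = v + w*t + cross(q_v, t), where t = 2 * cross(q_v, v).
fun rotate(q: Quat, v: Vec3): Vec3 {
    val (w, x, y, z) = q
    val tx = 2f * (y * v.z - z * v.y)
    val ty = 2f * (z * v.x - x * v.z)
    val tz = 2f * (x * v.y - y * v.x)
    return Vec3(
        v.x + w * tx + (y * tz - z * ty),
        v.y + w * ty + (z * tx - x * tz),
        v.z + w * tz + (x * ty - y * tx)
    )
}

// The ray cast into the virtual space follows the phone's orientation, which a real
// implementation would obtain by fusing accelerometer and gyroscope data.
fun rayDirection(phoneOrientation: Quat): Vec3 {
    val forward = Vec3(0f, 0f, -1f)          // assumed "forward" axis of the phone
    val d = rotate(phoneOrientation, forward)
    val len = sqrt(d.x * d.x + d.y * d.y + d.z * d.z)
    return Vec3(d.x / len, d.y / len, d.z / len)
}
```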
The head-mounted display device 200 may be configured with some physical keys to receive user input, such as keys for switching screens, keys for adjusting screen brightness, and keys for switching between the spatial mode and the mirror mode. These user inputs may be transmitted to the electronic device 100 through the wired or wireless communication connection between the head-mounted display device 200 and the electronic device 100, which in turn triggers the electronic device 100 to respond. For example, in response to a user input for switching from the spatial mode to the mirror mode, the electronic device 100 may stop transmitting display data of the spatial mode to the head-mounted display device 200 and start transmitting display data of the mirror mode. The display data of the mirror mode is mainly a screen stream of the electronic device 100 and may be provided by a mirror-casting application on the electronic device 100. The display data of the spatial mode may be provided by a particular application on the electronic device 100, such as a video application, a gaming application, a music application, or a desktop application.
After the user sees the image displayed by the head-mounted display device 200, the user can control the display content in the virtual space and the operation state of the head-mounted display device 200, such as the on-off state, the screen brightness, etc., by inputting the user operation at the electronic device 100 or the head-mounted display device 200.
Fig. 2 schematically illustrates a hardware architecture diagram of the electronic device 100 according to an embodiment of the present application. As shown in fig. 2, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a sensor module 180, a camera 191, and a display 192. The sensor module 180 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
The processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display 192. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including Wireless Local Area Network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), bluetooth (BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near field wireless communication technology (NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 192, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 192 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 192 is used to display images, videos, and the like. The display 192 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, quantum dot light-emitting diodes (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 192, where N is a positive integer greater than 1.
The pressure sensor is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor may be disposed on the display 192. There are many kinds of pressure sensors, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force is applied to the pressure sensor, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display 192, the electronic device 100 detects the intensity of the touch operation through the pressure sensor. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor.
Touch sensors, also known as "touch panels". The touch sensor may be disposed on the display 192, and the touch sensor and the display 192 form a touch screen, which is also referred to as a "touch screen". The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 192.
The electronic device 100 may detect gestures made by the user on the display 192 through the pressure sensor or the touch sensor, such as a swipe-up gesture, a swipe-down gesture, a swipe-left gesture, a swipe-right gesture, a tap gesture, a long-press gesture, and the like.
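A rough sketch of how swipe, tap, and long-press gestures could be told apart from the start and end samples of a touch; the thresholds and the TouchPoint type are illustrative assumptions, not values from the patent.

```kotlin
import kotlin.math.abs

// Hypothetical touch sample: position in pixels and timestamp in milliseconds.
data class TouchPoint(val x: Float, val y: Float, val timeMs: Long)

enum class BasicGesture { SWIPE_UP, SWIPE_DOWN, SWIPE_LEFT, SWIPE_RIGHT, TAP, LONG_PRESS }

fun recognize(down: TouchPoint, up: TouchPoint): BasicGesture {
    val dx = up.x - down.x
    val dy = up.y - down.y
    val duration = up.timeMs - down.timeMs
    val moved = abs(dx) > 50f || abs(dy) > 50f          // illustrative movement threshold
    return when {
        !moved && duration < 300 -> BasicGesture.TAP
        !moved -> BasicGesture.LONG_PRESS
        abs(dx) > abs(dy) && dx > 0 -> BasicGesture.SWIPE_RIGHT
        abs(dx) > abs(dy) -> BasicGesture.SWIPE_LEFT
        dy > 0 -> BasicGesture.SWIPE_DOWN               // screen y grows downward
        else -> BasicGesture.SWIPE_UP
    }
}
```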
The electronic device 100 may implement photographing functions through an ISP, a camera 191, a video codec, a GPU, a display 192, an application processor, and the like.
The ISP is used to process the data fed back by the camera 191. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes.
The camera 191 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. In some embodiments, the electronic device 100 may include 1 or N cameras 191, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an application processor, and the like. Such as music playing, recording, etc. Wherein the audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
Internal memory 121 may be used to store application programs, including instructions, for one or more applications. The application, when executed by the processor 110, causes the electronic device 100 to generate content for presentation to a user. By way of example, the applications may include an application for managing the head mounted display device 200, a gaming application, a conferencing application, a video application, a desktop application, or other applications, and the like.
The GPU may be used to perform mathematical and geometric operations from data acquired from the processor 110 (e.g., application-provided data), render images using computer graphics techniques, computer simulation techniques, etc., and determine images for display on the head mounted display device 200. In some embodiments, the GPU may add correction or pre-distortion to the rendering process of the image to compensate or correct for distortion caused by the optical components of the head mounted display device 200.
In an embodiment of the present application, the electronic device 100 may send the image obtained after the GPU processing to the head-mounted display device 200 through the mobile communication module 150, the wireless communication module 160 or the wired interface.
The configuration illustrated in fig. 2 does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Fig. 3A schematically illustrates an optical configuration of the head-mounted display device 200 provided by an embodiment of the present application. As shown in Fig. 3A, the head-mounted display device 200 may include: a display screen 201, an optical assembly 202, a display screen 203, and an optical assembly 204. The display screen 201 and the display screen 203 may be one integral piece, i.e., the left and right portions of a one-piece screen. The optical assembly 202 and the optical assembly 204 are the same in material, structure, and the like. Each of the optical assemblies 202 and 204 may consist of one or more lenses, which may include one or more of convex lenses, Fresnel lenses, or other types of lenses.
The display screen 201 and the optical assembly 202 correspond to the left eye of the user. When the user wears the head mounted display device 200, the image a1 may be displayed on the display screen 201. The light emitted when the display screen 201 displays the image a1 will form a virtual image a1' of the image a1 in front of the left eye of the user after transmission through the optical assembly 202.
The display screen 203 and the optical component 204 correspond to the right eye of the user. The display screen 203 may display an image a2 when the user wears the head-mounted display device. The light emitted when the display screen 203 displays the image a2 will form a virtual image a2 'of the image a2 in front of the user's right eye after transmission through the optical assembly 204.
The image a1 and the image a2 are two images having parallax for the same object such as the object a. Parallax refers to the difference in position of an object in a field of view when the same object is viewed from two points at a distance. The virtual image a1 'and the virtual image a2' are located on the same plane, which may be referred to as a virtual image plane.
When wearing the head-mounted display device 200, the user's left eye focuses on the virtual image a1' and the right eye focuses on the virtual image a2'. The virtual images a1' and a2' are then fused in the user's brain into a complete, stereoscopic image, a process called convergence. During convergence, the intersection of the two lines of sight is perceived by the user as the actual location of the object depicted by images a1 and a2. Owing to the convergence process, the user can perceive the 3D scene provided by the head-mounted display device 200.
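To make the convergence geometry concrete, the standard similar-triangles relation for stereoscopic displays can be written as below. This is a textbook illustration added here, not a formula stated in the patent; $e$ denotes the interocular distance, $D$ the distance from the eyes to the virtual image plane, $p$ the parallax between a1' and a2' on that plane, and $Z$ the perceived distance of the fused object.

```latex
% Perceived depth from on-plane parallax (uncrossed parallax, p < e):
% the eye baseline e and the parallax p form similar triangles, so
\[
  \frac{p}{e} = \frac{Z - D}{Z}
  \quad\Rightarrow\quad
  Z = \frac{e\,D}{e - p}.
\]
% For p = 0 the fused object appears on the virtual image plane (Z = D);
% as p approaches e, the perceived depth Z grows without bound.
```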
Fig. 3B illustrates a hardware architecture of the head-mounted display device 200 provided by an embodiment of the present application. As shown in fig. 3B, the head mounted display device 200 may include: processor 210, memory 211, communication module 212, sensor system 213, camera 214, display device 215, audio device 216. The above components may be coupled and communicate with each other.
It will be appreciated that the structure shown in fig. 3B does not constitute a specific limitation on the head mounted display device 200. In other embodiments of the application, head mounted display device 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. For example, the head mounted display device 200 may also include physical keys such as a switch key, a volume key, a screen brightness adjustment key, and various types of interfaces, such as a USB interface, etc. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units such as, for example: the processor may include an AP, a modem processor, a GPU, an ISP, a controller, a video codec, a DSP, a baseband processor, and/or an NPU, etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish instruction fetching and instruction execution control, so that each component executes corresponding functions, such as man-machine interaction, motion tracking/prediction, rendering display, audio processing and the like.
Memory 211 may store some executable instructions. The memory 211 may include a stored program area and a stored data area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. The storage data area may store data created during use of the head mounted display device 200 (such as audio data, etc.), and the like. In addition, the memory 211 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 210 performs various functional applications and data processing of the head mounted display device 200 by executing instructions stored in the memory 211 and/or instructions stored in a memory provided in the processor.
The communication module 212 may include a mobile communication module and a wireless communication module. The mobile communication module may provide a solution including wireless communication of 2G/3G/4G/5G or the like applied to the head-mounted display device 200. The wireless communication module may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied on the head mounted display device 200. The wireless communication module may be one or more devices that integrate at least one communication processing module.
The sensor system 213 may include an accelerometer, a compass, a gyroscope, a magnetometer, or other sensors for detecting motion. The sensor system 213 is used to collect corresponding data; for example, the acceleration sensor collects the acceleration of the head-mounted display device 200, and the gyroscope sensor collects the movement speed of the head-mounted display device 200. The data collected by the sensor system 213 may reflect the head movement of the user wearing the head-mounted display device 200. In some embodiments, the sensor system 213 may be an inertial measurement unit (inertial measurement unit, IMU) disposed within the head-mounted display device 200. In some embodiments, the head-mounted display device 200 may send the data acquired by the sensor system to the processor 210 for analysis. The user may trigger the head-mounted display device 200 to perform a corresponding function by inputting a head movement operation on the head-mounted display device 200. The movement of the user's head may include: whether the head rotates, the direction of rotation, and the like.
The sensor system 213 may also include optical sensors for tracking the user's eye position and capturing eye movement data in conjunction with the camera 214. The eye movement data may be used, for example, to determine the distance between the eyes of the user, the 3D position of each eye relative to the head mounted display device 200, the magnitude and gaze direction of the twist and rotation (i.e., turning, pitching and panning) of each eye, and the like. In one example, infrared light is emitted within the head mounted display device 200 and reflected from each eye, the reflected light is detected by the camera 214 or optical sensor, and the detected data is transmitted to the processor 210 such that the processor 210 analyzes the position, pupil diameter, movement state, etc. of the user's eyes from the changes in the infrared light reflected from each eye.
The camera 214 may be used to capture still images or video. The still image or video may be an externally facing image or video of the user's surroundings, or an internally facing image or video. The camera 214 may track the movement of one or both of the user's eyes. The cameras 214 include, but are not limited to, conventional color cameras (RGB cameras), depth cameras (RGB depth cameras), dynamic vision sensor (dynamic vision sensor, DVS) cameras, and the like. The depth camera can acquire depth information of a photographed object. In some embodiments, the camera 214 may be used to capture an image of the user's eye and send the image to the processor 210 for analysis. The processor 210 may determine the state of the user's eyes based on the image acquired by the camera 214, and perform a corresponding function based on the state of the user's eyes. That is, the user may trigger the head mounted display device 200 to perform a corresponding function by inputting an eye movement operation on the head mounted display device 200. The state of the user's eyes may include: whether the eyes rotate, the direction of rotation, whether the eyes have not rotated for a long time, the angle at which they look outward, and the like.
The audio device 216 is used to enable the acquisition and output of audio. Audio device 216 may include, but is not limited to: microphones, speakers, headphones, etc.
The head mounted display device 200 presents or displays images through a GPU, the display device 215, an application processor, and the like.
The GPU is a microprocessor for image processing, and is connected to the display device 215 and the application processor. The processor 210 may include one or more GPUs that execute program instructions to generate or change display information. The GPU is used to perform mathematical and geometric calculations on data obtained from the processor 210 and to render images using computer graphics techniques, computer simulation techniques, and the like, to provide content for display on the display device 215. The GPU is also used to add correction or pre-distortion to the rendering process of the image to compensate or correct for distortion caused by optical components in the display device 215. The GPU may also adjust the content provided to the display device 215 based on data from the sensor system 213. For example, the GPU may add depth information to the content provided to the display device 215 based on the 3D position of the user's eyes, pupil distance, etc.
The display device 215 may include: one or more display screens, and one or more optical components. Here, the structures of the display screen and the optical component and the positional relationship between them can be understood with reference to the related description of fig. 3A. The display screen may include a display panel that may be used to display images to present a stereoscopic virtual scene to the user. The display panel may be an LCD, OLED, AMOLED, FLED, Mini-LED, Micro-LED, Micro-OLED, QLED, or the like. The optical assembly may be used to direct light from the display screen to the exit pupil for perception by the user. In some embodiments, one or more optical elements (e.g., lenses) in the optical assembly may have one or more coatings, such as an anti-reflective coating. The magnification of the image light by the optical assembly allows the display screen to be physically smaller, lighter, and consume less power. In addition, the magnification of the image light can increase the field of view of the content displayed by the display screen. For example, the optical assembly may cause the field of view of the content displayed by the display screen to be the full field of view of the user.
In an embodiment of the present application, the display screen in the head-mounted display device 200 may be used to display data transmitted from the electronic device 100, and provide an XR experience for the user.
The head mounted display device 200 may display images in a virtual space such that the user perceives a 3D scene, providing an AR/VR/MR experience for the user. By way of example, as shown in FIG. 4A, the virtual space of the head mounted display device 200 displays a desktop interface on which desktop icons of various applications may be displayed, such as desktop icons of the "contacts," "calendar," "weather," "video," "time," "shopping," "game," "short video," and "music" applications. The desktop interface is not limited to application programs; objects such as files and folders may also be included on it. In some virtual spaces, a user may operate on a display object of the virtual space by moving rays in the virtual space.
The electronic device 100 may be used as an input device of the head-mounted display device 200, for example, a mobile phone is used as an input device of AR glasses, and a user may implement man-machine interaction with the head-mounted display device 200 by operating on the electronic device 100. Take the electronic device 100 being a mobile phone and the head-mounted display device 200 being AR glasses as an example. Referring to fig. 4A, an AR application for managing a virtual space of the AR glasses 200 is installed on the mobile phone 100. In response to a click operation of an application icon of the AR application by the user, the mobile phone 100 starts the AR application, and displays the touch panel interface 101. After the AR glasses 200 display the virtual space, the user may implement human-computer interaction with the virtual space of the AR glasses 200 using the AR application. Optionally, the user may control the pointing position of the ray in the virtual space by moving the position and orientation of the mobile phone 100, so as to implement man-machine interaction with the display objects in the virtual space. Optionally, various virtual keys are provided on the touch pad interface 101, and the user can implement man-machine interaction with a display object in the virtual space through operation of the virtual keys. Illustratively, the touch pad interface 101 includes a touch area 102 within which the user may slide up, down, left, or right (perform a slide gesture), moving the ray within the virtual space up, down, left, or right, respectively; operations such as clicking and long pressing (performing click gestures and long-press gestures) can also be performed in the touch area 102, so as to implement man-machine interaction with the display objects in the virtual space.
For example, referring to FIG. 4A, a user may slide up, down, left, or right within the touch area 102, moving a ray within the virtual space such that the ray moves to a desktop icon of a short video application on the desktop interface; the user may then perform a click operation within the touch area 102, selecting a desktop icon for the short video application. In response to the click operation, the head-mounted display device 200 displays an application interface of the short video application within the virtual space.
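The mapping from touchpad input to ray movement described above can be realized in many ways. The following Kotlin sketch is a minimal, hypothetical illustration only; the class name RayController, the sensitivity value, and the hit-test callback are assumptions for illustration and are not part of the disclosure.

```kotlin
// Hypothetical sketch: map touch-area gestures to ray movement in the virtual space.
// Sensitivity values and all names are assumptions, not taken from the disclosure.
class RayController(private val sensitivityDegPerPx: Float = 0.1f) {
    var yawDeg = 0f      // left/right rotation of the ray
        private set
    var pitchDeg = 0f    // up/down rotation of the ray
        private set

    // Called while the finger slides inside touch area 102, with the delta since the last event.
    fun onSwipeDelta(dxPx: Float, dyPx: Float) {
        yawDeg += dxPx * sensitivityDegPerPx
        pitchDeg -= dyPx * sensitivityDegPerPx      // sliding up moves the ray up
        pitchDeg = pitchDeg.coerceIn(-80f, 80f)     // keep the ray in a comfortable range
    }

    // Called on a click gesture: select whatever display object the ray currently hits,
    // e.g. the desktop icon of the short video application.
    fun onClick(hitTest: (yaw: Float, pitch: Float) -> String?): String? =
        hitTest(yawDeg, pitchDeg)
}
```

In this sketch, each slide gesture in the touch area only updates the ray direction, and the click gesture selects the object currently hit by the ray, mirroring the interaction described above.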
Generally, inputting gestures on the electronic device 100 to achieve man-machine interaction with the virtual space of the head-mounted display device 200 is convenient for the user and provides a good use experience. For example, in the implementations described above, a swipe gesture, a tap gesture, a long-press gesture, etc., are performed within the touch area 102. In the man-machine interaction process, besides the above operations, the user also needs to perform operations such as screen capturing, screen recording, and returning to the previous level in the virtual space of the head-mounted display device 200.
Electronic devices with touch screens, such as mobile phones, support rich gesture operations. For example, a multi-finger sliding gesture performed on the touch screen captures a screenshot of the mobile phone display interface; a finger joint (knuckle) double-click gesture performed on the touch screen also captures a screenshot of the mobile phone display interface; a left sliding gesture performed on the touch screen returns the mobile phone display interface to the previous level.
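For illustration only, a minimal Kotlin sketch of how such system gestures might be distinguished from raw touch data is given below. The thresholds, the GestureSample structure, and the gesture set are assumptions, not the classifier actually used by the electronic device.

```kotlin
// Hypothetical gesture classification sketch; all thresholds are illustrative assumptions.
enum class SystemGesture { SCREENSHOT, SCREEN_RECORD, BACK, NONE }

data class GestureSample(
    val pointerCount: Int,        // number of fingers on the screen
    val dx: Float, val dy: Float, // total movement in pixels (positive dy = downward)
    val startX: Float             // where the gesture started, for edge swipes
)

fun classify(s: GestureSample): SystemGesture = when {
    // multi-finger slide down -> screen capturing gesture
    s.pointerCount >= 3 && s.dy > 300 && kotlin.math.abs(s.dx) < 150 -> SystemGesture.SCREENSHOT
    // multi-finger slide up -> screen recording gesture
    s.pointerCount >= 3 && s.dy < -300 && kotlin.math.abs(s.dx) < 150 -> SystemGesture.SCREEN_RECORD
    // single finger sliding right from the left edge -> return-to-previous-level gesture
    s.pointerCount == 1 && s.startX < 50 && s.dx > 200 -> SystemGesture.BACK
    else -> SystemGesture.NONE
}
```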
In the prior art, in order to avoid conflicts with the system gestures of the electronic device 100 (such as a multi-finger slide-down gesture for screen capturing, a finger joint double-click gesture for screen capturing, a left sliding gesture for returning to the previous level, etc.), when a user needs to perform screen capturing, screen recording, or return-to-previous-level operations in the virtual space of the head-mounted display device 200, performing these operations with gestures on the electronic device 100 is not supported; instead, the user performs them with virtual keys or the like.
Illustratively, in one implementation, the user performs operations such as screen capturing, screen recording, and returning to the previous level within the virtual space of the head-mounted display device 200 by clicking virtual keys on the electronic device 100. Illustratively, referring to fig. 4B, the touch panel interface 101 of the electronic device 100 includes a "home" button 103, and the user may click the "home" button 103 to switch the display interface of the virtual space to the desktop interface. The touch panel interface 101 includes a "back" button 104, and the user may click the "back" button 104 to return the display interface of the virtual space to the previous level. The touch panel interface 101 includes a "screen shot" button 105, and the user may click the "screen shot" button 105 to capture a screenshot of the display interface of the virtual space. The touch panel interface 101 includes a "screen recording" button 106, and the user may click the "screen recording" button 106 to record the screen of the display interface of the virtual space. When using the virtual keys on the electronic device 100 to perform screen capturing, screen recording, returning to the previous level, and the like, the user needs to move their line of sight from the interior of the head-mounted display device 200 to the touch pad interface of the electronic device 100 to find the corresponding virtual key. During this line-of-sight jump, the eyes need to adjust their focal length to see the two devices clearly, which causes considerable discomfort and a poor user experience.
The embodiment of the application provides a man-machine interaction method, which supports a user to execute system gestures on a touch screen of electronic equipment when the electronic equipment is used as input equipment of head-mounted display equipment, and realizes operations such as screen capturing, screen recording, returning to the upper level and the like on a display interface of a virtual space of the head-mounted display equipment. The operation flow of the user is simplified, and the use experience of the user is improved.
In the following, taking an example that the electronic device is a mobile phone and the head-mounted display device is AR glasses, the man-machine interaction method provided by the embodiment of the application is described in detail with reference to the accompanying drawings.
The mobile phone is provided with an AR application, and a user can start the AR application on the mobile phone to manage the virtual space of the AR glasses. When the mobile phone starts the AR application and displays an application interface of the AR application, if a system gesture of the user on the touch screen of the mobile phone is received, such as a screen capturing gesture, a screen recording gesture, or a return-to-previous-level gesture, the system gesture acts on the virtual space of the AR glasses and does not act on the mobile phone. Specifically, after the mobile phone starts the AR application, an application interface of the AR application is displayed. If a screen capturing gesture of the user on the touch screen of the mobile phone is received, the screen capture is not performed on the display interface of the mobile phone but on the display interface of the virtual space of the AR glasses. If a screen recording gesture of the user on the touch screen of the mobile phone is received, the screen recording is not performed on the display interface of the mobile phone but on the display interface of the virtual space of the AR glasses. If a return-to-previous-level gesture of the user on the touch screen of the mobile phone is received, the return-to-previous-level operation is not performed on the display interface of the mobile phone but on the display interface of the virtual space of the AR glasses. Therefore, while viewing the picture in the virtual space, the user only needs to make a system gesture on the touch screen of the mobile phone to perform screen capturing, screen recording, returning to the previous level, and other operations on the display interface of the virtual space of the AR glasses. The user's line of sight does not need to leave the virtual space, avoiding jumps of the line of sight between the virtual space and the mobile phone screen. Moreover, the gesture operation is convenient and quick, the operation cost for the user is low, and the experience is good.
When the mobile phone exits the AR application or the AR application is switched to run in the background, the mobile phone no longer displays the application interface of the AR application. For example, the mobile phone may display a desktop interface, or the mobile phone may display an application interface of another application (e.g., an application interface of a video application, an application interface of a chat application, etc.). If the mobile phone receives a system gesture of the user on the touch screen of the mobile phone, such as a screen capturing gesture, a screen recording gesture, or a return-to-previous-level gesture, the system gesture acts on the mobile phone. Specifically, when the mobile phone does not display the application interface of the AR application, if a screen capturing gesture of the user on the touch screen of the mobile phone is received, a screenshot of the display interface of the mobile phone is captured. If a screen recording gesture of the user on the touch screen of the mobile phone is received, the display interface of the mobile phone is recorded. If a return-to-previous-level gesture of the user on the touch screen of the mobile phone is received, the display interface of the mobile phone returns to the previous level. Therefore, the user can make system gestures on the touch screen of the mobile phone to conveniently perform screen capturing, screen recording, returning to the previous level, and other operations on the display interface of the mobile phone.
Fig. 5-10 illustrate schematic diagrams of some example scenarios in which the mobile phone receives a system gesture. It should be noted that the specific gestures shown in fig. 5-10 are only examples for the corresponding scenarios and are used to schematically illustrate system gestures. In other embodiments, the system gestures, such as the screen capturing gesture, the screen recording gesture, and the return-to-previous-level gesture, may take other specific forms. For example, the screen capturing gesture may be a multi-finger slide down on the screen, a multi-finger slide up on the screen, a single-knuckle double-click on the screen, a double-knuckle double-click on the screen, or the like. For example, the screen recording gesture may be a multi-finger slide up on the screen, a multi-finger slide down on the screen, a single-finger double-click on the screen, a double-finger double-click on the screen, or the like. It will be appreciated that the screen recording gesture is different from the screen capturing gesture. For example, the return-to-previous-level gesture may be a single finger sliding from the left side to the right side of the screen, a single finger sliding from the right side to the left side of the screen, fingers sliding inward from both sides of the screen, or the like.
Taking the screen capturing gesture being a multi-finger slide down on the screen as an example, fig. 5 shows an example of a scenario in which the screen capturing gesture is received while the mobile phone displays an application interface of the AR application.
As shown in fig. 5, after the mobile phone 100 starts the AR application, the touch panel interface 101 is displayed; the virtual space of the AR glasses 200 displays an application interface of the short video application. The user makes a screen capturing gesture on the touch screen of the mobile phone 100; illustratively, as shown in fig. 5, the user slides down on the screen of the mobile phone 100 with three fingers. After receiving the screen capturing gesture, the mobile phone 100 triggers generation of a screenshot picture of the display interface of the virtual space of the AR glasses 200. For example, the application interface of the short video application is captured and a screenshot picture is generated. Further, the mobile phone 100 may also store the generated screenshot picture.
In one implementation, after receiving the screen capturing gesture, the mobile phone 100 determines that the current display interface of the mobile phone 100 is an application interface (such as the touch pad interface 101) of the AR application, and the mobile phone 100 obtains an image of the display interface of the virtual space of the AR glasses 200, captures the image, and generates a screenshot picture of the display interface of the virtual space.
In another implementation, after the mobile phone 100 receives the screen capture gesture, it is determined that the current display interface of the mobile phone 100 is an application interface (such as the touch pad interface 101) of the AR application. The mobile phone 100 notifies the AR glasses 200 to capture an image of the display interface of the virtual space. In one example, the cell phone 100 generates a screen capture event from the screen capture gesture, and distributes the screen capture event to the AR glasses 200. In another example, the mobile phone 100 sends a first indication message to the AR glasses 200, where the first indication message is used to instruct the AR glasses 200 to screenshot a display interface of the virtual space. After the AR glasses 200 receive the screen capturing event or the first indication message, the images of the display interface of the virtual space are captured, and the captured images of the display interface of the virtual space are generated. The AR glasses 200 may save the generated screenshot pictures; further, the AR glasses 200 may also send the generated screenshot picture to the mobile phone 100, and the mobile phone 100 may also store the screenshot picture.
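The two screenshot implementations above could be sketched as follows. This is a hypothetical Kotlin outline: the GlassesLink channel, the ScreenshotIndication message, and the frame-capture callback are assumed names, not interfaces defined in the disclosure.

```kotlin
// Hypothetical sketch of the two screenshot implementations described above.
sealed interface GlassesMessage
object ScreenshotIndication : GlassesMessage      // "first indication message"

interface GlassesLink { fun send(msg: GlassesMessage) }   // assumed channel to the AR glasses

class ScreenshotHandler(
    private val link: GlassesLink,
    private val captureVirtualSpaceFrame: () -> ByteArray  // frame of the virtual-space interface held on the phone
) {
    // Implementation 1: the phone already holds the rendered virtual-space frame and captures it locally.
    fun captureOnPhone(): ByteArray = captureVirtualSpaceFrame()

    // Implementation 2: the phone only notifies the glasses, which capture the display interface
    // of the virtual space themselves and may send the screenshot picture back afterwards.
    fun captureOnGlasses() = link.send(ScreenshotIndication)
}
```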
In this scenario, the AR application on the mobile phone runs in the foreground, and the mobile phone displays an application interface of the AR application. The mobile phone receives the screen capturing gesture of the user and performs the screen capture on the display interface of the virtual space of the AR glasses.
Taking the screen capturing gesture being a multi-finger slide down on the screen as an example, fig. 6 shows an example of a scenario in which the screen capturing gesture is received while the mobile phone does not display an application interface of the AR application.
As shown in fig. 6, the mobile phone 100 has not started the AR application, and the mobile phone 100 displays a desktop interface. It will be appreciated that the mobile phone 100 may also display an application interface of another application, such as an application interface of a video application or an application interface of a chat application. The virtual space of the AR glasses 200 displays an application interface of the video application. The user makes a screen capturing gesture on the touch screen of the mobile phone 100; illustratively, as shown in fig. 6, the user slides down with three fingers. After the mobile phone 100 receives the screen capturing gesture, a screenshot picture of the mobile phone display interface is generated; for example, a screenshot picture of the mobile phone desktop interface is generated. Further, the mobile phone 100 may store the generated screenshot picture.
In this scenario, the AR application is not running on the mobile phone or is running in the background, and the mobile phone does not display an application interface of the AR application. The mobile phone receives the screen capturing gesture of the user and performs the screen capture on the display interface of the mobile phone.
Taking the screen recording gesture being a multi-finger slide up on the screen as an example, fig. 7 shows an example of a scenario in which the screen recording gesture is received while the mobile phone displays an application interface of the AR application.
As shown in fig. 7, after the mobile phone 100 starts the AR application, the touch panel interface 101 is displayed; the virtual space of the AR glasses 200 displays an application interface of the short video application. The user makes a screen recording gesture on the touch screen of the mobile phone 100; illustratively, as shown in fig. 7, the user slides up on the screen of the mobile phone 100 with three fingers. After receiving the screen recording gesture, the mobile phone 100 triggers screen recording of the display interface of the virtual space of the AR glasses 200, and a screen recording file is generated. For example, the application interface of the short video application is recorded and a screen recording file is generated. Further, the mobile phone 100 may also store the generated screen recording file.
In one implementation, after the mobile phone 100 receives the screen recording gesture, it is determined that the current display interface of the mobile phone 100 is an application interface (such as the touch pad interface 101) of the AR application, and the mobile phone 100 records the screen of the display interface of the virtual space of the AR glasses 200, so as to generate a screen recording file.
In another implementation, after the mobile phone 100 receives the screen recording gesture, it is determined that the current display interface of the mobile phone 100 is an application interface (such as the touch pad interface 101) of the AR application. The mobile phone 100 informs the AR glasses 200 to record the display interface of the virtual space. In one example, the cell phone 100 generates a screen recording event from the screen recording gesture and distributes the screen recording event to the AR glasses 200. In another example, the mobile phone 100 sends a second indication message to the AR glasses 200, where the second indication message is used to instruct the AR glasses 200 to record a screen of the display interface of the virtual space. After the AR glasses 200 receive the screen recording event or the second indication message, screen recording is performed on the display interface of the virtual space, and a screen recording file of the display interface of the virtual space is generated. The AR glasses 200 may store the generated screen file; further, the AR glasses 200 may also send the generated screen recording file to the mobile phone 100, and the mobile phone 100 may also store the screen recording file.
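As a hedged illustration of the screen-recording flow, the sketch below assumes that the screen recording gesture toggles recording and that a RecordingIndication message (an assumed name, not defined in the disclosure) tells the AR glasses to start or stop recording the display interface of the virtual space.

```kotlin
// Hypothetical sketch: the screen-recording gesture toggles recording of the virtual-space interface.
data class RecordingIndication(val start: Boolean)   // assumed message format ("second indication message")

class RecordingController(private val send: (RecordingIndication) -> Unit) {
    private var recording = false

    // Called each time the screen recording gesture is received while the AR application runs in the foreground.
    fun onRecordGesture() {
        recording = !recording
        // The glasses start or stop recording the display interface of the virtual space
        // and generate the screen recording file when recording stops.
        send(RecordingIndication(start = recording))
    }
}
```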
In this scenario, the AR application on the mobile phone runs in the foreground, and the mobile phone displays an application interface of the AR application. The mobile phone receives the screen recording gesture of the user and performs the screen recording on the display interface of the virtual space of the AR glasses.
Taking the screen recording gesture being a multi-finger slide up on the screen as an example, fig. 8 shows an example of a scenario in which the screen recording gesture is received while the mobile phone does not display an application interface of the AR application.
As shown in fig. 8, the mobile phone 100 has not started the AR application; the mobile phone 100 has started the video call application and displays an application interface of the video call application. It will be appreciated that the mobile phone 100 may also display application interfaces of other applications. The virtual space of the AR glasses 200 displays an application interface of the video application. The user makes a screen recording gesture on the touch screen of the mobile phone 100; illustratively, as shown in fig. 8, the user slides up on the screen of the mobile phone 100 with three fingers. After the mobile phone 100 receives the screen recording gesture, the mobile phone starts recording its own display interface (the application interface of the video call application) and generates a screen recording file of the mobile phone display interface; for example, a screen recording file of the application interface of the video call application is generated. Further, the mobile phone 100 may store the generated screen recording file.
In this scenario, the AR application is not running on the mobile phone or is running in the background, and the mobile phone does not display an application interface of the AR application. The mobile phone receives the screen recording gesture of the user and performs the screen recording on the display interface of the mobile phone.
Taking the return-to-previous-level gesture being a single finger sliding from the left side to the right side of the screen as an example, fig. 9 shows an example of a scenario in which the return-to-previous-level gesture is received while the mobile phone displays an application interface of the AR application.
As shown in fig. 9, after the mobile phone 100 starts the AR application, the touch panel interface 101 is displayed; the virtual space of the AR glasses 200 displays an application interface of the short video application. The user makes a return-to-previous-level gesture on the touch screen of the mobile phone 100; illustratively, as shown in fig. 9, the user slides a single finger from the left edge to the right on the screen of the mobile phone 100. After receiving the return-to-previous-level gesture, the mobile phone 100 triggers the virtual space of the AR glasses 200 to return to the previous-level display interface. Illustratively, in response to the mobile phone 100 receiving the return-to-previous-level gesture, the virtual space of the AR glasses 200 displays a desktop interface.
In one implementation, after receiving the return-to-previous-level gesture, the mobile phone 100 determines that the current display interface of the mobile phone 100 is an application interface (such as the touch pad interface 101) of the AR application. The mobile phone 100 generates, according to the current display interface of the virtual space of the AR glasses 200, display data of the previous-level interface of that display interface, and sends the display data of the previous-level interface to the AR glasses 200. The AR glasses 200 display in the virtual space according to the display data of the previous-level interface.
In another implementation, after the mobile phone 100 receives the return-to-previous-level gesture, it is determined that the current display interface of the mobile phone 100 is an application interface (such as the touch pad interface 101) of the AR application. The mobile phone 100 notifies the AR glasses 200 to return to the previous-level display interface. In one example, the mobile phone 100 generates a return-to-previous-level event from the return-to-previous-level gesture and distributes the event to the AR glasses 200. In another example, the mobile phone 100 sends a third indication message to the AR glasses 200, where the third indication message is used to instruct the AR glasses 200 to return to the previous-level display interface. After receiving the return-to-previous-level event or the third indication message, the AR glasses 200 display the previous-level interface of the current display interface.
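On the glasses side, returning to the previous level can be modeled as popping a stack of display interfaces. The following Kotlin sketch is an assumption-based illustration; the disclosure does not prescribe this data structure.

```kotlin
// Hypothetical sketch of glasses-side handling of the return-to-previous-level event:
// the virtual space keeps a stack of display interfaces and pops it when the event arrives.
class VirtualSpaceBackStack(desktop: String = "desktop interface") {
    private val stack = ArrayDeque<String>().apply { addLast(desktop) }

    // Called when a new interface (e.g. the short video application interface) is opened in the virtual space.
    fun open(interfaceName: String) = stack.addLast(interfaceName)

    // Called when the return-to-previous-level event or the third indication message is received.
    fun onBack(): String {
        if (stack.size > 1) stack.removeLast()
        return stack.last()      // interface the virtual space should now display
    }
}
```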
In this scenario, the AR application on the mobile phone runs in the foreground, and the mobile phone displays an application interface of the AR application. The mobile phone receives the return-to-previous-level gesture of the user, and the display interface of the virtual space of the AR glasses returns to the previous level.
Taking the return-to-previous-level gesture being a single finger sliding from the left side to the right side of the screen as an example, fig. 10 shows an example of a scenario in which the return-to-previous-level gesture is received while the mobile phone does not display an application interface of the AR application.
As shown in fig. 10, the mobile phone 100 has not started the AR application; the mobile phone 100 has started the chat application and displays an application interface of the chat application. The virtual space of the AR glasses 200 displays an application interface of the video application. The user makes a return-to-previous-level gesture on the touch screen of the mobile phone 100; illustratively, as shown in fig. 10, the user slides a single finger from the left edge to the right on the screen of the mobile phone 100. After the mobile phone 100 receives the return-to-previous-level gesture, the mobile phone 100 displays the previous-level display interface of the current interface. Here, the current interface of the mobile phone is an application interface of the chat application, and its previous-level display interface is the desktop interface. In response to receiving the return-to-previous-level gesture, the mobile phone 100 displays the desktop interface.
In this scenario, the AR application is not running on the mobile phone or is running in the background, and the mobile phone does not display an application interface of the AR application. The mobile phone receives the return-to-previous-level gesture of the user and displays the previous-level display interface of the current display interface.
The embodiment of the application provides a man-machine interaction method that supports system gestures made on an electronic device (such as a mobile phone). By judging whether a first application is running in the foreground, the electronic device determines whether the electronic device itself or the head-mounted display device (such as AR glasses) performs the action corresponding to the system gesture.
Fig. 11 is a schematic flow chart of a man-machine interaction method according to an embodiment of the present application. As shown in fig. 11, the electronic device is communicatively coupled to the head-mounted display device. The electronic device receives a first gesture of a user, such as a screen capturing gesture, a screen recording gesture, or a return-to-previous-level gesture. The electronic device determines whether a first application (such as an AR application) is running in the foreground, the first application being used for managing the virtual space of the head-mounted display device. In one implementation, if the electronic device displays an application interface of the first application, it is determined that the first application is running in the foreground; if the electronic device does not display the application interface of the first application, it is determined that the first application is not running in the foreground.
If the electronic device determines that the first application is running in the foreground, the action corresponding to the system gesture is performed by the head-mounted display device (such as the AR glasses), as in the scenarios shown in fig. 5, 7 and 9.
If the electronic device determines that the first application is not running in the foreground, the electronic device (such as the mobile phone) performs the action corresponding to the system gesture, as in the scenarios shown in fig. 6, 8 and 10.
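The branch logic of fig. 11 reduces to a single dispatch decision. The Kotlin sketch below is a minimal illustration under the assumption that foreground detection and the two handlers are provided elsewhere; none of the function names come from the disclosure.

```kotlin
// Hypothetical dispatch sketch for the method of fig. 11.
// The gesture value can be any representation of the recognized system gesture
// (screen capturing, screen recording, return to previous level, ...).
fun <T> dispatchSystemGesture(
    gesture: T,
    firstAppInForeground: () -> Boolean,   // e.g. true when the phone displays the AR application interface
    handleOnPhone: (T) -> Unit,            // act on the mobile phone display interface
    sendToHeadset: (T) -> Unit             // act on the display interface of the virtual space
) {
    if (firstAppInForeground()) sendToHeadset(gesture) else handleOnPhone(gesture)
}
```

Used this way, the same recognized gesture is routed either to the head-mounted display device or to the electronic device itself, matching the two branches described above.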
In this method, after the electronic device receives a system gesture, different situations are distinguished, and either the electronic device is triggered to perform the action corresponding to the system gesture or the head-mounted display device is triggered to perform it. The method supports the user performing system gestures on the touch screen of the electronic device to realize operations such as screen capturing, screen recording, and returning to the previous level on the display interface of the virtual space of the head-mounted display device. Therefore, while viewing the picture in the virtual space, the user only needs to make a system gesture on the touch screen of the mobile phone to perform screen capturing, screen recording, returning to the previous level, and other operations on the display interface of the virtual space of the head-mounted display device. The user's line of sight does not need to leave the virtual space, avoiding jumps of the line of sight between the virtual space and the mobile phone screen. Moreover, the gesture operation is convenient and quick, the operation cost for the user is low, and the experience is good.
Fig. 12 and fig. 13 are schematic diagrams illustrating a detailed flow chart of a man-machine interaction method according to an embodiment of the present application.
As shown in fig. 12, the operating system of the mobile phone 100 has an application 1 and an application 2 installed; an application interface of application 1 is displayed on the mobile phone 100, and an application interface of application 2 is displayed in the virtual space of the AR glasses 200. For example, application 1 is the desktop application of the mobile phone, and application 2 is a video application. After application 1 starts running, an Activity corresponding to application 1 is generated. After application 2 starts running, an Activity corresponding to application 2 is generated. It will be appreciated that an application may include one or more interfaces, each interface corresponding to an Activity; an Activity includes one or more layers (Layer), and each layer includes one or more elements (controls). The framework layer (framework) of the operating system includes a window manager service (window manager service, WMS), a display management unit, a surface composition (SurfaceFlinger) module, an activity manager service (activity manager service, AMS), and the like. The WMS is used for window management (e.g., adding windows, deleting windows, modifying windows, etc.). When application 1 starts, the WMS creates window 1 corresponding to application 1; when application 2 starts, the WMS creates window 2 corresponding to application 2. The display management unit creates a corresponding display (Display) for each window and establishes a one-to-one correspondence between windows and displays. The SurfaceFlinger module is configured to obtain the display data of each application interface (for example, the number of layers included in the interface and the display elements of each layer) from the WMS. SurfaceFlinger determines the composition instructions (e.g., OpenGL ES instructions, hardware composition instructions) corresponding to each piece of display data. The graphics rendering and composition component uses the composition instructions to compose the display data and generate the images of each layer of the application interface; the layer images are then composed and rendered to generate the interface corresponding to the Activity. Further, the graphics rendering and composition component renders the interfaces of different applications to the corresponding displays (Display), each display corresponding to one screen.
In one example, SurfaceFlinger obtains the display data 1 of the Activity of application 1 from the WMS, and the graphics rendering and composition component uses composition instructions to compose display data 1 and generate the images of each layer; the layer images are composed and rendered to generate the interface of application 1 on display 1. The interface saved on display 1 is for display on the screen of the mobile phone 100. Thus, the screen of the mobile phone 100 displays the application interface of application 1; illustratively, as shown in fig. 4A, the mobile phone 100 displays a desktop interface. SurfaceFlinger obtains the display data 2 of the Activity of application 2 from the WMS, and the graphics rendering and composition component uses composition instructions to compose display data 2 and generate the images of each layer; the layer images are composed and rendered to generate the interface of application 2 on display 2. The interface saved on display 2 is used for display within the virtual space of the AR glasses 200. Further, the mobile phone 100 sends the interface information of application 2 to the AR glasses 200, so that the AR glasses 200 can display the application interface of application 2 in the virtual space.
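On an Android-like system, one way to obtain the display 1/display 2 separation described above is to render the second application to a virtual display whose surface is streamed to the glasses. The sketch below uses standard Android APIs (DisplayManager, ActivityOptions) purely as an illustration; the display name, resolution, density, and flag choice are assumptions, additional permissions may be required in practice, and the actual implementation in the disclosure may differ.

```kotlin
import android.app.ActivityOptions
import android.content.Context
import android.content.Intent
import android.hardware.display.DisplayManager
import android.view.Surface

// Hedged sketch: route application 2 to a secondary display while application 1 (or application 3)
// stays on the phone screen (display 1). All parameter values are illustrative assumptions.
fun launchOnGlassesDisplay(context: Context, appIntent: Intent, glassesSurface: Surface) {
    val dm = context.getSystemService(DisplayManager::class.java)
    // Create a virtual display (display 2) whose composed content is written into a surface
    // that is then streamed to the AR glasses.
    val virtualDisplay = dm.createVirtualDisplay(
        "glasses_display", 1920, 1080, 320,
        glassesSurface, DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION
    )
    // Launch application 2 on that display so its interface is composed to display 2.
    val options = ActivityOptions.makeBasic()
        .setLaunchDisplayId(virtualDisplay.display.displayId)
    appIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    context.startActivity(appIntent, options.toBundle())
}
```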
The mobile phone 100 receives a system gesture (such as a screen capturing gesture, a return-to-previous-level gesture, etc.) entered by the user. The mobile phone determines that the desktop application runs in the foreground and the AR application does not run in the foreground, and therefore determines that the mobile phone performs the action corresponding to the system gesture. Referring to fig. 12, in response to receiving the system gesture, the mobile phone performs screen capturing or screen recording on the application interface of application 1, or performs the return-to-previous-level operation.
In other embodiments, the operating system of the mobile phone 100 further has an application 3 installed, where application 3 is used for managing the virtual space of the AR glasses 200, and an application interface of application 3 is displayed on the mobile phone 100. For example, application 3 is an AR application. As shown in fig. 13, after application 3 starts running, an Activity corresponding to application 3 is generated. The WMS creates window 3 corresponding to application 3. SurfaceFlinger obtains the display data 3 of the Activity of application 3 from the WMS, and the graphics rendering and composition component uses composition instructions to compose display data 3 and generate the images of each layer; the layer images are composed and rendered to generate the interface of application 3 on display 1. The interface saved on display 1 is for display on the screen of the mobile phone 100. Thus, the screen of the mobile phone 100 displays the application interface of application 3; illustratively, as shown in fig. 4A, after the mobile phone 100 starts the AR application, the touch pad interface 101 is displayed.
The mobile phone 100 receives a system gesture (such as a screen capturing gesture, a return-to-previous-level gesture, etc.) entered by the user. The mobile phone determines that the AR application runs in the foreground and therefore determines that the AR glasses perform the action corresponding to the system gesture. Referring to fig. 13, in response to receiving the system gesture, screen capturing or screen recording is performed on the application interface of application 2, or the return-to-previous-level operation is performed.
In one implementation of the prior art, a virtual key is arranged on the desktop interface of the virtual space, and the user can perform operations such as screen capturing and screen recording in the virtual space by moving the ray position and clicking the virtual key on the desktop interface. However, the operation steps of this approach are cumbersome. For example, when the user is viewing the display interface of another application, the user needs to first exit the currently displayed application interface, enter the desktop interface, and then click the virtual key on the desktop interface to perform operations such as screen capturing and screen recording. The user experience is poor.
The embodiment of the application also provides a man-machine interaction method. When a user inputs a first operation (such as a knuckle tap) on the application interface of the AR application on the electronic device, a shortcut operation panel pops up in the virtual space of the head-mounted display device in response to the first operation, and virtual keys for screen capturing, screen recording, returning to the previous level, returning to the homepage, and the like are arranged on the shortcut operation panel. The user can move the ray position and click a virtual key on the shortcut operation panel to perform screen capturing, screen recording, returning to the previous level, returning to the homepage, and other operations on the display interface of the virtual space. While viewing the picture in the virtual space, the user only needs to input the first operation (such as a knuckle tap) on the touch screen of the mobile phone, and then operations such as screen capturing, screen recording, and returning to the previous level on the display interface of the virtual space can be performed through the virtual keys in the virtual space. The user's line of sight does not need to leave the virtual space, avoiding the poor experience caused by the line of sight jumping between the virtual space and the mobile phone screen. The virtual keys are arranged on the pop-up shortcut operation panel and do not depend on the display interface of the virtual space: when the user is using a particular application, the user does not need to exit the application and enter the desktop interface to operate. The operation is fast, the operation cost for the user is low, and the experience is good.
The following describes in detail the man-machine interaction method provided by the embodiment of the application, taking the electronic device being a mobile phone and the head-mounted display device being AR glasses as an example.
Illustratively, as shown in fig. 14, the cell phone 100 and AR glasses 200 are communicatively connected. The mobile phone 100 starts an AR application and displays a touch pad interface 101; the virtual space of the AR glasses 200 displays an application interface 301 of the short video application. The user may wear the AR glasses 200 to view a display interface within the virtual space.
The user may input a preset first operation on the touch pad interface 101 of the mobile phone 100. In one example, the touch pad interface 101 includes a touch area 102 where a user may input a first operation. For example, the first operation may include a single-finger tap, a multi-finger tap, a knuckle tap, a single-finger stroke, and the like. It will be appreciated that the first operation may also include other forms, and embodiments of the present application are not limited in this regard.
Referring to fig. 14, take the first operation being a knuckle tap as an example. The mobile phone 100 receives the knuckle tap of the user in the touch area 102; in response to the knuckle tap, the virtual space of the AR glasses 200 pops up the shortcut operation panel 302, on which virtual keys are arranged. Illustratively, as shown in fig. 14, the shortcut operation panel 302 includes a "home" button 303, a "back to previous level" button 304, a "screen capture" button 305, and a "screen recording" button 306. The "home" button 303 is used for returning to the desktop interface of the virtual space, the "back to previous level" button 304 is used for returning to the previous-level display interface of the current display interface, the "screen capture" button 305 is used for capturing a screenshot of the display interface of the virtual space, and the "screen recording" button 306 is used for recording the screen of the display interface of the virtual space. Optionally, the shortcut operation panel 302 further includes a "close" button 307 for closing the shortcut operation panel 302.
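The shortcut operation panel 302 can be thought of as a small model object on the glasses side that is shown in response to the first operation and maps button selections to actions. The Kotlin sketch below is illustrative only; the callbacks and the use of the reference numerals as button identifiers are assumptions, not part of the disclosure.

```kotlin
// Hypothetical model of the shortcut operation panel 302 and its virtual keys.
class ShortcutPanel(
    private val onHome: () -> Unit,        // "home" button 303
    private val onBack: () -> Unit,        // "back to previous level" button 304
    private val onScreenshot: () -> Unit,  // "screen capture" button 305
    private val onRecord: () -> Unit,      // "screen recording" button 306
    private val onClose: () -> Unit        // "close" button 307
) {
    var visible = false
        private set

    // Called when the first operation (e.g. a knuckle tap) is received from the phone.
    fun show() { visible = true }

    // Called when the ray points at a button and the user clicks in the touch area 102.
    fun select(buttonId: Int) {
        if (!visible) return
        when (buttonId) {
            303 -> onHome()
            304 -> onBack()
            305 -> onScreenshot()
            306 -> onRecord()
            307 -> { onClose(); visible = false }
        }
    }
}
```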
Illustratively, as shown in FIG. 15A, the user may select the "home" button 303; for example, the user slides up, down, left or right in the touch area 102 displayed on the mobile phone 100, moves the ray in the virtual space to the "home" button 303, and then performs a click in the touch area 102 to select the "home" button 303. In response to the user's selection operation of the "home" button 303, the virtual space displays the desktop interface 310 (homepage).
In one example, the user may select the "back to previous level" button 304; in response to the user's selection operation of the "back to previous level" button 304, the virtual space displays the previous-level display interface of the application interface 301 of the short video application. Illustratively, the previous-level display interface of the application interface 301 of the short video application is the desktop interface 310, and the virtual space displays the desktop interface 310 in response to the user's selection of the "back to previous level" button 304.
Illustratively, as shown in FIG. 15B, the user may select the "screen capture" button 305; for example, the user slides up, down, left or right in the touch area 102 displayed by the mobile phone 100, moves the ray in the virtual space to the "screen capture" button 305, and then performs a click in the touch area 102 to select the "screen capture" button 305. In response to the user's selection operation of the "screen capture" button 305, a screenshot of the display interface of the virtual space (the application interface 301 of the short video application) is captured and a screenshot picture is generated.
Illustratively, as shown in FIG. 15C, the user may select the "screen recording" button 306; for example, the user slides up, down, left or right in the touch area 102 displayed by the mobile phone 100, moves the ray in the virtual space to the "screen recording" button 306, and then performs a click in the touch area 102 to select the "screen recording" button 306. In response to the user's selection of the "screen recording" button 306, the display interface of the virtual space is recorded and a screen recording file is generated.
Illustratively, as shown in FIG. 15D, the user may select the "close" button 307; in response to the user's selection operation of the "close" button 307, the virtual space stops displaying the shortcut operation panel 302.
Note that the shortcut operation panel 302 shown in fig. 15A to 15D is only an example. In a specific implementation, the shortcut operation panel may be in other forms, for example, the shape, size and position of the shortcut operation panel may be set according to specific situations.
In one example, as shown in fig. 16A, the shortcut operation panel 302 may be superimposed on the current display interface of the virtual space (the application interface 301 of the short video application). Optionally, the shortcut operation panel 302 is disposed on one side of the current display interface of the virtual space.
In another example, as shown in fig. 16B, the shortcut operation panel 302 may be displayed in a blank area in the virtual space at the same layer as the application interface 301 of the short video application.
In another example, as shown in fig. 16C, the frame of the shortcut operation panel 302 may not be displayed in the virtual space; instead, the "home" button 303, the "back to previous level" button 304, the "screen capture" button 305, the "screen recording" button 306, and the like may be displayed directly.
In the man-machine interaction method provided by the embodiment of the application, in response to the first operation input by the user on the application interface of the AR application on the electronic device, the virtual space of the head-mounted display device pops up virtual keys for screen capturing, screen recording, returning to the previous level, returning to the homepage, and the like. The virtual keys are independent of the current display interface of the virtual space and can pop up on any display interface. Therefore, when the user needs to perform operations such as screen capturing, screen recording, returning to the previous level, and returning to the homepage, the user does not need to exit the current application, and the operation is simple, convenient, and quick.
Fig. 17 is a schematic flow chart of a man-machine interaction method according to an embodiment of the present application. As shown in fig. 17, the method includes:
s401, the electronic device is in communication connection with the head-mounted display device.
The electronic device may act as an input device for a head mounted display device.
S402, the electronic equipment starts a first application and displays an application interface of the first application.
A first application (such as an AR application) is installed on the electronic device for managing a virtual space of the head mounted display device. For example, a user may click on an application icon of a first application on a desktop interface of an electronic device to launch the first application. Illustratively, as shown in fig. 14, the handset 100 displays a touch pad interface 101 for an AR application.
S403, the electronic equipment receives a first operation of a user on an application interface of a first application.
In one example, the touch pad interface 101 includes a touch area 102 where a user may input a first operation. For example, the first operation may include a single-finger tap, a multi-finger tap, a knuckle tap, a single-finger stroke, and the like. It will be appreciated that the first operation may also include other forms, and embodiments of the present application are not limited in this regard.
S404, in response to the first operation, at least one of a screen capturing virtual key, a screen recording virtual key, a return-to-previous-level virtual key, and a return-to-homepage virtual key is displayed in the virtual space of the head-mounted display device.
In one implementation, after receiving the first operation, the electronic device generates display data of the virtual keys and sends the display data of the virtual keys to the head-mounted display device; the head-mounted display device displays the screen capturing virtual key, the screen recording virtual key, the return-to-previous-level virtual key, the return-to-homepage virtual key, and the like in the virtual space according to the received display data of the virtual keys.
In another implementation, after receiving the first operation, the electronic device generates a corresponding first event and distributes the first event to the head-mounted display device. The head-mounted display device receives the first event, generates display data of the virtual keys, and displays the screen capturing virtual key, the screen recording virtual key, the return-to-previous-level virtual key, the return-to-homepage virtual key, and the like in the virtual space according to the display data of the virtual keys.
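The two implementations of step S404 differ only in where the display data of the virtual keys is generated. The following Kotlin sketch contrasts them with an assumed message protocol; the message names and the button list are illustrative, not defined by the disclosure.

```kotlin
// Hypothetical sketch of the two implementations of step S404.
sealed interface HmdMessage
data class PanelDisplayData(val buttons: List<String>) : HmdMessage  // implementation 1: phone supplies the layout
object FirstOperationEvent : HmdMessage                              // implementation 2: glasses build the layout

fun onFirstOperation(usePhoneSideLayout: Boolean, send: (HmdMessage) -> Unit) {
    if (usePhoneSideLayout) {
        // Implementation 1: the electronic device generates the display data of the virtual keys.
        send(PanelDisplayData(listOf("screen capture", "screen recording", "back to previous level", "home")))
    } else {
        // Implementation 2: the electronic device only distributes the first event;
        // the head-mounted display device generates the display data of the virtual keys itself.
        send(FirstOperationEvent)
    }
}
```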
In one example, the virtual space of the head-mounted display device displays a shortcut operation panel, which includes the screen capturing virtual key, the screen recording virtual key, the return-to-previous-level virtual key, the return-to-homepage virtual key, and the like, as illustrated in fig. 14, 16A, and 16B.
In another example, the screen capturing virtual key, the screen recording virtual key, the return-to-previous-level virtual key, the return-to-homepage virtual key, and the like are displayed directly on the application interface within the virtual space. Illustratively, as shown in fig. 16C, the "home" button 303, the "back to previous level" button 304, the "screen capture" button 305, and the "screen recording" button 306 are displayed superimposed on the application interface 301 of the short video application.
It will be appreciated that each of the above-described devices, in order to implement the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application can divide the functional modules of the device according to the method example, for example, each functional module can be divided corresponding to each function, or two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
In the case of an integrated unit, fig. 18 shows a schematic diagram of one possible structure of the electronic device involved in the above-described embodiment. The electronic device 1800 includes: a processing unit 1801, a communication unit 1802, and a storage unit 1803. The processing unit 1801 is configured to control and manage an operation of the electronic device 1800; a communication unit 1802 for supporting communication of the electronic device 1800 with other network entities; the storage unit 1803 stores instructions and data of the electronic device 1800, which can be used to perform the steps of the corresponding embodiment of the present application.
Of course, the unit modules in the electronic apparatus 1800 include, but are not limited to, the processing unit 1801, the communication unit 1802, and the storage unit 1803 described above. For example, the electronic device 1800 may include a display unit or the like for displaying a user interface of the electronic device 1800.
The processing unit 1801 may be a processor or a controller, such as a central processing unit (central processing unit, CPU), a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The communication unit 1802 may be a transceiver, a transceiver circuit, or the like. The storage unit 1803 may be a memory.
The electronic device 1800 provided by the embodiment of the application may be the electronic device 100 shown in fig. 2. Wherein the processors, memories, communication interfaces, etc. may be coupled together, such as by a bus. The processor invokes the memory-stored program code to perform the steps in the method embodiments above.
Embodiments of the present application also provide a chip system including at least one processor 1901 and at least one interface circuit 1902, as shown in FIG. 19. The processor 1901 and the interface circuit 1902 may be interconnected by wires. For example, interface circuit 1902 may be used to receive signals from other devices (e.g., a memory of a routing apparatus). For another example, the interface circuit 1902 may be used to transmit signals to other devices, such as the processor 1901. For example, the interface circuit 1902 may read instructions stored in a memory and send the instructions to the processor 1901. The instructions, when executed by the processor 1901, may cause the electronic device to perform the various steps of the embodiments described above. Of course, the system-on-chip may also include other discrete devices, which are not particularly limited in accordance with embodiments of the present application.
The embodiment of the present application also provides a computer readable storage medium, in which a computer program code is stored, which when executed by the above-mentioned processor, causes the electronic device to perform the method of the above-mentioned embodiment.
The embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the method of the above embodiments.
The electronic device 1800, the chip system, the computer-readable storage medium, and the computer program product provided by the embodiments of the present application are all configured to perform the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above; details are not repeated here.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the functional modules described above is illustrated as an example. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is merely a division by logical function, and other divisions may be used in actual implementation. For example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in hardware or as a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a magnetic disk, or an optical disc.
The foregoing describes merely specific embodiments of the present application, and the protection scope of the present application is not limited thereto. Any change or substitution within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A human-machine interaction method, characterized by being applied to an electronic device, the electronic device being connected to a head-mounted display device, the method comprising:
the electronic device starts a first application, wherein the first application is used for managing a virtual space of the head-mounted display device;
the electronic device receives a first gesture of a user on a display interface of the first application, wherein the first gesture comprises a screen capturing gesture, a screen recording gesture, or a gesture of returning to a previous level;
and in response to the first gesture, the electronic device performs an action corresponding to the first gesture for a display interface of the virtual space.
2. The method according to claim 1, wherein the method further comprises:
the electronic device exits the display interface of the first application;
the electronic device receives a first gesture of the user on a display interface of a second application;
and in response to the first gesture on the display interface of the second application, the electronic device performs an action corresponding to the first gesture for the display interface of the second application.
3. The method according to claim 1 or 2, wherein:
the screen capturing gesture comprises any one of: a multi-finger downward slide on the screen, a multi-finger upward slide on the screen, a single-knuckle double tap on the screen, or a double-knuckle double tap on the screen;
the screen recording gesture comprises any one of: a multi-finger downward slide on the screen, a multi-finger upward slide on the screen, a single-knuckle double tap on the screen, or a double-knuckle double tap on the screen, wherein the screen recording gesture is different from the screen capturing gesture;
and the gesture of returning to a previous level comprises any one of: a single finger sliding rightward from the left edge of the screen, a single finger sliding leftward from the right edge of the screen, or fingers sliding inward from both edges of the screen.
4. The method according to any one of claims 1-3, wherein before the electronic device receives the first gesture of the user on the display interface of the first application, the method further comprises:
the electronic device generates a display interface of the virtual space and sends the display interface to the head-mounted display device.
5. The method of claim 4, wherein the electronic device performing the action corresponding to the first gesture with respect to the display interface of the virtual space comprises:
the electronic device performs screen capturing or screen recording, or generates a previous-level display interface of the virtual space.
6. The method according to any one of claims 1-3, wherein the electronic device performing, for the display interface of the virtual space, an action corresponding to the first gesture comprises:
the electronic device triggers the head-mounted display device to perform screen capturing or screen recording, or to generate a previous-level display interface of the virtual space.
7. A human-machine interaction method, characterized by being applied to an electronic device, the electronic device being connected to a head-mounted display device, the method comprising:
the electronic device starts a first application, wherein the first application is used for managing a virtual space of the head-mounted display device;
the electronic device receives a first operation of a user on a display interface of the first application;
and in response to the first operation, the electronic device triggers the virtual space of the head-mounted display device to display at least one of a screen capturing virtual key, a screen recording virtual key, a return-to-previous-level virtual key, and a return-to-home virtual key.
8. The method according to claim 7, wherein the electronic device triggering the virtual space of the head-mounted display device to display at least one of a screen capturing virtual key, a return-to-previous-level virtual key, and a return-to-home virtual key comprises:
the electronic device triggers the virtual space of the head-mounted display device to display a shortcut operation panel, wherein at least one of a screen capturing virtual key, a screen recording virtual key, a return-to-previous-level virtual key, and a return-to-home virtual key is arranged on the shortcut operation panel.
9. The method according to claim 8, wherein the shortcut operation panel is superimposed on an application interface of a second application displayed in the virtual space.
10. The method according to claim 8, wherein the shortcut operation panel is displayed in a blank area of the display interface of the virtual space.
11. The method according to claim 7, wherein at least one of the screen capturing virtual key, the return-to-previous-level virtual key, and the return-to-home virtual key is superimposed on an application interface of a second application displayed in the virtual space.
12. The method according to any one of claims 7-11, wherein the first operation comprises any one of a single-finger tap, a multi-finger tap, a knuckle tap, or a single-finger slide.
13. The method according to any one of claims 7-12, wherein, in response to the first operation, the electronic device triggering the virtual space of the head-mounted display device to display at least one of a screen capturing virtual key, a return-to-previous-level virtual key, and a return-to-home virtual key comprises:
in response to the first operation, the electronic device generates first display data and sends the first display data to the head-mounted display device, wherein the first display data is used by the head-mounted display device to display, in the virtual space, at least one of a screen capturing virtual key, a screen recording virtual key, a return-to-previous-level virtual key, and a return-to-home virtual key.
14. The method according to any one of claims 7-12, wherein, in response to the first operation, the electronic device triggering the virtual space of the head-mounted display device to display at least one of a screen capturing virtual key, a return-to-previous-level virtual key, and a return-to-home virtual key comprises:
in response to the first operation, the electronic device sends a first event corresponding to the first operation to the head-mounted display device, wherein the first event is used to trigger the head-mounted display device to generate first display data, and the first display data is used by the head-mounted display device to display, in the virtual space, at least one of a screen capturing virtual key, a screen recording virtual key, a return-to-previous-level virtual key, and a return-to-home virtual key.
15. An electronic device, comprising a processor, a touch screen, and a memory, wherein the processor and the touch screen are coupled to the memory; the memory is configured to store computer program code, the computer program code comprising computer instructions that, when executed by the processor, cause the electronic device to perform the method of any one of claims 1-14.
16. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-14.
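For readers less familiar with claim language, the following Kotlin sketch illustrates, under stated assumptions, the control flow recited in claims 1-6 and 7-14: a gesture received on the first application acts on the virtual-space display interface rather than the phone screen, and a first operation makes the virtual space show shortcut virtual keys, either with display data generated on the device (the claim 13 path) or generated on the head-mounted display after receiving an event (the claim 14 path). All type and member names (Gesture, HeadMountedDisplayLink, VirtualSpaceSession, and their methods) are invented for illustration only and are not defined by the claims.

enum class Gesture { SCREEN_CAPTURE, SCREEN_RECORD, RETURN_TO_PREVIOUS_LEVEL }

interface HeadMountedDisplayLink {
    fun sendDisplayData(frame: ByteArray)   // claim-13 style: device renders, HMD only displays
    fun sendEvent(name: String)             // claim-14 style: HMD renders after receiving the event
}

class VirtualSpaceSession(private val hmd: HeadMountedDisplayLink) {

    // Claims 1-6: a first gesture received on the first application's interface
    // acts on the virtual-space display interface, not on the phone screen.
    fun onFirstGesture(gesture: Gesture, virtualSpaceFrame: ByteArray): ByteArray? =
        when (gesture) {
            Gesture.SCREEN_CAPTURE -> virtualSpaceFrame.copyOf()          // keep the captured frame
            Gesture.SCREEN_RECORD -> { startRecording(); null }
            Gesture.RETURN_TO_PREVIOUS_LEVEL -> { renderPreviousLevel(); null }
        }

    // Claims 7-14: a first operation makes the virtual space show shortcut keys;
    // the device either generates the display data itself or only forwards an event.
    fun onFirstOperation(deviceRendersPanel: Boolean) {
        if (deviceRendersPanel) {
            hmd.sendDisplayData(renderShortcutPanel())   // first display data, claim-13 path
        } else {
            hmd.sendEvent("SHOW_SHORTCUT_PANEL")         // first event, claim-14 path
        }
    }

    private fun startRecording() { /* begin recording virtual-space frames */ }
    private fun renderPreviousLevel() { /* regenerate the previous-level interface */ }
    private fun renderShortcutPanel(): ByteArray = ByteArray(0)   // placeholder frame data
}

As a usage note for this sketch, constructing VirtualSpaceSession with a concrete HeadMountedDisplayLink and calling onFirstGesture(Gesture.SCREEN_CAPTURE, frame) would return a copy of the current virtual-space frame, matching the behaviour recited for the screen capturing gesture.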
CN202310378606.8A 2023-03-31 2023-03-31 Man-machine interaction method, electronic equipment and system Pending CN117130471A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310378606.8A CN117130471A (en) 2023-03-31 2023-03-31 Man-machine interaction method, electronic equipment and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310378606.8A CN117130471A (en) 2023-03-31 2023-03-31 Man-machine interaction method, electronic equipment and system

Publications (1)

Publication Number Publication Date
CN117130471A true CN117130471A (en) 2023-11-28

Family

ID=88855304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310378606.8A Pending CN117130471A (en) 2023-03-31 2023-03-31 Man-machine interaction method, electronic equipment and system

Country Status (1)

Country Link
CN (1) CN117130471A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112019877A (en) * 2020-10-19 2020-12-01 深圳乐播科技有限公司 Screen projection method, device and equipment based on VR equipment and storage medium
CN113225610A (en) * 2021-03-31 2021-08-06 北京达佳互联信息技术有限公司 Screen projection method, device, equipment and storage medium
CN114510186A (en) * 2020-10-28 2022-05-17 华为技术有限公司 Cross-device control method and device
CN115756268A (en) * 2021-09-03 2023-03-07 华为技术有限公司 Cross-device interaction method and device, screen projection system and terminal


Similar Documents

Publication Publication Date Title
US11595566B2 (en) Camera switching method for terminal, and terminal
EP4057135A1 (en) Display method for electronic device having foldable screen, and electronic device
CN112717370B (en) Control method and electronic equipment
CN109766066B (en) Message processing method, related device and system
WO2020238741A1 (en) Image processing method, related device and computer storage medium
CN113518967A (en) Method for controlling screen display and electronic equipment
WO2021036770A1 (en) Split-screen processing method and terminal device
US20230276014A1 (en) Photographing method and electronic device
US20220283610A1 (en) Electronic Device Control Method and Electronic Device
CN114040242B (en) Screen projection method, electronic equipment and storage medium
EP4195707A1 (en) Function switching entry determining method and electronic device
US20240098174A1 (en) Always on display control method and electronic device
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN113741681A (en) Image correction method and electronic equipment
CN113973189A (en) Display content switching method, device, terminal and storage medium
CN114579016A (en) Method for sharing input equipment, electronic equipment and system
CN115756268A (en) Cross-device interaction method and device, screen projection system and terminal
CN113391775A (en) Man-machine interaction method and equipment
CN115150542B (en) Video anti-shake method and related equipment
CN114691059B (en) Screen-throwing display method and electronic equipment
CN117130471A (en) Man-machine interaction method, electronic equipment and system
CN117131888A (en) Method, electronic equipment and system for automatically scanning virtual space two-dimensional code
CN117130472A (en) Virtual space operation guide display method, mobile device and system
CN116055627B (en) Screen-off control method, electronic equipment and storage medium
CN115328592B (en) Display method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination