WO2022199597A1 - Method, apparatus and system for cropping image by VR/AR device


Info

Publication number: WO2022199597A1
Authority: WO (WIPO, PCT)
Application number: PCT/CN2022/082432
Other languages: French (fr), Chinese (zh)
Prior art keywords: image, electronic devices, user, user interface, ROI
Inventors: Jiang Yongtao (姜永涛), Xu Zhe (许哲), Lu Yuewan (卢曰万), Gao Wenmei (郜文美)
Original assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022199597A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Definitions

  • the present application relates to the field of terminals, and in particular, to a method, an apparatus, and a system for capturing an image by a VR/AR device.
  • Virtual reality (VR) technology uses computer simulation to generate a three-dimensional (3D) virtual scene and provides visual, auditory, tactile, or other sensory simulation, making users feel as if they are in the real world.
  • Augmented reality (AR) technology can superimpose virtual images on the user's view of the real world while the user watches it, and the user can also interact with the virtual images to achieve an augmented-reality effect.
  • the present application provides a method, apparatus, and system for capturing images by a VR/AR device, which can meet users' personalized needs, capture images in a user's region of interest (ROI), and improve user experience.
  • the present application provides a method for capturing an image by a VR/AR device. The method is applied to a VR/AR device that includes an optical component for providing a 3D scene and is worn on the user's head. The method includes: the VR/AR device displays a first user interface; in response to a first operation, the VR/AR device determines an ROI in the display screen of the VR/AR device and captures a first image, where the first image is the image in the ROI.
  • the VR/AR device can capture the image in the ROI according to the user's operation, thereby satisfying the user's personalized needs and improving user experience.
  • the above-mentioned first operation includes: an operation of moving a user's hand, or an operation of moving a user's eye gaze point, or an operation of moving an input device connected to the VR/AR device.
  • the user can trigger image capture through a variety of operations, which improves practicability and operability.
  • the method for capturing an image by the VR/AR device further includes: in response to the first operation, the VR/AR device displays the movement track of the cursor on the display screen, where the movement track of the cursor corresponds to the movement track of the user's hand, the movement track of the user's eye gaze point, or the movement track of the input device; the ROI is an area containing the movement track of the cursor.
  • the user can watch the movement track of the cursor on the display screen of the VR/AR device, which visualizes the interaction between the user and the VR/AR device and improves user experience.
  • the ROI is an area containing the movement track of the cursor. For example, the ROI may be the smallest regular area containing the movement track of the cursor, or the smallest irregular area, where the smallest irregular area is the area enclosed by the movement track of the cursor itself.
  • the VR/AR device can determine a variety of possible ROIs according to the user's operation, improving the user experience.
  • the method further includes: the VR/AR device displays a second user interface, and the second user interface displays the identifiers of one or more electronic devices; in response to a second operation of selecting a first identifier, the VR/AR device sends the first image to a first device, where the first device corresponds to the first identifier and the identifiers of the one or more electronic devices in the second user interface include the first identifier.
  • the user can send the captured first image to any one or more surrounding electronic devices, so as to meet user requirements and improve user experience.
  • before the VR/AR device displays the second user interface, the method further includes: the VR/AR device detects a third operation; in response to the third operation, the VR/AR device displays the second user interface.
  • the VR/AR device can display the second user interface only after receiving the operation of triggering the display of the identifiers of the surrounding electronic devices, so as to meet the personalized needs of the user.
  • the location of the identifier of the one or more electronic devices in the second user interface is used to indicate the location of the one or more electronic devices relative to the VR/AR device.
  • the identifiers of one or more electronic devices include images of one or more electronic devices captured by the VR/AR device.
  • the method also includes: the VR/AR device captures images of the one or more electronic devices; the VR/AR device determines the positions of the one or more electronic devices relative to the VR/AR device according to these images.
  • the user can perceive the position of one or more surrounding electronic devices relative to himself by viewing the identifiers of one or more electronic devices, thereby providing a more immersive experience.
  • the identifiers of the one or more electronic devices include one or more of the following: icons, types, or models of the one or more electronic devices. After the VR/AR device captures the images of the one or more electronic devices, the method further includes: the VR/AR device acquires one or more of the icons, types, or models of the one or more electronic devices according to the images.
  • the VR/AR device can display images, virtual icons, types or models, etc. of one or more electronic devices according to user needs, so as to provide users with rich content to meet the individual needs of users.
  • before the VR/AR device displays the second user interface, the method further includes: after the VR/AR device sends a first request message, the VR/AR device receives first response messages sent by the one or more electronic devices, where the first response messages carry the communication addresses of the one or more electronic devices; the VR/AR device obtains the positions of the one or more electronic devices relative to the VR/AR device according to the reception of the first response messages.
  • in response to the second operation of selecting the first identifier, the VR/AR device sends the first image to the first device, which specifically includes: the VR/AR device determines the position of the first device according to the correspondence between the images of the one or more electronic devices and their positions relative to the VR/AR device; the VR/AR device determines the communication address of the first device according to the correspondence between the communication addresses of the one or more electronic devices and their positions relative to the VR/AR device; and the VR/AR device sends the first image to the first device according to the communication address of the first device.
  • the VR/AR device can share the first image with the first device according to the user's operation of selecting the first identifier of the first device and the communication address corresponding to the first identifier to meet the user's personalized needs.
  • the first operation includes one or more of the following: a gesture, a voice command, the state of the user's eyes, or an operation of pressing a button; the first operation is detected by the VR/AR device, or detected by the input device.
  • the first operation can be implemented in multiple ways, which improves the practicability of the screenshot method provided by the embodiment of the present application and improves the user experience.
  • the VR/AR device may further save the first image after capturing it in response to the first operation.
  • embodiments of the present application provide a VR/AR device, where the VR/AR device includes one or more processors and one or more memories; the one or more memories are coupled to the one or more processors and are used to store computer program code, the computer program code comprising computer instructions which, when executed by the one or more processors, cause the VR/AR device to perform the method described in the embodiments of the first aspect.
  • embodiments of the present application provide a computer program product containing instructions, characterized in that, when the computer program product runs on an electronic device, the electronic device is caused to execute the method described in the implementation manner of the first aspect.
  • embodiments of the present application provide a computer-readable storage medium, including instructions, characterized in that, when the instructions are executed on an electronic device, the electronic device is caused to execute the method described in the implementation manner of the first aspect.
  • the VR/AR device can capture an image in the ROI, and share the image to any one or more surrounding electronic devices, thereby improving user experience.
  • FIG. 1 is a schematic diagram of a communication system provided by an embodiment of the present application.
  • FIG. 2A is a schematic structural diagram of a VR/AR device provided by an embodiment of the present application.
  • FIG. 2B is a schematic diagram of an imaging principle of a VR/AR device provided by an embodiment of the present application.
  • FIG. 3 is a flow chart of a method for capturing an image provided by an embodiment of the present application.
  • FIG. 4A-FIG. 4B show a set of user interfaces for capturing images provided by an embodiment of the present application;
  • FIG. 5A-FIG. 5E show another set of user interfaces for capturing images provided by an embodiment of the present application;
  • FIG. 6 is a user interface for sharing an image provided by an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only, and should not be construed as indicating relative importance or implying the number of indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
  • An embodiment of the present application provides a method for capturing an image by a VR/AR device.
  • the VR/AR device displays an image and presents a 3D scene to a user.
  • a first operation may be input, and the VR/AR device may capture a first image in response to the first operation.
  • the image displayed on the display screen of the VR/AR device may be the same as or different from the image seen by the user.
  • the above-mentioned first image can be any of the following:
  • the VR/AR device may, in response to the first operation, capture all images displayed on the display screen as the first image.
  • the VR/AR device may determine the user's region of interest (ROI) in the display screen in response to the first operation, and then capture the image in the ROI on the display screen as the first image.
  • the ROI refers to an area in the display screen determined by the VR/AR device according to the user's operation.
  • the ROI may be determined based on the movement trajectory of the user's hand, the movement trajectory of the user's eye gaze point, or the movement trajectory of the input device 300: the VR/AR device displays on the display screen the movement trajectory of a cursor corresponding to that trajectory, and then determines the ROI according to the movement trajectory of the cursor.
  • the ROI can be a regular area or an irregular area.
  • in response to the first operation, the VR/AR device captures the entire image or a part of the image displayed on the screen; depending on the imaging principle of the VR/AR device, it may also be necessary to fuse the captured image or partial image into the image seen by the user to obtain the first image.
  • for the imaging principle of VR/AR devices, refer to the introduction of the VR/AR device later.
  • the VR/AR device may also share the first image to surrounding electronic devices.
  • the VR/AR device may display a user interface including the identifiers of surrounding electronic devices, and the location of the identifiers of the electronic devices in the user interface indicates the location of the electronic device relative to the user.
  • the identification of the electronic device includes, but is not limited to: an image, a virtual icon, a type, or a model of the electronic device. After seeing the user interface, the user can recognize the positions of the surrounding electronic devices relative to himself.
  • the VR/AR device establishes a communication connection with the electronic device corresponding to the identification in response to the detected operation of the identification of the selected electronic device, and shares the above-mentioned first image through the communication connection.
  • the VR/AR device can capture the first image at any time, and the size of the captured first image can be defined by the user.
  • the first image can be shared with the surrounding electronic devices so that other users can also see it, which satisfies the user's personalized needs.
  • FIG. 1 exemplarily shows a communication system 10 provided by an embodiment of the present application.
  • the communication system 10 includes a VR/AR device 100, and one or more electronic devices such as electronic device 201, electronic device 202, electronic device 203, and electronic device 204 around the VR/AR device 100.
  • the communication system 10 may also include an input device 300 .
  • the VR/AR device 100 can be an electronic device such as a helmet, glasses, etc. that can be worn on the user's head.
  • the VR/AR device 100 can be used in conjunction with another electronic device (such as a mobile phone) to receive data or content processed by the GPU of that device (such as a rendered image) and display it.
  • the VR/AR device 100 may be a terminal device such as VR glasses with limited computing power.
  • the VR/AR device 100 is used for displaying images, so as to present a 3D scene to a user and bring a VR/AR/MR experience to the user.
  • the 3D scene may include 3D images, 3D videos, and the like.
  • the VR/AR device 100 may capture the first image in response to the detected first operation.
  • for the first image, reference may be made to the above detailed description; for the steps of capturing the first image, reference may be made to the related descriptions of the following method embodiments.
  • the VR/AR device may display a user interface containing identifiers (e.g., images, virtual icons, types, models, etc.) of surrounding electronic devices in response to a detected user operation. Then, in response to a detected operation of selecting the identifier of an electronic device, such as a gesture pointing to the identifier in the user interface, the VR/AR device establishes a communication connection with the corresponding electronic device and shares the above-mentioned first image through the communication connection.
  • the above-mentioned communication connection may be a wired or wireless connection.
  • the wired connection may include a wired connection that communicates through an interface such as a USB interface, an HDMI interface, or the like.
  • the wireless connection may include a wireless connection that communicates through one or more of a BT communication module, a wireless local area networks (WLAN) communication module, and a UWB communication module.
  • the electronic device 201, the electronic device 202, the electronic device 203, and the electronic device 204 are electronic devices around the VR/AR device 100, such as a smart TV, a computer, a mobile phone, a VR/AR device, a tablet computer, a laptop computer with a touch-sensitive surface or touch panel, or a non-portable terminal device such as a desktop computer with a touch-sensitive surface or touch panel.
  • the electronic device 201 , the electronic device 202 , the electronic device 203 , and the electronic device 204 have one or more of a BT communication module, a WLAN communication module, and a UWB communication module.
  • the electronic device 201 can monitor the signals transmitted by the VR/AR device 100, such as detection requests, scanning signals, etc., through one or more of the BT communication module, the WLAN communication module, and the UWB communication module.
  • in this way, the VR/AR device 100 can discover the electronic device 201, determine the location and network address of the electronic device 201, and then establish a communication connection with it; the electronic device 201 can receive, based on the communication connection, the first image shared by the VR/AR device 100.
  • the input device 300 is a device for controlling the VR/AR device 100, for example, a handle, a mouse, a keyboard, a stylus, a wristband, and the like.
  • the input device 300 may be configured with various sensors, such as an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, and the like.
  • the pressure sensor may be disposed under the confirm key/cancel key of the input device 300 .
  • the input device 300 may collect movement data of the input device 300, and data indicating whether the confirmation button/cancel button of the input device 300 is pressed.
  • the movement data is collected by the sensors of the input device 300; for example, the acceleration sensor collects the acceleration of the input device 300, and the gyro sensor collects the angular velocity of the input device 300, and the like.
  • the data indicating whether the confirm button/cancel button of the input device 300 is pressed includes the pressure value collected by the pressure sensor disposed under the confirm button/cancel button, the level signal generated by the input device 300, etc.
  • the input device 300 can establish a communication connection with the VR/AR device 100 and, through the communication connection, send the collected movement data of the input device 300 and the data indicating whether its confirm button/cancel button is pressed to the VR/AR device 100.
  • the communication connection may be a wired or wireless connection.
  • the wired connection may include a wired connection that communicates through a USB interface, an HDMI interface, a custom interface, or the like.
  • the wireless connection may include one or more of wireless connections that communicate via Bluetooth, near field communication (NFC), ZigBee, and other short-range transmission technologies.
  • the movement data of the input device 300 and the data indicating whether its confirm button/cancel button is pressed can be used by the VR/AR device 100 to determine the movement and/or state of the input device 300.
  • the movement of the input device 300 may include, but is not limited to, whether to move, the direction of movement, the speed of movement, the distance of movement, the trajectory of movement, and the like.
  • the state of the input device 300 may include whether the confirmation key of the input device 300 is pressed.
  • the VR/AR device 100 may activate a corresponding function according to the movement situation and/or state of the input device 300 . That is, the user can trigger the VR/AR device 100 to perform a corresponding function by inputting a user operation on the input device 300 . For example, when the user holds the input device 300 and moves it to the left by 3 cm, the cursor displayed on the display screen of the VR/AR device 100 can be moved to the left by 6 cm. In this way, the user can move the cursor to any position on the display screen of the VR/AR device 100 by manipulating the input device 300 .
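  • As an illustration of this mapping, the following minimal sketch assumes the 2x gain implied by the example above (3 cm of device motion producing 6 cm of cursor motion); the function and parameter names are illustrative, not taken from the present application:

```python
# Map input-device displacement to cursor displacement with a fixed gain.
# CURSOR_GAIN = 2.0 mirrors the 3 cm -> 6 cm example above; a real device
# may use a different or even adaptive gain.

CURSOR_GAIN = 2.0  # centimeters of cursor motion per centimeter of device motion

def update_cursor(cursor_xy, device_delta_xy, screen_size_cm):
    """Return the new cursor position, clamped to the display area."""
    x = cursor_xy[0] + CURSOR_GAIN * device_delta_xy[0]
    y = cursor_xy[1] + CURSOR_GAIN * device_delta_xy[1]
    x = min(max(x, 0.0), screen_size_cm[0])
    y = min(max(y, 0.0), screen_size_cm[1])
    return (x, y)

# Moving the handle 3 cm to the left moves the cursor 6 cm to the left.
print(update_cursor((50.0, 30.0), (-3.0, 0.0), (100.0, 60.0)))  # (44.0, 30.0)
```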
  • when the cursor is displayed on a control, the user can press the confirm button of the input device 300, so that the VR/AR device 100 activates the function corresponding to that control.
  • the input device 300 may be configured to receive a user operation for triggering the VR/AR device to capture the first image.
  • FIG. 2A shows a schematic structural diagram of a VR/AR device 100 provided by an embodiment of the present application.
  • the VR/AR device 100 may include: a processor 201 , a memory 202 , a communication module 203 , a sensor module 204 , a camera 205 , a display device 206 , and an audio device 207 .
  • the above components can be coupled and connected and communicate with each other, for example, the above components can be connected through a bus.
  • the structure shown in FIG. 2A does not constitute a specific limitation to the VR/AR device 100 .
  • the VR/AR device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently.
  • the VR/AR device 100 may further include physical keys such as on-off keys, volume keys, various interfaces such as a USB interface for supporting the connection between the VR/AR device 100 and the input device 300, and the like.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 201 may include one or more processing units; for example, the processor 201 may include a memory, an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction opcode and timing signal, and complete the control of fetching and executing instructions, so that each component can perform corresponding functions, such as human-computer interaction, motion tracking/prediction, rendering display, audio processing, etc.
  • with the instructions held in the memory in the processor 201, the processor 201 can identify the operation input by the user and execute the corresponding function.
  • the processor 201 can recognize user-input gestures such as a fist or a palm, voice commands such as "screenshot" or "share", and operations on the input device 300 such as shaking up and down, shaking left and right, or moving, and then determine the corresponding function to execute according to these operations.
  • the memory 202 stores executable program codes for executing the display methods provided by the embodiments of the present application, where the executable program codes include instructions.
  • the memory 202 may include a stored program area and a stored data area.
  • the storage program area may store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, etc.) created during the use of the VR/AR device 100 and the like.
  • the memory 202 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 201 executes various functional applications and data processing of the VR/AR device 100 by executing the instructions stored in the memory 202 and/or the instructions stored in the memory provided in the processor.
  • the memory 202 may store an image for display by the display device 206 of the VR/AR device 100, and send the image to the display device 206 in the form of an image stream.
  • the memory 202 may further store the first image captured by the VR/AR device 100 .
  • the memory 202 may store a database, the database including image models of the electronic device 201, the electronic device 202, the electronic device 203, the electronic device 204, and other electronic devices, as well as the type, virtual icon, model, etc. of the electronic device corresponding to each image model.
  • the memory 202 may also store images of the surrounding electronic devices captured by the camera 205 , the locations and communication addresses of the surrounding electronic devices, and the corresponding relationship between them. For details, please refer to the detailed description in Table 2 below, which will not be repeated here.
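  • As a hedged illustration of how such a correspondence table could be organized in the memory 202, the sketch below uses assumed field names, types, and example values; it is not the actual data layout of the present application (the referenced Table 2 is not reproduced here):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DeviceRecord:
    """One illustrative row of the device table kept in memory 202."""
    identifier: str                        # e.g. type/model/virtual icon label
    image_path: str                        # image of the device captured by camera 205
    position: Tuple[float, float, float]   # 3D position relative to the VR/AR device
    comm_address: str                      # MAC or IP address learned over BT/WLAN/UWB

# Example table with hypothetical values.
device_table = [
    DeviceRecord("smart TV", "captures/tv.png", (1.2, 0.3, 2.5), "AA:BB:CC:DD:EE:01"),
    DeviceRecord("tablet", "captures/tab.png", (-0.4, -0.1, 1.1), "AA:BB:CC:DD:EE:02"),
]
```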
  • the communication module 203 may include a wireless communication module and a wired communication module.
  • the wireless communication module can provide solutions for wireless communication such as BT, WLAN, UWB, GNSS, FM, IR, etc. applied to the VR/AR device 100 .
  • the wireless communication module may be one or more devices integrating at least one communication processing module.
  • the wired communication module may provide a wired connection including communication through an interface such as a USB interface, an HDMI interface, and the like.
  • the communication module 203 can support the communication between the VR/AR device 100 and surrounding electronic devices (electronic device 201 , electronic device 202 , electronic device 203 , electronic device 204 , etc.).
  • the VR/AR device 100 may establish a communication connection with the input device 300 through the wireless communication module or the wired communication module and, based on the communication connection, acquire the movement data collected by the input device 300 and the data indicating whether the confirm button/cancel button of the input device 300 is pressed.
  • the processor 201 can identify the operation input by the user according to the data sent by the input device 300, and determine the function to be executed corresponding to the operation.
  • the VR/AR device 100 may establish a communication connection with surrounding electronic devices through one or more of communication modules such as BT, WLAN, and UWB, and share the first image based on the communication connection.
  • the sensor module 204 may include an accelerometer, a compass, a gyroscope, a magnetometer, or other sensors for detecting motion, or the like.
  • the sensor module 204 is used to collect corresponding data; for example, the acceleration sensor collects the acceleration of the VR/AR device 100, and the gyroscope sensor collects the angular velocity of the VR/AR device 100, and the like.
  • the data collected by the sensor module 204 can reflect the movement of the head of the user wearing the VR/AR device 100 .
  • the sensor module 204 may be an inertial measurement unit (IMU) disposed within the VR/AR device 100 .
  • the VR/AR device 100 may send the data acquired by the sensor module to the processor 201 for analysis.
  • the processor 201 can determine the movement of the user's head according to the data collected by each sensor, and execute corresponding functions according to the movement of the user's head.
  • the sensor module 204 may also include optical sensors for tracking the user's eye position and capturing eye movement data in conjunction with the camera 205.
  • the eye movement data may be used, for example, to determine the user's eye separation, the 3D position of each eye relative to the VR/AR device 100, the magnitude of each eye's twist and rotation (i.e., roll, pitch, and yaw), and the gaze direction, etc.
  • in some embodiments, infrared light is emitted within the VR/AR device 100 and reflected from each eye; the reflected light is detected by the camera 205 or an optical sensor, and the detected data is transmitted to the processor 201, so that the processor 201 can analyze the position, pupil diameter, motion state, etc. of the user's eyes from the changes of the infrared light reflected by each eye.
  • the above-mentioned optical sensor may cooperate with the camera 205 to collect images of the user's eyeballs, track the movement trajectory of the user's gaze point, and transmit the trajectory data to the processor 201. The processor 201 analyzes the data, displays on the display screen the movement trajectory of the cursor corresponding to the movement trajectory of the gaze point, and then determines the ROI in the display screen according to the movement trajectory of the cursor.
  • Camera 205 may be used to capture still images or video.
  • the still image or video can be an externally facing image or video around the user, or an internal facing image or video.
  • the camera 205 can track the movement of one or both eyes of the user.
  • the camera 205 includes, but is not limited to, a traditional color camera (RGB camera), a depth camera (RGB depth camera), a dynamic vision sensor (DVS) camera, and the like.
  • the depth camera can obtain the depth information of the photographed object.
  • the camera 205 includes at least a pair of cameras. The pair of cameras can capture images facing the inside of the VR/AR device 100, i.e., images of the user's eyes, and can also capture images facing the outside of the VR/AR device 100, i.e., images of the user's hands.
  • the pair of cameras can send the collected images of the user's eyes or hands to the processor 201 for analysis; the processor 201 can identify the state of the user's eyes, such as whether the eyes blink and the number of blinks, and perform the corresponding function according to the state the eyes are in.
  • the processor 201 may also recognize the user's gestures, such as a fist, a palm, or "OK", according to the hand images, and perform different functions according to the different gestures.
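  • Conceptually, this gesture-to-function mapping behaves like a dispatch table. In the minimal sketch below, classify_gesture() is a hypothetical stand-in for the image analysis performed by the processor 201, and the label set mirrors the gestures named above:

```python
# Dispatch a recognized gesture label to the corresponding device function.
# Only the dispatch structure is illustrated; the recognizer is a stub.

def capture_full_screen():
    print("capturing all images displayed on the display screen")  # cf. FIG. 4A

def start_roi_selection():
    print("entering ROI selection")  # cf. FIG. 5A

GESTURE_ACTIONS = {
    "fist": capture_full_screen,
    "palm": start_roi_selection,
}

def classify_gesture(hand_image) -> str:
    """Hypothetical recognizer; a real one would analyze the camera image."""
    return "palm"

def on_hand_image(hand_image):
    action = GESTURE_ACTIONS.get(classify_gesture(hand_image))
    if action:
        action()

on_hand_image(hand_image=None)  # prints "entering ROI selection"
```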
  • the camera 205 may also include two pairs of cameras, one pair of which is used to capture images facing the outside of the VR/AR device 100, and the other pair of which is used to capture images facing the interior of the VR/AR device 100.
  • the VR/AR device 100 may present or display images through the GPU, the display device 206, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display device 206 and the application processor.
  • Processor 201 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display device 206 may include: one or more display screens, one or more optical components.
  • the one or more display screens may include, for example, display screen 101 and display screen 103 .
  • the one or more optical assemblies include, for example, optical assembly 102 and optical assembly 104 .
  • the display screens, such as the display screen 101 and the display screen 103, may include a display panel, and the display panel may be used to display images, thereby presenting a three-dimensional virtual scene to the user.
  • the display panel may adopt a liquid crystal display (LCD), OLED, AMOLED, FLED, mini-LED, micro-LED, micro-OLED, QLED, and the like.
  • Optical components such as the optical component 102, the optical component 104, are used to direct light from the display screen to the exit pupil for user perception.
  • one or more optical elements in an optical assembly may have one or more coatings, such as anti-reflection coatings.
  • the amplification of the image light by the optical components allows the display screen to be physically smaller and lighter and to consume less power.
  • the magnification of the image light can increase the field of view of the content displayed on the display screen.
  • at most, the optical assembly can make the content displayed on the display screen fill the user's entire field of view.
  • Optical assemblies may also be used to correct for one or more optical errors.
  • examples of optical errors include: barrel distortion, pincushion distortion, longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, comatic aberration, field curvature, astigmatism, and the like.
  • in some embodiments, the content provided to the display screen for display is pre-distorted, and the optical assembly corrects the distortion when it receives the content-based image light from the display screen.
  • FIG. 2B is a schematic diagram of an imaging principle of the VR/AR device 100 .
  • the display device 206 of the VR/AR device 100 may include: a display screen 101 , an optical component 102 , a display screen 103 , and an optical component 104 .
  • the first straight line where the center of the display screen 101 and the center of the optical component 102 are located is perpendicular to the third straight line where the center of the optical component 102 and the center of the optical component 104 are located.
  • Display screen 101 and optical assembly 102 correspond to the user's left eye.
  • an image a1 may be displayed on the display screen 101. After the display screen 101 displays the image a1, the light it emits passes through the optical component 102 and forms a virtual image a1' of the image a1 in front of the user's left eye.
  • the second straight line where the center of the display screen 103 and the center of the optical component 104 are located is perpendicular to the third straight line where the center of the optical component 102 and the center of the optical component 104 are located.
  • Display screen 103 and optical assembly 104 correspond to the user's right eye.
  • similarly, the display screen 103 may display an image a2. After the display screen 103 displays the image a2, the light it emits passes through the optical component 104 and forms a virtual image a2' of the image a2 in front of the user's right eye.
  • Image a1 and image a2 are two images with parallax for the same object, eg, object a.
  • Parallax refers to the difference in the position of the object in the field of view when the same object is viewed from two points that are a certain distance away.
  • the virtual image a1' and the virtual image a2' are located on the same plane, which can be called a virtual image plane.
  • the user's left eye will focus on the virtual image a1', and the user's right eye will focus on the virtual image a2'; the virtual image a1' and the virtual image a2' are then superimposed in the user's brain to form a complete and stereoscopic image, a process called vergence.
  • the convergence point of the two eyes' lines of sight is perceived by the user as the actual location of the object described by the image a1 and the image a2. Due to the vergence process, the user can perceive the 3D scene provided by the VR/AR device 100.
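  • The vergence geometry can be made concrete with a small calculation. The following sketch assumes a simplified two-ray model; the parameter names, the parallax sign convention, and the example values are assumptions rather than figures from the present application:

```python
def perceived_depth(ipd: float, plane_dist: float, parallax: float) -> float:
    """Depth at which the two lines of sight converge (same units as inputs).

    ipd:        interpupillary distance.
    plane_dist: distance from the eyes to the virtual image plane.
    parallax:   horizontal offset between virtual images a1' and a2' of the
                same object on that plane (x_left - x_right). Zero parallax
                places the object on the plane; positive (crossed) parallax
                pulls it in front of the plane.
    """
    return plane_dist * ipd / (ipd + parallax)

# Example: 63 mm eye separation, virtual image plane 2 m away, 10 mm parallax.
print(perceived_depth(0.063, 2.0, 0.010))  # ~1.73 m, in front of the plane
```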
  • the VR/AR device may also use other methods to provide the user with a 3D scene.
  • the display device 206 can be used to receive data or content (eg, rendered images) processed by the GPU of other electronic devices (eg, mobile phones), and display them.
  • in some embodiments, the VR/AR device 100 can be a terminal device such as VR glasses with limited computing power, which needs to be used in conjunction with another electronic device (such as a mobile phone) to present a 3D scene to the user and provide a VR/AR/MR experience.
  • the audio device 207 is used to realize the collection and output of audio. Audio devices 207 may include, but are not limited to, microphones, speakers, headphones, and the like.
  • the microphone may be used to receive the voice input by the user, and convert the voice into an electrical signal.
  • the processor 201 may be configured to receive the electrical signal corresponding to the voice from the microphone, recognize the operation corresponding to the voice, and perform that operation. For example, when the microphone receives the voice "share", the processor 201 may recognize the voice instruction and share the first image with a surrounding electronic device (e.g., the electronic device 201).
  • when capturing the image in the ROI, the camera 205 can continuously capture multiple palm images of the user and transmit them to the processor 201. The processor 201 analyzes the multiple palm images using binocular positioning technology, identifies the movement trajectory of the user's palm, displays on the display screen the movement trajectory of the cursor corresponding to the movement trajectory of the palm, and then determines the ROI according to the trajectory. Afterwards, the processor 201 may capture the image in the ROI on the display screen as the first image. In some embodiments, the processor 201 may obtain the image stream sent from the internal memory to the display device 206 to obtain the first image. In other embodiments, the processor 201 may directly acquire the image displayed by the display device 206 and crop it to obtain the first image.
  • when sharing the first image, the camera 205 can capture images of the surrounding electronic devices and send them to the processor 201, and the processor 201 analyzes these images using binocular positioning technology to identify the positions of the surrounding electronic devices relative to the VR/AR device 100, for example, their three-dimensional coordinates.
  • the processor 201 can also match the images of the surrounding electronic devices with the image models of electronic devices in the database, so as to identify the type, virtual icon, model, etc. of each surrounding electronic device.
  • the processor 201 may invoke the display device 206 to display the identifiers of the surrounding electronic devices, where the identifiers of the electronic devices include, but are not limited to, images, types, virtual icons, models, and the like of the electronic devices.
  • the processor 201 may also invoke one or more of the communication modules such as BT, WLAN, and UWB to obtain the communication addresses of the surrounding electronic devices, such as MAC addresses or IP addresses.
  • the processor 201 may also obtain positions such as angles and distances respectively corresponding to the communication addresses of the surrounding electronic devices in combination with the BT positioning technology/WiFi positioning technology/UWB positioning technology.
  • the processor 201 can use a coincidence positioning algorithm to match the positions of the surrounding electronic devices obtained by the binocular positioning technology with the positions obtained by the BT positioning technology/WiFi positioning technology/UWB positioning technology, and thereby establish a correspondence between the identifiers of the surrounding electronic devices and their communication addresses.
  • the processor 201 receives the operation of the user selecting the identification of the electronic device, and in response to the operation, establishes a communication connection with the corresponding electronic device based on the communication address corresponding to the identification, and shares the above-mentioned first image.
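  • A minimal sketch of the matching step described above, pairing each camera-derived position with the nearest radio-derived position to build the identifier-to-address correspondence; the nearest-neighbor criterion, names, and example values are assumptions:

```python
import math

def match_positions(vision_positions, radio_positions):
    """Pair camera-derived device positions with radio-derived positions.

    vision_positions: {identifier: (x, y, z)} from binocular positioning.
    radio_positions:  {comm_address: (x, y, z)} from BT/WiFi/UWB positioning.
    Returns {identifier: comm_address} by nearest-neighbor matching.
    """
    mapping = {}
    for ident, pos in vision_positions.items():
        nearest = min(radio_positions,
                      key=lambda addr: math.dist(pos, radio_positions[addr]))
        mapping[ident] = nearest
    return mapping

# Hypothetical example: two devices seen by the cameras and located over UWB.
correspondence = match_positions(
    {"smart TV": (1.2, 0.3, 2.5), "tablet": (-0.4, -0.1, 1.1)},
    {"AA:BB:CC:DD:EE:01": (1.25, 0.28, 2.4),
     "AA:BB:CC:DD:EE:02": (-0.38, -0.12, 1.0)},
)
# correspondence == {"smart TV": "AA:BB:CC:DD:EE:01", "tablet": "AA:BB:CC:DD:EE:02"}
```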
  • the above binocular positioning technology refers to: the VR/AR device 100 uses a pair of cameras in the camera 205 to separately collect images of a surrounding electronic device (for example, the electronic device 201), obtains from the two collected images the pixel coordinates (two-dimensional coordinates) of the electronic device 201 in the images captured by the two cameras, and then, in combination with the relative positions of the two cameras, obtains the position of the electronic device 201, such as its three-dimensional coordinates, according to a geometric relationship algorithm.
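  • For a rectified pair of cameras with a horizontal baseline, the geometric relationship algorithm reduces to standard pinhole triangulation, sketched below; the focal length, baseline, and pixel values in the example are assumptions:

```python
def triangulate(u_left, u_right, v, focal_px, baseline_m):
    """3D position from pixel coordinates in two rectified cameras.

    u_left / u_right: horizontal pixel coordinates of the same device in the
    left and right camera images (relative to each image center).
    v: vertical pixel coordinate (same in both images after rectification).
    focal_px: focal length in pixels; baseline_m: camera separation in meters.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("device must be in front of both cameras")
    z = focal_px * baseline_m / disparity     # depth
    x = u_left * z / focal_px                 # lateral offset (left-camera frame)
    y = v * z / focal_px                      # vertical offset
    return (x, y, z)

# Hypothetical numbers: 800 px focal length, 6 cm baseline, 40 px disparity.
print(triangulate(120.0, 80.0, -30.0, 800.0, 0.06))  # (0.18, -0.045, 1.2)
```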
  • the above BT positioning technology/WiFi positioning technology/UWB positioning technology refers to: the VR/AR device 100 sends a detection request to the surrounding electronic devices and then receives their detection responses; it measures the angle of a surrounding electronic device (e.g., the electronic device 201) relative to the VR/AR device 100 from the angle of arrival (AoA) of the BT/WIFI/UWB signal of the detection response, and calculates the distance of the electronic device 201 using triangulation, the received signal strength indicator (RSSI) value, or the time of flight (TOF) of the BT/WIFI/UWB signal, thereby obtaining the position (angle and distance) of the electronic device 201 relative to the VR/AR device 100.
  • the probe request is also referred to as the first request message
  • the probe response is also referred to as the first response message
  • the reception of the first response message refers to the AoA and TOF of the BT/WIFI/UWB signal of the detection response.
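  • Combining the AoA angle with a TOF-derived range yields the device's position relative to the VR/AR device directly. The sketch below is a 2D simplification with assumed names and example values:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def radio_position(aoa_deg: float, tof_s: float):
    """2D position (forward x, lateral y) and range of a responding device.

    aoa_deg: angle of arrival of the detection response's signal, measured
             from the VR/AR device's forward axis.
    tof_s:   one-way time of flight of the BT/WiFi/UWB signal.
    """
    distance = SPEED_OF_LIGHT * tof_s
    angle = math.radians(aoa_deg)
    return distance * math.cos(angle), distance * math.sin(angle), distance

# Hypothetical UWB measurement: 30 degrees off-axis, 10 ns one-way flight.
x, y, d = radio_position(30.0, 10e-9)  # d is about 3.0 m; x about 2.6 m, y about 1.5 m
```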
  • FIG. 3 exemplarily shows a flow of a method for capturing a first image by a VR/AR device according to an embodiment of the present application.
  • the VR/AR device 100 captures a first image in response to the detected first operation.
  • the first operation is used to capture all images displayed on the display screen of the VR/AR device 100 .
  • FIG. 4A-FIG. 4B exemplarily show a user interface 410 in which the VR/AR device 100 captures all images displayed on the display screen.
  • the user interface 410 may also be referred to as a first user interface.
  • FIG. 4A exemplarily shows a possible implementation manner in which the VR/AR device 100 detects a first operation, such as a gesture operation.
  • user interface 410 displays image 411, cursor 412A.
  • the image 411 is the entire image displayed on the display screen of the VR/AR device 100.
  • Cursor 412A may take the form of an "arrow" icon.
  • the above-mentioned cursor can be an icon, an image, etc. In different cases, the icon corresponding to the cursor can be different: for example, the corresponding icon can be an arrow or a hand icon, or the icon corresponding to the cursor can be a vertical line or a horizontal line.
  • the VR/AR device 100 may detect a first operation in the user interface 410 shown in FIG. 4A, for example, detecting that the user inputs a "make a fist" gesture 412B. The gesture operation is captured by the camera 205 of the VR/AR device 100; that is, the "make a fist" gesture 412B input by the user should be within the shooting range of the camera 205.
  • the VR/AR device 100 captures the first image and displays the user interface 410 as shown in FIG. 4B .
  • FIG. 4B exemplarily shows a user interface in which the VR/AR device 100 captures a first image in response to the first operation.
  • the user interface 410 displays an image 411 , a cursor 412A, a control 413A, a control 413B and a rectangular wireframe 414 .
  • the image 411 is the first image captured by the user.
  • the processor 201 may obtain the image stream sent from the internal memory to the display device 206 to obtain the first image. In other embodiments, the processor 201 may directly acquire all the images displayed by the display device 206, and capture the images to obtain the first image.
  • the expression form of the cursor 412A may be changed to a "fist" image, which is used to prompt the user that the VR/AR device 100 has received the first operation input by the user.
  • the control 413A is used to share the captured first image
  • the control 413B is used to cancel the sharing of the captured first image
  • the rectangular wire frame 414 is the frame of the image 411 , which is used to prompt the user that the VR/AR device 100 has captured all the images in the user interface 410 .
  • FIG. 4A-FIG. 4B only exemplarily show an implementation manner in which the VR/AR device 100 captures the first image.
  • the above-mentioned first operation may also be an operation in which the user shakes the input device 300 up and down. Specifically, the input device 300 detects the shaking operation and transmits the relevant data to the VR/AR device 100, and the processor 201 in the VR/AR device 100 determines according to the data that the input device 300 has received the first operation.
  • the first operation may also be the user inputting a voice instruction of "capture full screen", or the like.
  • in some embodiments, after the VR/AR device 100 detects the first operation, it can automatically capture the first image within a certain period of time, for example, within 1 second. In other embodiments, after detecting the first operation, the VR/AR device 100 may further receive a user-input confirmation operation, for example, the user pressing a button on the input device 300 or operating a control for confirming the capture, and capture the first image in response to detecting that operation.
  • the VR/AR device 100 may also receive an operation input by the user to adjust the size or position of the first image; for example, pinching or spreading the index finger and middle finger controls shrinking or enlarging the first image, and moving the index finger controls the position movement of the first image.
  • the first operation is used to capture the image in the ROI.
  • the first operation may include: an operation for triggering the VR/AR device 100 to select an ROI, an operation for the VR/AR device 100 to determine the ROI, and an operation for the VR/AR device 100 to capture an image in the ROI.
  • FIGS. 5A-5E exemplarily illustrate a user interface 510 for the VR/AR device 100 to capture an image in an ROI.
  • the user interface 510 may also be referred to as a first user interface.
  • FIG. 5A exemplarily shows a possible implementation manner in which the VR/AR device 100 detects a first operation for triggering the VR/AR device 100 to select an ROI, such as a gesture operation.
  • user interface 510 displays image 511, cursor 512A.
  • the user interface shown in FIG. 5A is the same as the user interface shown in FIG. 4A , please refer to the description of FIG. 4A above for details.
  • the VR/AR device 100 may detect, in the user interface 510 shown in FIG. 5A, an operation for triggering the VR/AR device 100 to select an ROI, for example, detecting that the user inputs a "palm" gesture 512B; in response to the operation, the VR/AR device 100 displays the user interface 510 shown in FIG. 5B.
  • specifically, the camera 205 captures an image of the "palm" gesture 512B input by the user and sends the image to the processor 201; the processor 201 analyzes the image to determine the operation corresponding to the "palm" gesture 512B, and in response to this operation, the VR/AR device 100 displays the user interface 510 shown in FIG. 5B.
  • user interface 510 displays image 511, cursor 512A.
  • the representation form of the cursor 512A becomes a "palm" image, which is used to prompt the user that the VR/AR device 100 has received the user's gesture operation for triggering the device to select the ROI.
  • FIGS. 5A-5B only exemplarily show a possible implementation of the operation for triggering the VR/AR device 100 to select an ROI in the first operation. In some embodiments, the operation for triggering ROI selection can also be the user inputting a voice command such as "screenshot". Specifically, the microphone of the VR/AR device 100 receives the voice and transmits it to the processor 201, and the processor 201 analyzes the voice and executes the corresponding function according to it.
  • the operation for triggering the VR/AR device 100 to select the ROI may also be an operation of the user shaking the input device 300 left and right. Specifically, the input device 300 detects the shaking operation and transmits the relevant data to the VR/AR device 100, and the processor 201 in the VR/AR device 100 determines, according to the data, that the input device 300 has received the operation for triggering the VR/AR device 100 to select the ROI, and executes the function corresponding to the operation.
  • the VR/AR device 100 may detect, in the user interface 510 shown in FIG. 5B, an operation for the VR/AR device 100 to determine the ROI, for example, detecting a movement of the user's "palm" 512B in which the movement track 512C of the "palm" 512B forms an approximately regular figure, such as a rectangle, a diamond, a triangle, or a circle.
  • the VR/AR device 100 displays a user interface 510 as shown in FIG. 5C .
  • the user interface 510 displays an image 511, a cursor 512A, and a movement track 512D of the cursor 512A, a rectangular wire frame 513, a first image 514, a control 515A, and a control 515B.
  • the movement track 512D of the cursor 512A is the movement track of the cursor corresponding to the movement track 512C of the "palm" 512B.
  • specifically, the VR/AR device 100 calls the camera 205 to continuously collect images of the user's hand within a certain period of time (for example, 3 seconds). The camera 205 transmits the collected hand images to the processor 201, and the processor 201 analyzes the multiple palm images to identify the coordinates of the palm in each image; the palm coordinates across the multiple images form the movement track 512C, and the movement track 512D of the cursor corresponding to the movement track 512C of the palm is then displayed on the display screen. For example, when the user's palm moves to the left, the cursor displayed on the display screen of the VR/AR device 100 moves to the left by, for example, 10 cm.
  • the rectangular wire frame 513 is a rectangular wire frame surrounding the ROI, which is used to prompt the user that the VR/AR device 100 has determined the ROI.
  • the processor in the VR/AR device 100 may determine the ROI corresponding to the movement track 512D by using an algorithm for determining the ROI according to the movement track 512D.
  • the ROI may be the smallest regular area containing the movement trajectory 512D, or the smallest irregular area.
  • taking the ROI being the smallest rectangular area containing the movement track 512D as an example, the VR/AR device 100 can take the minimum and maximum abscissas and the minimum and maximum ordinates among the pixel coordinates of the movement track 512D on the current display screen; combining these four extreme values yields four two-dimensional coordinates, and the rectangular wire frame enclosed by these four coordinates is the rectangular wire frame 513 surrounding the ROI, as illustrated by the sketch below.
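  • The min/max construction just described amounts to taking the axis-aligned bounding box of the cursor trajectory. The sketch below illustrates it, assuming a row-major NumPy frame and a hypothetical trajectory:

```python
import numpy as np

def roi_from_track(track_xy):
    """Smallest axis-aligned rectangle containing the cursor trajectory.

    track_xy: iterable of (x, y) pixel coordinates of the cursor track 512D.
    Returns (x_min, y_min, x_max, y_max); combining these four extreme
    values gives the four corner coordinates of rectangular wire frame 513.
    """
    xs = [p[0] for p in track_xy]
    ys = [p[1] for p in track_xy]
    return min(xs), min(ys), max(xs), max(ys)

def crop_first_image(frame, track_xy):
    """Capture the first image: the pixels of the displayed frame inside the ROI."""
    x0, y0, x1, y1 = roi_from_track(track_xy)
    return frame[int(y0):int(y1) + 1, int(x0):int(x1) + 1]

# Hypothetical trajectory roughly tracing a rectangle on a 1080p frame.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
first_image = crop_first_image(frame, [(400, 300), (900, 310), (880, 700), (410, 690)])
print(first_image.shape)  # (401, 501, 3)
```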
  • in other embodiments, the ROI corresponding to the movement track 512D may not be the smallest rectangular area containing the movement track 512D, but may be a circular area or another shape, which is determined by the ROI determination algorithm stored in the VR/AR device 100.
  • when the ROI is the smallest irregular area containing the movement track 512D, the area enclosed by the movement track 512D itself is the ROI.
  • the image 514 is the image in the ROI, that is, the first image.
  • the processor 201 may acquire the image stream sent from the internal memory to the display device 206, and capture the image in the ROI in the image stream to obtain the first image. In other embodiments, the processor 201 may directly acquire the image displayed by the display device 206, and capture the image in the ROI in the display screen to obtain the first image.
  • Control 515A is for determining the captured image 514 .
  • Control 515B is used to cancel the captured image 514 .
  • FIG. 5C only exemplarily shows a possible implementation of the operation for the VR/AR device 100 to determine the ROI in the first operation.
  • in some embodiments, the operation for the VR/AR device 100 to determine the ROI may be an operation of controlling the movement of the cursor by moving the user's gaze point, where the movement track of the cursor forms an approximately regular figure.
  • specifically, the camera 205 continuously collects images of the user's eyeballs and sends the collected eyeball images to the processor 201. The processor 201 analyzes the images, determines the movement trajectory of the user's gaze point, displays on the display screen the movement trajectory of the cursor corresponding to the movement trajectory of the gaze point, and then determines the ROI according to the movement track of the cursor.
  • the operation for the VR/AR device 100 to determine the ROI may also be an operation in which the user controls the movement of the cursor by moving the handheld input device 300, where the movement track of the cursor forms an approximately regular figure.
  • specifically, the input device 300 can detect its own movement and transmit the relevant data of the movement operation to the VR/AR device 100. The processor 201 in the VR/AR device 100 determines, according to the data, that the input device 300 has received the operation for determining the ROI in the first operation, and performs the function corresponding to the operation: it displays on the display screen the movement trajectory of the cursor corresponding to the movement trajectory of the input device 300, and then determines the ROI according to the cursor's movement trajectory.
  • in the above process, the movement of the cursor is controlled by the user's palm movement, gaze-point movement, or movement of the handheld input device 300, and the cursor's movement trajectory may or may not form an approximately regular figure.
  • when the trajectory does not form an approximately regular figure, the processor of the VR/AR device 100 can automatically determine an ROI with a regular figure, such as a rectangle, a circle, a diamond, or a triangle, where the ROI contains the movement track of the cursor that does not form an approximately regular figure.
  • the VR/AR device 100 may prompt the user that the operation is an invalid operation, and ask the user to re-enter the operation.
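A sketch of one possible validity check: if the cursor track is nearly closed, fall back to the smallest regular (rectangular) ROI containing it; otherwise treat the input as invalid. The closure threshold is an assumption:

```python
import math

def determine_roi_or_reject(track, close_threshold=30.0):
    """Return a rectangular ROI containing an irregular closed track,
    or None when the track is too open to enclose an area."""
    (x0, y0), (xn, yn) = track[0], track[-1]
    if math.hypot(xn - x0, yn - y0) > close_threshold:
        return None  # invalid operation: prompt the user to re-enter it
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return min(xs), min(ys), max(xs), max(ys)
```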
  • the VR/AR device 100 may also receive an operation input by the user to adjust the size or position of the first image; for example, a gesture of pinching or spreading the index finger and middle finger is used to shrink or enlarge the first image, and moving the index finger is used to move the position of the first image.
  • FIG. 5D exemplarily shows a possible implementation manner in which the VR/AR device 100 detects the first operation for capturing the image in the ROI, such as a gesture operation.
  • for example, the VR/AR device 100 may detect an operation performed by the user on the control 515A in the user interface 510 shown in FIG. 5D.
  • the operation may be that the user controls the cursor 512A to move to the control 515A by moving the hand and keeps it there for a certain time, for example, 1 second.
  • in response to the operation, the VR/AR device 100 captures the image in the ROI and displays the user interface 510 shown in FIG. 5E.
  • the user interface 510 includes a cursor 512A, a first image 514, a control 516A, and a control 516B, wherein:
  • the first image 514 is the image in the ROI.
  • the control 516A is used to share the first image 514;
  • the control 516B is used to cancel sharing the first image 514.
  • FIG. 5D-FIG. 5E only exemplarily show a possible implementation manner for capturing the ROI in the first operation.
  • the operation for capturing the image in the ROI may also be a gesture input by the user such as "OK", a voice input such as "OK", an operation such as blinking twice in a row, and the like.
  • FIGS. 5A-5E are only illustrative of the user interfaces in which the VR/AR device 100 captures the image in the ROI in response to the detected first operation, and should not constitute a limitation on the embodiments of the present application.
  • the operation for triggering the VR/AR device 100 to select the ROI and the operation for the VR/AR device 100 to determine the ROI may be merged into one operation.
  • for example, the VR/AR device 100 can detect a user-input operation, such as the movement of the "palm" 512B in the user interface shown in FIG. 5A, in which the movement track 512C of the "palm" 512B encloses an approximately regular figure, such as a rectangle, a diamond, a triangle or a circle.
  • in response, the VR/AR device 100 displays the user interface 510 shown in FIG. 5C.
  • the operation for the VR/AR device 100 to determine the ROI and the operation for the VR/AR device 100 to capture the image in the ROI may be merged into one operation.
  • for example, the VR/AR device 100 detects a user-input operation such as the movement of the "palm" 512B, in which the movement track 512C of the "palm" 512B encloses an approximately regular figure, such as a rectangle, a diamond, a triangle or a circle.
  • in response, the VR/AR device 100 does not display the user interface 510 shown in FIG. 5C, but directly captures the first image and displays the user interface shown in FIG. 5D.
  • an operation for triggering the VR/AR device 100 to select an ROI, an operation for the VR/AR device 100 to determine the ROI, and an operation for the VR/AR device 100 to capture the image in the ROI may all be fused into one operation.
  • for example, the VR/AR device 100 can detect a user-input operation such as the movement of the "palm" 512B in the user interface shown in FIG. 5A, in which the movement track 512C of the "palm" 512B encloses an approximately regular figure, such as a rectangle, a diamond, a triangle or a circle.
  • in response, the VR/AR device 100 does not display the user interface 510 shown in FIG. 5C, but directly determines the ROI, captures the first image, and displays the user interface shown in FIG. 5D.
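Reusing determine_roi_or_reject and capture_first_image from the sketches above, the fully fused variant could be handled in a single callback; this is a speculative sketch, not the application's stated implementation:

```python
def on_palm_track_finished(track, frame):
    """Fused handling of select-ROI, determine-ROI and capture.

    If the finished "palm" track encloses an approximately regular
    figure, the ROI is determined and the first image is captured in
    one step, skipping the confirmation interface of FIG. 5C.
    """
    roi = determine_roi_or_reject(track)
    if roi is None:
        return None                          # prompt: invalid operation
    return capture_first_image(frame, roi)   # then show FIG. 5D interface
```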
  • the method flow shown in FIG. 3 further includes: saving the above-mentioned first image.
  • the VR/AR device 100 saves the captured first image.
  • the first image may be automatically saved to the gallery of the VR/AR device 100 after a certain period of time.
  • the VR/AR device 100 may also receive a user operation, and in response to the operation, save the first image to the gallery of the VR/AR device 100 .
  • the VR/AR device 100 may fuse the first image, according to the imaging principle, into a first image with the 3D effect viewed by the user, and save the first image with the 3D effect.
  • in this way, when the user shares the first image with the 3D effect to surrounding electronic devices such as mobile phones, computers, tablets and smart screens, and watches the first image on those devices, it still has the same 3D effect as when viewed while wearing the VR/AR device 100.
  • the method flow shown in FIG. 3 further includes: sharing the above-mentioned first image.
  • Steps S103-S104 are the method flow for implementing the VR/AR device 100 to share the first image to surrounding electronic devices such as the electronic device 201 .
  • the VR/AR device 100 displays the identifiers of surrounding electronic devices in response to the detected user operation.
  • the user operation is an operation that triggers the VR/AR device 100 to display the identifiers of surrounding electronic devices.
  • the third operation is a user operation for triggering the VR/AR device 100 to display the identifiers of surrounding electronic devices.
  • the VR/AR device 100 may detect, in the user interface 410 shown in FIG. 4B or the user interface 510 shown in FIG. 5E, a user operation acting on the control 412A or the control 516A, or a corresponding user input.
  • the VR/AR device 100 may also detect the above-mentioned operation for triggering the display of the identifiers of the surrounding electronic devices in the user interface storing the first image (for example, the user interface provided by the gallery).
  • in response to this operation, the user interface 610 shown in FIG. 6 is displayed.
  • the user interface 610 may also be referred to as a second user interface.
  • the user interface 610 displays the identifiers of the surrounding electronic devices, such as images of the surrounding electronic devices, including: image 1, image 2, image 3 and image 4.
  • the identifiers of the surrounding electronic devices displayed on the user interface 610 may further include: virtual icons, types, and models of the surrounding electronic devices.
  • the VR/AR device 100 obtains the positions and images of the surrounding electronic devices relative to the VR/AR device 100 according to the binocular positioning technology, such as position 1 and image 1 of the electronic device 201, then matches image 1 against the image models of electronic devices in the pre-stored database, and thereby determines the virtual icon 1, type 1 and model 1 corresponding to image 1.
  • for the definition of the database stored in the VR/AR device 100, reference may be made to the detailed introduction in Table 1.
  • Table 1 exemplarily shows image models of electronic devices and virtual icons, types, models, etc. corresponding to each image model.
  • the database shown in Table 1 may also include more image models of electronic devices, as well as virtual icons, types, and models corresponding to each image model.
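A toy version of the Table 1 lookup might look as follows; the database contents and the image-matching score are placeholders, since the application does not specify the matching algorithm:

```python
# Hypothetical stand-in for the pre-stored database of Table 1:
# image model -> (virtual icon, type, model)
DEVICE_DB = {
    "model_smart_tv": ("icon_tv", "smart TV", "TV-2021"),
    "model_phone": ("icon_phone", "mobile phone", "P-40"),
}

def identify_device(image, match_score):
    """Match a captured device image against the stored image models.

    match_score(image, image_model) -> similarity in [0, 1]; the real
    matching algorithm is assumed, not specified here.
    Returns the (virtual icon, type, model) of the best-matching model.
    """
    best_model = max(DEVICE_DB, key=lambda m: match_score(image, m))
    return DEVICE_DB[best_model]
```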
  • the VR/AR device 100 obtains the positions of the surrounding electronic devices relative to the VR/AR device 100 and the corresponding communication addresses according to BT positioning technology/WiFi positioning technology/UWB positioning technology, for example, position 1 of the electronic device 201 and the corresponding communication address 1, and then displays the images of the surrounding electronic devices at the corresponding locations in the user interface 610. Afterwards, the VR/AR device 100 matches, according to a coincidence positioning algorithm, the positions of the surrounding electronic devices relative to the VR/AR device 100 obtained by the binocular positioning technology with the positions obtained by the BT positioning technology/WiFi positioning technology/UWB positioning technology, and then determines the correspondence between the identifiers, positions and communication addresses of the surrounding electronic devices.
  • for the correspondence between the identifiers, positions and communication addresses of the surrounding electronic devices, reference may be made to the detailed introduction in Table 2.
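The coincidence (position-matching) step could be sketched as a nearest-neighbor pairing of the two position sets, yielding the identifier-position-address correspondence of Table 2; the distance threshold is an assumption:

```python
import math

def build_table2(visual, radio, max_dist=0.5):
    """Pair visually located devices with radio-located devices.

    visual: list of (image_id, (x, y, z)) from binocular positioning.
    radio:  list of (comm_addr, (x, y, z)) from BT/WiFi/UWB positioning.
    Returns entries (image_id, position, comm_addr) for matched pairs,
    i.e. the correspondence recorded in Table 2.
    """
    table = []
    for image_id, vpos in visual:
        addr, rpos = min(radio, key=lambda r: math.dist(vpos, r[1]))
        if math.dist(vpos, rpos) <= max_dist:
            table.append((image_id, vpos, addr))
    return table
```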
  • the VR/AR device 100 displays the identifiers of the surrounding electronic devices, such as image 1, image 2, image 3 and image 4, in the user interface 610.
  • the positions of image 1, image 2, image 3 and image 4 in the user interface 610 indicate the positions of the surrounding electronic devices relative to the VR/AR device 100.
  • for example, if a surrounding electronic device such as the electronic device 201 is located to the front left of the VR/AR device 100, the image of the electronic device 201 is displayed at the left position of the user interface 610, and the size of the image of the electronic device 201 is inversely proportional to the distance between the electronic device 201 and the VR/AR device 100; that is, the closer the electronic device 201 is to the VR/AR device, the larger its image, and the farther away it is, the smaller its image.
  • in this way, the user can sense that the electronic device 201 is located to the front left of the user, and can also sense the distance between the electronic device 201 and the user.
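The placement rule above can be sketched as follows, assuming a coordinate convention (x to the right, y up, z forward) that the application does not state; the constants are illustrative:

```python
def identifier_layout(rel_pos, base_size=200.0):
    """Compute where and how large a device identifier is drawn.

    rel_pos: (x, y, z) position of the device relative to the
             VR/AR device 100 (assumed convention: x right, y up,
             z forward).
    Returns ((u, v), size): angular screen coordinates and an image
    size inversely proportional to the device's distance.
    """
    x, y, z = rel_pos
    dist = (x * x + y * y + z * z) ** 0.5
    u = x / max(z, 1e-6)   # a device to the front left lands on the left
    v = y / max(z, 1e-6)
    return (u, v), base_size / max(dist, 1e-6)
```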
  • another representation form of the identifiers of the surrounding electronic devices displayed by the VR/AR device 100 may be: the identifiers of the surrounding electronic devices are displayed in a spatial coordinate system, and the location of each identifier in the spatial coordinate system indicates the spatial position of the corresponding electronic device relative to the VR/AR device 100.
  • in some embodiments, the VR/AR device 100 may, after detecting the above-mentioned operation that triggers it to display the identifiers of the surrounding electronic devices, obtain the content shown in Table 2 in response to the operation, and display the identifiers of the electronic devices shown in FIG. 6 according to the content of Table 2.
  • in other embodiments, the VR/AR device 100 may obtain the content shown in Table 2 in advance, for example, when the VR/AR device 100 is powered on or when it captures the first image; then, after detecting an operation that triggers it to display the identifiers of the surrounding electronic devices, it displays, in response to the operation, the identifiers of the electronic devices shown in FIG. 6 according to the content of Table 2 acquired in advance.
  • the content of Table 2 can be updated regularly, which not only reduces the delay before the VR/AR device 100 displays the identifiers of the surrounding electronic devices, but also ensures the accuracy of the positions of the surrounding electronic devices relative to the VR/AR device 100.
  • the VR/AR device 100 may directly display the identifiers of the surrounding electronic devices without receiving an operation for triggering the VR/AR device 100 to display the identifiers of the surrounding electronic devices. For example, in the user interface shown in FIG. 5E , the VR/AR device 100 directly displays the identifiers of the surrounding electronic devices shown in FIG. 6 after capturing the first image.
  • the VR/AR device 100 shares the first image to surrounding electronic devices.
  • the second operation is an operation for selecting the identifiers of the surrounding electronic devices.
  • the VR/AR device 100 may detect, in the user interface shown in FIG. 6, that the user inputs a pointing gesture 611 toward the identifier of a surrounding electronic device such as the electronic device 201, for example image 1; or inputs a voice naming the device identifier, such as "Type 1" or "Model 1"; or moves the input device 300 to control the cursor 512A to move to the position of the identifier of the electronic device 201 and presses the confirmation key, and so on.
  • in response to the operation, the VR/AR device 100 establishes a communication connection with the electronic device 201 based on the communication address 1 corresponding to the identifier of the electronic device 201 in Table 2, and shares the first image based on the communication connection.
  • the electronic device 201 may be referred to as the first device, and an identifier of the electronic device 201, such as image 1, may also be referred to as the first identifier.
  • in other embodiments, the VR/AR device 100 may detect, in the user interface shown in FIG. 6, an operation of successively selecting the identifiers of multiple electronic devices within a certain period of time, such as 2 seconds; in response to the operation, the VR/AR device 100 establishes communication connections with the multiple electronic devices based on the communication addresses corresponding to their identifiers in Table 2, and shares the first image based on those connections.
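The sharing flow of steps S103-S104 could then be sketched on top of the Table 2 entries built above; connect and send are placeholder communication primitives (e.g., over BT/WLAN/UWB), not a named API:

```python
def share_first_image(selected_ids, table2, first_image, connect, send):
    """Share the first image to every device whose identifier was selected.

    selected_ids: identifiers chosen by the user in user interface 610
                  (one identifier, or several selected within e.g. 2 s).
    table2:       entries (image_id, position, comm_addr); see Table 2.
    connect/send: placeholder communication primitives.
    """
    addr_by_id = {image_id: addr for image_id, _, addr in table2}
    for image_id in selected_ids:
        conn = connect(addr_by_id[image_id])  # establish the connection
        send(conn, first_image)               # share the first image
```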
  • the method for the VR/AR device 100 to share the first image is not limited to the methods shown in steps S103-S104.
  • for example, after the VR/AR device 100 captures the first image and receives a user operation, the VR/AR device 100 obtains the communication addresses of the surrounding electronic devices through the BT communication module/WiFi communication module/UWB communication module and displays a list of the surrounding electronic devices; the VR/AR device 100 then receives the user's operation of selecting any one or more electronic devices in the list, establishes a communication connection with each selected electronic device according to its communication address, and shares the first image.
  • in addition, the order in which the VR/AR device 100 captures the first image, establishes a communication connection with the surrounding electronic devices, and shares the first image based on the communication connection is not limited to the sequence shown in FIG. 3.
  • the VR/AR device 100 may first establish a communication connection with the surrounding electronic devices, then capture the first image, and then share the first image to the surrounding electronic devices based on the pre-established communication connection. For example, after the VR/AR device 100 is started, it can display the identifiers of the surrounding electronic devices; the VR/AR device 100 detects an operation of selecting the identifiers of the surrounding electronic devices and establishes communication connections with them. After the VR/AR device 100 captures the first image, it detects an operation for sharing the first image, such as an operation acting on the control 516A in FIG. 5E, and shares the first image to the surrounding electronic devices.
  • the VR/AR device 100 can establish a connection with the surrounding electronic devices in advance, thereby improving the efficiency of the VR/AR device 100 sharing the first image.
  • the VR/AR device can capture the first image at any time.
  • the size of the first image can be defined by the user, and the user can also share the first image to any one or more surrounding electronic devices.
  • the user can first capture the first image, and then establish a connection with the surrounding electronic devices and share the first image.
  • the different first images can be shared to different electronic devices respectively, so as to meet the personalized needs of the user.
  • alternatively, the user may first establish connections with the surrounding electronic devices and then share the first image. In this way, the first image can only be shared with the electronic devices connected in advance, but the efficiency of sharing the first image is improved.
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • when implemented in software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer program instructions, when loaded and executed on a computer, produce, in whole or in part, the processes or functions described herein.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks), and the like.
  • all or part of the processes of the foregoing method embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium.
  • when the program is executed, it may include the processes of the foregoing method embodiments.
  • the aforementioned storage medium includes: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or another medium that can store program code.

Abstract

The present application provides a method, apparatus and system for cropping an image by a VR/AR device. In the method, a VR/AR device displays images; during the process of presenting a 3D scene to a user, the user can input an operation after viewing the image presented by the VR/AR device, and the VR/AR device can crop the image in response to the operation. By implementing the method, the personalized requirements of the user can be met and the user experience improved.

Description

Method, apparatus and system for capturing images by a VR/AR device

This application claims priority to the Chinese patent application filed with the Chinese Patent Office on March 25, 2021, with application number 202110322569.X and the application title "Method, Apparatus and System for Capturing Images by VR/AR Devices", the entire contents of which are incorporated herein by reference.
Technical Field

The present application relates to the field of terminals, and in particular, to a method, an apparatus and a system for capturing images by a VR/AR device.
Background

Virtual reality (VR) technology uses computer simulation to generate a three-dimensional (3D) virtual scene and provides visual, auditory, tactile or other sensory simulation experiences, making users feel as if they were actually there. Augmented reality (AR) technology can superimpose virtual images for users to view while they watch the real world, and users can also interact with the virtual images to achieve an augmented-reality effect.

With the popularization of VR and AR devices, more and more users are beginning to experience VR and AR technologies. How to meet users' growing personalized needs is a major subject of current and future research.
Summary of the Invention

The present application provides a method, an apparatus and a system for capturing images by a VR/AR device, which can meet users' personalized needs, capture the image in a region of interest (ROI) of the user, and improve the user experience.

In a first aspect, the present application provides a method for capturing an image by a VR/AR device. The method is applied to a VR/AR device that includes an optical component for providing a three-dimensional (3D) scene and is worn on the user's head. The method includes: the VR/AR device displays a first user interface; in response to a first operation, the VR/AR device determines an ROI in the display screen of the VR/AR device and captures a first image, where the first image is the image in the ROI.
By implementing the method provided in the first aspect, the VR/AR device can capture the image in the ROI according to the user's operation, satisfying the user's personalized needs and improving the user experience.

With reference to the first aspect, in one embodiment, the above-mentioned first operation includes: an operation of moving the user's hand, or an operation of moving the user's eye gaze point, or an operation of moving an input device connected to the VR/AR device.

In this way, the user can trigger image capture through a variety of different operations, which improves operability for the user.

With reference to the first aspect, in one embodiment, before the VR/AR device determines the ROI, the method further includes: in response to the first operation, the VR/AR device displays the movement track of a cursor on the display screen, where the movement track of the cursor corresponds to the movement track of the user's hand, or the movement track of the user's eye gaze point, or the movement track of the input device; the ROI is an area containing the movement track of the cursor.

In this way, the user can watch the movement track of the cursor on the display screen of the VR/AR device, which further presents the interaction process between the user and the VR/AR device and improves the user experience.

With reference to the first aspect, in one embodiment, the ROI being an area containing the movement track of the cursor may be, for example: the ROI is the smallest regular area containing the movement track of the cursor; or the ROI is the smallest irregular area containing the movement track of the cursor, where the smallest irregular area is the area enclosed by the movement track of the cursor.

In this way, the VR/AR device can determine a variety of possible ROIs according to the user's operation, improving the user experience.
With reference to the first aspect, in one embodiment, after the VR/AR device captures the first image in response to the first operation, the method further includes: the VR/AR device displays a second user interface, where the second user interface displays the identifiers of one or more electronic devices; in response to a second operation of selecting a first identifier, the VR/AR device sends the first image to a first device, where the first device corresponds to the first identifier, and the identifiers of the one or more electronic devices in the second user interface include the first identifier.

In this way, the user can send the captured first image to any one or more surrounding electronic devices, meeting user needs and improving the user experience.

With reference to the first aspect, in one embodiment, before the VR/AR device displays the second user interface, the method further includes: the VR/AR device detects a third operation; in response to the third operation, the VR/AR device displays the second user interface.

In this way, the VR/AR device displays the second user interface only after receiving the operation for triggering the display of the identifiers of the surrounding electronic devices, meeting the user's personalized needs.

With reference to the first aspect, in one embodiment, the positions of the identifiers of the one or more electronic devices in the second user interface are used to indicate the positions of the one or more electronic devices relative to the VR/AR device.

With reference to the first aspect, in one embodiment, the identifiers of the one or more electronic devices include images of the one or more electronic devices captured by the VR/AR device; before the VR/AR device displays the second user interface, the method further includes: the VR/AR device captures images of the one or more electronic devices; the VR/AR device determines the positions of the one or more electronic devices relative to the VR/AR device according to the images of the one or more electronic devices.

In this way, by viewing the identifiers of the one or more electronic devices, the user can perceive the positions of the surrounding electronic devices relative to himself or herself, which provides a more immersive experience.
With reference to the first aspect, in one embodiment, the identifiers of the one or more electronic devices include one or more of the following: the icons, types or models of the one or more electronic devices; after the VR/AR device captures the images of the one or more electronic devices, the method further includes: the VR/AR device acquires one or more of the icons, types or models of the one or more electronic devices according to the images of the one or more electronic devices.

In this way, the VR/AR device can display the images, virtual icons, types or models of one or more electronic devices according to user needs, providing rich content for the user and meeting the user's personalized needs.

With reference to the first aspect, in one embodiment, before the VR/AR device displays the second user interface, the method further includes: after the VR/AR device sends a first request message, the VR/AR device receives first response messages sent by the one or more electronic devices, where the first response messages carry the communication addresses of the one or more electronic devices; the VR/AR device obtains the positions of the one or more electronic devices relative to the VR/AR device according to the reception of the first response messages. The VR/AR device sending the first image to the first device in response to the second operation of selecting the first identifier specifically includes: the VR/AR device determines the position of the first device according to the correspondence between the images of the one or more electronic devices and the positions of the one or more electronic devices relative to the VR/AR device; the VR/AR device determines the communication address of the first device according to the correspondence between the communication addresses of the one or more electronic devices and the positions of the one or more electronic devices relative to the VR/AR device; and the VR/AR device sends the first image to the first device according to the communication address of the first device.

In this way, the VR/AR device can share the first image with the first device according to the user's operation of selecting the first identifier of the first device and the communication address corresponding to the first identifier, meeting the user's personalized needs.

With reference to the first aspect, in one embodiment, the first operation includes one or more of the following: a gesture, a voice instruction, the user's eye state, or an operation of pressing a key; the first operation is detected by the VR/AR device, or detected by the input device.

In this way, the first operation can be implemented in multiple ways, which improves the practicability of the image capturing method provided by the embodiments of the present application and improves the user experience.

With reference to the first aspect, in one embodiment, after capturing the first image in response to the first operation, the VR/AR device may further save the first image.

In this way, it is convenient for the user to perform further operations on the first image, for example, sharing it with other electronic devices, improving the user experience.
In a second aspect, an embodiment of the present application provides a VR/AR device, which includes one or more processors and one or more memories; the one or more memories are coupled to the one or more processors and are used to store computer program code, the computer program code including computer instructions which, when executed by the one or more processors, cause the VR/AR device to perform the method described in the implementations of the first aspect.

In a third aspect, an embodiment of the present application provides a computer program product containing instructions which, when the computer program product runs on an electronic device, cause the electronic device to perform the method described in the implementations of the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium including instructions which, when run on an electronic device, cause the electronic device to perform the method described in the implementations of the first aspect.

By implementing the method for capturing images by a VR/AR device provided in the present application, the VR/AR device can capture the image in the ROI and share the image with any one or more surrounding electronic devices, thereby improving the user experience.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of a communication system provided by an embodiment of the present application;

FIG. 2A is a schematic structural diagram of a VR/AR device provided by an embodiment of the present application;

FIG. 2B is a schematic diagram of the imaging principle of a VR/AR device provided by an embodiment of the present application;

FIG. 3 is a flowchart of a method for capturing an image provided by an embodiment of the present application;

FIGS. 4A-4B are a set of user interfaces for capturing an image provided by an embodiment of the present application;

FIGS. 5A-5E are another set of user interfaces for capturing an image provided by an embodiment of the present application;

FIG. 6 is a user interface for sharing an image provided by an embodiment of the present application.
Detailed Description of Embodiments

The technical solutions in the embodiments of the present application will be described clearly and in detail below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments of the present application, "multiple" means two or more.

Hereinafter, the terms "first" and "second" are used for descriptive purposes only and should not be understood as implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.

The term "user interface (UI)" in the following embodiments of the present application is a medium interface for interaction and information exchange between an application or operating system and a user; it realizes the conversion between the internal form of information and a form acceptable to the user. A user interface is source code written in a specific computer language such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and finally presented as content that the user can recognize. A commonly used form of user interface is the graphical user interface (GUI), which refers to a user interface, displayed graphically, that is related to computer operations. It may consist of visual interface elements such as text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars or widgets displayed on the display screen of an electronic device. The terms used in the implementation parts of the embodiments of the present application are only used to explain specific embodiments of the present application, and are not intended to limit the present application.
When experiencing VR/AR, a user may want to capture an image; how the VR/AR device captures the image is a problem to be solved.

An embodiment of the present application provides a method for capturing images by a VR/AR device. In the method, the VR/AR device displays images; in the process of presenting a 3D scene to the user, the user may input a first operation after seeing the image presented by the VR/AR device, and the VR/AR device may capture a first image in response to the first operation. In the embodiments of the present application, when the VR/AR device presents a 3D scene to the user, the image displayed on the display screen of the VR/AR device and the image seen by the user may be the same or different.

The above-mentioned first image may be any of the following:

1. All or part of the image displayed on the display screen of the VR/AR device.

In some embodiments, in response to the first operation, the VR/AR device may capture the entire image displayed on the display screen as the first image.

In some embodiments, in response to the first operation, the VR/AR device may determine a region of interest (ROI) of the user in the display screen, and then capture the image in the ROI on the display screen as the first image. In the embodiments of the present application, the ROI refers to an area in the display screen determined by the VR/AR device according to a user operation. Here, the ROI determined by the user operation may be obtained as follows: according to the movement track of the user's hand, the movement track of the user's eye gaze point, or the movement track of the input device 300, the corresponding movement track of a cursor is displayed on the display screen, and the VR/AR device then determines the ROI according to the movement track of the cursor; for details, refer to the detailed description in the method embodiments below. The ROI may be a regular area or an irregular area.
2. All or part of the image seen by the user when the VR/AR device presents the VR/AR scene to the user.

In some embodiments, based on the imaging principle of the VR/AR device, the image displayed on the display screen of the VR/AR device and the image seen by the user may differ. Therefore, when the VR/AR device captures all or part of the image displayed on the display screen in response to the first operation, it also needs to fuse the captured image, according to the imaging principle of the VR/AR device, into the image seen by the user, and use the result as the first image. For the imaging principle of the VR/AR device, reference may be made to the introduction of the VR/AR device below.

In some embodiments, the VR/AR device may also share the first image with surrounding electronic devices.

Specifically, in response to a detected user operation, the VR/AR device may display a user interface containing the identifiers of the surrounding electronic devices, where the position of an electronic device's identifier in the user interface indicates the position of that electronic device relative to the user. The identifiers of an electronic device include, but are not limited to: an image, a virtual icon, a type, a model, and the like of the electronic device. After seeing the user interface, the user can recognize the positions of the surrounding electronic devices relative to himself or herself. Finally, in response to a detected operation of selecting the identifier of an electronic device, the VR/AR device establishes a communication connection with the electronic device corresponding to the identifier, and shares the above-mentioned first image through the communication connection.

After the method for capturing images by a VR/AR device provided by the embodiments of the present application is implemented, the VR/AR device can capture the first image at any time, the size of the captured first image can be defined by the user, and the user can further share the captured first image with the surrounding electronic devices so that other users can also see it, which meets the user's personalized needs.
Referring to FIG. 1, FIG. 1 exemplarily shows a communication system 10 provided by an embodiment of the present application.

As shown in FIG. 1, the communication system 10 includes a VR/AR device 100 and one or more electronic devices around the VR/AR device 100, such as an electronic device 201, an electronic device 202, an electronic device 203, and an electronic device 204. In some embodiments, the communication system 10 may also include an input device 300.

The VR/AR device 100 may be an electronic apparatus that can be worn on the user's head, such as a helmet or glasses. The VR/AR device 100 may be used in conjunction with another electronic device (for example, a mobile phone) to receive data or content processed by the GPU of that device (for example, a rendered image) and display it. In this case, the VR/AR device 100 may be a terminal device with limited computing power, such as VR glasses. The VR/AR device 100 is used to display images so as to present a 3D scene to the user and bring a VR/AR/MR experience to the user. The 3D scene may include 3D images, 3D videos, and the like.

In the embodiments of the present application, the VR/AR device 100 may capture the first image in response to the detected first operation. For the definition of the first image, reference may be made to the detailed description above; for the specific steps of capturing the first image, reference may be made to the related descriptions of the method embodiments below.

In some embodiments, in response to a detected user operation, the VR/AR device may display a user interface containing the identifiers (such as images, virtual icons, types, models, etc.) of the surrounding electronic devices. Then, in response to a detected operation of selecting the identifier of an electronic device, such as a gesture pointing to the identifier in the user interface, the VR/AR device establishes a communication connection with the electronic device corresponding to the identifier and shares the above-mentioned first image through the communication connection.

The above-mentioned communication connection may be wired or wireless. The wired connection may include a wired connection communicating through an interface such as a USB interface or an HDMI interface. The wireless connection may include a wireless connection communicating through one or more of a BT communication module, a wireless local area network (WLAN) communication module, and a UWB communication module.
The electronic device 201, the electronic device 202, the electronic device 203 and the electronic device 204 are electronic devices around the VR/AR device 100, and may be, for example, smart TVs, computers, mobile phones, VR/AR devices or tablet computers, or non-portable terminal devices such as laptop computers with touch-sensitive surfaces or touch panels and desktop computers with touch-sensitive surfaces or touch panels.

In the embodiments of the present application, the electronic device 201, the electronic device 202, the electronic device 203 and the electronic device 204 have one or more of a BT communication module, a WLAN communication module, and a UWB communication module. Taking the electronic device 201 as an example, the electronic device 201 can listen, through one or more of these modules, for signals transmitted by the VR/AR device 100, such as probe requests and scanning signals, and can send response signals, such as probe responses and scan responses, so that the VR/AR device 100 can discover the electronic device 201, determine the position and network address of the electronic device 201, and then establish a communication connection with the electronic device 201; based on the communication connection, the electronic device 201 can receive the first image shared by the VR/AR device 100.
The input device 300 is a device for controlling the VR/AR device 100, for example, a handle, a mouse, a keyboard, a stylus, a wristband, and the like.

In the embodiments of the present application, the input device 300 may be configured with various sensors, such as an acceleration sensor, a gyroscope sensor, a magnetic sensor, and a pressure sensor. The pressure sensor may be disposed under the confirm key/cancel key of the input device 300.

In the embodiments of the present application, the input device 300 may collect movement data of the input device 300 and data indicating whether the confirm key/cancel key of the input device 300 is pressed. The movement data includes data collected by the sensors of the input device 300, for example, the acceleration of the input device 300 collected by the acceleration sensor and the movement speed of the input device 300 collected by the gyroscope sensor. The data indicating whether the confirm key/cancel key of the input device 300 is pressed includes the pressure value collected by the pressure sensor disposed under the confirm key/cancel key, a level signal generated by the input device 300, and the like.

In the embodiments of the present application, the input device 300 may establish a communication connection with the VR/AR device 100 and send, through the communication connection, the collected movement data of the input device 300 and the data indicating whether the confirm key/cancel key of the input device 300 is pressed to the VR/AR device 100. The communication connection may be wired or wireless. The wired connection may include a wired connection communicating through a USB interface, an HDMI interface, a custom interface, or the like. The wireless connection may include one or more of wireless connections communicating through short-range transmission technologies such as Bluetooth, near field communication (NFC), and ZigBee.

In the embodiments of the present application, the above-mentioned movement data of the input device 300 and the data indicating whether the confirm key/cancel key of the input device 300 is pressed can be used by the VR/AR device 100 to determine the movement situation and/or state of the input device 300. The movement situation of the input device 300 may include, but is not limited to: whether it moves, and the direction, speed, distance and trajectory of the movement. The state of the input device 300 may include: whether the confirm key of the input device 300 is pressed.

The VR/AR device 100 may activate a corresponding function according to the movement situation and/or state of the input device 300. That is, the user can trigger the VR/AR device 100 to perform a corresponding function by inputting a user operation on the input device 300. For example, when the user holds the input device 300 and moves it 3 cm to the left, the cursor displayed on the display screen of the VR/AR device 100 moves 6 cm to the left; in this way, the user can move the cursor to any position on the display screen of the VR/AR device 100 by manipulating the input device 300. For another example, after the cursor is moved onto a control displayed by the VR/AR device 100, the user can press the confirm key of the input device 300, so that the VR/AR device 100 activates the function corresponding to the control.
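The displacement mapping in the example above (3 cm of handle movement moving the cursor 6 cm) amounts to a constant gain of 2; the gain value is only an illustrative assumption:

```python
def handle_to_cursor_delta(handle_delta_cm, gain=2.0):
    """Map a displacement of the input device 300 to a cursor displacement.

    handle_delta_cm: (dx, dy) movement of the input device in cm,
                     derived from its acceleration/gyroscope data.
    """
    dx, dy = handle_delta_cm
    return dx * gain, dy * gain

# Moving the handle 3 cm to the left moves the cursor 6 cm to the left.
assert handle_to_cursor_delta((-3.0, 0.0)) == (-6.0, 0.0)
```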
In the embodiments of the present application, the input device 300 may be used to receive a user operation for triggering the VR/AR device to capture the first image; for the specific implementation of the user operation, reference may be made to the related description of the method embodiments below.
Referring to FIG. 2A, FIG. 2A shows a schematic structural diagram of the VR/AR device 100 provided by an embodiment of the present application.

As shown in FIG. 2A, the VR/AR device 100 may include: a processor 201, a memory 202, a communication module 203, a sensor module 204, a camera 205, a display device 206, and an audio device 207. The above components may be coupled to one another and communicate with each other, for example, through a bus.

It can be understood that the structure shown in FIG. 2A does not constitute a specific limitation on the VR/AR device 100. In other embodiments of the present application, the VR/AR device 100 may include more or fewer components than shown, or combine some components, or split some components, or use a different arrangement of components. For example, the VR/AR device 100 may further include physical keys such as a power key and volume keys, and various interfaces such as a USB interface for supporting the connection between the VR/AR device 100 and the input device 300. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The processor 201 may include one or more processing units. For example, the processor 201 may include a memory, an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or may be integrated into one or more processors. The controller can generate operation control signals according to instruction opcodes and timing signals, and control instruction fetching and execution, so that each component performs its corresponding function, such as human-computer interaction, motion tracking/prediction, rendering and display, and audio processing.

In the embodiments of the present application, the processor 201 can recognize operations input by the user and perform the corresponding functions. For example, the processor 201 can recognize gestures input by the user, such as making a fist or showing a palm, voice inputs such as "screenshot" or "share", and operations acting on the input device 300, such as shaking it up and down, shaking it left and right, or moving it, and then determine and perform the functions corresponding to these operations.
The memory 202 stores executable program code for executing the display method provided by the embodiments of the present application, and the executable program code includes instructions. The memory 202 may include a program storage area and a data storage area. The program storage area may store an operating system and the application programs required by at least one function (for example, a sound playback function or an image playback function). The data storage area may store data created during use of the VR/AR device 100 (for example, audio data). In addition, the memory 202 may include high-speed random access memory, and may also include non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS). The processor 201 executes the various functional applications and the data processing of the VR/AR device 100 by running the instructions stored in the memory 202 and/or the instructions stored in the memory provided in the processor.
In this embodiment of the present application, the memory 202 may store the images to be displayed by the display device 206 of the VR/AR device 100, and send those images to the display device 206 in the form of an image stream.
In this embodiment of the present application, the memory 202 may further store the first image captured by the VR/AR device 100.
In some embodiments, the memory 202 may store a database that contains the image models of the electronic device 201, the electronic device 202, the electronic device 203, the electronic device 204, and further electronic devices, together with the type, virtual icon, model number, and so on of the electronic device corresponding to each image model. For the specific content of the database, see the detailed description of Table 1 in the method embodiments below. In some embodiments, the memory 202 may also store the images of the surrounding electronic devices captured by the camera 205, the positions and communication addresses of the surrounding electronic devices, and the correspondence between them. For details, see the detailed description of Table 2 below, which is not repeated here.
The communication module 203 may include a wireless communication module and a wired communication module. The wireless communication module can provide solutions for wireless communication applied to the VR/AR device 100, such as BT, WLAN, UWB, GNSS, FM, and IR. The wireless communication module may be one or more devices integrating at least one communication processing module. The wired communication module can provide wired connections, including communication through interfaces such as a USB interface or an HDMI interface. The communication module 203 can support communication between the VR/AR device 100 and the surrounding electronic devices (the electronic device 201, the electronic device 202, the electronic device 203, the electronic device 204, and so on).
In this embodiment of the present application, the VR/AR device 100 may establish a communication connection with the input device 300 through the wireless communication module or the wired communication module, and based on this connection obtain the movement data collected by the input device 300 as well as data indicating whether the confirmation key/cancel key of the input device 300 is pressed. Afterwards, the processor 201 can identify the operation input by the user according to the data sent by the input device 300, and determine the function to be executed in response to that operation.
In some embodiments, the VR/AR device 100 may establish a communication connection with the surrounding electronic devices through one or more of the BT, WLAN, and UWB communication modules, and share the first image based on this connection.
The sensor module 204 may include an accelerometer, a compass, a gyroscope, a magnetometer, or other sensors for detecting motion. The sensor module 204 is used to collect the corresponding data; for example, the acceleration sensor collects the acceleration of the VR/AR device 100, and the gyroscope sensor collects the angular velocity of the VR/AR device 100. The data collected by the sensor module 204 can reflect the head movement of the user wearing the VR/AR device 100. In some embodiments, the sensor module 204 may be an inertial measurement unit (IMU) disposed within the VR/AR device 100. In some embodiments, the VR/AR device 100 may send the data acquired by the sensor module to the processor 201 for analysis. The processor 201 can determine the movement of the user's head according to the data collected by each sensor, and execute the corresponding function according to that movement.
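The passage does not say how the processor derives a head pose from the raw IMU samples. One common approach, shown only as a rough illustration, is a complementary filter that blends integrated gyroscope rates with the gravity direction measured by the accelerometer; the function name, axis conventions, and the 0.98 blend factor below are all assumptions, not part of this application.

```python
import math

def estimate_pitch_roll(prev_pitch, prev_roll, gyro, accel, dt, alpha=0.98):
    """Complementary-filter sketch fusing one gyroscope and one
    accelerometer sample into a head pitch/roll estimate.

    gyro:  (gx, gy, gz) angular rates in rad/s
    accel: (ax, ay, az) specific force in m/s^2
    dt:    sample interval in seconds
    Returns the updated (pitch, roll) in radians.
    """
    gx, gy, _ = gyro
    ax, ay, az = accel
    # High-frequency estimate: integrate the angular rates.
    pitch_gyro = prev_pitch + gx * dt
    roll_gyro = prev_roll + gy * dt
    # Low-frequency, drift-free estimate: gravity direction from the accelerometer.
    pitch_acc = math.atan2(ay, math.sqrt(ax * ax + az * az))
    roll_acc = math.atan2(-ax, az)
    # Blend the two; alpha close to 1 trusts the gyro over short horizons.
    pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
    roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
    return pitch, roll
```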
The sensor module 204 may also include an optical sensor, used together with the camera 205 to track the position of the user's eyes and capture eye movement data. The eye movement data can be used, for example, to determine the user's interpupillary distance, the 3D position of each eye relative to the VR/AR device 100, the magnitude of each eye's twist and rotation (i.e., roll, pitch, and yaw), and the gaze direction. In one example, infrared light is emitted within the VR/AR device 100 and reflected from each eye; the reflected light is detected by the camera 205 or the optical sensor, and the detected data is transmitted to the processor 201, so that the processor 201 can analyze the position, pupil diameter, motion state, and so on of the user's eyes from the changes in the infrared light reflected by each eye.
In this embodiment of the present application, the above optical sensor can work with the camera 205 to collect images of the user's eyes and track the movement trajectory of the user's gaze point. The data of the gaze point's movement trajectory is transmitted to the processor 201; the processor 201 analyzes the data, displays on the display screen the movement trajectory of the cursor corresponding to the movement trajectory of the gaze point, and then determines the ROI in the display screen according to the cursor's movement trajectory.
The camera 205 may be used to capture still images or video. The still images or video may face outward, capturing the user's surroundings, or face inward. The camera 205 can track the movement of one or both of the user's eyes. The camera 205 includes, but is not limited to, a conventional color camera (RGB camera), a depth camera (RGB depth camera), and a dynamic vision sensor (DVS) camera. The depth camera can obtain the depth information of the photographed object.
In this embodiment of the present application, the camera 205 includes at least one pair of cameras. The pair of cameras can capture images facing the inside of the VR/AR device 100, i.e., images of the user's eyes, and can also capture images facing the outside of the VR/AR device 100, i.e., images of the user's hand. The pair of cameras can send the collected images of the user's eyes or hand to the processor 201 for analysis. The processor 201 can identify the state of the user's eyes from the eye images, for example whether the user blinks and the number of blinks, and execute the corresponding function according to that state. Alternatively, the processor 201 can recognize the user's gestures from the hand images, such as a fist, a palm, or an "OK" sign, and execute different functions according to the different gestures.
In some embodiments, the camera 205 may also include two pairs of cameras, one pair used to capture images facing the outside of the VR/AR device 100, and the other pair used to capture images facing the inside of the VR/AR device 100.
In this embodiment of the present application, the VR/AR device 100 may present or display images through the GPU, the display device 206, the application processor, and so on.
The GPU is a microprocessor for image processing and connects the display device 206 and the application processor. The processor 201 may include one or more GPUs that execute program instructions to generate or change display information.
The display device 206 may include one or more display screens and one or more optical components. The one or more display screens may include, for example, the display screen 101 and the display screen 103. The one or more optical components may include, for example, the optical component 102 and the optical component 104. A display screen, such as the display screen 101 or the display screen 103, may include a display panel used to display images, thereby presenting a stereoscopic virtual scene to the user. The display panel may be an LCD, OLED, AMOLED, FLED, Mini-LED, Micro-LED, Micro-OLED, QLED, or the like. The optical components, such as the optical component 102 and the optical component 104, are used to direct the light from the display screens to the exit pupil for perception by the user. In some embodiments, one or more optical elements (e.g., lenses) in an optical component may have one or more coatings, such as an anti-reflection coating. The magnification of the image light by the optical components allows the display screens to be physically smaller, lighter, and less power-hungry. In addition, the magnification of the image light can enlarge the field of view of the displayed content; for example, the optical components can make the field of view of the displayed content fill the user's entire field of view. The optical components may also be used to correct one or more optical errors. Examples of optical errors include barrel distortion, pincushion distortion, longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, comatic aberration, field curvature, and astigmatism. In some embodiments, the content provided to the display screen is pre-distorted, and the distortion is corrected by the optical components when they receive the image light generated from that content.
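The pre-distortion mentioned at the end of the previous paragraph is typically implemented with a radial lens model. The sketch below illustrates the idea with the Brown radial model; the coefficients k1 and k2 are hypothetical lens parameters, and warping the rendered content with coefficients opposite in effect to the lens's own distortion is only one possible realization, not the method of this application.

```python
def radial_model(x, y, k1, k2):
    """Brown radial model x' = x * (1 + k1*r^2 + k2*r^4) applied to a
    normalized image point (x, y) centered on the optical axis.

    Rendering content pre-warped with coefficients that counteract the
    lens (e.g., barrel pre-distortion to cancel pincushion distortion)
    lets the optics deliver an undistorted virtual image.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```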
Referring to FIG. 2B, FIG. 2B is a schematic diagram of the imaging principle of the VR/AR device 100.
As shown in FIG. 2B, the display device 206 of the VR/AR device 100 may include the display screen 101, the optical component 102, the display screen 103, and the optical component 104. The first straight line through the center of the display screen 101 and the center of the optical component 102 is perpendicular to the third straight line through the center of the optical component 102 and the center of the optical component 104. The display screen 101 and the optical component 102 correspond to the user's left eye. When the user wears the VR/AR device 100, an image a1 may be displayed on the display screen 101. The light emitted when the display screen 101 displays the image a1 is transmitted through the optical component 102 and forms a virtual image a1' of the image a1 in front of the user's left eye.
The second straight line through the center of the display screen 103 and the center of the optical component 104 is perpendicular to the third straight line through the center of the optical component 102 and the center of the optical component 104. The display screen 103 and the optical component 104 correspond to the user's right eye. When the user wears the VR/AR device 100, the display screen 103 may display an image a2. The light emitted when the display screen 103 displays the image a2 is transmitted through the optical component 104 and forms a virtual image a2' of the image a2 in front of the user's right eye.
The image a1 and the image a2 are two images, with parallax, of the same object, for example an object a. Parallax refers to the difference in the position of an object in the field of view when the same object is observed from two points a certain distance apart. The virtual image a1' and the virtual image a2' lie on the same plane, which may be called the virtual image plane.
When wearing the VR/AR device 100, the user's left eye focuses on the virtual image a1' and the user's right eye focuses on the virtual image a2'. The virtual image a1' and the virtual image a2' are then superimposed in the user's brain into a single complete, stereoscopic image; this process is called vergence. During vergence, the point where the lines of sight of the two eyes converge is perceived by the user as the actual location of the object described by the image a1 and the image a2. Owing to the vergence process, the user can perceive the 3D scene provided by the VR/AR device 100.
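The depth perceived during vergence follows from simple triangle geometry. The derivation below is a minimal pinhole-style sketch under assumed conventions, not part of the application text:

```latex
% Place the eyes at x = \pm b/2 (interocular distance b), the virtual
% image plane at distance D, and draw the same object point at x_L in
% the left virtual image and x_R in the right one, with signed
% disparity d = x_R - x_L. Intersecting the two lines of sight gives
% the perceived depth
\[
  Z \;=\; \frac{b\,D}{\,b - d\,}.
\]
% d = 0 leaves the point on the virtual image plane (Z = D); crossed
% disparity (d < 0) pulls it nearer; uncrossed disparity (d > 0)
% pushes it farther away.
```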
The VR/AR device is not limited to the imaging principle of the VR/AR device 100 shown in FIG. 2B above; it may also provide the 3D scene to the user in other ways.
It can be understood that the display device 206 may be used to receive data or content processed by the GPU of another electronic device (for example, a mobile phone), such as a rendered image, and display it. In this case, the VR/AR device 100 may be a terminal device with limited computing power, such as VR glasses, that needs to work together with another electronic device (for example, a mobile phone) to present the 3D scene to the user and provide the user with a VR/AR/MR experience.
The audio device 207 is used to collect and output audio. The audio device 207 may include, but is not limited to, a microphone, a speaker, and earphones.
In this embodiment of the present application, the microphone may be used to receive the voice input by the user and convert the voice into an electrical signal. The processor 201 may be configured to receive the electrical signal corresponding to the voice from the microphone; after receiving the voice signal, the processor 201 may recognize the operation corresponding to the voice and execute it. For example, when the microphone receives the voice command "share", the processor 201 may recognize the voice instruction and share the first image with a surrounding electronic device (for example, the electronic device 201).
In this embodiment of the present application, the camera 205 can continuously capture multiple images of the user's palm and transmit them to the processor 201. The processor 201 analyzes the multiple palm images using the binocular positioning technology, identifies the movement trajectory of the user's palm, displays on the display screen the movement trajectory of the cursor corresponding to the palm's movement trajectory, and then determines the ROI according to that trajectory. Afterwards, the processor 201 can crop the image within the ROI on the display screen as the first image. In some embodiments, the processor 201 may obtain the image stream sent by the internal memory to the display device 206 to obtain the first image. In other embodiments, the processor 201 may directly acquire the image displayed by the display device 206 and crop it to obtain the first image. In some embodiments, the camera 205 can capture images of the surrounding electronic devices and send them to the processor 201. The processor 201 analyzes the images of the surrounding electronic devices using the binocular positioning technology to identify the positions, for example three-dimensional coordinates, of the surrounding electronic devices relative to the VR/AR device 100; the processor 201 can also match the images of the surrounding electronic devices against the image models of the electronic devices in the database, thereby identifying the types, virtual icons, model numbers, and so on of the surrounding electronic devices. The processor 201 can then instruct the display device 206 to display the identifiers of the surrounding electronic devices, where the identifier of an electronic device includes, but is not limited to, its image, type, virtual icon, and model number. Afterwards, the processor 201 can also invoke one or more of the BT, WLAN, and UWB communication modules to obtain the communication addresses of the surrounding electronic devices, such as MAC addresses or IP addresses. At the same time, the processor 201 can also use the BT positioning technology/WiFi positioning technology/UWB positioning technology to obtain the position, for example the angle and distance, corresponding to each communication address of the surrounding electronic devices. The processor 201 can then use a coincidence positioning algorithm to match the positions of the surrounding electronic devices obtained by the binocular positioning technology with the positions of the surrounding electronic devices obtained by the BT/WiFi/UWB positioning technology, thereby establishing the correspondence between the identifiers of the surrounding electronic devices and their communication addresses. Finally, the processor 201 receives an operation by which the user selects the identifier of an electronic device; in response to this operation, it establishes a communication connection with the corresponding electronic device based on the communication address corresponding to that identifier, and shares the above first image.
The above binocular positioning technology means that the VR/AR device 100 uses a pair of cameras in the camera 205 to separately capture images of a surrounding electronic device (for example, the electronic device 201), obtains from the two captured images the pixel coordinates (two-dimensional coordinates) of the electronic device 201 in the image captured by each camera, and then, combining the relative positions of the two cameras, obtains the position of the electronic device 201, for example its three-dimensional coordinates, according to a geometric relationship algorithm.
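As a concrete illustration of this geometric relationship, the sketch below triangulates a 3D position from one matched pixel pair of a calibrated, rectified camera pair. The function name and the pixel-coordinate convention (coordinates measured from the principal point) are assumptions; the application does not specify the exact algorithm.

```python
def triangulate(u_left, u_right, v, f, baseline):
    """Recover a 3D point from a rectified stereo pixel pair.

    u_left/u_right: horizontal pixel coordinates of the device in the
        left/right image, measured from the principal point.
    v: shared vertical pixel coordinate (rectified images).
    f: focal length in pixels; baseline: camera separation in meters.
    Returns (x, y, z) in the left-camera frame.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    z = f * baseline / disparity  # depth from similar triangles
    x = u_left * z / f            # lateral offset
    y = v * z / f                 # vertical offset
    return x, y, z
```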
The above BT positioning technology/WiFi positioning technology/UWB positioning technology means that the VR/AR device 100 sends a probe request to the surrounding electronic devices and then receives their probe responses. From the angle of arrival (AoA) of the BT/WiFi/UWB signal of a probe response, the VR/AR device 100 measures the angle of a surrounding electronic device (for example, the electronic device 201) relative to the VR/AR device 100, and then calculates the distance of the electronic device 201 by triangulation, from the received signal strength indicator (RSSI) value, or from the time of flight (TOF) of the BT/WiFi/UWB signal, thereby obtaining the position (angle and distance) of the electronic device 201 relative to the VR/AR device 100.
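A minimal sketch of the AoA-plus-TOF variant of this ranging, assuming a single responding device and two-way, UWB-style timing; the names and the reply-delay bookkeeping are illustrative, not from the application:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def position_from_aoa_tof(aoa_deg, t_round_trip_s, t_reply_s):
    """2D position of a responder relative to the headset.

    aoa_deg: angle of arrival of the response signal, in degrees.
    t_round_trip_s: time from sending the probe request to receiving
        the probe response at the VR/AR device.
    t_reply_s: processing delay inside the responding device.
    """
    tof = (t_round_trip_s - t_reply_s) / 2.0  # one-way flight time
    distance = tof * C
    angle = math.radians(aoa_deg)
    # Convert polar (angle, distance) to Cartesian coordinates.
    return distance * math.cos(angle), distance * math.sin(angle)
```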
In this embodiment of the present application, the probe request is also called the first request message, and the probe response is also called the first response message; the reception of the first response message refers to the AoA and TOF of the BT/WiFi/UWB signal of the probe response.
Based on the foregoing detailed introduction to the communication system 10 shown in FIG. 1 and to the structure of the VR/AR device 100 shown in FIG. 2A, the method provided by the embodiments of the present application is described in detail below with reference to a method flowchart and a series of user interfaces.
Referring to FIG. 3, FIG. 3 exemplarily shows the flow of the method by which the VR/AR device captures the first image according to an embodiment of the present application.
S101: The VR/AR device 100 captures the first image in response to the detected first operation.
In some embodiments, the first operation is used to capture the entire image displayed on the display screen of the VR/AR device 100.
Referring to FIG. 4A and FIG. 4B, FIG. 4A and FIG. 4B exemplarily show a user interface 410 in which the VR/AR device 100 captures the entire image displayed on the display screen. In this embodiment of the present application, the user interface 410 may also be called the first user interface.
FIG. 4A exemplarily shows one possible implementation, for example a gesture operation, by which the VR/AR device 100 detects the first operation.
As shown in FIG. 4A, the user interface 410 displays an image 411 and a cursor 412A. The image 411 is the entire image displayed on the display screen of the VR/AR device 100. The cursor 412A may take the form of an "arrow" icon.
The above cursor may be an icon, an image, or the like. When the cursor is at different positions on the page displayed on the display screen of the VR/AR device, the icon corresponding to the cursor may differ. For example, when the cursor is at the position of a control on the page, the corresponding icon may be an arrow or a hand icon; when the cursor is at the position of a text input box on the page, the corresponding icon may be a vertical line or a horizontal line.
The VR/AR device 100 may detect the first operation in the user interface 410 shown in FIG. 4A, for example detecting that the user inputs a "fist" gesture 412B. The "fist" gesture 412B can be captured by the camera 205 of the VR/AR device 100; that is, the gesture operation of inputting the "fist" gesture 412B should be within the shooting range of the camera 205. In response to the first operation, the VR/AR device 100 captures the first image and displays the user interface 410 shown in FIG. 4B.
FIG. 4B exemplarily shows the user interface in which the VR/AR device 100 captures the first image in response to the first operation.
As shown in FIG. 4B, the user interface 410 displays the image 411, the cursor 412A, a control 413A, a control 413B, and a rectangular wireframe 414.
The image 411 is the first image captured by the user. In some embodiments, the processor 201 may obtain the image stream sent by the internal memory to the display device 206 to obtain the first image. In other embodiments, the processor 201 may directly acquire the entire image displayed by the display device 206 and crop it to obtain the first image.
The cursor 412A may change into a "fist" image, used to prompt the user that the VR/AR device 100 has received the first operation input by the user.
The control 413A is used to share the captured first image, and the control 413B is used to cancel sharing the captured first image.
The rectangular wireframe 414 is the border of the image 411, used to prompt the user that the VR/AR device 100 has captured the entire image in the user interface 410.
It can be understood that FIG. 4A and FIG. 4B merely show one exemplary implementation by which the VR/AR device 100 captures the first image. Besides the user inputting the "fist" gesture 412B, the above first operation may also be an operation of the user shaking the input device 300 up and down; specifically, the input device 300 can detect the operation of being shaken and transmit the data related to the shaking operation to the VR/AR device 100, and the processor 201 in the VR/AR device 100 determines from that data that the input device 300 has received the first operation. Alternatively, the first operation may also be the user inputting a voice instruction such as "capture full screen".
In some embodiments, after detecting the first operation, the VR/AR device 100 can automatically capture the first image within a certain period of time, for example within 1 second. In other embodiments, after detecting the first operation, the VR/AR device 100 may further receive an operation input by the user confirming the capture of the first image, for example the user pressing a key on the input device 300; in response to detecting an operation acting on a control for confirming the capture of the first image, the VR/AR device 100 captures the first image.
In some embodiments, after detecting the first operation, the VR/AR device 100 may also receive an operation input by the user to adjust the size or position of the first image, for example a gesture of pinching or spreading the index finger and middle finger to shrink or enlarge the first image, or a movement of the index finger to move the position of the first image.
In other embodiments, the first operation is used to capture the image within the ROI.
The first operation may include: an operation for triggering the VR/AR device 100 to select the ROI, an operation for the VR/AR device 100 to determine the ROI, and an operation for the VR/AR device 100 to capture the image within the ROI.
Referring to FIG. 5A to FIG. 5E, FIG. 5A to FIG. 5E exemplarily show a user interface 510 in which the VR/AR device 100 captures the image within the ROI. In this embodiment of the present application, the user interface 510 may also be called the first user interface.
FIG. 5A exemplarily shows one possible implementation, for example a gesture operation, by which the VR/AR device 100 detects, within the first operation, the operation for triggering the VR/AR device 100 to select the ROI.
As shown in FIG. 5A, the user interface 510 displays an image 511 and a cursor 512A. The user interface shown in FIG. 5A is the same as the user interface shown in FIG. 4A; for details, see the description of FIG. 4A above.
The VR/AR device 100 may detect, in the user interface 510 shown in FIG. 5A, the operation for triggering the VR/AR device 100 to select the ROI, for example detecting that the user inputs a "palm" gesture 512B; in response to this operation, the VR/AR device 100 displays the user interface 510 shown in FIG. 5B. Specifically, the camera 205 captures an image of the "palm" gesture 512B input by the user and sends the image to the processor 201; the processor 201 analyzes the image and determines the operation corresponding to the "palm" gesture 512B in the image, and in response to this operation, the VR/AR device 100 displays the user interface 510 shown in FIG. 5B.
As shown in FIG. 5B, the user interface 510 displays the image 511 and the cursor 512A. The cursor 512A changes into a "palm" image, used to prompt the user that the VR/AR device 100 has received the user's gesture operation triggering the electronic device to select the ROI.
It can be understood that FIG. 5A and FIG. 5B merely show one possible implementation of the operation, within the first operation, for triggering the VR/AR device 100 to select the ROI. In some embodiments, the operation for triggering the VR/AR device 100 to select the ROI may also be the user inputting a voice instruction such as "screenshot"; specifically, the microphone of the VR/AR device 100 receives the voice and transmits it to the processor 201, and the processor 201 analyzes the voice and executes the corresponding function. In other embodiments, the operation for triggering the VR/AR device 100 to select the ROI may also be an operation of the user shaking the input device 300 left and right; specifically, the input device 300 can detect the operation of being shaken and transmit the data related to the shaking operation to the VR/AR device 100, and the processor 201 in the VR/AR device 100 determines from that data that the input device 300 has received the operation, within the first operation, for triggering the VR/AR device 100 to select the ROI, and executes the function corresponding to the operation.
The VR/AR device 100 may detect, in the user interface 510 shown in FIG. 5B, the operation for the VR/AR device 100 to determine the ROI, for example detecting that the user's "palm" 512B moves and that the movement trajectory 512C of the "palm" 512B encloses an approximately regular figure, for example approximately a rectangle, a diamond, a triangle, or a circle. In response to this operation, the VR/AR device 100 displays the user interface 510 shown in FIG. 5C.
As shown in FIG. 5C, the user interface 510 displays the image 511, the cursor 512A, a movement trajectory 512D of the cursor 512A, a rectangular wireframe 513, a first image 514, a control 515A, and a control 515B.
The movement trajectory 512D of the cursor 512A is the cursor movement trajectory corresponding to the movement trajectory 512C of the "palm" 512B. Specifically, the VR/AR device 100 invokes the camera 205 to continuously collect images of the user's hand within a certain period of time (for example, 3 seconds). The camera 205 then transmits the collected hand images to the processor 201; the processor 201 analyzes the multiple palm images and identifies the coordinates of the palm in each image, the palm coordinates across the multiple images form the movement trajectory 512C, and the movement trajectory 512D of the cursor corresponding to the palm's movement trajectory 512C is then displayed on the display screen. For example, when the user's hand moves 20 cm to the left, the cursor displayed on the display screen of the VR/AR device 100 moves 10 cm to the left.
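A minimal sketch of such a palm-to-cursor mapping, assuming a simple linear gain; the 0.5 factor mirrors the 20 cm to 10 cm example above, while the function name and argument conventions are assumptions:

```python
def palm_to_cursor(palm_pos, anchor_palm_pos, anchor_cursor_pos, gain=0.5):
    """Map a tracked palm position to an on-screen cursor position.

    palm_pos / anchor_palm_pos: current and reference palm positions in
        the tracker's coordinate system.
    anchor_cursor_pos: cursor position corresponding to the reference.
    gain: display movement per unit of hand movement (0.5 reproduces
        the 20 cm hand -> 10 cm cursor example).
    """
    dx = palm_pos[0] - anchor_palm_pos[0]
    dy = palm_pos[1] - anchor_palm_pos[1]
    return (anchor_cursor_pos[0] + gain * dx,
            anchor_cursor_pos[1] + gain * dy)
```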
The rectangular wireframe 513 is a rectangular wireframe surrounding the ROI, used to prompt the user that the VR/AR device 100 has determined the ROI. Specifically, the processor in the VR/AR device 100 can apply an ROI determination algorithm to the movement trajectory 512D to determine the corresponding ROI. The ROI may be the smallest regular region containing the movement trajectory 512D, or the smallest irregular region. When the ROI is the smallest regular region containing the movement trajectory 512D, for example the smallest rectangular region, the VR/AR device 100 can take the minimum and maximum horizontal coordinates and the minimum and maximum vertical coordinates of the movement trajectory 512D in the pixel coordinates of the image currently shown on the display screen; these four extreme values can be combined into four two-dimensional coordinates, and the rectangular wireframe enclosed by these four coordinates is the rectangular wireframe 513 surrounding the ROI. It can be understood that the ROI corresponding to the movement trajectory 512D may also not be the smallest rectangular region containing the movement trajectory 512D; it may be a circular region or some other region, as determined by the ROI determination algorithm stored in the VR/AR device 100. When the ROI is the smallest irregular region containing the movement trajectory 512D, the region enclosed by the movement trajectory 512D itself is the ROI.
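The min/max construction described above amounts to taking the axis-aligned bounding box of the cursor trail. A minimal sketch follows; the function name, the `trail` and `frame` variables, and the array layout are illustrative assumptions:

```python
def roi_from_trajectory(points):
    """Smallest axis-aligned rectangle containing the cursor trajectory,
    following the min/max construction described above.

    points: iterable of (x, y) cursor positions in display pixel coordinates.
    Returns (left, top, right, bottom).
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

# Cropping the ROI out of a frame held as an H x W x 3 array
# (hypothetical variables):
# left, top, right, bottom = roi_from_trajectory(trail)
# first_image = frame[top:bottom, left:right]
```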
The image 514 is the image within the ROI, i.e., the first image. In some embodiments, the processor 201 may obtain the image stream sent by the internal memory to the display device 206 and crop the image within the ROI from that image stream to obtain the first image. In other embodiments, the processor 201 may directly acquire the image displayed by the display device 206 and crop the image within the ROI on the display screen to obtain the first image.
The control 515A is used to confirm capturing the image 514. The control 515B is used to cancel capturing the image 514.
It can be understood that FIG. 5C merely shows one possible implementation of the operation, within the first operation, for the VR/AR device 100 to determine the ROI. In some embodiments, this operation may be an operation in which the movement of the user's gaze point controls the movement of the cursor, and the cursor's movement trajectory may form an approximately regular figure. Specifically, the camera 205 continuously collects images of the user's eyeballs and sends the collected eyeball images to the processor 201; the processor 201 analyzes the images, determines the movement trajectory of the user's gaze point, displays on the display screen the cursor movement trajectory corresponding to the gaze point's movement trajectory, and then determines the ROI according to the cursor's movement trajectory. In other embodiments, the operation for the VR/AR device 100 to determine the ROI may also be an operation in which the user moves the handheld input device 300 to control the movement of the cursor, and the cursor's movement trajectory may form an approximately regular figure. Specifically, the input device 300 can detect its own movement and transmit the data related to the movement to the VR/AR device 100; the processor 201 in the VR/AR device 100 determines from that data that the input device 300 has received the operation, within the first operation, for the VR/AR device 100 to determine the ROI, and executes the corresponding function, that is, displays on the display screen the cursor movement trajectory corresponding to the movement trajectory of the input device 300, and then determines the ROI according to the cursor's movement trajectory.
It is worth noting that, when the cursor movement is controlled by the movement of the user's palm, the movement of the gaze point, or the movement of the handheld input device 300 as described above, the cursor's movement trajectory may or may not form an approximately regular figure. When the cursor's movement trajectory does not form an approximately regular figure, the processor of the VR/AR device 100 can automatically determine an ROI with a regular shape, for example a rectangle, circle, diamond, or triangle, that contains the cursor's movement trajectory. When the cursor's movement trajectory forms no usable figure at all, for example a single point or a short line, the VR/AR device 100 can prompt the user that the operation is invalid and ask the user to input the operation again.
In some embodiments, after the VR/AR device 100 detects, within the first operation, the operation for the VR/AR device 100 to determine the ROI, it may also receive an operation input by the user to adjust the size or position of the first image, for example a gesture of pinching or spreading the index finger and middle finger to shrink or enlarge the first image, or a movement of the index finger to move the position of the first image.
FIG. 5D exemplarily shows one possible implementation, for example a gesture operation, by which the VR/AR device 100 detects, within the first operation, the operation for capturing the image within the ROI.
The VR/AR device 100 may detect, in the user interface 510 shown in FIG. 5D, a user operation acting on the control 515A. The operation may be the user moving the hand to control the cursor 512A to move to the control 515A and stay there for, for example, 1 second. In response to this operation, the VR/AR device 100 captures the image within the ROI and displays the user interface 510 shown in FIG. 5E.
As shown in FIG. 5E, the user interface 510 includes the cursor 512A, the first image 514, a control 516A, and a control 516B, where:
The first image 514 is the image within the ROI. The control 516A is used to share the first image 514, and the control 516B is used to cancel sharing the first image 514.
It can be understood that FIG. 5D and FIG. 5E merely show one possible implementation of the operation, within the first operation, for capturing the image within the ROI. In some embodiments, this operation may also be the user inputting a gesture such as "OK", a voice command such as "confirm", an operation such as "blinking twice in a row", and so on.
It can be understood that FIG. 5A to FIG. 5E merely exemplarily show the user interfaces in which the VR/AR device 100 captures the image within the ROI in response to the detected first operation, and should not constitute a limitation on the embodiments of the present application.
In some embodiments, within the above first operation, the operation for triggering the VR/AR device 100 to select the ROI and the operation for the VR/AR device 100 to determine the ROI may be merged into one operation. For example, the VR/AR device 100 may detect, in the user interface shown in FIG. 5A, a user input such as the movement of the "palm" 512B, where the movement trajectory 512C of the "palm" 512B encloses an approximately regular figure, for example approximately a rectangle, diamond, triangle, or circle. In response to this operation, the VR/AR device 100 displays the user interface 510 shown in FIG. 5C.
In other embodiments, within the above first operation, the operation for the VR/AR device 100 to determine the ROI and the operation for the VR/AR device 100 to capture the image within the ROI may be merged into one operation. For example, the VR/AR device 100 detects, in the user interface shown in FIG. 5B, a user input such as the movement of the "palm" 512B, where the movement trajectory 512C of the "palm" 512B encloses an approximately regular figure, for example approximately a rectangle, diamond, triangle, or circle. In response to this operation, the VR/AR device 100 does not display the user interface 510 shown in FIG. 5C, but directly captures the first image and displays the user interface shown in FIG. 5D.
In still other embodiments, within the above first operation, the operation for triggering the VR/AR device 100 to select the ROI, the operation for the VR/AR device 100 to determine the ROI, and the operation for the VR/AR device 100 to capture the image within the ROI may be merged into one operation. For example, the VR/AR device 100 may detect, in the user interface shown in FIG. 5A, a user input such as the movement of the "palm" 512B, where the movement trajectory 512C of the "palm" 512B encloses an approximately regular figure, for example approximately a rectangle, diamond, triangle, or circle. In response to this operation, the VR/AR device 100 does not display the user interface 510 shown in FIG. 5C, but directly captures the first image and displays the user interface shown in FIG. 5D.
In some embodiments, the method flow shown in FIG. 3 further includes: saving the above first image.
S102: The VR/AR device 100 saves the captured first image.
After the VR/AR device 100 captures the first image, it can automatically save the first image to the gallery of the VR/AR device 100 after a certain period of time. Alternatively, after capturing the first image, the VR/AR device 100 may receive a user operation and, in response to this operation, save the first image to the gallery of the VR/AR device 100.
In some embodiments, after the VR/AR device 100 captures the first image, the VR/AR device 100 can fuse the first image, according to the imaging principle, into a first image with the 3D effect seen by the user, and save that first image with the 3D effect to the gallery. In this case, if the user shares the first image with the 3D effect to surrounding electronic devices such as a mobile phone, computer, tablet, or smart screen, the user viewing that first image on a surrounding electronic device still sees the same 3D effect as when viewing the first image while wearing the VR/AR device 100.
In some embodiments, the method flow shown in FIG. 3 further includes: sharing the above first image.
Steps S103 to S104 are the method flow for implementing the sharing of the first image from the VR/AR device 100 to a surrounding electronic device such as the electronic device 201.
S103: The VR/AR device 100 displays the identifiers of the surrounding electronic devices in response to a detected user operation.
This user operation is an operation triggering the VR/AR device 100 to display the identifiers of the surrounding electronic devices. In this embodiment of the present application, the third operation is the user operation used to trigger the VR/AR device 100 to display the identifiers of the surrounding electronic devices.
For example, the VR/AR device 100 may detect, in the user interface 410 shown in FIG. 4B or the user interface 510 shown in FIG. 5E, a user operation acting on the control 413A or the control 516A, or the user inputting an "OK" gesture, the voice command "share", an operation of "blinking twice in a row", and so on.
In some embodiments, the VR/AR device 100 may also detect, in a user interface where the first image is stored (for example a user interface provided by the gallery), the above operation triggering the VR/AR device 100 to display the identifiers of the surrounding electronic devices, and in response to this operation display the user interface 610 shown in FIG. 6. In this embodiment of the present application, the user interface 610 may also be called the second user interface.
As shown in FIG. 6, the user interface 610 displays identifiers of the surrounding electronic devices, such as images of the surrounding electronic devices, including image 1, image 2, image 3, and image 4. In some embodiments, the identifiers of the surrounding electronic devices displayed in the user interface 610 may further include the virtual icons, types, and model numbers of the surrounding electronic devices.
The specific steps by which the VR/AR device 100 displays the identifiers of the surrounding electronic devices are as follows:
First, the VR/AR device 100 obtains, according to the binocular positioning technology, the position and image of each surrounding electronic device relative to the VR/AR device 100, for example position 1 and image 1 of the electronic device 201, and then matches image 1 against the image models of the electronic devices in the pre-stored database, thereby determining virtual icon 1, type 1, model 1, and so on corresponding to image 1. For the definition of the database stored in the VR/AR device 100, refer to the detailed description in Table 1.
Referring to Table 1, Table 1 exemplarily shows image models of electronic devices and the virtual icon, type, model number, and so on corresponding to each image model.
Image model of electronic device | Virtual icon   | Type   | Model
Image model 1                    | Virtual icon 1 | Type 1 | Model 1
Image model 2                    | Virtual icon 2 | Type 2 | Model 2
Image model 3                    | Virtual icon 3 | Type 3 | Model 3
Image model 4                    | Virtual icon 4 | Type 4 | Model 4
Table 1
The database shown in Table 1 may also include image models of more electronic devices, as well as the virtual icon, type, model number, and so on corresponding to each image model.
Then, the VR/AR device 100 obtains, according to the BT positioning technology/WiFi positioning technology/UWB positioning technology, the position of each surrounding electronic device relative to the VR/AR device 100 and the corresponding communication address, for example position 1 of the electronic device 201 and the corresponding communication address 1, and displays the images of the surrounding electronic devices at the corresponding positions in the user interface 610. Afterwards, the VR/AR device 100 uses a coincidence positioning algorithm to match the positions of the surrounding electronic devices relative to the VR/AR device 100 obtained by the binocular positioning technology with those obtained by the BT positioning technology/WiFi positioning technology/UWB positioning technology, thereby determining the correspondence between the identifiers, positions, and communication addresses of the surrounding electronic devices. For the correspondence between the identifiers, positions, and communication addresses of the surrounding electronic devices, refer to the detailed description in Table 2 (a sketch of one possible matching step follows the table).
Electronic device image | Virtual icon   | Type   | Model   | Position   | Communication address
Image 1                 | Virtual icon 1 | Type 1 | Model 1 | Position 1 | Communication address 1
Image 2                 | Virtual icon 2 | Type 2 | Model 2 | Position 2 | Communication address 2
Image 3                 | Virtual icon 3 | Type 3 | Model 3 | Position 3 | Communication address 3
Image 4                 | Virtual icon 4 | Type 4 | Model 4 | Position 4 | Communication address 4
Table 2
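The application does not spell out the coincidence positioning algorithm itself; one plausible reading is a greedy nearest-neighbour association between the two sets of positions, sketched below with illustrative names:

```python
def match_devices(visual_positions, radio_positions):
    """Associate visually located devices with radio-located addresses.

    visual_positions: {device_image_id: (x, y, z)} from binocular vision.
    radio_positions:  {communication_address: (x, y, z)} from BT/WiFi/UWB.
    Returns {device_image_id: communication_address}, pairing each image
    with the closest still-unclaimed address.
    """
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    pairing = {}
    free_addresses = dict(radio_positions)
    for image_id, vpos in visual_positions.items():
        if not free_addresses:
            break
        # Pick the unmatched address whose position is nearest.
        addr = min(free_addresses, key=lambda a: dist2(vpos, free_addresses[a]))
        pairing[image_id] = addr
        del free_addresses[addr]
    return pairing
```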
Finally, the VR/AR device 100 displays, in the user interface 610, the identifiers of the surrounding electronic devices, for example image 1, image 2, image 3, and image 4, according to the device identifiers (such as the images, virtual icons, types, and model numbers) and the positions of the surrounding electronic devices in Table 2.
The positions of image 1, image 2, image 3, and image 4 in the user interface 610 indicate the positions of the surrounding electronic devices relative to the VR/AR device 100. For example, if a surrounding electronic device such as the electronic device 201 is located to the front left of the VR/AR device 100, the image of the electronic device 201 is displayed at the left of the user interface 610, and the size of the image of the electronic device 201 is inversely proportional to the distance between the electronic device 201 and the VR/AR device 100: the closer the electronic device 201 is to the VR/AR device, the larger its image; the farther away it is, the smaller its image. In this way, after seeing the user interface 610, the user can perceive both that the electronic device 201 is to the user's front left and how far away the electronic device 201 is.
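A minimal sketch of the nearer-is-larger scaling just described; all constants and names are illustrative assumptions:

```python
def icon_size_px(distance_m, base_size_px=120, ref_distance_m=1.0, min_px=24):
    """Scale a device image inversely with its distance from the headset.

    distance_m: distance between the device and the VR/AR device.
    base_size_px: icon size at the reference distance ref_distance_m.
    min_px: lower bound so far-away devices stay visible.
    """
    size = base_size_px * ref_distance_m / max(distance_m, 1e-6)
    return max(min_px, int(size))
```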
In some embodiments, one form in which the VR/AR device 100 displays the identifiers of the surrounding electronic devices may be as follows: the VR/AR device 100 displays a spatial coordinate system in the user interface 610, the identifiers of the surrounding electronic devices are displayed in that spatial coordinate system, and the position of each identifier in the spatial coordinate system indicates the spatial position of the corresponding electronic device relative to the VR/AR device 100.
In some embodiments, after detecting the above-mentioned operation that triggers the VR/AR device 100 to display the identifiers of the surrounding electronic devices, the VR/AR device 100 may, in response to the operation, obtain the content shown in Table 2 above and display the identifiers of the electronic devices shown in FIG. 6 according to that content.
In other embodiments, the VR/AR device 100 may obtain the content shown in Table 2 above in advance, before detecting the above-mentioned operation that triggers the VR/AR device 100 to display the identifiers of the surrounding electronic devices, for example when the VR/AR device 100 is started or when the VR/AR device 100 captures the first image. Then, after detecting the operation that triggers the VR/AR device 100 to display the identifiers of the surrounding electronic devices, in response to that operation, it displays the identifiers of the electronic devices shown in FIG. 6 according to the pre-acquired content of Table 2. In this case, the content of Table 2 may be updated periodically, which both reduces the time delay with which the VR/AR device 100 displays the identifiers of the surrounding electronic devices and ensures the accuracy of the positions of the surrounding electronic devices relative to the VR/AR device 100.
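A plausible way to realise this pre-fetch-and-refresh behaviour is a small cache that is filled once and then updated on a timer. The sketch below assumes a caller-supplied fetch function (for example, one that re-runs positioning and matching to rebuild the Table 2 rows) and a 5-second refresh interval; both are illustrative choices rather than details taken from this application.

```python
import threading
import time

class DeviceTableCache:
    """Hold a pre-fetched copy of the Table 2 records and refresh it on a
    timer, so the identifiers can be drawn with no lookup delay when the
    user triggers their display."""

    def __init__(self, fetch, interval_s=5.0):
        self._fetch = fetch                  # callable returning Table 2 rows
        self._interval = interval_s
        self._lock = threading.Lock()
        self._rows = fetch()                 # fill the cache up front
        threading.Thread(target=self._refresh_loop, daemon=True).start()

    def _refresh_loop(self):
        while True:
            time.sleep(self._interval)
            rows = self._fetch()             # periodic update keeps positions accurate
            with self._lock:
                self._rows = rows

    def rows(self):
        with self._lock:
            return list(self._rows)
```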
In some embodiments, the VR/AR device 100 may directly display the identifiers of the surrounding electronic devices without receiving an operation for triggering the VR/AR device 100 to display them. For example, after capturing the first image in the user interface shown in FIG. 5E, the VR/AR device 100 directly displays the identifiers of the surrounding electronic devices shown in FIG. 6.
S104. In response to detecting a second operation, the VR/AR device 100 shares the first image with the surrounding electronic devices.
The second operation is an operation for selecting an identifier of a surrounding electronic device.
For example, in the user interface shown in FIG. 6, the VR/AR device 100 may detect that the user inputs a "finger" gesture 611 pointing at the identifier of a surrounding electronic device such as the electronic device 201 (for example, image 1), or inputs a voice command naming a device identifier such as "type 1" or "model 1", or moves the input device 300 to control the cursor 512A to the position of the identifier of the electronic device 201 and presses a button, and so on. In response to this operation for selecting the identifier of the electronic device 201, the VR/AR device 100 looks up communication address 1 corresponding to the identifier of the electronic device 201 in Table 2, establishes a communication connection with the electronic device 201 based on communication address 1, and shares the first image based on that connection.
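The selection-to-sharing path can be sketched as follows. This example reuses the Table 2 records produced earlier and, purely for illustration, assumes a TCP transport with a length-prefixed payload and an arbitrary port; the actual connection would be established over BT/WiFi/UWB as described above, whose APIs are not modelled here.

```python
import socket

def share_first_image(table, selected_image_id, image_bytes, port=9000):
    """Look up the communication address bound to the selected identifier in
    the Table 2 records and send the first image to that address. The
    8-byte length prefix lets the receiver know when the image is complete."""
    for image_id, _position, address in table:
        if image_id == selected_image_id:
            with socket.create_connection((address, port)) as conn:
                conn.sendall(len(image_bytes).to_bytes(8, "big"))
                conn.sendall(image_bytes)
            return True
    return False   # no identifier matched the selection
```

With these assumed names, share_first_image(table, "image 1", first_image) would deliver the capture to the electronic device 201.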
In this embodiment of the present application, the electronic device 201 may be referred to as the first device, and the identifier of the electronic device 201, such as image 1, may also be referred to as the first identifier.
In some embodiments, in the user interface shown in FIG. 6, the VR/AR device 100 may detect, within a certain period of time such as 2 seconds, an operation of successively selecting the identifiers of multiple electronic devices. In response to that operation, the VR/AR device 100 establishes communication connections with the multiple electronic devices based on the communication addresses corresponding to their identifiers in Table 2, and shares the first image based on those connections.
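One way to group such selections is a short collection window, sketched below under assumptions: the 2-second window matches the example above, while the class and method names are invented for illustration.

```python
import time

class MultiSelectCollector:
    """Group identifier selections that arrive within a short window
    (2 seconds here) so the first image can be shared to several devices
    in one go."""

    def __init__(self, window_s=2.0):
        self._window = window_s
        self._selected = []
        self._first_ts = None

    def select(self, image_id):
        now = time.monotonic()
        if self._first_ts is None or now - self._first_ts > self._window:
            self._selected = []              # start a new selection window
            self._first_ts = now
        if image_id not in self._selected:
            self._selected.append(image_id)

    def targets(self):
        """Identifiers selected in the current window, in selection order."""
        return list(self._selected)
```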
In some embodiments, the method by which the VR/AR device 100 shares the first image is not limited to that shown in steps S103-S104. For example, after capturing the first image and receiving a user operation, the VR/AR device 100 may obtain the communication addresses of the surrounding electronic devices through its BT communication module/WiFi communication module/UWB module and display a list of the surrounding electronic devices; the VR/AR device 100 then receives the user's operation of selecting any one or more electronic devices in that list, establishes a communication connection with each selected electronic device according to its communication address, and shares the first image.
It can be understood that the steps in which the VR/AR device 100 captures the first image, establishes a communication connection with the surrounding electronic devices, and shares the first image based on that connection are not limited to the sequence shown in FIG. 3.
In some embodiments, the VR/AR device 100 may first establish a communication connection with the surrounding electronic devices, then capture the first image, and then share the first image with the surrounding electronic devices based on the pre-established connection. For example, after the VR/AR device 100 starts, it may display the identifiers of the surrounding electronic devices; on detecting an operation of selecting an identifier, it establishes a communication connection with the corresponding electronic device. Later, after capturing the first image, the VR/AR device 100 detects an operation for sharing the first image, such as one acting on the control 516A in FIG. 5E, and in response shares the first image with the surrounding electronic devices based on the pre-established connection.
In this case, the VR/AR device 100 can establish connections with the surrounding electronic devices in advance, thereby improving the efficiency with which the VR/AR device 100 shares the first image.
It can be seen that, with the method provided by the embodiments of the present application, the VR/AR device can capture the first image at any time. Further, the size of the first image can be defined by the user, and the user can share the first image with any one or more surrounding electronic devices. Specifically, the user may first capture the first image and then establish connections with the surrounding electronic devices and share it; in this way, when the user captures different first images, the different first images can be shared to different electronic devices, meeting the user's personalized needs. Alternatively, the user may first establish connections with the surrounding electronic devices and then share the first image; in this way, the first image can only be shared to fixed electronic devices, but the efficiency of sharing the first image is improved.
The various embodiments of the present application may be combined arbitrarily to achieve different technical effects.
The above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (for example, over a coaxial cable, an optical fiber or a digital subscriber line) or a wireless manner (for example, over infrared, radio or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk), or the like.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the foregoing method embodiments may be included. The aforementioned storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk or an optical disc.
In conclusion, the above descriptions are merely embodiments of the technical solutions of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement or improvement made according to the disclosure of the present invention shall fall within the protection scope of the present invention.

Claims (14)

  1. A method for capturing an image by a virtual reality (VR)/augmented reality (AR) device, applied to a VR/AR device, wherein the VR/AR device comprises an optical component for providing a three-dimensional (3D) scene and is worn on a user's head, and the method comprises:
    the VR/AR device displays a first user interface;
    in response to a first operation, the VR/AR device determines a region of interest (ROI) of the user in a display screen of the VR/AR device and captures a first image, wherein the first image is an image in the ROI.
  2. The method according to claim 1, wherein the first operation comprises: an operation of moving the user's hand, or an operation of moving the user's eye gaze point, or an operation of moving an input device connected to the VR/AR device;
    before the VR/AR device determines the ROI, the method further comprises:
    in response to the first operation, the VR/AR device displays a movement trajectory of a cursor on the display screen, wherein the movement trajectory of the cursor corresponds to a movement trajectory of the user's hand, or a movement trajectory of the user's eye gaze point, or a movement trajectory of the input device;
    wherein the ROI is an area comprising the movement trajectory of the cursor.
  3. The method according to claim 2, wherein that the ROI is an area comprising the movement trajectory of the cursor specifically comprises:
    the ROI is a minimum regular area comprising the movement trajectory of the cursor;
    or, the ROI is a minimum irregular area comprising the movement trajectory of the cursor, wherein the irregular area is an area enclosed by the movement trajectory of the cursor.
  4. The method according to any one of claims 1 to 3, wherein after the VR/AR device captures the first image in response to the first operation, the method further comprises:
    the VR/AR device displays a second user interface, wherein the second user interface displays identifiers of one or more electronic devices;
    in response to a second operation of selecting a first identifier, the VR/AR device sends the first image to a first device, wherein the first device corresponds to the first identifier, and the identifiers of the one or more electronic devices in the second user interface comprise the first identifier.
  5. The method according to claim 4, wherein before the VR/AR device displays the second user interface, the method further comprises:
    the VR/AR device detects a third operation;
    and the VR/AR device displaying the second user interface specifically comprises: in response to the third operation, the VR/AR device displays the second user interface.
  6. The method according to claim 4 or 5, wherein positions of the identifiers of the one or more electronic devices in the second user interface are used to indicate positions of the one or more electronic devices relative to the VR/AR device.
  7. The method according to claim 6, wherein the identifiers of the one or more electronic devices comprise images of the one or more electronic devices captured by the VR/AR device, and before the VR/AR device displays the second user interface, the method further comprises:
    the VR/AR device captures images of the one or more electronic devices;
    the VR/AR device determines, according to the images of the one or more electronic devices, positions of the one or more electronic devices relative to the VR/AR device.
  8. The method according to claim 7, wherein the identifiers of the one or more electronic devices comprise one or more of the following: icons, types or models of the one or more electronic devices; and after the VR/AR device captures the images of the one or more electronic devices, the method further comprises:
    the VR/AR device obtains, according to the images of the one or more electronic devices, one or more of the icons, types or models of the one or more electronic devices.
  9. The method according to claim 7 or 8, wherein before the VR/AR device displays the second user interface, the method further comprises:
    the VR/AR device sends a first request message;
    the VR/AR device receives first response messages sent by the one or more electronic devices, wherein the first response messages carry communication addresses of the one or more electronic devices;
    the VR/AR device obtains, according to the reception of the first response messages, the positions of the one or more electronic devices relative to the VR/AR device;
    and wherein that the VR/AR device sends the first image to the first device in response to the second operation of selecting the first identifier specifically comprises:
    the VR/AR device determines the position of the first device according to a correspondence between the images of the one or more electronic devices and the positions of the one or more electronic devices relative to the VR/AR device;
    the VR/AR device determines the communication address of the first device according to a correspondence between the communication addresses of the one or more electronic devices and the positions of the one or more electronic devices relative to the VR/AR device;
    the VR/AR device sends the first image to the first device according to the communication address of the first device.
  10. The method according to any one of claims 1 to 9, wherein the first operation comprises one or more of the following: a gesture, a voice instruction, an eye state of the user, or an operation of pressing a button;
    and the first operation is detected by the VR/AR device, or is detected by the input device.
  11. The method according to any one of claims 1 to 10, wherein after the VR/AR device captures the first image in response to the first operation, the method further comprises:
    the VR/AR device saves the first image.
  12. A VR/AR device, wherein the VR/AR device comprises one or more processors and one or more memories; the one or more memories are coupled to the one or more processors and are configured to store computer program code, the computer program code comprising computer instructions; and when the one or more processors execute the computer instructions, the VR/AR device is caused to perform the method according to any one of claims 1 to 11.
  13. A computer program product comprising instructions, wherein when the computer program product runs on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 11.
  14. A computer-readable storage medium comprising instructions, wherein when the instructions are run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 11.
PCT/CN2022/082432 2021-03-25 2022-03-23 Method, apparatus and system for cropping image by vr/ar device WO2022199597A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110322569.XA CN115131547A (en) 2021-03-25 2021-03-25 Method, device and system for image interception by VR/AR equipment
CN202110322569.X 2021-03-25

Publications (1)

Publication Number Publication Date
WO2022199597A1 true WO2022199597A1 (en) 2022-09-29

Family

ID=83373855

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/082432 WO2022199597A1 (en) 2021-03-25 2022-03-23 Method, apparatus and system for cropping image by vr/ar device

Country Status (2)

Country Link
CN (1) CN115131547A (en)
WO (1) WO2022199597A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117406867B (en) * 2023-12-15 2024-02-09 小芒电子商务有限责任公司 Webpage-based augmented reality interaction method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130050432A1 (en) * 2011-08-30 2013-02-28 Kathryn Stone Perez Enhancing an object of interest in a see-through, mixed reality display device
CN108139799A (en) * 2016-04-22 2018-06-08 深圳市大疆创新科技有限公司 The system and method for region of interest (ROI) processing image data based on user
CN106454098A (en) * 2016-10-31 2017-02-22 深圳晨芯时代科技有限公司 Virtual reality shooting and displaying method and system
US20180160048A1 (en) * 2016-12-01 2018-06-07 Varjo Technologies Oy Imaging system and method of producing images for display apparatus
CN112394891A (en) * 2019-07-31 2021-02-23 华为技术有限公司 Screen projection method and electronic equipment
CN112187619A (en) * 2020-05-26 2021-01-05 华为技术有限公司 Instant messaging method and equipment

Also Published As

Publication number Publication date
CN115131547A (en) 2022-09-30

Similar Documents

Publication Publication Date Title
US11137967B2 (en) Gaze-based user interactions
JP6419262B2 (en) Headset computer (HSC) as an auxiliary display with ASR and HT inputs
EP3396511B1 (en) Information processing device and operation reception method
US11231845B2 (en) Display adaptation method and apparatus for application, and storage medium
US9075429B1 (en) Distortion correction for device display
JP6266814B1 (en) Information processing method and program for causing computer to execute information processing method
US20220159178A1 (en) Automated eyewear device sharing system
US11314396B2 (en) Selecting a text input field using eye gaze
CN112835445B (en) Interaction method, device and system in virtual reality scene
CN108037826B (en) Information processing method and program for causing computer to execute the information processing method
US20210220738A1 (en) Perspective rotation method and apparatus, device, and storage medium
WO2022199597A1 (en) Method, apparatus and system for cropping image by vr/ar device
KR102521557B1 (en) Electronic device controlling image display based on scroll input and method thereof
US20230126025A1 (en) Context-sensitive remote eyewear controller
CN110728744B (en) Volume rendering method and device and intelligent equipment
JP6374203B2 (en) Display system and program
JP2018109940A (en) Information processing method and program for causing computer to execute the same
EP3739897A1 (en) Information displaying method and electronic device therefor
US11825237B1 (en) Segmented video preview controls by remote participants in a video communication session
US11763517B1 (en) Method and device for visualizing sensory perception
US20230409271A1 (en) Function based selective segmented video feed from a transmitting device to different participants on a video communication session
US20240135650A1 (en) Electronic device and method for displaying modification of virtual object and method thereof
US20230333645A1 (en) Method and device for processing user input for multiple devices
JP6205047B1 (en) Information processing method and program for causing computer to execute information processing method
CN118034548A (en) Man-machine interaction method, system and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22774253; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22774253; Country of ref document: EP; Kind code of ref document: A1)