WO2021227628A1 - Electronic device and interaction method therefor - Google Patents

Electronic device and interaction method therefor

Info

Publication number: WO2021227628A1
Authority: WO (WIPO, PCT)
Prior art keywords: stylus, image, electronic device, current scene, plane
Application number: PCT/CN2021/079995
Other languages: French (fr), Chinese (zh)
Inventor: 提纯利
Original assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021227628A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; accessories therefor
    • G06F3/0354 Pointing devices displaced or positioned by the user, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545 Pens or stylus
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Definitions

  • This application belongs to the technical field of handwriting interaction, and in particular relates to an electronic device and an interaction method thereof.
  • Mixed reality technology generates a new visual environment by merging the real world with a virtual one.
  • An interactive feedback loop is built between the virtual world, the real world, and the user to enhance the realism of the user's experience.
  • Emerging interaction technologies for mixed reality scenes, such as data gloves, serve game and entertainment scenarios well, but their ability to input text content is weak.
  • Detecting the pen tip by capturing images of the stylus and applying a vision algorithm alone cannot accurately determine the tip position, and therefore cannot accurately restore the writing trajectory.
  • The embodiments of the present application provide an electronic device and an interaction method therefor, which address the prior-art problem that the pen tip position cannot be determined accurately during stylus input.
  • An embodiment of the present application provides an interaction method for an electronic device.
  • The electronic device displays an interactive image, which may be an image of the current scene or an image of a multimedia file.
  • The electronic device acquires a first operation of the stylus according to the stylus's movement information, where the movement information includes the relative movement information of the stylus, and the first operation may include a tap operation, a writing operation, or a movement of the stylus tip in the hovering (pen-up) state.
  • The electronic device displays a virtual interactive interface on the interactive image and responds to the first operation in the virtual interactive interface.
  • If the first operation is a click operation, a click instruction can be triggered in the virtual interactive interface; if the first operation is a writing operation, the corresponding writing track or written content may be displayed in the virtual interactive interface.
  • The electronic device is used to display an interactive image and a virtual interactive interface; it can receive interaction data input by an input device such as a stylus and respond to that data in the displayed virtual interactive interface.
  • The electronic device may be, for example, a head-mounted display device, virtual reality glasses, augmented reality glasses, or mixed reality glasses.
  • The relative motion information can be collected by sensing devices such as gyroscopes, inertial sensors, and acceleration sensors. By acquiring the relative movement information of the stylus, the electronic device can detect the stylus's relative displacement in finer detail, so the writing trajectory can be determined and restored more accurately, facilitating more refined interactive operations.
  • The sensing device is placed at the tip of the stylus; by collecting the relative movement information of the tip position, the writing track of the stylus can be obtained more accurately, as sketched below.
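  • As a rough illustration of how such per-interval relative displacements can be accumulated into a writing trajectory, the following Python sketch integrates 2D displacement deltas from the tip sensor; the data type, units, and sampling model are illustrative assumptions rather than details given by the patent:

```python
from dataclasses import dataclass

@dataclass
class DisplacementSample:
    dx: float  # tip displacement along x since the previous detection moment (mm, assumed)
    dy: float  # tip displacement along y since the previous detection moment (mm, assumed)

def accumulate_trajectory(origin, samples):
    """Integrate per-interval relative displacements into absolute tip positions.

    `origin` is the tip position at the start of the stroke (for example,
    obtained once by the visual positioning step); each sample is the tip
    displacement between two adjacent detection moments.
    """
    x, y = origin
    trajectory = [(x, y)]
    for s in samples:
        x += s.dx
        y += s.dy
        trajectory.append((x, y))
    return trajectory

# Example: a short rightward stroke starting at a visually detected origin.
stroke = accumulate_trajectory((10.0, 20.0), [DisplacementSample(0.5, 0.0)] * 4)
```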
  • When the interactive image is an image of the current scene, the writing position of the stylus can be determined from the correspondence between the image of the current scene and the interactive image. This may include: acquiring the position of the stylus in the current scene from the image of the current scene, where that position can be represented by the stylus's position relative to the writing plane; and determining, from the stylus's position in the current scene, the handwriting position of the stylus tip in the virtual interactive interface. The position of the stylus relative to the writing plane, combined with a preset mapping relationship, determines the corresponding position of the pen tip in the virtual interactive interface.
  • The position of the stylus can also be determined by locating it in the current scene, so that the intended handwriting position can be tracked while the stylus hovers.
  • The stylus's corresponding position in the virtual interactive interface can be tracked in real time with moving virtual icons, such as a virtual stylus or a virtual cursor, improving the convenience of writing.
  • Determining the position of the stylus in the current scene may include: acquiring the image of the current scene from the camera and determining the stylus position directly from that image; or additionally acquiring the depth information of the current scene, with which the stylus position can be obtained more accurately.
  • The initial position of the stylus in the image of the current scene can be recognized from preset characteristics of the stylus.
  • The initial position of the stylus can be determined from the position of the stylus tip in the image of the current scene relative to other reference information in the image.
  • The other reference information may be, for example, the edges and vertices of the writing plane, or characters or patterns on the writing plane.
  • The image of the current scene and its depth information can be combined to determine the position of the pen tip, the position of the writing plane, and the tip's position relative to the writing plane; from that relative position, the corresponding position of the stylus in the virtual interactive interface can be obtained.
  • Determining the position of the stylus in the current scene may include: detecting the tip feature of the stylus in the image of the current scene and determining the pixel position of the tip; obtaining the depth value matched to the tip through the correspondence between the image and the depth information; and determining the spatial coordinates of the tip from that depth value, which, combined with the spatial coordinates of the writing plane, give the tip's position relative to the writing plane.
  • In this way, the pixel position of the pen tip in the image, the depth of objects in the image, and the relative position of the tip with respect to the writing plane can all be determined; from that relative position, the writing position of the stylus in the virtual interactive interface can be determined effectively.
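  • A minimal sketch of this step, assuming a pinhole camera model with known intrinsics and a writing plane already fitted from the depth data (the function names and the point-normal plane representation are illustrative, not from the patent):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with depth `depth` into camera-space coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def tip_height_above_plane(tip_xyz, plane_point, plane_normal):
    """Signed distance of the tip from the writing plane (positive = above)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return float(np.dot(tip_xyz - plane_point, n))

# Example: tip detected at pixel (640, 360) with 0.45 m of matched depth.
tip = backproject(640, 360, 0.45, fx=900.0, fy=900.0, cx=640.0, cy=360.0)
h = tip_height_above_plane(tip,
                           plane_point=np.array([0.0, 0.1, 0.5]),
                           plane_normal=np.array([0.0, 1.0, 0.0]))
```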
  • The pen tip feature includes one or more of a color feature, a reflected-light feature, or a fluorescence feature, in any combination: for example, color with reflected light, reflected light with fluorescence, color with fluorescence, or all three.
  • the position of the pen tip in the image of the current scene can be quickly obtained through color detection.
  • the color of the pen tip may be different from the color of the writing plane.
  • the color of the current writing plane can be detected, and the color of the pen tip can be adjusted according to the color of the current writing plane, so as to facilitate adapting to the pen tip detection requirements of different writing planes.
  • The pen tip may be coated with a layer of reflective material; light reflected by this material is detected in the image of the current scene to determine the position of the pen tip.
  • the pen tip may be provided with a fluorescent material layer, and the position of the pen tip can be determined by detecting the fluorescent position in the image of the current scene.
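  • For the color-feature case, a hedged OpenCV sketch is given below; the HSV bounds assume a magenta tip marker and would be tuned per device (the patent prescribes no specific algorithm, and a reflective or fluorescent tip would use a brightness threshold instead):

```python
import cv2
import numpy as np

def find_tip_by_color(frame_bgr, hsv_lo=(140, 80, 80), hsv_hi=(170, 255, 255)):
    """Locate a distinctively colored pen tip in a camera frame.

    Returns the (u, v) pixel centroid of the matching region, or None
    when no pixels fall inside the assumed color range.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:  # no matching pixels found
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```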
  • Determining the handwriting position of the pen tip in the virtual interactive interface may include: when the stylus is hovering, acquiring the position of the stylus and the position of the writing plane in the current scene, and determining the stylus's position relative to the writing plane; then, from that relative position combined with the predetermined mapping relationship between the writing plane and the virtual interactive interface, determining the stylus's position in the virtual interactive interface.
  • In this way, the position of the stylus relative to the writing plane, and hence its position in the virtual interactive interface, can be determined.
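  • A minimal sketch of such a plane-to-interface mapping, under the simplifying assumption that both the writing-plane region and the virtual interactive interface are axis-aligned rectangles (the patent only requires some predetermined mapping; this linear one is illustrative):

```python
def plane_to_interface(p_plane, plane_rect, iface_rect):
    """Map a tip position in writing-plane coordinates to virtual-interface
    coordinates with a simple linear mapping.

    `plane_rect` and `iface_rect` are (x_min, y_min, x_max, y_max).
    """
    px, py = p_plane
    px0, py0, px1, py1 = plane_rect
    ix0, iy0, ix1, iy1 = iface_rect
    u = (px - px0) / (px1 - px0)  # normalized position on the writing plane
    v = (py - py0) / (py1 - py0)
    return (ix0 + u * (ix1 - ix0), iy0 + v * (iy1 - iy0))

# A tip at the center of a 20 cm x 15 cm desk patch lands at the center
# of a 1000 x 750 px virtual interactive interface.
print(plane_to_interface((0.10, 0.075), (0, 0, 0.20, 0.15), (0, 0, 1000, 750)))
```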
  • The interactive image is either collected by a camera or presented through a display device with a predetermined transparency; that is, according to the set transparency, the image of the real scene partially passes through the display device, and the user views the real scene through the display device at that transparency.
  • the interactive image collected by the camera may be an image of the user's current scene.
  • the image content of the interactive image may correspond to the vision range of the naked eye when the user is not wearing the electronic device, so that it can be mixed with the virtual interactive interface to obtain a mixed reality image.
  • the interactive image may be displayed in a display device with a predetermined transparency.
  • The predetermined transparency of the display device may depend on scene information. For example, the transparency may gradually decrease as the scene light intensity increases and gradually increase as it decreases, yielding an interactive image with more comfortable brightness.
  • Displaying the virtual interactive interface may include displaying it in a predetermined area of the interactive image, i.e., at a fixed display position, which is convenient for recording general information about the screen, such as the feeling, mood, or date associated with the current scene image; or determining its position from a plane area in the current scene, which helps keep the writing position and the display position of the writing track unified.
  • By displaying the virtual interactive interface in a fixed area of the interactive image, the writing surface can be chosen more flexibly: the user can write in an area outside the interface while the written information is displayed in the interactive image.
  • The position information of the stylus can be collected through sensors such as a camera.
  • The written information can be displayed at the position of the stylus, which makes the correspondence between the writing position and the displayed content better match the user's habits and improves convenience for the user.
  • When determining the position of the virtual interactive interface according to a plane area in the current scene, the method may include: filtering the plane areas in the current scene according to preset plane-area requirements, and determining the shape and/or position of the virtual interactive interface from the shape and/or position of the selected plane area.
  • The preset requirements may include one or more of the size range of the plane area, the orientation of the plane area, or the distance of the plane area from the camera.
  • Plane areas that are too small can be screened out, and a plane area that is larger, or that meets the size-range requirement, can be selected automatically, making writing convenient for the user.
  • Planes oriented upward or toward the user can be preferred, to improve the convenience of writing.
  • When multiple plane areas qualify, the method further includes selecting, according to the predetermined preferred writing position of the stylus, the plane area that best matches that position.
  • Selecting a plane area according to the preferred writing position may include: obtaining the distance between each candidate plane area and the preferred writing position, and selecting a plane area with a relatively small distance (see the sketch below).
  • The preferred writing position can be set by the user, or obtained by statistical analysis of the user's writing habits: the user's writing images are analyzed to determine where the user writes most frequently, and that position is used as the preferred writing position.
  • The distance between a writing plane and the preferred writing position can be the closest distance between the plane's edge and the preferred writing position, or the distance between the plane's center and the preferred writing position.
  • The size of the preferred writing area can be specified by the user, or the preferred writing range can be obtained automatically from statistics of the user's writing habits.
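  • The filtering-and-selection logic above can be sketched as follows; the thresholds, the dictionary representation of detected planes, and the center-distance metric are all illustrative assumptions:

```python
import math

def select_writing_plane(planes, preferred_pos, min_area=0.02, max_cam_dist=1.0):
    """Filter candidate planes by preset requirements, then pick the one
    closest to the preferred writing position.

    Each plane is a dict with 'center' (x, y, z), 'area' (m^2),
    'faces_user' (bool), and 'cam_dist' (m) -- a simplified stand-in
    for the output of plane detection.
    """
    candidates = [p for p in planes
                  if p["area"] >= min_area            # size requirement
                  and p["faces_user"]                 # orientation requirement
                  and p["cam_dist"] <= max_cam_dist]  # distance requirement
    if not candidates:
        return None
    return min(candidates, key=lambda p: math.dist(p["center"], preferred_pos))
```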
  • The method further includes displaying an edit button in the virtual interactive interface; responding to written information in the virtual interactive interface then includes: when a stylus click operation is detected at the position corresponding to the edit button, executing the function of that button.
  • the virtual buttons may include buttons such as a brush shape and a brush function.
  • the image drawn by the writing pen can be edited, or the information or image generated by the image of the current scene can be edited.
  • After displaying virtual keys in the virtual interactive interface, the method may also include saving the edited image or text content and/or sending it to other users; or selecting text content in the virtual interactive interface, sending a search request for the selected text to the network, and receiving and displaying the result corresponding to the request in the virtual interactive interface.
  • the interactive image and the image of the virtual interactive interface can be saved, and the real-time saved image can be transmitted to other users through the network.
  • the electronic device can also collect audio information in real time, and transmit the audio information and saved images to other users in real time, so that convenient online teaching and explanation can be realized.
  • the search request is triggered by the virtual button, and the result corresponding to the request is received and displayed, which can facilitate the user to interact with the network in real time.
  • The text content of the virtual interactive interface may be the content of text media in the real scene automatically recognized by the electronic device, facilitating more convenient queries about the real scene.
  • the text content of the virtual interactive interface may also be text content included in a multimedia image.
  • The method includes: obtaining images of the pen tip while drawing is performed through the visual perception module, and generating a first trajectory from those images; acquiring, through the relative motion sensing unit, the relative displacement information of the pen tip while drawing is performed, and generating a second trajectory from that information; and comparing the difference between the first and second trajectories and calibrating the relative motion information according to the difference.
  • That is, the first trajectory is generated by the visual perception module and the second trajectory from the relative motion information; calibrating the relative motion information against the difference between the two, for example by adjusting the sensing device that produces the relative displacement information, improves the accuracy of the relative movement data.
  • By establishing a three-dimensional scene model, the position of the stylus tip in that model can be determined from the stylus's position in the interactive image and the image's depth information, yielding a more accurate first trajectory.
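  • One plausible realization of this calibration is sketched below, under the assumptions that both trajectories sample the same stroke at the same moments and that a per-axis scale factor is the quantity being corrected (the patent leaves the calibration model open):

```python
import numpy as np

def estimate_scale_correction(visual_traj, imu_traj):
    """Compare a vision-derived first trajectory with an IMU-derived second
    trajectory of the same stroke (both (N, 2) arrays) and return per-axis
    scale factors mapping the IMU displacements onto the visual reference.
    """
    v = np.diff(np.asarray(visual_traj, dtype=float), axis=0)
    m = np.diff(np.asarray(imu_traj, dtype=float), axis=0)
    # Least-squares scale per axis: argmin_s ||s * m - v||^2
    return (m * v).sum(axis=0) / (m * m).sum(axis=0)

def apply_correction(imu_traj, scale):
    """Rebuild a trajectory from scale-corrected IMU displacements."""
    t = np.asarray(imu_traj, dtype=float)
    d = np.diff(t, axis=0) * scale
    return np.vstack([t[:1], t[0] + np.cumsum(d, axis=0)])
```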
  • In a second aspect, an embodiment of the present application provides an electronic device that includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the interaction method of the electronic device described in any one of the first aspect is implemented.
  • In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the interaction method of the electronic device described above.
  • the electronic device described in the second aspect and the readable storage medium described in the third aspect correspond to the interaction method of the electronic device described in the first aspect.
  • FIG. 1 is a schematic block diagram of the structure of an electronic device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a use state of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a display of a virtual interactive interface provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram showing another virtual interactive interface provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of determining a virtual interactive interface according to a preferred writing position according to an embodiment of the present application.
  • FIG. 6 is another schematic diagram of determining a virtual interactive interface according to a preferred writing position according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the layout of a head-mounted display device provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a virtual interactive interface provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the implementation process of an electronic device interaction method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of image acquisition of a main device provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of depth information calculation provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of detecting a current scene image provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of determining the position of a pen tip provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of an image taken by a visual perception device provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a virtual interactive interface provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a device calibration provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of a screen of an electronic device used for signing according to an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a screen of an electronic device used for live broadcasting according to an embodiment of the present application.
  • FIG. 19 is a schematic diagram of a screen for writing on an electronic device according to an embodiment of the present application.
  • FIG. 20 is a schematic diagram of an interaction apparatus of an electronic device provided by an embodiment of the present application.
  • The electronic device interaction method provided by the embodiments of this application can be applied to augmented reality (AR)/virtual reality (VR) devices, mobile phones, tablet computers, wearable devices, in-vehicle devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), and similar devices; the embodiments of this application impose no restriction on the specific type of electronic device.
  • The electronic device may be a station (STATION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with wireless communication functions, a computing device or other processing device connected to a wireless modem, an in-vehicle device, a car networking terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite wireless device, a wireless modem card, a television set-top box (STB), customer premises equipment (CPE), and/or other equipment for communicating over a wireless system, as well as a device in a next-generation communication system, for example a mobile terminal in a 5G network or in a future evolved public land mobile network (PLMN).
  • A wearable device can also be a general term for devices developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing, and shoes.
  • a wearable device is a portable device that is directly worn on the body or integrated into the user's clothes or accessories.
  • Wearable devices are not only a kind of hardware device, but also realize powerful functions through software support, data interaction, and cloud interaction.
  • Broadly, wearable smart devices include full-featured, larger devices that can realize complete or partial functions without relying on a smartphone, such as smart watches, head-mounted display devices, or smart glasses, as well as devices that focus on only one type of application function and must be used together with other equipment such as a smartphone, for example smart bracelets and smart jewelry for vital-sign monitoring.
  • the electronic device includes a main device and a stylus.
  • FIG. 1 shows a schematic block diagram of part of the structure of an electronic device provided in an embodiment of the present application. As shown in FIG. 1, the electronic device includes a head-mounted display device 1 as the main device and a stylus 2 that can establish a connection with the main device.
  • The head-mounted display device 1 includes a first communication unit 110, a visual perception module 120, a depth sensing unit 130, a display unit 140, a first calculation processing unit 150, a first storage unit 160, and a first power supply unit 170.
  • the stylus 2 includes a second communication unit 210, a relative motion sensing unit 220, a second calculation processing unit 230, a second storage unit 240, and a second power supply unit 250.
  • the structure of the electronic device shown in FIG. 1 does not constitute a limitation on the electronic device, and may include more or fewer components than those shown in the figure, or a combination of certain components, or different component arrangements.
  • the first communication unit 110 can communicate with the second communication unit 210.
  • The first communication unit 110 and the second communication unit 210 may use short-range communication circuits, including but not limited to Bluetooth communication circuits, infrared communication circuits, Wi-Fi communication circuits, and the like.
  • the first communication unit 110 can establish a connection link with the second communication unit 210.
  • the first communication unit 110 or the second communication unit 210 may also establish a communication connection with other electronic devices.
  • For example, the first communication unit 110 and the second communication unit 210 can establish communication links with devices such as smartphones and computers, send data collected or processed by the head-mounted display device 1 or the stylus 2 to those devices, or receive data sent by other electronic devices over the link.
  • The user wears the head-mounted display device 1 to fix it on the head; the device can collect images with a camera and display the collected images of the current scene on its display screen in real time.
  • The captured image can be divided into a first image and a second image, displayed respectively on the first display screen and the second display screen of the head-mounted display device, so that the user can view the real image of the current scene through the device.
  • the display screen of the head-mounted display device may be a screen with a predetermined transparency, such as a display screen with a semi-transparent structure. The user can view the picture in the current scene in real time through the light transmitted through the display screen.
  • a virtual screen may also be displayed on the display screen in the head-mounted display device, and the virtual screen may include a virtual interactive interface.
  • The virtual interactive interface may display the handwriting written by the stylus held by the user, as shown in FIG. 2.
  • the virtual interactive interface may display the text content obtained by performing text recognition on the text medium in the current scene.
  • the virtual interactive interface may also include an application program interface opened by the user, and screen content such as editing and processing of written characters and images.
  • the head-mounted display device 1 is provided with a first communication unit 110, and the stylus 2 is provided with a second communication unit 210.
  • the first communication unit 110 and the second communication unit 210 may establish a link connection via Bluetooth.
  • The stylus 2 is provided with a sensing device for detecting relative movement and can collect relative displacement data of the stylus.
  • The relative displacement data may include the distance and direction that the stylus tip has moved at the current detection moment relative to the previous detection moment, and the collected relative displacement data is sent to the head-mounted display device over the link. That is, when the user writes text, draws, or performs other editing actions with the stylus, and the stylus is detected to be in the writing state, detection is performed at a preset interval between every two adjacent detection moments, obtaining for each detection moment the relative displacement distance and direction with respect to the previous moment.
  • From these, the pen tip position at the current detection moment can be determined; after the tip position at each detection moment is determined, the pen tip trajectory of the user's writing with the stylus can be obtained.
  • The stylus 2 can also detect, through a pressure sensor, whether it is in the writing state or the hovering state and send the detected state information to the head-mounted display device over the link; alternatively, the stylus can collect its relative displacement data with respect to the writing plane according to its own state information.
  • the visual perception module 120 may be a visible light camera.
  • The camera can capture external environment information to generate a video stream and provide data for mapping the current scene, localizing the stylus, and recognizing actions such as gestures and stylus poses.
  • Multiple cameras can be set up to collect images of the scene from different viewing angles, providing multi-view stereo vision for mapping the current scene.
  • Mapping the current scene may include capturing one or more of: the objects in the scene, the objects' images, the objects' sizes, the objects' positions in the scene, and the distances between the objects and the user.
  • the main device includes a first camera 1201 and a second camera 1202, where the first camera may be a visible light camera, and the second camera may be a visible light camera or an infrared camera.
  • The images collected by the first camera and the second camera, combined with the cameras' parameters, including the intrinsic and extrinsic parameters, determine the depth information of objects in the image.
  • the determined depth information of the object in the image it can be used to determine the position of the object in the image of the current scene, or to determine the distance between the object and the user when the current scene is drawn.
  • the head-mounted display device 1 may further include a light supplement unit 180.
  • the light supplement unit 180 can provide visible light supplement light.
  • The light supplement unit 180 may also provide infrared supplementary light. With infrared supplementary light and an infrared camera for image collection, the accuracy and robustness of the pen tip's visual positioning can be improved effectively without affecting perception of the environment.
  • the depth sensing unit 130 is used to detect the distance between an object in the current scene and the head-mounted display device 1.
  • the depth sensing unit may include two or more cameras.
  • the depth sensing unit 130 may include one or more of distance measurement units such as a time-of-flight ranging camera, a structured light ranging system, a radar, and an ultrasonic sensor.
  • Based on the depth information, the current scene can be 3D-modeled, planes in the current scene can be detected, and simultaneous localization and mapping (SLAM) of the scene can be completed in real time.
  • the depth information of the object in the image is determined according to the images captured by the two cameras and the parameter information of the two cameras.
  • When the depth sensing unit 130 is a single camera, it can emit a light beam toward the object and record the emission time; when the camera captures the beam reflected by the object, the reception time is recorded, and the distance between the object and the camera can be calculated from the time difference between emission and reception combined with the propagation speed of light.
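  • The relation described here is the usual time-of-flight formula, distance = c * (t_receive - t_emit) / 2, since the beam travels out and back; a one-line check with invented numbers:

```python
C = 299_792_458.0  # propagation speed of light, m/s

def tof_distance(t_emit, t_receive):
    """Distance to the object from the round-trip time of an emitted beam."""
    return C * (t_receive - t_emit) / 2.0

# A round trip of about 3.3 nanoseconds corresponds to roughly half a meter.
print(tof_distance(0.0, 3.3e-9))  # ~0.495 m
```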
  • the display unit 140 may be used to display an interactive image, and the interactive image may be a video image taken by a camera, or may also include a virtual interactive interface.
  • the display unit 140 is a display device with a predetermined transparency.
  • For example, the display device may be semi-transparent, so that the user can see the current scene through it while a virtual interactive interface is superimposed on it.
  • the predetermined transparency can also be automatically changed according to the brightness of the current scene. For example, when the brightness of the current scene increases, the transparency of the display device can be reduced, and when the brightness of the current scene decreases, the transparency of the display device can be increased, so that the user can view interactive images with appropriate brightness through the display unit.
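  • A toy sketch of such a brightness-driven transparency policy; the linear mapping and its bounds are invented for illustration, as the patent only states the inverse relationship between scene brightness and transparency:

```python
def transparency_for_brightness(lux, lux_lo=50.0, lux_hi=1000.0,
                                t_min=0.2, t_max=0.9):
    """Map scene brightness to display transparency: the brighter the
    scene, the lower the transparency, keeping the perceived brightness
    of the interactive image comfortable."""
    if lux <= lux_lo:
        return t_max
    if lux >= lux_hi:
        return t_min
    frac = (lux - lux_lo) / (lux_hi - lux_lo)
    return t_max - frac * (t_max - t_min)
```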
  • Fig. 3 is a schematic diagram showing the display of a virtual interactive interface provided by an embodiment of the application.
  • the virtual interactive interface 302 may be superimposed and displayed on an interactive image, such as the current scene image 301.
  • the virtual interactive interface 302 may be an area where interactive content is displayed, such as an area where the track of a stylus pen and a cursor corresponding to the stylus pen are displayed.
  • the virtual interactive interface may also include virtual image information.
  • The virtual image information may include a text editing interface.
  • The text editing interface may include text content recognized from the text image in the scene image, virtual keys that can be used to edit the text content, and the like.
  • the user can use a stylus pen to perform processing such as drawing, signing, text input or editing content in the virtual interactive interface.
  • the display position of the virtual interactive interface may be a predetermined position in the scene image.
  • the virtual interactive interface may be superimposed and displayed in a predetermined area at the lower right corner of the preset scene image.
  • the virtual interactive interface can be determined according to the plane information detected in the scene image.
  • The corresponding virtual interactive interface can be generated directly from a plane area in the current scene. For example, in FIG. 4, if the desktop 303 in the image of the current scene is detected to meet the predetermined plane-area requirements, the position of the virtual interactive interface can be determined from the position of the desktop in the image; alternatively, the shape of the virtual interactive interface may be determined from the shape of the desktop.
  • the preset requirements for the plane area may include one or more of the limited conditions such as the plane size range, the plane orientation, and the distance between the plane and the camera or the user.
  • the size of the plane may include the minimum size requirement of the set plane.
  • the plane orientation may include upwards or toward the user, or the like, or the plane orientation may include the tilt angle range of the plane.
  • the inclination angle range may include a range from a horizontal angle to an angle perpendicular to the horizontal plane.
  • the distance of the plane from the camera or the user may be smaller than the first set distance value.
  • the limiting conditions may include the size of the plane, the orientation of the plane, and the distance between the plane and the camera, and a better plane can be obtained through the combination of plane size screening, plane orientation screening, and the distance between the plane and the camera.
  • As shown in FIG. 5, when the scene includes multiple planes and the multiple plane areas all meet the requirements of the virtual interactive interface, a preferred plane among them can be selected as the virtual interactive interface according to the predetermined preferred writing position of the stylus. Selection can be based on the distance between each plane and the preferred writing position: the closer a plane is to the preferred writing position, the more it is preferred.
  • For example, FIG. 5 includes a plane area A and a plane area B that both meet the preset requirements of the virtual interactive interface; plane area A is closer to the preset preferred writing position N, so plane area A is selected as the virtual interactive interface.
  • the preferred writing position may also be a preset preferred writing area.
  • the virtual interactive interface is determined according to the preferred writing area.
  • Several candidate planes can be determined according to the requirements of the virtual interactive interface; the intersection area between each candidate plane and the preferred writing area is then calculated, and the candidate plane with the largest intersection area is selected as the virtual interactive interface.
  • As shown in FIG. 6, the current scene includes a plane area C and a plane area D; the intersection areas of plane area C and plane area D with the preferred writing area M are calculated, and plane D, which has the larger intersection area, is selected as the virtual interactive interface.
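  • A small sketch of this overlap-based selection, assuming purely for illustration that candidate planes and the preferred writing area are axis-aligned rectangles in the image:

```python
def rect_intersection_area(a, b):
    """Overlap area of two axis-aligned rectangles given as (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def pick_plane_by_overlap(candidates, preferred_area):
    """Choose the candidate plane with the largest intersection with the
    preferred writing area."""
    return max(candidates, key=lambda r: rect_intersection_area(r, preferred_area))

# Plane D overlaps the preferred area M more than plane C does, so D is chosen.
C, D, M = (0, 0, 2, 2), (1, 1, 4, 4), (2, 2, 5, 5)
assert pick_plane_by_overlap([C, D], M) == D
```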
  • The preferred writing position of the stylus can be determined by receiving a user designation, or, based on the physiology of the human body, the position of the tip of the stylus held by the user when the user's arm and elbow are at a predetermined angle can be taken as the preferred writing position.
  • the state of the user's holding a stylus can be collected through a camera, and when the arm and elbow are detected at a predetermined angle, the position of the pen tip at that moment can be recorded as the preferred writing position of the stylus.
  • the preferred writing area can be determined by the preferred writing position.
  • the preferred writing area may be a rectangular area, an elliptical area or an area of other shapes with the preferred writing position as the center.
  • The display unit may include a first display unit 211 and a second display unit 212, which can display the constructed 3D scene video or an image including the virtual interactive interface; the user views the generated 3D scene image by wearing the head-mounted display device.
  • A visual perception unit 22 and a depth sensing unit 23 can be provided between the first display unit and the second display unit, and a light supplement unit 24 and the like can further be included.
  • a virtual interactive interface can be displayed on the interactive image of the current scene, and the data input by the user can be displayed on the virtual interactive interface.
  • the user can write on any plane in the scene with a stylus.
  • The stylus detects the relative displacement data of its tip through the relative motion sensing unit and determines the tip's writing trajectory accordingly; from the trajectory, the user's written data is determined, including drawing strokes, text content, or virtual key operations.
  • The written data is displayed on the virtual interactive interface at the fixed position shown in FIG. 3, or the device responds to the user's virtual key operation, for example a virtual key deletion operation.
  • the camera can detect and track the identification feature of the stylus tip to determine the position of the stylus tip.
  • the initial position of the pen tip in the writing state can be determined.
  • The virtual interactive interface and the object plane in the scene occupy the same position in the image of the current scene, that is, in the interactive image.
  • The corresponding writing trajectory can then be displayed at the corresponding position of the virtual interactive interface, so that the user views the writing trajectory at the writing position through the head-mounted display device, better matching the user's usual writing habits and enhancing the writing experience with the stylus.
  • A click operation triggered by the stylus on the function button at the corresponding position can also be received.
  • the information written by the user using the stylus pen on the object plane in the scene may include text, images, editing instructions, and so on.
  • the track written by the user can be recorded, and the characters corresponding to the track can be recognized, and the recognized characters can be displayed in the virtual interactive interface.
  • the virtual interactive interface includes text and keys for editing the text.
  • the user can trigger the keys in the virtual interactive interface through a stylus to realize the editing process of the corresponding text, and display the response information of the editing process through the virtual interactive interface.
  • the user adds a comment 802 to the text included in the virtual interactive interface through the "comment" button 801.
  • The user can move the stylus to the position to be annotated and click there, so that the editing cursor blinks at the position to be annotated.
  • the virtual interactive interface can be used to display the signature data input by the user to the file in the current scene, or to copy, paste, cut, and other operations on the text in the current scene.
  • the edited text is saved, for example, the modification to the text is saved, or the annotation or signature to the text is saved.
  • The first processing unit 150 can be used to process the collected video images and the received sensor information, including acquiring the depth information of objects in the current scene, performing simultaneous localization and mapping (SLAM) based on the object depth information and the current scene image in combination with camera parameters, and determining the absolute position of the pen tip or fingertip according to the feature information of the stylus or finger in the image.
  • The first processing unit 150 or the second processing unit 230 is configured to determine the accurate position of the stylus tip from that absolute position together with the relative displacement data of the tip collected by the stylus, so as to accurately calculate the data corresponding to the trajectory written with the stylus.
  • The first processing unit 150 may also perform text recognition on a text medium in the current scene, such as text in a book or on paper, to obtain editable text content, and edit the recognized content in combination with the editing information input by the stylus. For example, when a text medium such as a book or paper is detected in the current scene, the stylus is detected to be in the pressed writing state, and the pressed position lies in the area of the text medium, recognition processing can be performed on the image of the text medium to acquire the text information it contains, and the text medium can then be edited according to the data written by the stylus, including modifying text content, adding annotations, copying a selection, translating, or triggering instructions of the control buttons of the virtual interactive interface superimposed on the current scene.
  • the text medium detection may perform comparison detection in the collected scene image according to preset text medium characteristics to obtain the text medium included in the scene image.
  • The first processing unit 150 or the second processing unit 230 is the control center of the electronic device; it connects the various parts of the electronic device through various interfaces and lines, runs or executes the software programs and/or modules stored in the storage unit, and calls the data stored there to execute the electronic device's functions and process data, thereby monitoring the electronic device as a whole.
  • The first processing unit 150 or the second processing unit 230 may include one or more processing units. Preferably, the first processing unit 150 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, while the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the first processing unit 150.
  • the first storage unit 160 or the second storage unit 240 can be used to store software programs and modules.
  • The first processing unit 150 runs the software programs and modules stored in the first storage unit 160, and the second calculation processing unit 230 runs those stored in the second storage unit 240, to execute the various functional applications and data processing of the electronic device.
  • The first storage unit 160 or the second storage unit 240 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system and the application program required by at least one function (such as a sound playback function or an image playback function), and the data storage area can store data created through use of the electronic device (such as video images and edited electronic text or images).
  • The first storage unit 160 or the second storage unit 240 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the first power supply unit 170 or the second power supply unit 250 may be a battery.
  • The first power supply unit 170 or the second power supply unit 250 may be logically connected to the calculation processing unit through a power management system, thereby realizing functions such as charging management, discharging management, and power consumption management through the power management system.
  • the relative motion sensing unit 220 may include a pressure sensing unit 2201 and a handwriting trajectory information sensing unit 2202.
  • The pressure sensing unit 2201 can detect whether the stylus is in the writing state: it senses the pressure between the pen tip and the writing plane, and when the pressure is greater than a predetermined value, the stylus is considered to be in the writing state.
  • The handwriting trajectory information sensing unit 2202 may include a laser interference unit and/or an inertial sensing unit: the relative displacement information of the pen tip is obtained through the laser interference unit, while the inertial sensing unit senses changes in the magnitude and direction of the tip's acceleration, from which the tip's movement trajectory is determined.
  • The handwriting trajectory information sensing unit may also include a displacement sensor such as a camera, which detects changes in the relative position of the pen tip through changes in the images collected by a camera mounted at the tip.
  • the inertial sensing unit may be an acceleration sensor, a gyroscope, etc., for detecting changes in the magnitude and direction of the acceleration of the pen tip.
  • The inertial sensing unit may include several units arranged at different positions on the pen.
  • For example, an inertial sensing unit can be provided at the tip and at the end of the pen, and changes in the stylus's posture can be determined from the difference between the accelerations sensed by the two units.
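  • As a heavily simplified illustration of how two accelerometer readings could serve posture estimation (quasi-static motion, the pen axis taken as the body-frame z axis, and a 12 cm sensor separation are all assumptions; the patent does not specify the computation):

```python
import numpy as np

def tilt_from_gravity(acc):
    """Tilt of the pen axis from vertical, in degrees, from a quasi-static
    accelerometer reading (m/s^2) expressed in the pen's body frame."""
    a = np.asarray(acc, dtype=float)
    return float(np.degrees(np.arccos(a[2] / np.linalg.norm(a))))

def rotational_term(acc_tip, acc_tail, r=0.12):
    """Magnitude of the rotation-induced acceleration difference between two
    sensors separated by r meters along the pen; a nonzero value indicates
    the stylus posture is changing."""
    diff = np.asarray(acc_tip, dtype=float) - np.asarray(acc_tail, dtype=float)
    return float(np.linalg.norm(diff) / r)
```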
  • the tip of the stylus is also provided with a tip feature.
  • the feature of the pen tip can be a special color mark, or a mark such as infrared reflection or fluorescence.
  • In this way, the position of the pen tip in the virtual interactive interface can be determined effectively, matching the writing position of the stylus with the display position of the content and thereby improving the user's writing experience.
  • the electronic device described in the present application is not limited to this, and may also include other components not listed.
  • the stylus 2 may also include a display screen, through which the writing state of the stylus can be displayed, or information such as the time can be displayed.
  • the second display unit may be a touch screen, and the sensitivity of the stylus to the writing state detection can be adjusted through the second display unit.
  • for example, the pressure threshold set for the stylus is F1; when the pressure sensor of the stylus detects that the current pressure at the pen tip is F2, and F2 is greater than F1, the stylus is considered to be in the writing state.
  • conversely, if the currently detected pressure of the pen tip of the stylus is F3, and F3 is less than or equal to F1, the stylus is considered not to be in the writing state.
  • the size of the pressure threshold can be adjusted through the second display unit: when the pressure threshold is increased, the sensitivity of writing-state switching decreases and a greater pressure is required to trigger the writing state; when the pressure threshold is decreased, the sensitivity increases and a smaller pressure can trigger the writing state.
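A minimal sketch of this threshold logic is given below. The class and method names are invented for illustration, and the numeric values are arbitrary placeholders rather than values from the application.

```python
class WritingStateDetector:
    """Hypothetical sketch of pressure-threshold writing-state detection
    with a user-adjustable threshold (higher threshold = lower sensitivity)."""

    def __init__(self, pressure_threshold=0.3):
        self.pressure_threshold = pressure_threshold  # F1 (assumed units: newtons)

    def set_threshold(self, new_threshold):
        # Called when the user adjusts sensitivity via the second display unit.
        self.pressure_threshold = new_threshold

    def is_writing(self, tip_pressure):
        # Writing state iff the sensed tip pressure exceeds the threshold F1.
        return tip_pressure > self.pressure_threshold

detector = WritingStateDetector(pressure_threshold=0.3)
print(detector.is_writing(0.5))   # True: F2 > F1, stylus is writing
print(detector.is_writing(0.2))   # False: F3 <= F1, stylus is raised
```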
  • in step S901, the handwriting interaction device displays an interactive image.
  • the interactive image may be an image of the current scene acquired by a visual perception module, or may be other multimedia images to be played, such as video images, PPT images, and so on.
  • simultaneous localization and mapping (SLAM) can be performed on the current scene in real time, so as to reconstruct a three-dimensional scene model, determine the position of the detected pen tip relative to objects in the scene when the pen is raised, and determine, according to the position change, the handwriting interaction area or the interaction position within the handwriting interaction area corresponding to the pen tip position.
  • the handwriting interaction area may be an area in the current scene touched by the stylus pen.
  • the handwriting interaction area is usually a flat area, for example a desktop, a wall, or another flat surface.
  • the image of the current scene can be acquired through a visible light camera, such as the first camera shown in FIG. 1.
  • multiple cameras can also be used to obtain multiple images of the current scene, and the multiple images can be spliced according to the angles and positions of the corresponding cameras, or according to the content of the images, to obtain an image covering more of the scene.
  • the image can be a video or other multimedia format.
  • the main device is provided with a visible light camera A and a visible light camera B.
  • the visible light camera A obtains a first video image P1, and the visible light camera B obtains a second video image P2.
  • by splicing P1 and P2, the spliced video image P is obtained; the spliced video image can also be enhanced.
  • the depth information corresponding to an object in the image can be determined based on the images taken by two or more cameras, or the distance between an object in the current scene and the main device can be acquired by a depth sensor.
  • the depth information of the objects in the image can be determined based on the principle of triangulation ranging: as shown in FIG. 11, the two cameras have the same parameters and are located on the same plane, and the focal length f of the two cameras and the center distance B between the two cameras are known in advance.
  • the disparity (Xr-Xt) of the object in the two images can be determined according to the positions of the object in the two captured images.
  • since the focal length f and the center distance B between the two cameras are known in advance, the depth information Z of the object can be calculated from the disparity as Z = f·B/(Xr-Xt), as shown in FIG. 11.
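The following sketch applies this triangulation relation directly; the numeric camera parameters are placeholders, not values from the application.

```python
def stereo_depth(x_r, x_t, f, B):
    """Depth from stereo disparity, Z = f * B / (Xr - Xt), where Xr and Xt
    are the pixel x-coordinates of the same point in the two rectified images."""
    disparity = x_r - x_t
    if disparity == 0:
        raise ValueError("zero disparity: point is effectively at infinity")
    return f * B / disparity

# Placeholder parameters: f = 700 px, camera center distance B = 0.06 m.
z = stereo_depth(x_r=420.0, x_t=385.0, f=700.0, B=0.06)  # 700 * 0.06 / 35 = 1.2 m
```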
  • a distance measurement module or system in the depth sensing unit, such as a time-of-flight ranging camera, a structured light ranging system, a radar ranging system, or an ultrasonic sensor, can also be used to obtain the depth information of objects in the scene.
  • when the depth information of an object is calculated according to the positions of the same object in the images captured by two or more cameras, the calculated depth information is directly matched with the object's position in the image.
  • the corresponding position of the object in the image can be determined according to the position of the measured object.
  • the depth information corresponding to the object in the image can be determined according to feature information of the distance measured by the distance measuring unit, including the change features of the distance, and its matching relationship with the object in the image.
  • a coordinate conversion matrix can be obtained that converts the coordinates of the image taken by the camera into the coordinate positions of objects in the current scene in the world coordinate system.
  • that is, the coordinate conversion matrix between the image coordinate system of the camera image and the world coordinate system of the three-dimensional space can be obtained.
  • through this matrix, the position in the world coordinate system corresponding to any coordinate position in the image captured by the camera can be obtained.
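A minimal numpy sketch of such a conversion is shown below, assuming a standard pinhole camera model; the intrinsic values and the camera pose are placeholders, since the application does not specify them.

```python
import numpy as np

# Assumed pinhole intrinsics: focal lengths and principal point, in pixels.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed camera pose in the world frame (rotation R, translation t).
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def pixel_to_world(u, v, depth_z, K, R, t):
    """Back-project pixel (u, v) with known depth Z (camera frame) into
    world coordinates: X_world = R^T (Z * K^-1 [u, v, 1]^T - t)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    p_cam = depth_z * ray                           # 3D point in camera frame
    return R.T @ (p_cam - t)                        # 3D point in world frame

pen_tip_world = pixel_to_world(350.0, 260.0, depth_z=1.2, K=K, R=R, t=t)
```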
  • the surface shape information of the object in the current scene can be determined, and the object in the current scene can be reconstructed according to the shape information.
  • the feature points in the image can be detected and the feature information in the image can be obtained.
  • the feature information in the image may be a feature point used to describe an object in the image, including, for example, a corner point describing the object.
  • the feature information detection may further include detecting a plane in the scene, and determining the plane area included in the scene.
  • the inner edge detection may be further performed on the plane area.
  • inner edge detection refers to detecting edge line features located inside the detected plane area, in order to judge whether an inner edge area exists.
  • the plane area S1 of the desktop is obtained.
  • inner edge detection is performed on the plane area S1 to obtain the inner edge area determined by the inner edges, that is, the plane area S2; it can then be determined whether the plane area S2 includes a square interaction area, or whether it includes a text medium such as a book, where the square interaction area can be detected according to the size feature or shape feature of a predetermined square interaction area.
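One way such inner edge detection might look, sketched with OpenCV (an assumption; the application does not name a library), is to run edge and contour detection restricted to the detected plane region and keep quadrilateral contours of a plausible size:

```python
import cv2
import numpy as np

def find_inner_edge_areas(image, plane_mask, min_area=5000):
    """Detect rectangular inner-edge regions (e.g. a sheet of paper or a book)
    inside a detected plane area given by a binary mask."""
    plane_only = cv2.bitwise_and(image, image, mask=plane_mask)
    gray = cv2.cvtColor(plane_only, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # too small to be a usable interaction area
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:  # quadrilateral: candidate interaction area
            regions.append(approx)
    return regions
```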
  • when a text medium is detected, text recognition processing can be performed on the image corresponding to the text medium to obtain the text content included in the text medium, and all the text content can be displayed in the virtual interactive interface.
  • the recognized text content makes it convenient for the user to edit the content of the text medium, including operations such as modifying the content, copying the content, or adding labels and remarks.
  • information related to the selected text medium content can also be obtained, for example translation information of the selected content, or other related search information of the selected content.
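As a sketch of the recognition step, assuming Tesseract via the pytesseract package (the application itself does not name an OCR engine), the image region of the text medium could be cropped and recognized like this:

```python
import cv2
import pytesseract

def recognize_text_medium(image, quad):
    """Crop the detected text-medium region (a 4-point contour from the
    inner-edge step) and run OCR on it."""
    x, y, w, h = cv2.boundingRect(quad)
    crop = image[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    # image_to_string returns the recognized text content, which could then
    # be displayed in the virtual interactive interface for editing.
    return pytesseract.image_to_string(gray)
```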
  • the handwriting interaction device acquires the first operation of the stylus according to the movement information of the stylus, and the movement information includes the relative movement information of the stylus.
  • the first operation may be a writing operation or a clicking operation in the writing state, or may also include a moving operation when the stylus is in the raised (pen-up) state.
  • the accurate relative position detection of the relative motion sensing unit can be used to obtain the trajectory information of the stylus when the stylus is in the writing state, that is, the writing trajectory of the stylus.
  • the relative motion sensing unit may include one or more of sensing devices such as a gyroscope, an inertial sensor, or an acceleration sensor.
  • the tip of the stylus can also be positioned, so that the change in the position of the written content can be determined according to the change in the position of the stylus in space.
  • the position of the pen tip of the stylus in the image can be detected, and the stylus pen tip can be located and tracked according to the position of the detected pen tip in the image, which can specifically include:
  • the depth information corresponding to the pen tip position obtained in advance according to the depth sensing unit can be used.
  • the depth information corresponding to the position of the pen tip is calculated in real time according to the real-time image obtained by the camera.
  • the spatial position of the pen tip can be calculated according to the position of the pen tip in the image and the depth information corresponding to the position.
  • the image involves the uv coordinate system that determines the positions of pixels in the image, the camera coordinate system XcYcZc determined by the camera, and the world coordinate system XYZ, where the distance between the origin of the camera coordinate system and the imaging plane is determined according to the camera parameters.
  • when the depth sensing unit is a dual camera, the center distance between the two cameras, their focal length, and the parallax of the pen tip P of the stylus between the images captured by the two cameras can be combined, as shown in FIG. 11, to calculate the distance of the pen tip P relative to the cameras, that is, the depth information of the pen tip P.
  • the distance between the pen tip P and the camera can also be measured by a depth sensor, such as a radar, a time-of-flight ranging camera, a structured light ranging system, and an ultrasonic sensor.
  • the point P in the current scene can be uniquely determined.
  • the world coordinate corresponding to the pen tip position in the world coordinate system is determined by combining the position of the pen tip in the image (point A) and the depth information corresponding to that position.
  • the world coordinates corresponding to the pen tip can be determined in real time, and the pen tip can be tracked in real time.
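Putting these pieces together, a per-frame tracking step might detect the tip feature (here by a color mask, one of the tip features the application mentions), look up its depth, and back-project to world coordinates. The HSV color range, the depth-map lookup, and the camera parameters are all illustrative assumptions:

```python
import cv2
import numpy as np

def track_pen_tip(frame_bgr, depth_map, K_inv, R, t):
    """Locate the color-marked pen tip in one frame and return its world
    coordinates, or None if the tip feature is not visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 100, 100), (85, 255, 255))  # assumed tip color
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                  # tip mark not found
    u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid of the mark
    z = float(depth_map[int(v), int(u)])             # depth at the tip pixel
    p_cam = z * (K_inv @ np.array([u, v, 1.0]))      # back-project, camera frame
    return R.T @ (p_cam - t)                         # camera frame -> world frame
```

Running this on each incoming frame yields the real-time world coordinates of the pen tip described above.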
  • in step S903, the electronic device displays a virtual interactive interface on the interactive image, and responds to the first operation on the virtual interactive interface.
  • a virtual interactive interface can be generated at any position on the interactive image, or at a fixed position on the interactive image, or one of the plane areas included in the interactive image can be selected as the virtual interactive interface.
  • the position of the pen tip in the virtual interactive interface can be determined according to the spatial position of the pen tip and the generated three-dimensional scene model.
  • the relative motion sensing unit acquires the handwriting information of the stylus in the virtual interactive interface.
  • the relative motion sensing unit may be an inertial sensor, a laser interference unit, or other sensing equipment.
  • for different types of electronic devices, the method of generating the virtual interactive interface may also be different.
  • when the electronic device is a virtual reality VR device, the virtual interaction interface can be directly displayed on the display unit of the VR device, and a correspondence can be established between positions in the virtual interaction interface and positions in the current scene; according to the absolute position of the stylus in the current scene, the position of the cursor corresponding to the stylus, or the trajectory of the stylus, on the virtual interaction interface is determined.
  • the virtual interaction interface displayed in the virtual reality VR device may be a fixed area in a virtual screen, or may also be a plane area that moves with the line of sight and is located at a predetermined distance within the range of the line of sight.
  • when the electronic device is an augmented reality AR device or a mixed reality MR device, the generation of the virtual interactive interface may specifically include:
  • the current scene image displayed on the display unit may be an image of the current scene directly transmitted through the semi-transparent display unit, or may be an image of the current scene captured by a camera and shown on the display unit.
  • when the display unit displays the image of the current scene, the displayed view of the current scene image is basically the same as the view through the semi-transparent display unit, so that after the virtual interaction interface is matched to it, the user's experience in virtual interaction and the coordination of operations in the interface are improved.
  • while displaying the current scene image captured in real time, or transmitting the current scene through it, the display unit also displays a virtual interaction interface in the current scene image to facilitate the user's interaction with the current scene image.
  • the setting of the position of the virtual interactive interface may be fixed in a fixed area in the virtual screen, or may be a plane range that moves with the line of sight and is located at a predetermined distance within the range of the line of sight.
  • the virtual interaction interface may correspond to the position of the handwriting interaction area in the current scene image: according to the position of the handwriting interaction area in real space, the handwriting interaction area in the real scene or in the real-time image is used as the virtual interaction interface, and when the position of the handwriting interaction area in the image changes, the position at which the virtual interaction interface is displayed on the display unit also changes accordingly.
  • the handwriting interaction area in the current scene image may be determined according to the position of the stylus, and the virtual interaction interface may be adjusted according to the determined handwriting interaction area.
  • the image of the current scene includes plane X and plane Y.
  • initially, the handwriting interaction area is displayed on plane X.
  • when the stylus moves to plane Y, the position of the virtual interaction interface is changed, and the corresponding virtual interaction interface is generated on plane Y.
  • the virtual interactive interface generated on plane Y can retain the previously input data, so that user input is less constrained by the environment, which greatly improves the convenience of input.
  • the virtual interaction interface may cover an area of a real text medium: the content of the real text medium (the document) can be electronically identified and displayed at the corresponding position of the real text medium, with the position of the recognized content matched to the position of the content in the real text medium.
  • the real text medium in the current scene can be directly displayed on the display unit, or the electronic text content can be displayed.
  • the virtual interaction interface can also include the physical range of the handwriting interaction area.
  • the physical range of the handwriting interaction area can be determined according to visual features such as edges in the scene.
  • the handwriting interaction area can be bounded by the edges of the real text medium, that is, the area determined by the paper.
  • when the virtual interaction interface is a fixed area in the display unit, the virtual interaction interface is displayed at a fixed position in the display unit.
  • the virtual interactive interface may include preset keys or buttons to assist the user in certain fixed settings; for example, the user can complete operations such as live broadcasting, formula input, and sketch drawing through the electronic device described in this application.
  • the writing state can be determined by the pressure sensing unit provided at the tip of the stylus: when the pressure sensed by the pressure sensing unit is greater than a predetermined value, the stylus is determined to be in the writing (pen-down) state, and when the sensed pressure is less than or equal to the predetermined value, the stylus is determined to be in the pen-up state.
  • the state of the stylus or fingertip can also be determined from the acquired absolute position of the pen tip or fingertip and its position relative to the handwriting interaction plane in the reconstructed current scene.
  • when the stylus or fingertip is in the pen-up state, the position of the fingertip or pen tip in the current scene image is determined according to the calculated absolute position of the stylus tip or fingertip and the reconstructed current scene image; from that position, the position of the trajectory of the operation input by the user can be determined, and the content in the current scene, or in the virtual interactive interface, corresponding to the user's input operation is determined according to the data corresponding to the trajectory position.
  • the image collected by the camera may not show the position of the fingertip or the pen tip, or the writing trajectory may occupy only a small range in the image, which affects the accuracy of the recognized trajectory content written by the pen tip or fingertip.
  • the relative movement sensing unit described in the embodiments of the present application can obtain the relative movement track information of the stylus.
  • the relative displacement information of the pen tip can be acquired by the handwriting trajectory information sensing unit.
  • the relative displacement information of the pen tip may be obtained through the laser interference unit in the handwriting trajectory sensing unit, and the handwriting of the stylus can be determined according to the relative displacement.
  • the change in the magnitude and direction of the acceleration of the pen tip of the stylus pen is sensed by an inertial sensing unit to determine the relative displacement information corresponding to the pen tip.
  • the handwriting trajectory sensing unit may include a camera, and the distance and direction of the pen tip movement is determined according to the change of the picture taken by the camera.
  • the pen tip pressure collected by the pressure sensing unit can also be acquired, and the line thickness of the relative movement trajectory is determined according to the magnitude of the pen tip pressure, so as to obtain more accurate relative movement trajectory information.
  • the inertial sensing unit may include an acceleration sensor and/or a gyroscope.
  • a plurality of inertial sensing units may be arranged on different parts of the stylus, for example at the pen tip and at the end of the pen respectively; the magnitude and direction of the acceleration at the tip and at the end of the pen are obtained by these inertial sensing units, and the posture information of the stylus is determined based on the changes in the magnitude and direction of the accelerations.
  • the posture of the stylus shown in the virtual interactive interface can be adjusted according to the determined posture information, so that the user gets a more realistic writing experience.
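One simple way such posture could be estimated, sketched under the assumption that the pen is momentarily still so that the accelerometer reading is dominated by gravity (the application's two-unit arrangement refines this by comparing tip and tail accelerations), is to recover tilt angles from the measured gravity vector:

```python
import numpy as np

def pen_tilt_from_gravity(accel):
    """Estimate pen tilt from a quasi-static accelerometer sample (ax, ay, az):
    when the pen is still, the reading is dominated by gravity, so its
    direction encodes the pen's orientation."""
    ax, ay, az = accel
    pitch = np.arctan2(-ax, np.hypot(ay, az))  # rotation about the y-axis
    roll = np.arctan2(ay, az)                  # rotation about the x-axis
    return np.degrees(pitch), np.degrees(roll)

pitch, roll = pen_tilt_from_gravity((0.0, 4.9, 8.5))  # pen tilted roughly 30 deg
```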
  • the step of calibrating the relative movement direction of the stylus pen may also be included.
  • the guide line can be generated in the area where the virtual interactive interface and the real image are superimposed.
  • the guide line may be a straight line or a curve of other shapes.
  • the position of the pen tip in the image is acquired through the visual perception module, the depth information of the pen tip is acquired from the visual perception module or the depth sensing unit, and the spatial position of the pen tip in the current scene is determined from the position of the pen tip in the image and its depth information; the spatial position of the pen tip is then tracked in the acquired video images to obtain the absolute movement track of the pen tip, that is, the first movement track.
  • the second movement track of the pen tip is acquired according to the movement sensing unit, and the difference between the first movement track and the second movement track is compared.
  • according to the difference, the tilt angle A of the motion sensing unit is adjusted within the adjustment range.
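A sketch of comparing the two tracks is given below; it estimates the best-fit rotation between the visually observed track L1 and the sensor track L2 with the Kabsch (SVD) method, which is one plausible way to derive a calibration correction. The application does not specify the algorithm, and the sketch assumes the two tracks have been time-aligned to the same number of samples.

```python
import numpy as np

def calibration_rotation(track_visual, track_sensor):
    """Best-fit rotation aligning the sensor trajectory L2 to the visually
    observed trajectory L1 (both N x 3 arrays), via the Kabsch method."""
    p = np.asarray(track_sensor, dtype=float)
    q = np.asarray(track_visual, dtype=float)
    p -= p.mean(axis=0)                       # center both trajectories
    q -= q.mean(axis=0)
    h = p.T @ q                               # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

# The returned rotation can then be applied to subsequent relative-motion
# samples to calibrate the direction of the motion sensing unit.
```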
  • a distance sensor may also be provided on the stylus pen.
  • the distance between the stylus and the handwriting interaction area is detected by the distance sensor, and when the distance is less than a preset value, the position of the pen tip is finely adjusted by the laser interference unit and/or the inertial sensing unit provided on the stylus, which improves the accuracy of visually positioning the pen tip and yields more accurate relative motion information.
  • the real text medium in the current scene can be identified, and the text content corresponding to the identified text medium can be saved; alternatively, the user's editing information can be received and the edited content saved.
  • the text to be edited can be determined according to the position of the stylus; for example, editing operations such as text selection, copying, retrieval, or translation can be triggered according to the pen gesture, the keys of the stylus, hand gestures, or voice commands, and the translated content can be played through the speaker.
  • the scene image information and the handwriting interaction process can be saved or transmitted to the network, which can facilitate the sharing of the interaction process with other users.
  • a user can wear the electronic device described in this application when reading a paper document, such as a book or other paper containing text.
  • the electronic device may include a head-mounted display device and a stylus pen.
  • the image 1402 including the paper document 1401, as shown in FIG. 14, is captured by the visual perception module in the head-mounted display device, and the depth information corresponding to the image is acquired by the depth sensing unit to construct the 3D model corresponding to the captured image.
  • the inner edges of the planes in the 3D model are detected, and the text medium of the paper document 1401 included in the plane 1403 in FIG. 14 is identified.
  • the image area corresponding to the text medium can be extracted, the text content in the text medium can be recognized by OCR, and the text content can be used to generate the virtual interactive interface 151 shown in FIG. 15.
  • the position of the virtual interactive interface is determined according to the position of the text medium in the image (though not limited to this: the virtual interactive interface can also be fixed at a position in the image as needed, or fixed in an area at a predetermined distance in the direction of the line of sight).
  • the virtual interactive interface also includes a toolbar 152.
  • the toolbar includes selection boxes 153 of different colors; the user can move the stylus to select different colors, add different background colors to the text content, or add different annotations to the text through the selected color, generating annotations 154 and so on.
  • the content of the toolbar is not limited to this, and can also include editing tools such as copy, paste, cut, bold, and undo.
  • the position of the pen tip in the image can be determined according to the visual features set by the stylus pen tip, including features such as special color, fluorescence, or reflection.
  • the depth information corresponding to the pen tip is determined from the depth information of the image acquired by the depth sensing unit.
  • from the position of the pen tip in the image, the depth information of the pen tip, and the coordinate conversion matrix, the spatial position corresponding to the pen tip in the current scene can be calculated.
  • according to the pressure sensing unit provided at the pen tip of the stylus, or according to the detected spatial position of the pen tip, it is determined whether the stylus is in the pen-up state or the pen-down state.
  • when the stylus is in the pen-up state, the cursor position corresponding to the pen tip in the virtual interactive interface can be determined according to the 3D model corresponding to the constructed image and the position of the pen tip in the 3D model.
  • a line perpendicular to the plane where the handwriting interaction area is located may be generated from the spatial position of the pen tip, and the intersection point of this perpendicular with the handwriting interaction area is the cursor position corresponding to the pen tip.
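This perpendicular-foot construction is simply the orthogonal projection of the tip onto the plane; a minimal sketch follows, with the plane given by a point and a unit normal, both assumed known from the scene reconstruction:

```python
import numpy as np

def cursor_on_plane(tip, plane_point, plane_normal):
    """Orthogonally project the pen tip onto the handwriting plane:
    the foot of the perpendicular is the cursor position."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)                           # ensure unit normal
    tip = np.asarray(tip, dtype=float)
    distance = np.dot(tip - np.asarray(plane_point), n) # signed tip-to-plane distance
    return tip - distance * n                           # intersection of perpendicular

cursor = cursor_on_plane(tip=[0.1, 0.2, 0.5],
                         plane_point=[0.0, 0.0, 0.0],
                         plane_normal=[0.0, 0.0, 1.0])
# cursor == [0.1, 0.2, 0.0]; the signed distance could also feed the
# pen-up / pen-down decision mentioned above.
```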
  • the cursor position corresponding to the tip of the stylus is updated in real time, so that when editing the text, the user can clearly know the content to be edited when the pen is about to be lowered.
  • a distance sensor can be provided on the tip of the stylus. When it is detected that the distance of the tip from the handwriting interaction area is less than a predetermined value, the position of the tip can be adjusted by the relative motion sensing unit, thereby improving the positioning accuracy of the tip position.
  • for example, suppose the cursor position in the virtual interaction interface corresponding to the pen tip's spatial position, as determined by the visual perception module and the depth sensing unit, is M; when the distance between the pen tip and the handwriting interaction area falls below a predetermined value, such as 0.5 cm, and the relative motion sensing unit determines that the cursor position of the pen tip in the virtual interactive interface is N, the cursor position in the virtual interactive interface can be fine-tuned from M toward N, so as to improve the accuracy of the acquired handwriting.
  • the relative motion sensing unit can determine the cursor position corresponding to the pen tip by means of laser interference, an inertial sensing unit, or images captured by a camera.
  • a calibration button can be generated in the virtual interactive interface, or the calibration button can be set on the stylus.
  • in response to the trigger information of the calibration button, a calibration line is generated on the virtual interactive interface.
  • the user can view the calibration line through the head-mounted display device.
  • the user can draw at the perceived position according to the drawing prompt of the calibration straight line, that is, along the position of the perceived straight line.
  • the absolute movement trajectory of the pen tip, that is, the first trajectory L1, can be obtained through the position of the pen tip captured by the head-mounted display device.
  • the motion trajectory of the pen tip can also be obtained through the relative motion sensing unit, yielding the second trajectory L2.
  • by comparing the two, the error occurring in the second trajectory L2 can be calibrated.
  • the direction of the direction sensor in the relative motion sensing unit is calibrated.
  • the relative motion sensing unit can be calibrated according to a predetermined time period, so that the accuracy of the track information detected by the system can be further ensured.
  • the edited text content can be saved for easy sharing with other users or for the user's own later viewing, for example viewing annotation information for part of the text in a book, or underline information for part of the text; alternatively, operations such as translation and reading aloud can be performed on the selected text using the translation, reading, and other tools included in the toolbar.
  • different function labels can be displayed on the virtual interactive interface. When a user triggers a function label, the function button corresponding to the function label is displayed.
  • for example, the edit label includes function keys such as comment, delete, and copy.
  • the virtual interactive interface 172 may display the file to be signed.
  • the file to be signed may be the image 171 corresponding to a contract document received by the first communication unit, or another document that needs to be signed.
  • the document to be signed can also be a contract paper in the current scene.
  • the image of the document to be signed can be obtained through the visual perception module in the head-mounted display device; after the image corresponding to the document to be signed is obtained, the user's signature processing is received, signature data is added to the image corresponding to the file to be signed, and the signed image is sent to other users who need to sign, so as to conveniently implement a quick stylus signature operation.
  • the signature information can be accurately generated at the signature location of the image of the document to be signed, so that the user experiences writing with the stylus on real paper while obtaining an image corresponding to a validly signed file.
  • the cursor corresponding to the stylus may be displayed on the virtual interactive interface; guided by the displayed cursor position, the user can move the stylus in the handwriting plane so that the cursor moves to the signature position, completing the signature operation on the file in the virtual interactive interface and realizing efficient and safe mixed reality office operations.
  • the user may wear the electronic device for live explanation.
  • the content that needs to be explained can be played on the display unit in the head-mounted display device.
  • the content that needs to be explained may be a multimedia file, including video, PPT, etc., or the content that needs to be explained may also be the current image collected by the visual perception unit when the user is broadcasting live.
  • the preset area of the image played by the display unit can be set as a virtual interactive interface, or the entire screen played by the display unit is a virtual interactive interface.
  • the virtual interactive interface may be set as a transparent layer.
  • the virtual interactive interface and the image played by the display unit can be synthesized and then sent to other users, and the audio during the explanation can also be synthesized and sent, so that other users receive a vivid explanation.
  • the writing state of the stylus can be detected.
  • the cursor corresponding to the pen tip can be displayed on the virtual interactive interface.
  • the writing trajectory of the stylus is acquired according to the relative motion sensing unit of the stylus, and the corresponding track content is displayed in the virtual interactive interface.
  • visual perception is used to determine the cursor position of the stylus in the pen-up state, while relative motion sensing is used to determine the corresponding handwriting when the pen is down, making it convenient to perform operations such as marking in the handwriting interaction area.
  • the virtual interactive interface, tag information, and the collected user's voice signal can be shared with other users.
  • formula calculations, drawing drawings, etc. can also be performed through the electronic device described in this application.
  • a blank area 191 corresponding to the handwriting interaction area may be displayed in the virtual interactive interface, and tools 192 for formula calculation or drawing may be arranged around the blank area.
  • FIG. 20 shows a structural block diagram of an apparatus provided in an embodiment of the present application. For ease of description, only the parts related to the embodiment of the present application are shown.
  • the device includes:
  • the image display module 2001 is used for the electronic device to display interactive images;
  • the operation information obtaining module 2002 is configured to obtain the first operation of the stylus pen by the electronic device according to the movement information of the stylus pen, wherein the movement information includes the relative movement information of the stylus pen;
  • the response module 2003 is configured to display a virtual interactive interface on the interactive image by the electronic device, and respond to the first operation on the virtual interactive interface.
  • the disclosed device and method may be implemented in other ways.
  • the system embodiment described above is merely illustrative.
  • the division of the modules or units is only a logical function division, and in actual implementation there may be other division methods; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the computer program can be stored in a computer-readable storage medium, and when executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may at least include: any entity or device capable of carrying the computer program code to the handwriting input device/mixed reality interactive device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a floppy disk, or a CD-ROM.
  • in some jurisdictions, according to legislation and patent practice, computer-readable media cannot include electrical carrier signals and telecommunication signals.

Abstract

The present application is applicable to the technical field of handwriting interaction, and provides an interaction method for an electronic device, comprising: an electronic device displays an interactive image; the electronic device obtains a first operation of a stylus according to movement information of the stylus, wherein the movement information comprises relative movement information of the stylus; and the electronic device displays a virtual interactive interface on the interactive image and responds to the first operation on the virtual interactive interface. By obtaining the relative movement information of the stylus, fine-grained relative displacement detection can be performed on the stylus, so that the writing trajectory of the stylus can be accurately determined and accurately restored, thus facilitating accurate interaction operations.

Description

Electronic device and interaction method therefor
This application claims priority to Chinese patent application No. 202010407584.X, filed with the State Intellectual Property Office on May 14, 2020 and entitled "Electronic device and interaction method therefor", the entire content of which is incorporated herein by reference.
Technical Field
This application belongs to the technical field of handwriting interaction, and in particular relates to an electronic device and an interaction method therefor.
Background
Industries such as virtual reality, augmented reality, and mixed reality are rising with the development of 5G transmission technology, display technology, and interaction technology. For example, mixed reality technology generates a new visual environment by merging the real and virtual worlds: by introducing real scene information into the virtual environment, an interactive feedback information loop is built between the virtual world, the real world, and the user, enhancing the realism of the user experience.
Emerging interaction technologies for mixed reality scenes, such as data gloves, are well suited to gaming and entertainment scenes, but their ability to input text content is weak. Meanwhile, detecting the pen tip by collecting stylus images with a visual algorithm cannot accurately determine the pen tip position, and therefore cannot accurately restore the writing trajectory.
Summary
The embodiments of the present application provide an electronic device and an interaction method therefor, which can solve the prior-art problem that the pen tip position cannot be accurately determined when interactive input is performed with a stylus.
In a first aspect, an embodiment of the present application provides an interaction method for an electronic device, comprising: the electronic device displays an interactive image, where the interactive image may be an image of the current scene or an image of a multimedia file being played; the electronic device obtains a first operation of a stylus according to movement information of the stylus, where the movement information includes relative movement information of the stylus, and the first operation may include a click operation, a writing operation, or a pen-tip movement operation performed while the stylus is raised; and the electronic device displays a virtual interactive interface on the interactive image and responds to the first operation on the virtual interactive interface, where, when the first operation is a click operation, a click instruction can be triggered in the virtual interactive interface, and when the first operation is a writing operation, the corresponding writing trajectory or written content can be displayed in the virtual interactive interface.
The electronic device is used to display the interactive image and the virtual interactive interface; it can receive interaction data input by an input device such as a stylus, and respond to the interaction data in the displayed virtual interactive interface. The electronic device may be a head-mounted display device, or an electronic device such as virtual reality glasses, augmented reality glasses, or mixed reality glasses. The relative movement information can be collected by sensing devices such as gyroscopes, inertial sensors, and acceleration sensors. By acquiring the relative movement information of the stylus, the electronic device can perform finer relative displacement detection on the stylus, so that the writing trajectory of the stylus can be determined and restored more accurately, facilitating more refined interaction operations. In a possible implementation, the sensing device is arranged at the tip of the stylus; by collecting the relative movement information at the tip position, the writing trajectory of the stylus can be obtained more accurately.
In one implementation, the interactive image is an image of the current scene, and the writing position of the stylus can be determined through the correspondence between the image of the current scene and the interactive image. This may include: acquiring the position of the stylus in the current scene according to the image of the current scene, where the position of the stylus in the current scene can be represented by the relative positional relationship of the stylus with respect to the handwriting plane; and determining, according to the position of the stylus in the current scene, the handwriting position of the stylus tip in the virtual interactive interface, where the corresponding position of the pen tip in the virtual interactive interface can be determined from the relative positional relationship of the stylus with respect to the handwriting plane, combined with a preset mapping relationship.
While determining the writing trajectory of the stylus, the position of the stylus in the current scene can also be located, so that the intended writing position of the raised stylus can be tracked. For example, the corresponding position of the stylus in the virtual interactive interface can be tracked in real time by moving a virtual icon, such as a virtual stylus or a virtual cursor, thereby improving the convenience of writing for the user.
Determining the position of the stylus in the current scene may include: acquiring an image of the current scene with a camera and determining the position of the stylus in the current scene directly from that image; or additionally acquiring depth information of the current scene and, combined with the depth information, acquiring the position of the stylus in the current scene more accurately.
When the image of the current scene is collected by the camera, the initial position of the stylus in the image of the current scene can be recognized according to preset stylus features. For example, the initial position of the stylus can be determined from the position of the stylus tip in the image of the current scene relative to the positions of other reference information in the image. The other reference information may be the edges or vertices of the writing plane, or characters or patterns included in the writing plane. Alternatively, in one implementation, the image of the current scene and the depth information of the current scene can be combined to determine the position of the pen tip and the position of the writing plane, and thus the position of the pen tip relative to the writing plane; from this relative position, the corresponding position of the writing tip in the virtual interactive interface is obtained.
One implementation of determining the position of the stylus in the current scene may include: detecting the tip feature of the stylus in the image of the current scene and determining the position of the stylus tip in the image; obtaining the depth information matched to the pen tip according to the matching relationship between the image of the current scene and the depth information; and determining, from the depth information of the pen tip, the spatial coordinates of the pen tip, which, combined with the determined spatial coordinates of the handwriting plane, yield the position of the pen tip relative to the handwriting plane.
By acquiring the image of the current scene and combining it with the tip feature, the position of the pen tip in the image of the current scene can be determined. Through the image of the current scene and the corresponding depth information, the depth information of objects in the image can be determined; combined with the depth information of the stylus tip, the position of the stylus tip relative to the writing plane can be determined, and from this relative position, the writing position of the stylus in the virtual interactive interface can be effectively determined.
In one way of setting the tip feature of the stylus, the tip feature includes one or more of a color feature, a reflected light feature, or a fluorescence feature, including combinations such as color and reflected light, reflected light and fluorescence, color and fluorescence, or color, fluorescence, and reflected light together.
By setting the tip of the stylus to a specific color, the position of the tip in the image of the current scene can be quickly obtained through color detection. The color of the pen tip may differ from the color of the writing plane. In one implementation, the color of the current writing plane can be detected and the color of the pen tip adjusted according to it, so as to adapt to the tip detection requirements of different writing planes. Alternatively, the pen tip may be provided with a layer of reflective material; light reflected by this material is detected in the image of the current scene to determine the tip position. Alternatively, the pen tip may be provided with a layer of fluorescent material, and the tip position is determined by detecting the fluorescent position in the image of the current scene.
One implementation of determining the handwriting position of the pen tip in the virtual interactive interface may include: when the stylus is raised, acquiring the position of the stylus and the position of the writing plane in the current scene, and determining the relative positional relationship of the stylus with respect to the writing plane; and determining the position of the stylus in the virtual interactive interface according to this relative positional relationship, combined with a predetermined mapping relationship between the writing plane and the virtual interactive interface.
By presetting the positional relationship between the writing plane and the virtual interactive interface, when the stylus is raised, the position of the stylus relative to the writing plane can be determined by acquiring the position of the stylus and the position of the writing plane. Combined with the mapping relationship between the writing plane and the virtual interactive interface, the position of the stylus in the virtual interactive interface can then be determined. By detecting the position of the stylus while it is raised, the change of the writing position in the virtual interactive interface can be determined, making it easy for the user to see in real time where the stylus will write in the virtual interactive interface when the pen is lowered.
In one way of generating the interactive image, the interactive image is collected by a camera, or presented by a display device with a predetermined transparency; that is, according to the set transparency, the image of the real scene is allowed to partially pass through the display device, and the user can view the image of the real scene through the display device with the predetermined transparency.
The interactive image collected by the camera may be an image of the user's current scene. The image content of the interactive image may correspond to the naked-eye field of view of the user when not wearing the electronic device, so that it can be mixed with the virtual interactive interface to obtain a mixed reality image. Alternatively, the interactive image may be displayed on a display device with a predetermined transparency, and the transparency may be related to scene information. For example, the transparency of the display device may gradually decrease as the scene light intensity increases and gradually increase as the scene light intensity decreases, so that an interactive image with more comfortable brightness can be obtained.
The display of the virtual interactive interface may include displaying the virtual interactive interface in a predetermined area of the interactive image, that is, the display position of the virtual interactive interface is fixed, which is convenient for recording general information about the picture, such as the feeling, mood, or date corresponding to the image of the current scene; alternatively, the position of the virtual interactive interface may be determined according to a plane area in the current scene, which helps align the writing position with the display position of the writing trajectory.
By displaying the virtual interactive interface in a fixed area of the interactive image, the interactive image can be chosen more flexibly; for example, the virtual interactive interface can be displayed in an interactive image that contains no writable plane, and the user can write in an area outside the interactive image while the written information is displayed in the interactive image. In this case, the position information of the stylus can be collected by sensors such as a camera.
When the position of the virtual interactive interface is determined from a plane area in the current scene, the written information can be displayed at the position of the stylus, making the correspondence between the writing position and the displayed content better match the user's habits and improving convenience of use.
Determining the position of the virtual interactive interface according to a plane area in the current scene may include: screening the plane areas in the current scene according to preset plane area requirements; and determining the shape and/or position of the virtual interactive interface according to the shape and/or position of the screened plane area. The preset requirements may include one or more of a plane area size range, the orientation of the plane area, or the distance of the plane area from the camera.
For example, by presetting a size range for the plane area, plane areas of small size can be filtered out, and a plane area of larger size, or one meeting the size range requirement, can be selected automatically, which makes writing easier for the user. Through the orientation requirement, a plane area facing upward, or facing the user, can be selected to improve writing convenience. By screening on the distance between the plane area and the camera, plane areas that are far away and inconvenient for writing can be filtered out, leaving plane areas more convenient for the user to write on.
In this screening implementation, when multiple plane areas meet the plane area requirements, the method further includes: selecting, among the multiple plane areas, the plane area that better matches a predetermined preferred writing position of the stylus.
Selecting a plane area according to the determined preferred writing position may include: obtaining the distances between the multiple candidate plane areas and the preferred writing position, and selecting the closer plane area.
Alternatively, the preferred writing area corresponding to the preferred writing position is obtained; the intersection areas between the multiple candidate plane areas and the preferred writing area are obtained; and the candidate plane area with the larger intersection area is selected.
The preferred writing position can be set by the user, or obtained through statistical analysis of the user's writing habits, that is, the user's writing images are analyzed to determine the positions where the user writes most often, and the determined position is taken as the preferred writing position.
When determining the distance between a writing plane and the preferred writing position, it may be the closest distance between the edge of the writing plane and the preferred writing position, or the distance between the center of the writing plane and the preferred writing position. The size of the preferred writing area can be specified by the user, or the preferred writing range can be obtained automatically through statistics on the user's writing habits.
In an implementation manner, the method further includes: displaying an edit button in the virtual interactive interface. Responding to the writing information in the virtual interactive interface includes: when a click operation of the stylus is detected at the position corresponding to the edit button, executing the function corresponding to the edit button.
Virtual buttons are displayed in the virtual interactive interface, and the function corresponding to a virtual button is executed when the stylus clicks it. For example, the virtual buttons may include buttons for brush shape, brush function, and the like; by clicking a button, the image drawn with the stylus can be edited, or the information or image generated from the image of the current scene can be edited.
The implementation that displays virtual buttons in the virtual interactive interface may further include: saving the edited image or text content and/or sending it to other users; or selecting text content in the virtual interactive interface, sending a request to the network to search for the selected text content, and receiving and displaying the result of the request in the virtual interactive interface.
By triggering a virtual button, the interactive image and the image of the virtual interactive interface can be saved, and the saved images can be transmitted to other users over the network in real time. Alternatively, the electronic device may also collect audio information in real time and transmit the audio together with the saved images to other users, enabling convenient online teaching and explanation.
By selecting text content in the virtual interactive interface, triggering a search request with a virtual button, and receiving and displaying the corresponding result, the user can interact with the network in real time. The text content of the virtual interactive interface may be the content of a text medium in the real scene automatically recognized by the electronic device, which makes querying the real scene more convenient; or it may be text content included in a multimedia image.
In an implementation for calibrating the writing accuracy of the electronic device, the method includes: acquiring, through the visual perception module, images of the pen tip while a drawing is being executed, and generating a first trajectory from those images; acquiring, through the relative motion sensing unit, the relative displacement information of the pen tip while the same drawing is being executed, and generating a second trajectory from the relative displacement information; and comparing the difference between the first trajectory and the second trajectory, and calibrating the relative motion information according to the difference.
The first trajectory is generated by the visual perception module and the second trajectory from the relative motion information; by comparing the differences between them, the relative motion information can be calibrated, for example by adjusting the sensing device that acquires the relative displacement information, thereby improving the accuracy of the relative motion measurement. When the first trajectory is acquired through the visual perception module, a three-dimensional scene model can be built, and the position of the pen tip in that model can be determined from the position of the stylus in the interactive image together with the depth information of the interactive image, yielding a more accurate first trajectory.
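By way of illustration only, the following Python sketch shows one way such a calibration could be performed, assuming both trajectories have been resampled at the same detection instants as N x 2 arrays; the per-axis least-squares scale/offset model and the function name are assumptions of this sketch, not a prescription of the embodiment.

```python
import numpy as np

def calibrate_displacement(visual_traj, motion_traj):
    """Fit a per-axis scale and offset mapping the trajectory integrated
    from relative-displacement data (second trajectory) onto the visually
    observed trajectory (first trajectory). Both are N x 2 arrays sampled
    at the same detection instants."""
    scales, offsets = [], []
    for axis in range(visual_traj.shape[1]):
        # Least-squares fit of: visual = scale * motion + offset.
        A = np.stack([motion_traj[:, axis],
                      np.ones(len(motion_traj))], axis=1)
        scale, offset = np.linalg.lstsq(A, visual_traj[:, axis],
                                        rcond=None)[0]
        scales.append(scale)
        offsets.append(offset)
    return np.array(scales), np.array(offsets)
```

The residual between the corrected second trajectory and the first trajectory after the fit would indicate how much sensor deviation remains uncalibrated.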
In a second aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the interaction method for an electronic device according to any one of the first aspect.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the interaction method for an electronic device according to any one of the first aspect.
The electronic device of the second aspect and the readable storage medium of the third aspect correspond to the interaction method for an electronic device of the first aspect.
Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic block diagram of the structure of an electronic device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a use state of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the display of a virtual interactive interface according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the display of another virtual interactive interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of determining a virtual interactive interface according to a preferred writing position according to an embodiment of the present application;
FIG. 6 is another schematic diagram of determining a virtual interactive interface according to a preferred writing position according to an embodiment of the present application;
FIG. 7 is a schematic layout diagram of a head-mounted display device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a virtual interactive interface according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of the implementation of an interaction method for an electronic device according to an embodiment of the present application;
FIG. 10 is a schematic diagram of image acquisition by a main device according to an embodiment of the present application;
FIG. 11 is a schematic diagram of depth information calculation according to an embodiment of the present application;
FIG. 12 is a schematic diagram of detection on a current scene image according to an embodiment of the present application;
FIG. 13 is a schematic diagram of determining a pen tip position according to an embodiment of the present application;
FIG. 14 is a schematic diagram of an image captured by a visual perception device according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a virtual interactive interface according to an embodiment of the present application;
FIG. 16 is a schematic diagram of device calibration according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a screen of an electronic device used for signing according to an embodiment of the present application;
FIG. 18 is a schematic diagram of a screen of an electronic device used for live streaming according to an embodiment of the present application;
FIG. 19 is a schematic diagram of a screen of an electronic device used for writing according to an embodiment of the present application;
FIG. 20 is a schematic diagram of an interaction apparatus of an electronic device according to an embodiment of the present application.
Detailed Description of the Embodiments
In the following description, specific details such as particular system structures and technologies are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present application.
The terms used in the following embodiments are intended only to describe specific embodiments and are not intended to limit the present application. As used in the specification and the appended claims of the present application, the singular forms "a", "an", "said", "the above", "the", and "this" are intended to also cover expressions such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes an association relationship between associated objects and indicates that three relationships may exist: for example, A and/or B may represent A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects.
The interaction method for an electronic device provided by the embodiments of the present application can be applied to augmented reality (AR)/virtual reality (VR) devices, mobile phones, tablet computers, wearable devices, in-vehicle devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), and other devices; the embodiments of the present application impose no restriction on the specific type of the electronic device.
For example, the electronic device may be a station (STATION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, an in-vehicle device, an Internet-of-Vehicles terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio device, a wireless modem card, a television set-top box (STB), customer premises equipment (CPE), and/or another device for communicating over a wireless system, as well as a next-generation communication system, for example a mobile terminal in a 5G network or a mobile terminal in a future evolved Public Land Mobile Network (PLMN).
By way of example and not limitation, when the electronic device is a wearable device, the wearable device may also be a general term for wearable equipment developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device worn directly on the body or integrated into the user's clothes or accessories. A wearable device is not merely a hardware device; it realizes powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include devices that are full-featured and large-sized and can realize complete or partial functions without relying on a smartphone, such as smart watches, head-mounted display devices, or smart glasses, as well as devices that focus on only a certain type of application function and need to be used together with other equipment such as smartphones, for example various smart bracelets and smart jewelry for vital-sign monitoring.
In an implementation manner, the electronic device includes a main device and a stylus. FIG. 1 is a schematic block diagram of part of the structure of an electronic device provided in an embodiment of the present application. Referring to FIG. 1, the electronic device includes a head-mounted display device 1 as the main device and a stylus 2 that can establish a connection with the main device. The head-mounted display device 1 includes a first communication unit 110, a visual perception module 120, a depth sensing unit 130, a display unit 140, a first computation and processing unit 150, a first storage unit 160, and a first power supply unit 170; the stylus 2 includes a second communication unit 210, a relative motion sensing unit 220, a second computation and processing unit 230, a second storage unit 240, and a second power supply unit 250. Those skilled in the art can understand that the structure of the electronic device shown in FIG. 1 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, a combination of certain components, or a different arrangement of components.
The components of the electronic device are described in detail below with reference to FIG. 1:
The first communication unit 110 can communicate with the second communication unit 210. The first communication unit 110 and the second communication unit 210 may use short-range communication circuits, including but not limited to Bluetooth communication circuits, infrared communication circuits, and Wi-Fi communication circuits. The first communication unit 110 can establish a connection link with the second communication unit 210. The first communication unit 110 or the second communication unit 210 may also establish communication connections with other electronic devices. For example, the first communication unit 110 and the second communication unit 210 can establish a communication link with devices such as smartphones and computers, send data collected or processed by the head-mounted display device 1 or the stylus 2 to those devices, or receive, over that link, data sent by those devices.
For example, in the schematic diagram of the use state of the electronic device shown in FIG. 2, the user can wear the head-mounted display device 1 fixed on the head. The head-mounted display device 1 can capture images through a camera and display the captured images of the current scene in real time on its display screens. For example, the captured image can be divided into a first image and a second image, displayed respectively on the first display screen and the second display screen of the head-mounted display device, so that the user sees the real image of the current scene through the head-mounted display device.
Alternatively, the display screen of the head-mounted display device may be a screen of predetermined transparency, for example a semi-transparent display screen. Through the light transmitted by the display screen, the user can view the picture of the current scene in real time.
At the same time, the display screen of the head-mounted display device can also display a virtual picture, which may include a virtual interactive interface. For example, the virtual interactive interface may display the handwriting written with the stylus held by the user shown in FIG. 2. Alternatively, the virtual interactive interface may display text content obtained by performing text recognition on a text medium in the current scene. Alternatively, the virtual interactive interface may also include an application interface opened by the user, and screen content such as the editing of written text and images.
The head-mounted display device 1 is provided with the first communication unit 110, and the stylus 2 is provided with the second communication unit 210. The first communication unit 110 and the second communication unit 210 may establish a link via Bluetooth.
The stylus 2 is provided with a sensing device for detecting relative movement and can collect relative displacement data of the stylus. The relative displacement data may include the position of the stylus tip relative to the previous detection instant, and the distance and direction the tip has moved at the current detection instant; the collected relative displacement data are sent to the head-mounted display device over the link. That is, when the user writes text, draws, or performs other editing actions with the stylus, if the stylus is detected to be in the writing state, detection is performed at the preset interval between every two adjacent detection instants, and the relative displacement data of each detection instant with respect to the previous one are obtained, i.e., the relative displacement distance and direction of the current detection instant relative to the previous one. From the relative displacement data, combined with the tip position at the previous detection instant, the tip position at the current detection instant can be determined. Once the tip position at every detection instant has been determined, the tip trajectory of the user writing with the stylus is obtained.
In an implementation manner, the stylus 2 can also detect, through a pressure sensor, whether the stylus is in the writing state or the lifted state, and send the detected stylus state information to the head-mounted display device over the link; alternatively, the stylus may itself collect the relative displacement data of the stylus with respect to the writing plane according to the stylus state information.
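As a rough illustration of this integration, the following sketch accumulates relative-displacement samples into an absolute tip track; the sample format (dx, dy, pressure) and the threshold value are assumptions made for the sketch, not values specified by the embodiment.

```python
PRESSURE_THRESHOLD = 0.5  # assumed unit-less "writing state" threshold

def integrate_tip_track(initial_pos, samples):
    """Accumulate relative-displacement samples, taken at the fixed
    detection interval, into an absolute pen-tip track. Displacement is
    only integrated while the tip pressure indicates the writing state;
    while lifted, the tip would be re-anchored visually instead."""
    x, y = initial_pos
    track = [(x, y)]
    for dx, dy, pressure in samples:
        if pressure < PRESSURE_THRESHOLD:
            continue  # lifted state: skip integration
        x += dx
        y += dy
        track.append((x, y))
    return track
```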
The visual perception module 120 may be a visible light camera. The camera can capture information about the external environment to generate a video stream, providing data for mapping the current scene, stylus localization, and the recognition of actions such as gestures and stylus strokes. In an implementation manner, multiple cameras can be arranged to capture images of the scene from different field-of-view angles, providing multi-view stereo vision for mapping the current scene. Mapping the current scene may include determining one or more of: the objects included in the current scene, the images of the objects, the sizes of the objects, the positions of the objects in the current scene, and the distances between the objects and the user.
In an implementation manner, the main device includes a first camera 1201 and a second camera 1202, where the first camera may be a visible light camera and the second camera may be a visible light camera or an infrared camera. The depth information of an object in the image is determined from the images captured by the first camera and the second camera, combined with the camera parameters of the two cameras, including the camera intrinsic parameters and extrinsic parameters. The determined depth information of the objects in the image can then be used, when mapping the current scene, to determine the position of an object in the image of the current scene, or to determine the distance between the object and the user.
In an implementation manner, the head-mounted display device 1 may further include a supplementary light unit 180. The supplementary light unit 180 can provide visible supplementary light. When the second camera is an infrared camera, the supplementary light unit 180 can also provide infrared supplementary light. Infrared supplementary light, used with the infrared camera for image capture, can effectively improve the accuracy and robustness of the visual positioning of the pen tip without affecting the perceived appearance of the environment.
The depth sensing unit 130 is used to detect the distance between an object in the current scene and the head-mounted display device 1. The depth sensing unit may include two or more cameras. Alternatively, the depth sensing unit 130 may include one or more ranging units such as a time-of-flight camera, a structured-light ranging system, a radar, or an ultrasonic sensor.
With the depth information of objects detected by the depth sensing unit 130, combined with the images captured by the camera, the current scene can be modeled in 3D, planes in the current scene can be detected, and simultaneous localization and mapping (SLAM) of the scene can be performed.
When the depth sensing unit 130 consists of two cameras, the depth information of an object in the image is determined from the images captured by the two cameras combined with the parameter information of the two cameras.
When the depth sensing unit 130 is a single camera, a light beam can be emitted toward the object and its emission time recorded. When the camera captures the beam reflected by the object, the reception time is recorded; from the time difference between emission and reception, combined with the propagation speed of light, the distance between the object and the camera can be calculated.
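A minimal sketch of this time-of-flight computation follows. Note the division by two: the recorded interval covers the beam's round trip to the object and back, so the one-way distance is half the traveled path.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(t_emit, t_receive):
    """Object-to-camera distance from the round-trip time of flight
    of an emitted and reflected light beam (times in seconds)."""
    return SPEED_OF_LIGHT * (t_receive - t_emit) / 2.0
```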
The display unit 140 can be used to display an interactive image; the interactive image may be a video image captured by the camera, or may also include a virtual interactive interface. Alternatively, the display unit 140 is a display device of predetermined transparency. The predetermined transparency may correspond to a semi-transparent display device through which the user can see the current scene, with the virtual interactive interface superimposed on it. The predetermined transparency may also change automatically according to the brightness of the current scene: for example, when the brightness of the current scene increases, the transparency of the display device can be reduced, and when the brightness decreases, the transparency can be increased, so that the user views an interactive image of suitable brightness through the display unit.
FIG. 3 is a schematic diagram of the display of the virtual interactive interface provided by an embodiment of the present application. In the implementation shown in FIG. 3, the virtual interactive interface 302 can be superimposed on an interactive image, for example the current scene image 301. The virtual interactive interface 302 may be an area that displays interactive content, such as an area displaying the trajectory of the stylus and the cursor corresponding to the stylus. In a possible implementation, the virtual interactive interface may also include virtual image information. For example, the virtual image information may include a text editing interface, which may include text content recognized from a text image included in the scene image, as well as virtual buttons that can be used to edit the text content. The user can use the stylus to draw, sign, input text, or edit content in the virtual interactive interface.
The display position of the virtual interactive interface may be a predetermined position in the scene image. For example, as shown in FIG. 3, the virtual interactive interface can be superimposed in a predetermined area at the lower right corner of the scene image. Alternatively, the position of the virtual interactive interface may be determined from the plane information detected in the scene image.
As shown in FIG. 4, it can be determined, according to preset plane-area requirements, whether a plane area in the current scene satisfies those requirements. If a plane area in the current scene satisfies the predetermined plane-area requirements, the corresponding virtual interactive interface can be generated directly from that plane area. For example, in FIG. 4 the desktop 303 in the image of the current scene is detected to satisfy the predetermined plane-area requirements, so the position of the virtual interactive interface can be determined from the position of the desktop in the image of the current scene. The shape of the virtual interactive interface may also be determined from the shape of the desktop.
The preset plane-area requirements may include one or more of conditions such as a plane size range, the plane orientation, and the distance from the plane to the camera or the user. The plane size condition may include a set minimum plane size. The plane orientation may include facing upward or facing the user, or may include a range of tilt angles of the plane, for example a range from horizontal up to perpendicular to the horizontal plane. To make it easy for the user to interact, through the stylus, with a plane in the scene image, the distance from the plane to the camera or the user may be required to be smaller than a preset distance value. For example, the conditions may include the plane size, the plane orientation, and the distance from the plane to the camera; combining size screening, orientation screening, and distance screening yields a better plane.
In one implementation scenario, as shown in FIG. 5, the scene includes multiple planes, and multiple plane areas all meet the requirements of the virtual interactive interface. In that case, the better plane among them can be selected as the virtual interactive interface according to the predetermined preferred writing position of the stylus, with the selection based on the distance between each plane and the preferred writing position: the closer a plane is to the preferred writing position, the higher its priority. For example, FIG. 5 includes a plane area A and a plane area B that both meet the preset requirements of the virtual interactive interface; plane area A is closer to the preset preferred writing position N, so plane area A is selected as the virtual interactive interface.
In a possible implementation, as shown in FIG. 6, the preferred writing position may also be a preset preferred writing area. When the virtual interactive interface is determined from the preferred writing area, multiple candidate planes can first be determined according to the requirements of the virtual interactive interface; the area of the intersection between each candidate plane and the preferred writing area is then computed, and the candidate plane with the largest intersection area is selected as the virtual interactive interface. For example, in FIG. 6 the current scene is detected to include a plane area C and a plane area D; the intersection areas of plane area C and plane area D with the preferred writing area M are computed, and plane D, which has the larger intersection area, is selected as the virtual interactive interface.
The preferred writing position of the stylus can be determined by receiving a user designation, or, based on the characteristics of human physiology, the position of the tip of the stylus held by the user when the user's arm and elbow are at a predetermined angle can be determined as the preferred writing position. For example, the state of the user holding the stylus can be captured by the camera, and when the arm and elbow are detected at the predetermined angle, the tip position at that moment is recorded as the preferred writing position of the stylus. The preferred writing area can be determined from the preferred writing position; for example, it may be a rectangular area, an elliptical area, or an area of another shape centered on the preferred writing position.
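A minimal sketch combining these screening and selection rules follows. The plane representation (dictionaries with area, camera-distance, normal, and center fields) and all threshold values are illustrative assumptions, not parameters fixed by the embodiment.

```python
import numpy as np

def select_plane(planes, preferred_pos, min_area=0.04, max_dist=1.2):
    """Screen candidate planes by the preset requirements (minimum area
    in m^2, maximum camera distance in m, roughly upward/user-facing
    orientation), then pick the screened plane whose center is closest
    to the predetermined preferred writing position."""
    candidates = [
        p for p in planes
        if p["area"] >= min_area
        and p["camera_distance"] <= max_dist
        and p["normal"][2] > 0  # simplification: z-up world, facing up
    ]
    if not candidates:
        return None
    return min(candidates,
               key=lambda p: np.linalg.norm(np.asarray(p["center"]) -
                                            np.asarray(preferred_pos)))
```

The intersection-area variant of FIG. 6 would replace the distance key with the overlap area between each candidate plane and the preferred writing area, maximizing instead of minimizing.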
When the electronic device includes a head-mounted display device, as in the schematic layout of the head-mounted display device shown in FIG. 7, the display unit may include a first display unit 211 and a second display unit 212, which can respectively display video of the constructed 3D scene or images including the virtual interactive interface; by wearing the head-mounted display device, the user can watch the generated 3D scene images. A visual perception unit 22 and a depth sensing unit 23 can be arranged in the middle between the first display unit and the second display unit, and a supplementary light unit 24 or the like may also be included.
Through the display device, the virtual interactive interface can be displayed on the interactive image of the current scene, and the data input by the user can be displayed on the virtual interactive interface.
In an implementation manner, the user can write with the stylus on any plane in the scene. The stylus detects the relative displacement data of its tip through the relative motion sensing unit and determines the writing trajectory of the tip; from the writing trajectory, the user's written data are determined, including drawing strokes, text content, or virtual button operations. The written data are displayed in the fixed-position virtual interactive interface shown in FIG. 3, or the user's virtual button operation is responded to, for example a delete operation on a virtual button.
In an implementation manner, when the stylus is in the lifted state, the camera can detect and track the identification feature of the stylus tip to determine the tip position. By detecting and tracking the tip position while the stylus is lifted, the initial position of the tip when it enters the writing state can be determined. As shown in FIG. 4, the virtual interactive interface and the plane of the object in the scene occupy the same position in the image of the current scene, i.e., the interactive image. By detecting the initial position of the stylus in the writing state and combining it with the relative motion information of the tip, the corresponding writing trajectory can be displayed at the corresponding position of the virtual interactive interface, so that the user, through the head-mounted display device, sees the writing trajectory at the writing position. This matches the user's usual writing habits better and improves the writing experience with the stylus. Alternatively, according to the initial position of the writing state, a click operation triggered by the stylus on the function button corresponding to that position can be received.
The information written by the user with the stylus on an object plane in the scene may include text, images, editing instructions, and so on.
For example, when the user writes text on an object plane, the trajectory written by the user can be recorded, the text corresponding to the trajectory recognized, and the recognized text displayed in the virtual interactive interface.
Alternatively, the virtual interactive interface includes text and buttons for editing the text. The user can trigger the buttons in the virtual interactive interface with the stylus to edit the corresponding text, and the response to the editing can be displayed in the virtual interactive interface. In the schematic diagram of the virtual interactive interface shown in FIG. 8, the user adds an annotation 802 to the text included in the virtual interactive interface through the "annotate" button 801. When annotating text in the virtual interactive interface through the "annotate" button, the user can move the stylus to the position to be annotated and click there, so that the editing cursor blinks at that position. Moving the stylus to the position of the annotate button 801 and triggering it, for example by clicking, triggers the annotation instruction corresponding to the button, and the annotation 802 is generated at the cursor position. While the annotation is active, the annotation content added by the user can be displayed in the annotation box.
Alternatively, the virtual interactive interface can be used to display signature data input by the user on a document in the current scene, or operations such as copying, pasting, and cutting text in the current scene. In a possible implementation, text recognition can also be performed on a text medium in the current scene, such as the text image corresponding to a book or paper, to obtain the text content corresponding to the text image. After the text content is edited, the edited text is saved, for example saving modifications to the text, or saving annotations or a signature on the text.
The first processing unit 150 can be used to process the captured video images and the received sensing information, including acquiring the depth information of objects in the current scene, performing SLAM simultaneous localization and mapping based on the depth information of the objects and the image of the current scene combined with the camera parameters, and determining the absolute position of the pen tip or fingertip from the feature information of the stylus or fingertip in the image.
Alternatively, the first processing unit 150 or the second processing unit 230 is used to determine the accurate position of the stylus tip from the tip's relative displacement data collected by the stylus, based on the absolute position and the relative displacement data, so as to accurately compute the data corresponding to the trajectory written with the stylus.
The first processing unit 150 may also perform text recognition on a text medium in the current scene, such as the text in a book or on paper, to obtain editable text content, and edit the recognized text content in combination with the editing information input by the stylus. For example, when a text medium such as a book, document, or paper is detected in the current scene, and the stylus is detected to be in the pressed writing state with the pressed position within the area of the text medium, text recognition can be performed on the image of the text medium to obtain the text information it contains, and the text medium can be edited according to the data-processing information written with the stylus, including modifying text content, adding annotation information, copying a selection, translating, or triggering instructions for control buttons of the virtual interactive interface superimposed on the current scene.
The text medium detection may compare preset text-medium features against the captured scene image to obtain the text media included in the scene image.
The first processing unit 150 or the second processing unit 230 is the control center of the electronic device; it connects the parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the storage unit and calling data stored in the storage unit, thereby monitoring the electronic device as a whole. Optionally, the first processing unit 150 or the second processing unit 230 may include one or more processing units; preferably, the first processing unit 150 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and applications, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the first processing unit 150.
The first storage unit 160 or the second storage unit 240 can be used to store software programs and modules. The first processing unit 150 runs the software programs and modules stored in the first storage unit 160, and the second computation and processing unit 230 runs those stored in the second storage unit 240, thereby executing the various functional applications and data processing of the electronic device. The first storage unit 160 or the second storage unit 240 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area can store data created according to the use of the electronic device (such as video images, edited electronic text, or images). In addition, the first storage unit 160 or the second storage unit 240 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The first power supply unit 170 or the second power supply unit 250 may be a battery. Preferably, the first power supply unit 170 or the second power supply unit 250 may be logically connected to the computation and processing unit through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
The relative motion sensing unit 220 may include a pressure sensing unit 2201 and a handwriting trajectory information sensing unit 2202. The pressure sensing unit 2201 can detect whether the stylus is in the writing state: the pressure sensing unit 2201 detects the pressure between the pen tip and the writing plane, and when the pressure is greater than a predetermined value, the stylus is considered to be in the writing state. The handwriting trajectory information sensing unit 2202 may include a laser interference unit and/or an inertial sensing unit; the relative displacement information of the pen tip is acquired through the laser interference unit, and the changes in magnitude and direction of the tip's acceleration are sensed through the inertial sensing unit, from which the movement trajectory of the tip is determined. Alternatively, the handwriting trajectory information sensing unit may include a displacement sensor such as a camera, detecting changes in the relative position of the pen tip from changes in the images captured by a camera arranged at the tip.
The inertial sensing unit may be an acceleration sensor, a gyroscope, or the like, used to detect changes in the magnitude and direction of the tip's acceleration. Multiple inertial sensing units may be included, arranged at different positions on the stylus; for example, inertial sensing units can be arranged at the tip and at the end of the pen respectively, and the change in the stylus's posture can be determined from the difference between the accelerations they sense.
In an implementation manner, the tip of the stylus is also provided with a tip feature. The tip feature may be a special color marking, or a marking such as an infrared-reflective or fluorescent one. With the tip feature in place, when the camera captures an image of the current scene that includes the stylus, the tip feature is detected in the image of the current scene, and the absolute position of the tip in the current scene is determined from the detected tip feature. When the writing plane of the stylus coincides with the position of the virtual interactive interface, the determined absolute position of the tip in the current scene can be used to effectively determine the tip's position in the virtual interactive interface, matching the writing position of the stylus to the display position of the written content and thereby further improving the user's writing experience.
It can be understood that the electronic device described in the present application is not limited to the above and may also include other components not listed. For example, the stylus 2 may also include a display screen, through which the writing state of the stylus, or information such as the time, can be displayed. Alternatively, the stylus's display screen may be a touch screen, through which the sensitivity of the stylus's writing-state detection can be adjusted. For example, suppose the pressure threshold set for the stylus is F1. When the pressure sensor currently detects that the tip pressure is F2 and F2 is greater than F1, the stylus is considered to be in the writing state; when the currently detected tip pressure is F3 and F3 is less than F1, the stylus is considered to be in the lifted state. The magnitude of the pressure threshold can be adjusted through the touch screen: when the threshold is raised, the sensitivity of switching to the writing state decreases and greater pressure is required to trigger the writing state; when the threshold is lowered, the sensitivity increases and a smaller pressure suffices to trigger the writing state.
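The following sketch mirrors this F1/F2/F3 threshold logic; the class name and default value are illustrative assumptions.

```python
class StylusStateDetector:
    """Writing/lifted state detection with an adjustable pressure
    threshold F1: tip pressures above F1 indicate the writing state,
    pressures below it the lifted state."""

    def __init__(self, threshold_f1=0.3):
        self.threshold = threshold_f1

    def set_threshold(self, threshold):
        # Raising the threshold lowers the sensitivity: more pressure
        # is then needed before the writing state is triggered.
        self.threshold = threshold

    def is_writing(self, tip_pressure):
        return tip_pressure > self.threshold
```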
When a user inputs data through an electronic device such as a mixed reality system, handwriting input suffers from two drawbacks: correcting the movement trajectory of the stylus by detecting the tip position with a visual algorithm on images captured by the camera of the electronic device is inaccurate, and relying on a dedicated handwriting tablet makes handwriting input inconvenient. To address these drawbacks, the present application proposes an interaction method for an electronic device as shown in FIG. 9. By way of example and not limitation, the method can be applied to the electronic device described above.
In step S901, the handwriting interaction device displays an interactive image.
The interactive image may be an image of the current scene acquired through the visual perception module, or another multimedia image to be played, such as a video image or presentation slides.
When the interactive image is an image of the current scene, in order to accurately obtain the cursor position corresponding to the pen tip on the virtual interactive interface, real-time localization and mapping (SLAM) can be performed on the current scene, so that, from the reconstructed three-dimensional scene model, the change in the detected tip's position relative to objects in the scene while the pen is lifted can be determined; from that change, the handwriting interaction area, or the interaction position within it, corresponding to the tip position is determined.
The handwriting interaction area may be the area of the current scene touched by the stylus. It is usually a flat area, for example a desktop, a wall, or another flat surface.
Scene reconstruction is described below.
1.1 Scene Image Acquisition
The image of the current scene can be acquired through a visible light camera, such as the first camera shown in FIG. 1. In a possible implementation, multiple cameras can be used to acquire multiple images of the current scene, and the multiple images can be stitched together, according to the angles and positions of the cameras corresponding to the images or according to the image content, to obtain a more complete image of the current scene. The image may be a video or another multimedia format.
In one implementation, as shown in FIG. 10, the main device is provided with a visible light camera A and a visible light camera B; camera A acquires a first video image P1 and camera B acquires a second video image P2. According to the preset position and angle relationship between camera A and camera B, or according to the image content of P1 and P2, the first video image P1 and the second video image P2 are stitched to obtain the stitched video image P; enhancement processing may also be applied to the stitched video image.
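As a stand-in for the content-based stitching step (the embodiment does not prescribe a specific algorithm), the following sketch uses OpenCV's general-purpose feature-matching stitcher; a production system could instead warp directly using the known extrinsics of cameras A and B.

```python
import cv2

def stitch_scene(frame_a, frame_b):
    """Stitch the views P1 and P2 from cameras A and B into one scene
    image P based on image content (feature matching)."""
    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch([frame_a, frame_b])
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```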
1.2 Acquiring scene depth information
When acquiring the depth information of the current scene, the depth of objects in the image may be determined from images captured by two or more cameras, or the distance between objects in the current scene and the main device may be acquired with a depth sensor.
Using the images acquired by two or more cameras, the depth of an object in the image can be determined by the triangulation principle. As shown in FIG. 11, suppose two cameras with identical parameters lie in the same plane, and the focal length f of the cameras and the center distance T between them are known in advance. To obtain the depth of an object, its disparity (Xr-Xt) between the two images is determined from its positions in the captured images. Denoting the object's depth by Z, the similar-triangles relation of FIG. 11 gives [T-(Xr-Xt)]/T = (Z-f)/Z, which can be solved to obtain:
Z = f*T/(Xr-Xt).
Since the focal length f and the center distance T between the two cameras are known in advance, detecting the positions of a given feature point in the two images determines the disparity (Xr-Xt), from which the depth Z can be computed.
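As a worked instance of the relation Z = f*T/(Xr-Xt), the sketch below converts a measured disparity into depth; the numeric values (focal length in pixels, baseline, pixel coordinates) are illustrative assumptions only:

```python
def depth_from_disparity(f_px, baseline_t, xr, xt):
    """Z = f*T/(Xr - Xt): two same-plane cameras, focal length f in pixels,
    center distance T in meters, Xr/Xt the feature's x-coordinates in each image."""
    disparity = xr - xt
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_t / disparity

# Illustrative values: f = 700 px, T = 0.06 m, disparity = 21 px  ->  Z = 2.0 m
z = depth_from_disparity(700.0, 0.06, 352.0, 331.0)
```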
When more than two cameras are used to determine depth, the images captured by any two of them can be selected for the depth computation; alternatively, multiple depth values can be computed and a better depth estimate determined by optimization methods such as averaging over the multiple values.
The depth of objects in the scene may also be acquired through a ranging module or system in the depth sensing unit, such as a time-of-flight ranging camera, a structured-light ranging system, a radar ranging system, or an ultrasonic sensor.
1.3 Matching objects in the image with depth information
When the depth sensing unit consists of two or more cameras, the depth of an object in the image is computed from the positions of the same object in the images captured by those cameras, and the computed depth is matched directly to the corresponding position in the image.
Alternatively, when the depth sensing unit is another kind of ranging unit, the position of the measured object can be used to determine the object's corresponding position in the image; or the depth corresponding to an object in the image can be determined by matching feature information of the measured distances, including characteristic changes in distance, against the objects in the image. Based on the determined object depths, a coordinate transfer matrix can be obtained that converts coordinates of the camera image into the coordinate positions, in the world coordinate system, of objects in the current scene.
From the camera's parameter information, the transformation matrix between coordinates in the image coordinate system of the camera image and coordinates in the world coordinate system of three-dimensional space can be obtained. Combining the computed depth information with this coordinate transformation matrix, the position in the world coordinate system of any coordinate position in the captured image can be obtained.
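A minimal back-projection sketch under common pinhole-camera assumptions: an intrinsics matrix K plus a camera pose (R, t) play the role of the coordinate transfer matrix described above. The specific matrices here are placeholders, not values from the application:

```python
import numpy as np

def pixel_to_world(u, v, z, K, R, t):
    """Back-project pixel (u, v) with camera-frame depth z to world coordinates.
    K: 3x3 intrinsics; R, t: camera-to-world rotation and translation."""
    uv1 = np.array([u, v, 1.0])
    p_cam = z * (np.linalg.inv(K) @ uv1)   # point in the camera coordinate system
    return R @ p_cam + t                   # point in the world coordinate system

# Placeholder parameters for illustration:
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
print(pixel_to_world(352.0, 260.0, 2.0, K, R, t))
```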
1.4 Reconstructing the current scene
From the acquired depth information, the surface shape of objects in the current scene can be determined, and the objects reconstructed according to that shape information. From the acquired image information, feature points in the image can be detected to obtain the feature information of the image.
The feature information of the image may be feature points that describe objects in the image, for example corner points of an object.
In one implementation, feature detection may further include detecting planes in the scene and determining the planar regions the scene contains. Inner-edge detection may then be applied to a planar region; inner-edge detection here means detecting edge-line features lying within a detected planar region, to judge whether an inner-edge region exists.
As shown in FIG. 12, in the image S of the current scene collected by the camera, plane feature detection, including for example depth-sampling detection, yields the planar region S1 corresponding to the desktop. Inner-edge detection is then performed on S1, giving the region bounded by the inner edges, i.e. planar region S2. It can be judged whether S2 contains a square interaction area, or whether S2 contains a text medium such as a book. The square interaction area may be detected according to predetermined size features or shape features.
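One plausible reading of the plane/inner-edge step, sketched with standard OpenCV primitives: edge lines are kept only inside the plane mask S1, and interior contours with four corners are reported as candidate regions S2. The thresholds and the four-corner test are assumptions for illustration:

```python
import cv2
import numpy as np

def inner_edge_regions(image, plane_mask, min_area=5000):
    """Find rectangular inner regions (e.g. a sheet of paper) inside plane mask S1."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges[plane_mask == 0] = 0            # keep edge lines inside S1 only
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:              # four corners: candidate region S2
            regions.append(cv2.boundingRect(approx))
    return regions
```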
When the inner-edge region, i.e. planar region S2, is recognized as containing a text medium, the image corresponding to the text medium can be processed by text recognition to obtain the text content it contains, and the recognized text can be displayed on the virtual interactive interface, so that the user can edit the text content of the medium: modifying it, copying it, underlining, annotating, and so on. Alternatively, information related to selected content of the text medium can be obtained, for example translation information for the selection, or other related search results.
Based on the current scene reconstructed above, the method may further include step S902: the handwriting interaction device acquires a first operation of the stylus according to the movement information of the stylus, where the movement information includes the relative movement information of the stylus.
The first operation may be a writing operation or a click operation in the writing state, or may also include a movement operation while the stylus is lifted in the air.
Through the precise relative position detection of the relative motion sensing unit, the trajectory of the stylus while writing, i.e. its writing trajectory, can be acquired while the stylus is in the writing state. The relative motion sensing unit may include one or more sensing devices such as a gyroscope, an inertial sensor, or an acceleration sensor.
In one implementation, the tip of the stylus can also be located, so that changes in the position of the stylus's written content can be determined from changes in the tip's position in space. The tip position can be detected in the image, and the tip located and tracked according to its detected image position. This may specifically include:
2.1 Pen-tip feature recognition
According to preset pen-tip features, such as a special color applied to the tip or a fluorescent feature of the tip, the tip position is searched for in the acquired color image; or, using the reflective feature of an infrared-reflective material applied to the tip in advance, the tip position is searched for in an infrared image.
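The color-feature search can be sketched as a threshold-and-centroid pass over the color image; the HSV bounds below assume a green tip marker and are illustrative only:

```python
import cv2
import numpy as np

def find_tip_by_color(bgr, lo=(35, 120, 120), hi=(85, 255, 255)):
    """Locate a color-marked pen tip; the bounds assume a green marker in HSV space."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)   # largest matching blob = tip marker
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # pixel coordinates of point A
```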
2.2 Acquiring pen-tip depth information
From the located tip position, the depth corresponding to that position, acquired beforehand by the depth sensing unit, can be used. When the user moves while wearing the main device, for example a head-mounted display device, the depth corresponding to the tip position is computed in real time from the images the camera acquires in real time.
2.3 Acquiring the spatial position of the pen tip
Based on the camera coordinate transformation matrix determined in step 1, the spatial position of the pen tip can be computed from the tip's position in the image and the depth corresponding to that position.
In the pen-tip positioning schematic of FIG. 13, the image carries a uv coordinate system that locates pixels in the image; there are also the camera coordinate system XcYcZc determined by the camera, and the world coordinate system XYZ. The distance between the origin of the camera coordinate system and the imaging plane is determined by the camera parameters.
In the image captured by the camera, feature detection determines that the tip P of the stylus lies at point A in the image. When the depth sensing unit is a dual camera, the distance of the tip P from the camera, i.e. the depth of P, can be computed from the center distance between the two cameras, their focal lengths, and the disparity of the tip P between the two captured images, using the similar-triangles principle shown in FIG. 11.
Alternatively, the distance between the tip P and the camera can be measured with a depth sensor, such as a radar, a time-of-flight camera, a structured-light ranging system, or an ultrasonic sensor.
Given the tip-to-camera distance detected by the depth sensing unit, i.e. the depth corresponding to the image point A of the tip in the captured image, the point P in the current scene is uniquely determined. Using the image-to-world coordinate transformation matrix determined in step 1, together with the tip's position in the image (point A) and the depth corresponding to that position, the world coordinate of the tip in the world coordinate system is determined. As the collected images change, the world coordinate of the tip can be determined in real time, so the tip is tracked continuously.
In step S903, the electronic device displays a virtual interactive interface on the interactive image and responds to the first operation on the virtual interactive interface.
The virtual interactive interface may be generated at an arbitrary position on the interactive image, at a fixed position on the interactive image, or on a planar region selected from among the planar regions contained in the interactive image.
When the pen tip is in the pen-up state, its position in the virtual interactive interface can be determined from its spatial position and the generated three-dimensional scene model; when the pen tip is in the pen-down state, the handwriting information of the stylus on the virtual interactive interface is acquired through the stylus's relative motion sensing unit.
The relative motion sensing unit may be a sensing device such as an inertial sensor or a laser interferometer.
To display the acquired operation data of the stylus, a virtual interactive interface needs to be generated on the electronic device in advance; the way the interface is generated differs with the type of electronic device.
When the electronic device is a virtual reality (VR) device, the virtual interactive interface can be displayed directly on the display unit of the VR device, and a correspondence can be established between positions in the virtual interactive interface and positions in the current scene. From the absolute position of the stylus in the current scene, the position of the stylus cursor on the virtual interactive interface, or the trajectory of the stylus, is determined.
The virtual interactive interface displayed in the VR device may be a fixed region of the virtual picture, or a planar region at a predetermined distance within the field of view that moves with the line of sight.
When the electronic device is an augmented reality (AR) device or a mixed reality (MR) device, the current scene needs to be matched with the virtual interactive interface, which may specifically include:
3.1 Determining the displayed current-scene image
The current-scene image presented on the display unit may be the image of the current scene transmitted directly through a semi-transparent display unit, or an image of the current scene captured by, for example, a camera and shown on the display unit. When the display unit shows the image of the current scene, the viewing angle of the displayed image is kept essentially consistent with the view through the semi-transparent display unit, so that after the virtual interactive interface is matched, the coordination of the user's operations on the interface is improved.
3.2 Displaying the virtual interactive interface
While the current-scene image captured in real time is displayed, or while the current scene is seen through the display, the display unit also shows a virtual interactive interface within the current-scene image, so that the user can interact with the image of the current scene.
The position of the virtual interactive interface may be fixed to a region of the virtual picture, or may be a planar region at a predetermined distance within the field of view that moves with the line of sight.
Alternatively, the virtual interactive interface may correspond to the position of the handwriting interaction area in the current-scene image. According to the position of the handwriting interaction area in real space, the handwriting interaction area in the real scene or in the real-time image signal serves as the virtual interactive interface. When the position of the handwriting interaction area in the image changes, the position at which the virtual interactive interface is displayed on the display unit changes accordingly.
When the current-scene image contains multiple planes, the handwriting interaction area in the current-scene image can be determined from the position of the stylus, and the virtual interactive interface adjusted according to the determined handwriting interaction area.
For example, suppose the image of the current scene contains a plane X and a plane Y. When the user inputs data on plane X with the stylus or a fingertip, for example formulas or sketches, the handwriting interaction area is displayed on plane X; when the user then writes on plane Y, the position of the virtual interactive interface is changed and a corresponding interface is generated at plane Y. In one implementation, the interface generated at plane Y can retain the previously entered data, so that the user's input is less constrained by the environment, greatly improving the convenience of input.
In one implementation, when the current scene contains a real text medium and the stylus interacts with it, the virtual interactive interface may cover the region of the real text medium. The content of the real text medium (e.g. a document) can be digitized by recognition and displayed at the corresponding position on the real medium, with the recognized content positioned in one-to-one correspondence with the content of the real medium; the display unit may show the real text medium of the current scene directly, or show the digitized text content. The virtual interactive interface may also include the physical extent of the handwriting interaction area, which can be determined from visual features such as edges in the scene; for example, the handwriting interaction area may be the region bounded by the edges of the real text medium, i.e. the sheet of paper.
In a possible implementation, the virtual interactive interface may be a fixed region of the display unit and be displayed at a fixed position. It may include preset keys or buttons, which is convenient in certain fixed settings, for example when the user performs live streaming, formula input, or sketching through the electronic device described in this application.
When the state of the stylus is detected in this application, it can be determined by the pressure sensing unit provided at the tip of the stylus: when the pressure sensed by the unit is greater than a predetermined value, the stylus is judged to be in the writing state; when the sensed pressure is less than or equal to the predetermined value, the stylus is judged to be in the pen-up state.
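The pen-state rule reduces to a threshold comparison on the sensed tip pressure. The sketch below adds a small hysteresis band, which is an assumption of this example (the text specifies only a single predetermined value), to avoid state flicker near the threshold:

```python
class PenStateDetector:
    """Pen-down if tip pressure exceeds a predetermined value, pen-up otherwise.
    The hysteresis band is an assumption; the application states one threshold only."""
    def __init__(self, threshold=0.15, hysteresis=0.02):
        self.threshold = threshold
        self.hysteresis = hysteresis
        self.down = False

    def update(self, pressure):
        if self.down:
            # Require pressure to drop clearly below the threshold before lifting.
            self.down = pressure > self.threshold - self.hysteresis
        else:
            self.down = pressure > self.threshold
        return "pen-down" if self.down else "pen-up"
```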
Alternatively, the state of the stylus or fingertip may be determined from the acquired absolute position of the pen tip or fingertip and its position relative to the handwriting interaction plane in the reconstructed current scene.
When the stylus or fingertip is in the pen-up state, the position of the fingertip or pen tip in the current-scene image is determined from its computed absolute position and the reconstructed current-scene image. From that position, the position of the trajectory of the operation input by the user can be determined, and with it the corresponding content of the current scene, or the content of the virtual interactive interface corresponding to the input operation.
When the stylus is in the pen-down writing state, occlusion by the hand or the stylus may prevent the camera image from showing the fingertip or pen-tip position, and the written trajectory may occupy only a small portion of the image; both effects reduce the accuracy of the recognized trajectory content written by the tip or fingertip. To overcome this problem, when the pressure sensing unit or the visual image detects that the stylus is in the pen-down state, the relative movement trajectory of the stylus can be acquired by the relative motion sensing unit described in the embodiments of this application.
When the relative displacement of the stylus is acquired through the relative motion sensing unit, the relative displacement of the tip may be acquired by the handwriting trajectory sensing unit. For example, the relative displacement of the tip can be obtained through the laser interference unit in the handwriting trajectory sensing unit, and the handwriting of the stylus determined from that displacement; or an inertial sensing unit can sense changes in the magnitude and direction of the acceleration of the tip to determine the corresponding relative displacement.
Alternatively, the handwriting trajectory sensing unit may include a camera, with the distance and direction of tip movement determined from changes in the pictures it captures; a sketch of the inertial option appears below.
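A minimal dead-reckoning sketch of the inertial option mentioned above: tip accelerations are integrated twice over a single pen-down stroke. The sample interval and the prior removal of gravity are assumptions; in practice such integration drifts and would be fused with the other sensing paths:

```python
import numpy as np

def integrate_stroke(accel_samples, dt=0.005):
    """accel_samples: (N, 2) tip accelerations in the writing plane, gravity removed.
    Returns the relative displacement trace of the tip for one pen-down stroke."""
    a = np.asarray(accel_samples, dtype=float)
    v = np.cumsum(a * dt, axis=0)      # first integration: velocity
    p = np.cumsum(v * dt, axis=0)      # second integration: relative position
    return p - p[0]                    # trajectory relative to the stroke start
```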
When the relative movement trajectory is acquired, the tip pressure collected by the pressure sensing unit can also be acquired, and the line thickness of the trajectory determined from the magnitude of that pressure, yielding a more accurate relative movement trajectory.
The inertial sensing unit provided on the stylus may include an acceleration sensor and/or a gyroscope. Multiple inertial sensing units may be provided at different parts of the stylus, for example at the tip and at the end of the pen. The magnitudes and directions of the accelerations at the tip and the end of the stylus are acquired through these units, and the attitude of the stylus determined from the changes in those accelerations. The determined attitude can be used to adjust the attitude of the stylus shown in the virtual interactive interface, giving the user a more realistic writing experience.
To further improve the writing experience, a step of calibrating the relative movement direction of the stylus may be included before its handwriting is acquired. A guide line can be generated in the region where the virtual interactive interface and the real image are superimposed; it may be straight or a curve of another shape. When the user traces the guide line in the handwriting interaction area with the stylus, the visual perception module acquires the first motion trajectory corresponding to the drawing, and the relative motion sensing unit acquires the second motion trajectory determined from the relative displacement of the tip. By comparing the difference between the two trajectories, the relative movement direction is calibrated and adjusted.
For example, the position of the tip in the image is acquired through the visual perception module, and the depth of the tip through the visual perception module or the depth sensing unit. From the tip's position in the image and its depth, its spatial position in the current scene is determined and tracked across the acquired video images, yielding the absolute motion trajectory of the tip, i.e. the first trajectory. The second trajectory of the tip is acquired from the motion sensing unit and compared with the first. For instance, if the collected first trajectory is a straight line with inclination angle A and the second a straight line with inclination angle B, the motion sensing unit is adjusted toward angle A, by an amount |B-A|.
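The angle comparison in this example can be sketched as follows: both trajectories are fit with straight lines and the correction is their angle difference, with magnitude |B-A|. Fitting the lines by a total-least-squares (principal-direction) fit is an implementation choice of this sketch, not something the text prescribes:

```python
import numpy as np

def line_angle(points):
    """Inclination angle of the best-fit line through (N, 2) trajectory points."""
    p = np.asarray(points, dtype=float)
    p = p - p.mean(axis=0)
    # First right-singular vector = principal direction = total-least-squares fit.
    direction = np.linalg.svd(p, full_matrices=False)[2][0]
    return np.arctan2(direction[1], direction[0])

def direction_correction(visual_traj, relative_traj):
    """Rotation that aligns the motion sensor's angle B with the camera's angle A."""
    a = line_angle(visual_traj)      # first trajectory, from visual perception
    b = line_angle(relative_traj)    # second trajectory, from the motion sensor
    theta = a - b                    # signed correction; magnitude |B - A|
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])   # apply to later relative displacements
```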
In addition, to further improve the sensing precision of the handwriting trajectory sensing unit, a distance sensor may be provided on the stylus. It detects the distance between the stylus and the handwriting interaction area; when that distance is below a preset value, the tip position is fine-tuned by the laser interference unit and/or inertial sensing unit on the stylus, improving the precision of the visually located tip position and yielding more accurate relative motion information.
In a possible implementation of this application, the real text medium in the current scene can be recognized and the content of the corresponding text saved; alternatively, the user's editing input can be received and the edited content saved.
When the content of the text is edited, the text being edited can be determined from the position of the stylus. For example, editing operations such as selecting, copying, retrieving, or translating text can be carried out according to the writing gesture of the stylus, stylus buttons, hand gestures, or voice commands, and translated content can be played through a speaker.
In a possible implementation of this application, the scene image information and the handwriting interaction process can be saved or transmitted to the network, making it convenient to share the interaction process with other users.
The electronic device and interaction method described in this application can be widely applied in education, office work, entertainment, and other fields. The convenience of handwriting input and the improvement in input precision can greatly improve the operating experience. Brief examples are given below.
For example, in an office scenario, a user reading a paper document, such as a book or another paper item containing text, may wear the electronic device described in this application, which may include a head-mounted display device and a stylus. The visual perception module in the head-mounted display captures an image 1402 containing a paper document 1401, as shown in FIG. 14; the depth sensing unit acquires the depth information corresponding to the image, and a 3D model corresponding to the captured image is constructed. The inner edges of the plane in the 3D model are then detected, identifying the text medium of the paper document 1401 contained in the plane 1403 of FIG. 14.
From the text medium identified in FIG. 14, the corresponding image region can be cropped and its text content recognized by OCR. A virtual interactive interface 151 as shown in FIG. 15 can be generated from that text content, with its position determined by the position of the text medium in the image (this is not limiting: the virtual interactive interface may also, as needed, be fixed at some position in the image, or fixed to a region at a predetermined distance along the line of sight).
To make annotating text more convenient, as shown in FIG. 15, the virtual interactive interface also includes a toolbar 152 containing selection boxes 153 of different colors. By moving the stylus, the user can select colors to give the text content different background colors, or add annotation content 154 in the selected color. The toolbar is of course not limited to this; it may also include editing tools such as copy, paste, cut, bold, and undo.
When the user edits the virtual interactive interface shown in FIG. 15 with the stylus, the position of the tip in the image can be determined from the visual features applied to the tip, such as a special color, fluorescence, or reflectivity. The depth corresponding to the tip is determined from the depth information of the image acquired by the depth sensing unit. From the tip's position in the image, its depth, and the coordinate transformation matrix, the spatial position of the tip in the current scene can be computed.
Whether the stylus is in the pen-up or pen-down state is determined by the pressure sensing unit at the tip of the stylus, or from the detected spatial position of the tip.
When the stylus is in the pen-up state, the cursor position in the virtual interactive interface corresponding to the tip can be determined from the 3D model corresponding to the constructed image and the tip's position in that model. For example, a perpendicular to the plane of the handwriting interaction area can be generated from the spatial position of the tip; the intersection of the perpendicular with the handwriting interaction area is the cursor position corresponding to the tip. While the stylus is raised, the cursor position corresponding to the tip is updated in real time, so that a user editing text knows clearly, just before the pen comes down, which content is about to be edited.
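The perpendicular-foot construction can be written out directly: the 3D tip position is dropped onto the interaction plane along the plane normal, and the foot of the perpendicular is the cursor point. The plane parameters in the example call are placeholders:

```python
import numpy as np

def cursor_on_plane(tip, plane_point, plane_normal):
    """Foot of the perpendicular from the 3D tip position onto the interaction plane."""
    p = np.asarray(tip, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Remove the component of (p - plane_point) along the normal.
    return p - np.dot(p - plane_point, n) * n

# Illustrative desk plane: z = 0 with an upward normal.
print(cursor_on_plane([0.10, 0.20, 0.03], np.zeros(3), [0.0, 0.0, 1.0]))
```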
A distance sensor can be provided at the tip of the stylus. When it detects that the distance from the tip to the handwriting interaction area is less than a predetermined value, the tip position can be adjusted through the relative motion sensing unit, improving the positioning precision of the tip position.
For example, suppose the spatial position of the tip determined by the visual perception module and the depth sensing unit corresponds to cursor position M in the virtual interactive interface. When the distance from the tip to the handwriting interaction area is below a predetermined value, say 0.5 cm, and the relative motion sensing unit determines the tip's cursor position in the virtual interactive interface to be N, the position of the cursor can be adjusted by fine-tuning, improving the accuracy of the acquired handwriting. The relative motion sensing unit may determine the cursor position corresponding to the tip by laser interferometry, an inertial sensing unit, or images captured by a camera.
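One way to read the hand-off from M to N is as a distance-gated blend: outside the 0.5 cm band only the visually derived cursor M is used, and inside it the relative-motion estimate N progressively takes over. This blending rule is an assumption of the sketch; the text says only that the cursor position is fine-tuned:

```python
import numpy as np

def fused_cursor(m_visual, n_relative, tip_distance, gate=0.005):
    """Blend cursor estimates: M from vision/depth, N from the relative motion unit.
    gate: distance in meters below which N is trusted; 0.005 m matches the 0.5 cm example."""
    m = np.asarray(m_visual, dtype=float)
    n = np.asarray(n_relative, dtype=float)
    if tip_distance >= gate:
        return m                      # tip far from the surface: visual cursor only
    w = 1.0 - tip_distance / gate     # weight of N grows as the tip approaches
    return (1.0 - w) * m + w * n
```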
In addition, to reduce the error of the tip trajectory detected by the relative motion sensing unit, a calibration button can be generated in the virtual interactive interface, or provided on the stylus itself. When the user presses the calibration button, a calibration straight line is generated on the virtual interactive interface according to the button's trigger information.
In the device-calibration schematic of FIG. 16, the user sees the calibration line generated in the virtual interactive interface through the head-mounted display device. Prompted to trace the line, the user draws at the position where the line is visually perceived. The absolute motion trajectory of the tip, the first trajectory L1, is then obtained from the tip positions captured by the head-mounted display device.
The motion trajectory of the tip is likewise obtained through the relative motion sensing unit on the stylus, giving the second trajectory L2.
From the difference between the first trajectory L1 and the second trajectory L2, the errors in L2 can be calibrated; for example, given the direction deviation of the second trajectory, the direction sensor in the relative motion sensing unit is recalibrated.
The relative motion sensing unit can be recalibrated at predetermined intervals, further ensuring the precision of the trajectory information detected by the system.
After the digitized text content has been edited, the edited text can be saved, making it easy to share with other users or review later, for example viewing annotations on passages of a book, or the underlining of passages. Alternatively, selected text can be translated or read aloud using tools such as translation and read-aloud included in the toolbar. Depending on the functions provided, different function tabs can be shown on the virtual interactive interface; when a trigger instruction for a tab is received from the user, the function buttons under that tab are shown, for example annotation, delete, and copy buttons under an editing tab.
Alternatively, as shown in FIG. 17, the virtual interactive interface 172 may display a document to be signed. The contract document may be an image 171 corresponding to a contract, or another document requiring a signature, received through the first communication unit.
Alternatively, the document to be signed may be a paper contract in the current scene, an image of which can be obtained through the visual perception module of the head-mounted display device. After the image corresponding to the document to be signed is obtained, the user's signature input is received, signature data is added to the image, and the signed image can be sent to other users who need to sign, conveniently realizing a quick stylus signature operation.
When the document to be signed is the image corresponding to a paper document in the current scene, and the position of the virtual interactive interface in the scene image coincides with the position of the handwriting plane in the scene image, the user can sign directly at the signature position on the paper. Because the display position of the virtual interactive interface is consistent with the position of the handwriting plane, the signature information can be generated accurately at the signature field of the document image, giving the user the experience of writing on real paper with the stylus while producing an image of a validly signed document.
When the image displayed on the virtual interactive interface is an image of a document to be signed sent by another user, or when the virtual interactive interface is a fixed region of the scene image, a cursor corresponding to the stylus can be shown on the interface. Guided by the displayed cursor, the user moves the stylus on the handwriting plane so that the cursor moves to the signature position, completing the signing of the document on the virtual interactive interface and achieving efficient, secure mixed-reality office work.
For another example, the user may wear the electronic device for a live-streamed explanation. The content to be explained can be played on the display unit of the head-mounted display device; it may be a multimedia file, such as a video or a slide deck, or the current image collected by the visual perception unit during the live broadcast.
While the display unit plays the multimedia file or shows the current scene, a preset region of the played image, or the entire played picture, can be set as the virtual interactive interface. So as not to degrade the clarity of the played picture, the virtual interactive interface can be set as a transparent layer. When stylus marks on the virtual interactive interface are received, the corresponding mark information is displayed there, making it easier to explain the displayed content through the electronic device. For example, as shown in FIG. 18, when using the electronic device of this application, the user can draw marks with the stylus (the dashed region in the figure) on the currently broadcast content, i.e. the image of the current scene collected by the visual perception unit, achieving a more vivid explanation.
In a possible implementation, the virtual interactive interface can be composited with the image played by the display unit and sent to other users; the audio of the explanation can also be composited in and sent, so that other users receive a vivid explanation.
When marking the content played by the display unit, the writing state of the stylus can be detected. While the stylus is raised, the cursor corresponding to the tip can be shown on the virtual interactive interface; while the stylus is down, its writing trajectory is acquired by the relative motion sensing unit and the corresponding trajectory content displayed on the interface. For example, while a teaching video plays on the virtual interactive interface, visual perception determines the cursor position of the raised stylus and relative motion sensing determines the handwriting of the lowered stylus, so marks can be made conveniently in the handwriting interaction area. The virtual interactive interface, the mark information, and the collected voice signal of the user can be shared with other users.
In possible scenarios, the electronic device of this application can also be used for formula derivation, technical drawing, and the like. As shown in FIG. 19, a blank area 191 corresponding to the handwriting interaction area can be shown on the virtual interactive interface, surrounded by tools 192 for formula derivation or drawing. Visual perception determines the cursor position of the raised stylus, and relative motion sensing determines the handwriting of the lowered stylus, so handwriting input needs no dedicated handwriting tablet while the input precision of the handwriting is effectively improved.
It should be understood that the magnitude of the sequence numbers of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
Corresponding to the interaction method for an electronic device described in the above embodiments, FIG. 20 shows a structural block diagram of the apparatus provided by an embodiment of this application. For ease of description, only the parts related to the embodiment of this application are shown.
Referring to FIG. 20, the apparatus includes:
an image display module 2001, configured to display an interactive image on the electronic device;
an operation information acquisition module 2002, configured to acquire, via the electronic device, a first operation of the stylus according to movement information of the stylus, wherein the movement information includes relative movement information of the stylus;
a response module 2003, configured to display, via the electronic device, a virtual interactive interface on the interactive image and to respond to the first operation on the virtual interactive interface.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the functional units and modules above is only illustrative. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the apparatus is divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit; the integrated unit may be realized in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the system embodiments described above are merely illustrative; for instance, the division into modules or units is only a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of this application may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes of the methods of the above embodiments of this application may be accomplished by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, and so on. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the handwriting input apparatus/mixed reality interaction device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, computer-readable media may not be electric carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent substitutions for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be included within the protection scope of this application.

Claims (19)

  1. An interaction method for an electronic device, wherein the method comprises:
    the electronic device displaying an interactive image;
    the electronic device acquiring a first operation of a stylus according to motion information of the stylus, wherein the motion information comprises relative motion information of the stylus;
    the electronic device displaying a virtual interactive interface on the interactive image, and responding to the first operation on the virtual interactive interface.
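By way of illustration only, the following is a minimal Python sketch of how the relative motion information recited in claim 1 could drive a pointer on the virtual interactive interface. The packet fields (dx, dy, pressed), the millimeter units, and the centered starting position are assumptions made for the example, not features of the claim.

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    dx: float      # relative displacement reported by the stylus (assumed: millimeters)
    dy: float
    pressed: bool  # whether the tip is in contact / the button is held (assumed field)

class VirtualInterfacePointer:
    """Integrates relative stylus motion into a pointer position on the interface."""

    def __init__(self, width_mm: float, height_mm: float):
        self.width, self.height = width_mm, height_mm
        self.x, self.y = width_mm / 2.0, height_mm / 2.0  # assumed: start at the center

    def update(self, s: MotionSample):
        # Accumulate the relative displacement and clamp to the interface bounds.
        self.x = min(max(self.x + s.dx, 0.0), self.width)
        self.y = min(max(self.y + s.dy, 0.0), self.height)
        return self.x, self.y

pointer = VirtualInterfacePointer(200.0, 150.0)
for s in (MotionSample(3.0, -1.5, False), MotionSample(0.5, 0.2, True)):
    print(pointer.update(s), "drawing" if s.pressed else "hovering")
```

Because only relative displacements are reported, the absolute pointer position exists only in the interface's own coordinate system; this is the gap that the camera-based localization of claims 2 to 6 fills.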
  2. The interaction method for an electronic device according to claim 1, wherein the interactive image is an image of a current scene, and the method further comprises:
    acquiring a position of the stylus in the current scene according to the image of the current scene;
    determining a handwriting position of the tip of the stylus in the virtual interactive interface according to the position of the stylus in the current scene.
  3. The interaction method for an electronic device according to claim 2, wherein the method comprises:
    acquiring the image of the current scene by means of a camera;
    or further comprises acquiring depth information of the current scene.
  4. The interaction method for an electronic device according to claim 2 or 3, wherein acquiring the position of the stylus in the current scene comprises:
    detecting a tip feature of the stylus in the image of the current scene, and determining the position of the tip of the stylus in the image of the current scene;
    obtaining depth information of the tip according to a matching relationship between the image of the current scene and the depth information;
    determining the position of the tip according to the depth information of the tip.
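A sketch of the tip-localization steps of claim 4, assuming the tip carries a distinctive color marker (one of the tip features listed in claim 5) and that a depth map registered pixel-for-pixel with the color image is available, so that the claimed image-to-depth matching relationship reduces to a direct lookup. The HSV range and the pinhole intrinsics are illustrative assumptions.

```python
import cv2
import numpy as np

def locate_tip(bgr: np.ndarray, depth_m: np.ndarray,
               hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
    """Return ((u, v), depth in meters) of the stylus tip, or None if not found.

    bgr     -- color image of the current scene
    depth_m -- depth map aligned pixel-for-pixel with the color image (assumed)
    hsv_lo, hsv_hi -- assumed HSV range of the tip's color marker
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Take the largest connected blob in the mask as the tip marker.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    u, v = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # blob centroid
    return (u, v), float(depth_m[v, u])

def backproject(u, v, z, fx, fy, cx, cy):
    # Pinhole back-projection: pixel plus depth to 3-D camera coordinates,
    # giving the tip position required by the last step of claim 4.
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```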
  5. The interaction method for an electronic device according to claim 4, wherein the tip feature comprises one or more of a color feature, a reflected-light feature, or a fluorescence feature.
  6. The interaction method for an electronic device according to claim 2, wherein determining the handwriting position of the tip of the stylus in the virtual interactive interface according to the position of the stylus in the current scene comprises:
    when the stylus is in a hovering state, acquiring the position of the stylus and the position of a writing plane in the current scene, and determining the relative positional relationship of the stylus with respect to the writing plane;
    determining the position of the stylus in the virtual interactive interface according to the relative positional relationship of the stylus with respect to the writing plane, in combination with a predetermined mapping relationship between the writing plane and the virtual interactive interface.
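Under the assumption of a flat writing surface, the predetermined mapping between the writing plane and the virtual interactive interface in claim 6 can be modeled as a homography fixed by four corner correspondences; all coordinates below are placeholders.

```python
import cv2
import numpy as np

# Four corners of the detected writing plane (in meters, within the plane;
# placeholder values) and the matching corners of the virtual interactive
# interface in its own pixel space (assumed 1280 x 720).
plane_corners = np.float32([[0.10, 0.05], [0.40, 0.05], [0.40, 0.25], [0.10, 0.25]])
ui_corners    = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])

H = cv2.getPerspectiveTransform(plane_corners, ui_corners)  # the fixed mapping

def stylus_to_ui(plane_xy, hover_height_m):
    """Map the hovering stylus into interface coordinates (claim 6).

    plane_xy       -- tip position projected onto the writing plane
    hover_height_m -- perpendicular distance of the tip from the plane
    """
    pt = cv2.perspectiveTransform(np.float32([[plane_xy]]), H)[0, 0]
    return float(pt[0]), float(pt[1]), hover_height_m

print(stylus_to_ui((0.25, 0.15), 0.02))  # plane center -> (640.0, 360.0, 0.02)
```

The hover height is passed through unchanged so that the interface can, for example, show a cursor before the tip touches the plane.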
  7. The interaction method for an electronic device according to claim 1, wherein the interactive image is captured by a camera, or obtained through a display device of predetermined transparency.
  8. The interaction method for an electronic device according to claim 1, wherein the electronic device displaying a virtual interactive interface on the interactive image comprises:
    displaying the virtual interactive interface in a predetermined area of the interactive image;
    or determining the virtual interactive interface according to a plane area in the current scene.
  9. The interaction method for an electronic device according to claim 8, wherein determining the position of the virtual interactive interface according to the plane area in the current scene comprises:
    filtering plane areas in the current scene according to preset plane-area requirements;
    determining the shape and/or position of the virtual interactive interface according to the shape and/or position of the filtered plane area.
  10. The interaction method for an electronic device according to claim 9, wherein the preset plane-area requirements comprise one or more of a plane-area size range, an orientation of the plane area, or a distance of the plane area from the camera.
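A sketch of the filtering step of claims 9 and 10 over candidate plane areas. The concrete thresholds (minimum size, maximum distance, tolerated tilt) and the plane representation are assumptions; the claims only name size range, orientation, and camera distance as possible criteria.

```python
import math
from dataclasses import dataclass

@dataclass
class PlaneArea:
    width_m: float
    height_m: float
    normal: tuple      # unit normal in camera coordinates
    distance_m: float  # distance from the camera

def meets_requirements(p: PlaneArea,
                       min_size=(0.15, 0.10),  # assumed minimum usable size, meters
                       max_distance=1.5,       # assumed arm's reach, meters
                       max_tilt_deg=30.0):     # assumed tolerated tilt from horizontal
    """Filter per claim 10: size range, orientation, distance from the camera."""
    if p.width_m < min_size[0] or p.height_m < min_size[1]:
        return False
    if p.distance_m > max_distance:
        return False
    # Angle between the plane normal and the assumed "up" axis (0, -1, 0).
    cos_t = sum(a * b for a, b in zip(p.normal, (0.0, -1.0, 0.0)))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t)))) <= max_tilt_deg

detected_planes = [
    PlaneArea(0.50, 0.30, (0.0, -1.0, 0.0), 0.8),  # desk in front of the user
    PlaneArea(0.20, 0.05, (0.0, -1.0, 0.0), 0.6),  # too narrow
    PlaneArea(0.60, 0.40, (0.0, 0.0, -1.0), 1.0),  # a wall, tilted 90 degrees
]
candidates = [p for p in detected_planes if meets_requirements(p)]
print(len(candidates))  # 1 -- only the desk survives the filter
```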
  11. The interaction method for an electronic device according to claim 9, wherein when a plurality of plane areas meet the plane-area requirements, the method further comprises:
    selecting, from the plurality of plane areas, the plane area that better matches a predetermined preferred writing position of the stylus.
  12. The interaction method for an electronic device according to claim 11, wherein selecting, from the plurality of plane areas, the plane area that better matches the preferred writing position according to the predetermined preferred writing position of the stylus comprises:
    separately acquiring the distances between a plurality of candidate plane areas and the preferred writing position;
    selecting the plane area at the smaller distance.
  13. The interaction method for an electronic device according to claim 11, wherein selecting, from the plurality of plane areas, the plane area that better matches the preferred writing position according to the predetermined preferred writing position of the stylus comprises:
    acquiring a preferred writing area corresponding to the preferred writing position;
    separately acquiring the intersection areas of a plurality of candidate plane areas with the preferred writing area;
    selecting the candidate plane area whose intersection area is larger.
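The two selection strategies of claims 12 and 13, sketched on axis-aligned rectangles in writing-plane coordinates; representing a plane area by its center and a bounding rectangle is a simplification made for the example.

```python
def pick_by_distance(planes, preferred_xy):
    """Claim 12: the candidate plane area closest to the preferred writing position."""
    def dist(p):
        cx, cy = p["center"]
        return ((cx - preferred_xy[0]) ** 2 + (cy - preferred_xy[1]) ** 2) ** 0.5
    return min(planes, key=dist)

def rect_intersection(a, b):
    # Overlap area of two axis-aligned (x0, y0, x1, y1) rectangles.
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def pick_by_overlap(planes, preferred_rect):
    """Claim 13: the candidate whose intersection with the preferred writing area is largest."""
    return max(planes, key=lambda p: rect_intersection(p["rect"], preferred_rect))

planes = [
    {"center": (0.2, 0.1), "rect": (0.0, 0.0, 0.4, 0.2)},
    {"center": (0.7, 0.5), "rect": (0.5, 0.4, 0.9, 0.6)},
]
print(pick_by_distance(planes, (0.25, 0.12)))         # the first plane
print(pick_by_overlap(planes, (0.1, 0.0, 0.5, 0.3)))  # also the first plane
```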
  14. The interaction method for an electronic device according to claim 1, wherein the electronic device displaying a virtual interactive interface on the interactive image comprises:
    detecting whether the handwriting area corresponding to the virtual interactive interface in the current scene includes a text medium;
    if the handwriting area includes a text medium, generating an image of the text medium on the virtual interactive interface, or displaying text content recognized from the text medium.
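Claim 14 leaves the detection and recognition techniques open. The sketch below substitutes an edge-density heuristic for the text-medium test and an off-the-shelf OCR engine (Tesseract, via the pytesseract wrapper) for the recognition step; both substitutions, and the density threshold, are assumptions rather than part of the claim.

```python
import cv2
import pytesseract  # assumes the Tesseract OCR engine is installed

def handle_handwriting_area(bgr):
    """Claim 14 sketch: if the handwriting area appears to contain a text medium
    (e.g. a printed page), return the recognized text; otherwise return None."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Crude "text medium" test: printed text produces dense high-contrast edges.
    edges = cv2.Canny(gray, 50, 150)
    if edges.mean() / 255.0 < 0.02:  # assumed edge-density threshold
        return None
    return pytesseract.image_to_string(gray)
```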
  15. The interaction method for an electronic device according to claim 14, wherein the method further comprises:
    displaying an edit button in the virtual interactive interface;
    and responding to the first operation on the virtual interactive interface comprises:
    when a click operation of the stylus at the position corresponding to the edit button is detected, responding with the function corresponding to the edit button.
  16. The interaction method for an electronic device according to claim 15, wherein the method further comprises:
    saving the edited image or text content and/or sending it to other users;
    or selecting text content in the virtual interactive interface, and sending to a network a request to search for the selected text content;
    receiving a result corresponding to the request and displaying it on the virtual interactive interface.
  17. The interaction method for an electronic device according to claim 1, wherein the method further comprises:
    acquiring, through a visual perception module, images of the tip while drawing is performed, and generating a first trajectory according to the images;
    generating a second trajectory from the relative motion information;
    comparing the difference between the first trajectory and the second trajectory, and calibrating the relative motion information according to the difference.
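A sketch of the calibration of claim 17: the camera-observed stroke (the first trajectory) is treated as ground truth, and a per-axis scale correction for the stroke integrated from relative motion (the second trajectory) is estimated by least squares on displacements. Least squares over a shared time base is an assumed choice; the claim does not prescribe how the difference is computed or applied.

```python
import numpy as np

def calibrate_relative_motion(visual_traj, relative_traj):
    """Estimate a per-axis scale correction for the relative motion information.

    visual_traj   -- N x 2 points of the camera-observed stroke (first trajectory)
    relative_traj -- N x 2 points integrated from relative motion (second
                     trajectory), sampled at the same instants (alignment assumed)
    """
    # Work on displacements so any constant offset between the two cancels out.
    dv = np.diff(np.asarray(visual_traj), axis=0)
    dr = np.diff(np.asarray(relative_traj), axis=0)
    # Per-axis least-squares scale: s = sum(dr * dv) / sum(dr * dr).
    return (dr * dv).sum(axis=0) / np.maximum((dr * dr).sum(axis=0), 1e-9)

# Synthetic check: the same stroke, with a 5 % gain error on x in the sensor path.
t = np.linspace(0.0, 1.0, 50)
visual = np.stack([t, 0.1 * np.sin(2 * np.pi * t)], axis=1)
relative = visual.copy()
relative[:, 0] *= 1.05
print(calibrate_relative_motion(visual, relative))  # approx. [0.952, 1.0]
```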
  18. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the interaction method for an electronic device according to any one of claims 1 to 17.
  19. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the interaction method for an electronic device according to any one of claims 1 to 17.
PCT/CN2021/079995 2020-05-14 2021-03-10 Electronic device and interaction method therefor WO2021227628A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010407584.X 2020-05-14
CN202010407584.XA CN113672099A (en) 2020-05-14 2020-05-14 Electronic equipment and interaction method thereof

Publications (1)

Publication Number Publication Date
WO2021227628A1 (en)

Family

ID=78526294

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079995 WO2021227628A1 (en) 2020-05-14 2021-03-10 Electronic device and interaction method therefor

Country Status (2)

Country Link
CN (1) CN113672099A (en)
WO (1) WO2021227628A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117354772A (en) * 2022-06-28 2024-01-05 荣耀终端有限公司 Method for establishing connection with handwriting pen and electronic equipment
CN115421603B (en) * 2022-11-04 2023-04-07 荣耀终端有限公司 Handwriting processing method, terminal device and chip system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012077273A1 (en) * 2010-12-07 2012-06-14 パナソニック株式会社 Electronic device
CN103809751A (en) * 2014-02-12 2014-05-21 北京智谷睿拓技术服务有限公司 Information sharing method and device
CN107918507A (en) * 2016-10-10 2018-04-17 广东技术师范学院 A kind of virtual touchpad method based on stereoscopic vision
CN109074217A (en) * 2016-03-28 2018-12-21 微软技术许可有限责任公司 Application for multiple point touching input detection
CN110520821A (en) * 2017-07-18 2019-11-29 惠普发展公司,有限责任合伙企业 Input, which is projected three-dimension object, to be indicated

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510158A (en) * 2021-12-08 2022-05-17 深圳市康冠商用科技有限公司 Electronic stroke error correction method and device, touch screen device and storage medium
CN115167801A (en) * 2022-09-07 2022-10-11 深圳市方成教学设备有限公司 Information display method based on conference memory all-in-one machine and conference memory all-in-one machine
CN115167801B (en) * 2022-09-07 2022-12-02 深圳市方成教学设备有限公司 Information display method based on conference memory all-in-one machine and conference memory all-in-one machine
CN115617174A (en) * 2022-10-21 2023-01-17 吉林大学 Method for constructing interactive virtual exhibition hall
CN115617174B (en) * 2022-10-21 2023-09-22 吉林大学 Method for constructing interactive virtual exhibition hall
CN115877953A (en) * 2023-02-06 2023-03-31 北京元隆雅图文化传播股份有限公司 Virtual reality glasses
CN115877953B (en) * 2023-02-06 2023-05-05 北京元隆雅图文化传播股份有限公司 Virtual reality glasses

Also Published As

Publication number Publication date
CN113672099A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
WO2021227628A1 (en) Electronic device and interaction method therefor
US9733792B2 (en) Spatially-aware projection pen
US8751969B2 (en) Information processor, processing method and program for displaying a virtual image
US10095030B2 (en) Shape recognition device, shape recognition program, and shape recognition method
CN104838337B (en) It is inputted for the no touch of user interface
WO2017118075A1 (en) Human-machine interaction system, method and apparatus
WO2022022036A1 (en) Display method, apparatus and device, storage medium, and computer program
US20150022551A1 (en) Display device and control method thereof
TW201346640A (en) Image processing device, and computer program product
CN104040469A (en) Content selection in a pen-based computing system
CN103617642B (en) A kind of digital book drawing method and device
CN108027663B (en) Combining mobile devices with person tracking for large display interaction
CN104081307A (en) Image processing apparatus, image processing method, and program
WO2018018624A1 (en) Gesture input method for wearable device, and wearable device
WO2021004412A1 (en) Handheld input device, and method and apparatus for controlling display position of indication icon thereof
CN106293099A (en) Gesture identification method and system
US20200326783A1 (en) Head mounted display device and operating method thereof
JP2008117083A (en) Coordinate indicating device, electronic equipment, coordinate indicating method, coordinate indicating program, and recording medium with the program recorded thereon
Yang et al. 3D character recognition using binocular camera for medical assist
JP4703744B2 (en) Content expression control device, content expression control system, reference object for content expression control, and content expression control program
CN114816088A (en) Online teaching method, electronic equipment and communication system
JP2015184986A (en) Compound sense of reality sharing device
JP4550460B2 (en) Content expression control device and content expression control program
EP3088991B1 (en) Wearable device and method for enabling user interaction
CN113434046A (en) Three-dimensional interaction system, method, computer device and readable storage medium

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21803101; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 21803101; Country of ref document: EP; Kind code of ref document: A1)