CN114745579A - Interaction method based on space writing interface, terminal and storage medium

Info

Publication number
CN114745579A
Authority
CN
China
Prior art keywords
writing
input window
interface
display
writing interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210273350.XA
Other languages
Chinese (zh)
Inventor
师超
陈长国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202210273350.XA
Publication of CN114745579A
Legal status: Pending

Classifications

    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • H04N21/4223 Cameras
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/441 Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/485 End-user interface for client configuration

Abstract

The application provides an interaction method, an interface, a terminal and a storage medium based on a space writing interface, applied to a user terminal. The user terminal is configured to display a space writing interface provided with an input window through which the content the user writes in the air is captured, and both the size and the position of the input window are adjustable. The interaction method includes: acquiring an interactive image captured by a camera associated with the user terminal; recognizing gesture data of a writing part in the interactive image; and determining display data of the space writing interface according to the gesture data and the position of the input window in the space writing interface, so as to update the display picture of the space writing interface based on the display data. The method realizes an interaction mode based on writing in the air: by arranging an adjustable input window in the space writing interface, the position and size of the written content can be changed by adjusting the window, improving the flexibility and convenience of writing in the air.

Description

Interaction method, interface, terminal and storage medium based on space writing interface
Technical Field
The application relates to the technical field of image processing, and in particular to an interaction method, an interface, a terminal and a storage medium based on a space writing interface.
Background
A virtual whiteboard is a virtualized whiteboard that can be deployed on any device with a display screen, such as a television, a computer or a mobile terminal; users can write on it, enabling multi-user interaction.
Some virtual whiteboard products require input through a mouse, which does not match people's writing habits and makes writing and drawing operations less smooth. To improve smoothness, some virtual whiteboard products introduce an air writing (space writing) technique: the user's hand is tracked as it writes in the air, and the written content is projected onto the virtual whiteboard, making writing smoother and more comfortable.
In virtual whiteboard products based on air writing, the entire virtual whiteboard is often treated directly as the inputtable area, so the user has to adjust the writing area by changing the positional relationship between the hand and the camera, which makes writing inflexible.
Disclosure of Invention
The application provides an interaction method, an interface, a terminal and a storage medium based on a space writing interface. An input window corresponding to the camera is arranged in the space writing interface, the content written by the user is entered through that window, and operations such as dragging and zooming the window improve the flexibility and range of application of writing on the space writing interface.
In a first aspect, the present application provides an interaction method based on a space writing interface. The method is applied to a user terminal configured to display a space writing interface provided with an input window, where both the size of the input window and its position in the space writing interface are adjustable, and the method includes:
acquiring an interactive image captured by a camera associated with the user terminal;
recognizing gesture data of a writing part in the interactive image;
and determining display data of the space writing interface according to the gesture data and the position of the input window in the space writing interface, so as to update a display picture of the space writing interface based on the display data.
In a second aspect, the present application provides another interaction method based on a space writing interface. The method is applied to an interactive system comprising a plurality of user terminals, at least some of which are configured to display a space writing interface provided with an input window whose size and position in the interface are both adjustable, and the method includes:
acquiring an interactive image captured by a target camera associated with a target user terminal, wherein the target camera is arranged on the target user terminal and the target user terminal is one of the user terminals;
recognizing gesture data of a writing part in the interactive image;
determining display data of the space writing interface according to the gesture data and the position of the input window in the space writing interface;
and sending the display data to each user terminal provided with the space writing interface, so as to update the display picture of the space writing interface shown by that terminal based on the display data.
In a third aspect, the present application provides an interaction apparatus based on a space writing interface. The apparatus is applied to a user terminal configured to display a space writing interface provided with an input window whose size and position in the interface are both adjustable, and the apparatus includes:
an image acquisition module, configured to acquire an interactive image captured by a camera of the user terminal;
a gesture recognition module, configured to recognize gesture data of a writing part in the interactive image;
and a display updating module, configured to determine display data of the space writing interface according to the gesture data and the position of the input window in the space writing interface, so as to update the display picture of the space writing interface based on the display data.
In a fourth aspect, the application provides a space writing interface on which an input window is arranged, both the size and the position of the input window on the interface being adjustable;
the display picture of the space writing interface is generated based on the method provided by the first or second aspect of the application.
In a fifth aspect, the present application provides a user terminal, including:
a camera, a display, a processor, and a memory communicatively connected to the processor;
the camera is configured to capture interactive images;
the display is configured to display a space writing interface provided with an input window, the input window being adjustable in size and in position within the space writing interface;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to implement the interaction method based on the space writing interface provided by the first or second aspect of the present application.
In a sixth aspect, the present application provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the interaction method based on the space writing interface provided by the first or second aspect of the present application.
In a seventh aspect, the present application provides a computer program product comprising a computer program that, when executed by a processor, implements the interaction method based on the space writing interface provided by the first or second aspect of the present application.
The application provides an interaction method, an interface, a terminal and a storage medium based on a space writing interface. For a user terminal that supports displaying the space writing interface, an input window is arranged on the interface to display the content the user writes in the air, and both the size and the position of the window are adjustable. Gesture data of the writing part in the interactive images collected by the camera in real time is recognized through image analysis, and the display content of the space writing interface is updated based on the gesture data and the position of the input window, so that the content written in the air is projected onto the area where the window sits. Moving and resizing the input window improves the flexibility of input on the space writing interface, and also lets the user enter more content, and images of various sizes, enriching what the interface can accept as input.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic diagram of an application scenario according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an interaction method based on an empty writing interface according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a structure of a blank writing interface according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of step S203 in the embodiment of FIG. 2;
FIG. 5 is a schematic flowchart illustrating a method for interacting based on an empty writing interface according to another embodiment of the present application;
FIG. 6 is a schematic diagram of an edge region of an interactive image according to an embodiment of the present application;
FIG. 7 is a schematic flowchart illustrating a method for interacting based on an empty writing interface according to another embodiment of the present application;
FIG. 8 is a schematic illustration of a blank writing interface for a multiple input window as provided by one embodiment of the present application;
fig. 9 is a schematic structural diagram of a user terminal according to an embodiment of the present application.
Specific embodiments of the present application have been shown by way of example in the drawings and will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The method provided by the application suits various application scenarios: multi-terminal interaction scenarios such as audio/video conferences, online education and live streaming; multi-terminal or human-computer interaction scenarios in the metaverse; and single-terminal scenarios such as plain human-computer interaction.
In a single-terminal scenario, several users can communicate through one user terminal, or a user can interact with the terminal directly, with the content written in the air displayed on the space writing interface shown by the terminal, which makes the interaction more convenient and engaging.
For the single-terminal scenario, the application provides an interaction method based on a space writing interface, applied to a user terminal configured to display a space writing interface provided with an input window whose size and position in the interface are both adjustable. The method includes:
acquiring an interactive image captured by a camera associated with the user terminal; recognizing gesture data of a writing part in the interactive image; and determining display data of the space writing interface according to the gesture data and the position of the input window in the space writing interface, so as to update the display picture of the space writing interface based on the display data.
The user can also save or screenshot the display picture of the space writing interface, storing the content written in the air for convenient recording and publishing.
In an audio/video conference scenario, the conference system comprises a plurality of user terminals participating in the conference, connected through a network and communicating via conference software. To make conference interaction more vivid, interaction can go through a space writing interface plug-in built into the conference software: when the host or another participant writes in the air in front of their terminal's camera, the space writing interface on their own terminal, and on every other terminal that supports displaying it, shows the written content in real time.
In an online education scenario, the online education system comprises a teacher terminal and a plurality of student terminals, connected through the network so that teaching courseware is displayed synchronously. To improve teaching quality, the teacher can write teaching content through the space writing interface displayed on the teacher terminal, in a way that imitates writing on a physical whiteboard or blackboard.
In a live streaming scenario, the live system comprises an anchor terminal and a plurality of fan terminals watching the stream; the anchor streams by operating the live software installed on the anchor terminal, and fans watch through the live software on their own terminals. To make the stream livelier, the anchor can write in the air via the space writing interface plug-in of the live software, so that the written content is displayed synchronously on the anchor terminal and on every fan terminal watching.
For these multi-terminal application scenarios, the application provides an interaction method based on a space writing interface, applied to an interactive system such as a conference system, an online education system or a live streaming system. The system comprises a plurality of user terminals, at least some of which are configured to display a space writing interface provided with an input window whose size and position in the interface are both adjustable. The method includes:
acquiring an interactive image captured by a target camera associated with a target user terminal, where the target camera is arranged on the target user terminal and the target user terminal is one of the user terminals; recognizing gesture data of a writing part in the interactive image; determining display data of the space writing interface according to the gesture data and the position of the input window in the space writing interface; and sending the display data to each user terminal provided with the space writing interface, so as to update the display picture shown by that terminal.
Fig. 1 is a schematic diagram of an application scenario according to an embodiment of the application. As shown in Fig. 1, when a plurality of user terminals 102 interact through a network (e.g., a local or wide area network), for instance in a video conference, collaborative office or online education, a space writing interface 11 such as a virtual whiteboard can be displayed on the screen of each terminal 102. Through the camera 12 on a terminal 102, a user can write in the air on the space writing interface 11, for example the Chinese character for "cloud", and the display content of the interface 11 on every terminal 102 is updated synchronously, so that each terminal shows the character written in the air. This space writing function makes multi-terminal interaction more convenient: users can interact without a physical whiteboard or blackboard.
In existing space writing products, the feed of the camera 12 of the user terminal 102 is often projected directly onto the entire inputtable region of the space writing interface 11: the user writes in the air over that whole region, and the recognized hand-drawn content is synchronized to the interface. Because the camera 12 has a limited field of view, when the user writes by moving only the arm, the amount and form of content that can be input are limited. To input more content, or content at more levels of detail, the user has to keep changing their position relative to the camera 12, e.g. moving away from the camera to write smaller characters, which makes writing in the air inflexible and inconvenient; and when the user is far from the camera, recognition accuracy drops, degrading the experience.
To improve the flexibility and convenience of writing in the air on the space writing interface 11, the application provides a space writing interface with an input window whose position and size are adjustable. The interaction based on this interface proceeds as follows: the interactive image captured by the camera of the target user terminal is analyzed to obtain the gesture data of the writing part; the display data of the space writing interface, e.g. the user's hand-drawn image, is determined from the gesture data and the current position of the input window; and the display pictures of the space writing interfaces shown by the various user terminals are synchronized based on that display data, realizing air-writing interaction.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of an interaction method based on a space writing interface according to an embodiment of the present application. The method is applied to a user terminal and can be executed by its processor or image processing module; the user terminal is any device with image processing capability, a display and a camera, such as any user terminal 102 in the interactive system. The user terminal is configured to display a space writing interface provided with an input window, with both the size of the window and its position in the interface adjustable.
For example, the user can adjust the size of the input window on the space writing interface, or its position in the interface, through an input device of the user terminal such as a mouse, keyboard or touch screen: the window can be enlarged or reduced with the "↑" and "↓" keys or with touch gestures, and dragged by holding the left mouse button or by touch, changing its position in the space writing interface.
As shown in Fig. 2, the interaction method based on the space writing interface includes the following steps:
Step S201, acquiring an interactive image captured by a camera associated with the user terminal.
The camera associated with the user terminal may be built in or external, i.e., a camera integrated into the terminal itself or one connected to it externally, and may be a 2D or 3D camera. After the space writing interface is opened and the camera is turned on, interactive images can be captured frame by frame at a set frequency, such as 30 fps, 60 fps or another rate.
The interactive image contains at least an image of the writing part. The writing part can be the user's hand or a writing pen; the pen may be a physical pen such as a ballpoint pen, gel pen or pencil, or an electronic pen, a pointer, or another stick-shaped object.
Illustratively, the target camera may be a monocular RGB camera, a lidar, an infrared camera, or the like.
Specifically, the camera of the user terminal may capture each frame of the interactive image at the set frequency and send it to the image processing module of the user terminal for processing.
Step S202, recognizing the gesture data of the writing part in the interactive image.
The gesture data of the writing part may include one or more of: the positions of the writing part's key points; the distance and angle of the writing part relative to the target user terminal or target camera; the gesture; and the state of each finger. The state of a finger may include the positions of its joints, its shape, and its connection relationships with the other fingers.
Specifically, the current frame captured by the target camera is acquired and the gesture data of the writing part in it recognized by an image analysis algorithm; that is, each time the target camera captures a new interactive image, the gesture data of the writing part in that image is recognized.
Specifically, the writing part in the interactive image may first be identified and the area where it sits determined; the interactive image is segmented based on that area to extract the writing part; the position coordinates of the extracted part's key points are recognized with a key point recognition algorithm; and the distance and angle of the writing part relative to the target camera, together with the pose relationships among its parts, are determined from those coordinates.
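For concreteness, a minimal sketch of this key point recognition step is given below. The patent does not name a library; MediaPipe's 21-point hand model is assumed here purely as an example of a key point recognition algorithm, with the index fingertip (landmark 8) standing in for the writing tip.

```python
# Illustrative sketch only: the patent prescribes no specific library.
# MediaPipe's 21-point hand model is assumed as one possible key point
# recognition algorithm for a hand-type writing part.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)

def recognize_gesture_data(frame_bgr):
    """Return normalized key points of the writing part (a hand),
    or None if no hand appears in the interactive image."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    landmarks = result.multi_hand_landmarks[0].landmark
    tip = landmarks[8]  # index fingertip as the writing tip (an example choice)
    return {
        "keypoints": [(lm.x, lm.y) for lm in landmarks],  # normalized to [0, 1]
        "tip": (tip.x, tip.y),
    }
```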
Illustratively, the writing part in the interactive image may be identified by a deep learning algorithm, such as a convolutional neural network.
Taking the user's hand as the writing part, the gesture data of the hand in the interactive image can be determined by a gesture recognition algorithm, which may be two- or three-dimensional.
Specifically, a gesture recognition model may be trained on common gestures, and the gesture data of the user's hand in the interactive image then recognized by the trained model.
Further, the type of the writing part can be recognized first and the gesture recognition model chosen based on that type; the interactive image is then fed to the model to obtain the gesture data of the writing part.
Optionally, when the writing part is the user's hand, after recognizing the gesture data of the writing part in the interactive image, the method further includes:
if the gesture of the user's hand in the gesture data is a preset gesture, adjusting the size of the input window based on the positions of the preset key points of that gesture.
The preset gesture may be a two-finger gesture of the user's hand, such as two fingers forming a "<" shape; the preset key points are the joints and fingertips of the two fingers.
The farther apart the corresponding preset key points of the gesture are, the larger the input window becomes.
In one embodiment, the input window may be resized by a resizing instruction generated from the relative motion direction and distance of a first finger and a second finger of the user's hand across consecutive frames. The first and second fingers are two different fingers, e.g. thumb and index finger, thumb and middle finger, or index and middle finger.
Specifically, if the two fingers move toward each other, i.e., the distance between them keeps shrinking, the input window is reduced; if they move apart, i.e., the distance keeps growing, the window is enlarged. The degree of adjustment may be proportional to the distance of the relative movement.
Further, if the user's hand is not in the edge area and forms the preset gesture, the input window is resized based on the positions of the preset key points of the gesture.
Zooming the input window with a preset gesture lets the user enter writing content at multiple scales by resizing the window, which enriches the content of the space writing interface and widens its range of application.
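A minimal sketch of this two-finger zoom, assuming normalized thumb-tip and index-tip coordinates from the recognition step; the gain and minimum size are illustrative values, not taken from the patent:

```python
import math

def update_window_size(window_w, window_h, prev_dist, thumb_tip, index_tip,
                       gain=1.0, min_size=50.0):
    """Scale the input window in proportion to the change in distance
    between the first finger (thumb) and second finger (index)."""
    dist = math.dist(thumb_tip, index_tip)
    if prev_dist is not None:
        # Fingers moving apart -> ratio > 1 -> enlarge; moving together -> shrink.
        ratio = 1.0 + gain * (dist - prev_dist)
        window_w = max(min_size, window_w * ratio)
        window_h = max(min_size, window_h * ratio)
    return window_w, window_h, dist  # feed dist back in as prev_dist next frame
```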
Step S203, determining display data of the space writing interface according to the gesture data and the position of the input window in the space writing interface, and updating the display picture of the space writing interface based on the display data.
The display data may include a display position and a display image. The display position lies within the input window of the space writing interface, and its position in the window corresponds to the position of a target key point of the writing part in the interactive image. The display image may be a dot or square of a set color and size; the color may be a default or user-defined (red, green, black or others), and the size may be determined by the distance between the writing part and the camera, by the size of the input window, or by the number of pixels the target key point occupies in the interactive image, or it may be fixed.
The target key point can be a pen tip, a fingertip, and so on.
The space writing interface is an interface or area displayed on the screen of the user terminal; it can belong to any interactive software, such as a virtual whiteboard. The content written in the air is displayed through this interface: adjusting the position of the input window adjusts where the content appears, and adjusting the size of the window adjusts how large it appears.
In one embodiment, there may be multiple target key points, such as the tips of several fingers, so that the user can write in the air with several fingers simultaneously.
Specifically, after the user terminal opens the space writing interface product or plug-in, the space writing interface may be initialized and displayed on the terminal; after the camera is turned on, the input window of the interface may be initialized.
For example, initializing the input window may display a blank window of a default size at a default position of the space writing interface, such as the center or the upper-left corner, with the interactive image captured by the camera shown in the window in real time at a certain transparency.
Specifically, the position of the content the user is currently writing can be determined, within the current interactive image, from the gesture data; the position of that content in the space writing interface is then determined from the position of the input window in the interface and the content's position in the image; and the display data of the space writing interface is generated from the written content and its position in the interface.
Further, after the display data is determined, the space writing interface displayed by the target user terminal may be updated based on it.
Specifically, the display position corresponding to a preset key point can be determined from the position of the input window in the space writing interface and the position coordinates of the preset key point of the writing part in the gesture data; a display image is obtained; and display data is generated from the display position and the display image, so that rendering the data shows the display image at the display position of the space writing interface.
In one embodiment, the display images may include a historical display image and a newly added display image: the historical image is what the space writing interface displayed before the current frame was captured, and the newly added image is the image corresponding to the display position.
Specifically, after the display position is determined, the display data may be derived from the display position and the historical writing data of the space writing interface, and rendered on the interface so that the content the user input in the current frame is added at the display position, alongside the writing content corresponding to the historical data.
Further, the display data can be sent to each user terminal in the system that is provided with the space writing interface, so that the interface displayed by each terminal is updated based on the data.
In one embodiment, the display picture of the terminal's space writing interface can be sent to the other terminals provided with the interface, keeping the display pictures consistent across the system.
Specifically, each user terminal may update the display picture of its space writing interface based on the received display data and the configuration of its own display screen, synchronizing the interfaces of the interactive system.
Specifically, the space writing interfaces may be rendered based on the determined display data, so that the content written in the air by the user of the target terminal appears in the input window of every interface.
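The patent leaves the transport between terminals open. A hedged sketch of the broadcast step, assuming a hypothetical list of connection objects with a send() method and a shows_writing_interface flag (both invented here for illustration):

```python
import json

def broadcast_display_data(display_data, terminals):
    """Send display data to every user terminal showing the space
    writing interface so each terminal re-renders locally."""
    payload = json.dumps({
        "type": "air_writing_update",
        "display_position": display_data["position"],  # (x, y) in interface coords
        "display_image": display_data["image"],        # e.g. dot color and size
    })
    for terminal in terminals:
        if terminal.shows_writing_interface:  # assumed attribute, see lead-in
            terminal.send(payload)            # assumed transport method
```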
In an embodiment, the writing interface and the input window may correspond to different layers; the content written in the air may be placed on the layer of the input window, or a new layer may be created to hold it.
In one embodiment, the input window is also provided with a scroll bar through which the user can adjust the window's size.
In one embodiment, a scroll bar is also arranged on the space writing interface itself, through which the user can adjust the position of the input window on the interface.
For example, Fig. 3 is a schematic structural diagram of a space writing interface according to an embodiment of the present application. As shown in Fig. 3, scroll bars are arranged both on the space writing interface and on its input window. The dashed square in Fig. 3 indicates the inputtable region of the input window, i.e., the user writes in the air within the part of the target camera's field of view corresponding to that region. The scroll bar on the input window adjusts the window's size (zoom control), and the scroll bar on the space writing interface moves the window in the corresponding direction.
Further, the display data of the space writing interface can be determined from the gesture data of the writing part across several consecutive frames of interactive images (2, 3 or another number) and the current position of the input window in the interface.
Specifically, a writing trajectory can be determined from the gesture data across those consecutive frames and coordinate-converted according to the input window's current position, yielding the display data of the space writing interface.
In the interaction method of this embodiment, for a user terminal that supports displaying the space writing interface, an input window is arranged on the interface to show the content the user writes in the air, with both the size and the position of the window adjustable. Image analysis recognizes the gesture data of the writing part in the interactive images that the camera captures in real time, and the display content of the space writing interface is updated from the gesture data and the window's position, projecting the content written in the air onto the area the input window occupies. Moving and resizing the input window makes input on the space writing interface more flexible, and lets the user enter more content, and images of various sizes, enriching what the interface can take as input.
In one embodiment, the space writing interface may consist of three layers, from bottom to top: a whiteboard layer, an input window layer and an air writing layer. The whiteboard layer displays the space writing interface and may include its frame or container; the input window layer displays the input window and may include the window's frame or container, and possibly the interactive image captured by the camera (for example shown at a certain transparency, or after cropping); the air writing layer displays the image corresponding to the display data, i.e., the content written in the air.
In an embodiment, before writing in the air, the user may first adjust the position and size of the input window, e.g. by dragging the scroll bars arranged on the space writing interface and the input window, through the keyboard of the user terminal, or in other ways such as issuing a voice command; this application does not limit the method.
Optionally, Fig. 4 is a schematic flowchart of step S203 in the embodiment shown in Fig. 2. As shown in Fig. 4, step S203 may include the following steps:
Step S401, determining the image coordinates of the writing tip of the writing part in the interactive image according to the gesture data.
The writing tip may be determined by one or more preset key points.
Specifically, when the writing part is the user's hand, the writing tip can be the fingertip of one finger or the midpoint between two touching fingertips; when the writing part is a writing pen, the writing tip is the end of the pen closest to the camera.
Specifically, the image coordinates of the writing tip in the interactive image may be determined from the positions of the preset key points.
Further, it can first be checked whether the gesture data contains the positions of the preset key points; if so, the image coordinates of the writing tip are determined from the positions of those key points in the interactive image.
For example, the fingertip of the index or middle finger may be taken as the writing tip, its position in the interactive image being the tip's image coordinates; or the midpoint between the touching fingertips of the thumb and index finger may be taken as the writing tip, that midpoint's position being the tip's image coordinates.
Optionally, whether the writing part in the interactive image is in a writing state may first be judged from the recognized gesture data; if it is, the image coordinates of the writing tip are determined from the gesture data.
Specifically, when the writing part is the user's hand, whether the hand forms a preset gesture can be judged from the gesture data; if it does, the writing part is in a writing state.
Illustratively, the preset gesture may be an "OK" gesture or a single extended finger.
Specifically, when the writing part is a writing pen, the pen may be any handheld object, and whether it is in a writing state can be judged from the angle between the pen and the camera in the gesture data: if the angle is within a preset range, the pen is in a writing state. The judgment can also combine distance and angle: if the distance between the pen and the camera is smaller than a first distance and the angle is within the preset range, the pen is in a writing state.
For example, when the angle between the pen and the camera is within the preset range, the tip of the pen may be pointing toward the camera.
Further, the writing state can be judged from the position of the pen end facing the camera (tip or tail), or from its distance to the camera: if that distance is smaller than a second distance, or has decreased by at least a third distance, the pen is in a writing state.
Judging the writing state of the writing part avoids running gesture recognition while the user is not writing and improves the accuracy with which recognition is triggered.
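One possible reading of the pen-based writing-state test, sketched with illustrative thresholds (the patent only says "within a preset range" and "smaller than a first distance"):

```python
def pen_is_writing(pen_angle_deg, pen_cam_dist,
                   angle_range=(70.0, 110.0), first_distance=0.8):
    """Judge the writing state of a writing pen from its gesture data:
    angle between pen and camera within a preset range, and pen closer
    to the camera than a first distance. Threshold values are assumptions."""
    lo, hi = angle_range
    return lo <= pen_angle_deg <= hi and pen_cam_dist < first_distance
```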
Step S402, determining the display coordinates of the writing tip in the space writing interface according to the tip's image coordinates and the position of the input window in the space writing interface.
Specifically, a first coordinate transformation can be determined from the position of the input window in the space writing interface, and the image coordinates of the writing tip converted through it to obtain the tip's display coordinates in the interface.
Further, the position of the writing tip within the input window may be determined from the tip's image coordinates and the size of the input window, and the tip's display coordinates in the space writing interface determined from the position of the input window in the interface and the tip's position within the window.
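A sketch of this first coordinate transformation, assuming the writing tip arrives as normalized image coordinates and the input window is described by its top-left corner and size in interface coordinates:

```python
def tip_display_coords(tip_norm, window_pos, window_size):
    """Map the writing tip's normalized image coordinates (u, v) in [0, 1]
    to display coordinates on the space writing interface."""
    u, v = tip_norm
    x0, y0 = window_pos    # top-left of the input window in the interface
    w, h = window_size
    # Position of the tip within the input window ...
    local_x, local_y = u * w, v * h
    # ... offset by the window's position in the interface.
    return x0 + local_x, y0 + local_y
```

With this mapping, shrinking the window shrinks the written strokes and moving the window moves them, which is exactly how the adjustable input window changes the size and position of the written content.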
Step S403, determining the display data of the space writing interface according to the display coordinates of the writing tip.
Specifically, the writing trajectory corresponding to several consecutive frames may be determined from the display coordinates of the writing tip in those frames, the display data of the space writing interface derived from the trajectory, and the trajectory added to the interface's display picture by rendering the data.
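A sketch of accumulating the trajectory across frames; the stroke-break threshold is an assumption, since the patent does not say how strokes are separated:

```python
import math

def extend_trajectory(strokes, tip_coords, max_gap=40.0):
    """Append the tip's display coordinates from the current frame to the
    writing trajectory; start a new stroke when the tip jumps farther
    than max_gap (an illustrative threshold) between frames."""
    if strokes and math.dist(strokes[-1][-1], tip_coords) <= max_gap:
        strokes[-1].append(tip_coords)   # extend the current stroke
    else:
        strokes.append([tip_coords])     # begin a new stroke
    return strokes
```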
Further, the trajectory can be merged with the historical trajectory or historical writing data into a new trajectory, the display data determined from it, and the new trajectory shown in the space writing interface by rendering the data.
In this embodiment, the image coordinates of the writing tip are obtained by tracking the tip in the interactive image and then converted, based on the position of the input window in the space writing interface, into the tip's position in the interface, i.e., its display coordinates. A display image is added at those coordinates, so the content the user writes in the air appears at the corresponding position within the input window; this interaction is easy to implement. Since the input window bounds the currently inputtable area of the interface, the user can change where the input content appears, and how large it is, simply by adjusting the window, improving the flexibility of writing in the air.
In one embodiment, the input window can display, in real time, the interactive image captured by the camera of the user terminal, e.g. overlaid on the window at a certain transparency such as 50%, 60% or another value, so that the user can use the displayed image to position the writing part and improve the quality of writing in the air.
In an embodiment, the space writing interface may consist of four layers, from bottom to top: a whiteboard layer, an interactive image layer, an input window layer and an air writing layer. The whiteboard layer displays the space writing interface, e.g. its frame, container and scroll bar; the interactive image layer displays the interactive image or video captured by the camera at a certain transparency; the input window layer displays the input window, e.g. its frame, container and scroll bar; and the air writing layer displays the content written in the air, i.e., the image corresponding to the display data.
For example, the content written in the air can be Chinese characters, English words, lines, or figures of various shapes such as circles, triangles and rectangles.
In one embodiment, the display data may be rendered with a particle effect.
Specifically, the interactive image may be displayed semi-transparently and mirrored in the input window of the space writing interface to assist the user in writing in the air.
To help the user locate the input position while writing in the air and so improve the experience, after the camera captures the interactive image it can be cropped, scaled and otherwise processed, and the processed image overlaid, at a certain transparency, on the area occupied by the input window. Alternatively, after the interactive image is acquired, a virtual hand can be rendered in the input window at the position corresponding to the user's writing part, indicating where the content currently being written falls within the window.
Specifically, the corresponding position or area of the writing part in the input window may be determined from the writing part's image coordinates in the interactive image.
For example, the virtual hand may show a single extended finger, such as the index finger.
Optionally, after obtaining the interactive image captured by the camera of the user terminal, the method further includes:
cropping the interactive image; scaling the cropped image to a first size, where the first size is the size of the input window; and overlaying the scaled image on the input window at a set transparency.
The set transparency may be 50%, 60%, 70% or another value.
In one embodiment, the cropping step may be omitted and the interactive image scaled directly to the first size, the scaled image then being overlaid on the input window at the set transparency.
Specifically, cropping the interactive image may mean cutting away its edge area, i.e., a region of a certain width along one or more of its edges.
Specifically, part of each of the top, bottom, left and right edges of the captured image, e.g. 10%, 5% or another proportion of the length in the corresponding direction, may be cut off; the cropped image is scaled to the size of the input window, and the scaled image is overlaid at the window's position on the space writing interface with a certain transparency.
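A sketch of the crop-scale-overlay step with OpenCV, using the 5% margin and 50% transparency mentioned above as example values:

```python
import cv2

def overlay_on_window(frame, window_img, crop_ratio=0.05, alpha=0.5):
    """Crop a margin from every edge of the interactive image, scale the
    result to the input window's size (the "first size"), and blend it
    over the window contents at the set transparency."""
    h, w = frame.shape[:2]
    dy, dx = int(h * crop_ratio), int(w * crop_ratio)
    cropped = frame[dy:h - dy, dx:w - dx]
    win_h, win_w = window_img.shape[:2]
    scaled = cv2.resize(cropped, (win_w, win_h))
    # Alpha-blend the camera image over the current window contents.
    return cv2.addWeighted(scaled, alpha, window_img, 1.0 - alpha, 0)
```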
Because recognition accuracy is relatively poor near the edges of the image, cropping the edge area of the interactive image improves the accuracy of gesture data recognition. The edge area can also serve as the active area for repositioning the input window: when the writing part is in the edge area, the window is dragged based on the position of the writing part's preset key point, changing the window's position on the space writing interface.
Fig. 5 is a flowchart of an interaction method based on a space writing interface according to another embodiment of the present application. This embodiment builds on the embodiment shown in Fig. 4, refines step S401, and adds a step that checks the position of the writing part before step S401. As shown in Fig. 5, the method of this embodiment may include the following steps:
step S501, acquiring an interactive image acquired by a camera associated with a user terminal.
Step S502, recognizing the posture data of the writing part in the interactive image.
Step S503, judging whether the writing component is in the edge area of the interactive image according to the posture data.
The edge region may be a region corresponding to one or more edges of the interactive image. The area corresponding to each edge may be an area where a pixel having a distance from the edge of the edge smaller than a set distance is located, and the set distances corresponding to different edges may be different. The set distance may be 5%, 10%, or other percentage of the length of the interactive image in the corresponding direction, or a fixed distance.
Specifically, whether the writing part is in the edge area of the interactive image may be determined according to the positions of the key points of the writing part, or of its outer contour, in the posture data. The key point of the writing part may be the preset key point, the writing pen point, or another key point.
For example, when the position of any key point or any point on the outer contour of the writing part falls into the edge region, the writing part is determined to be located in the edge region of the interactive image.
For example, the writing part is determined to be in the edge area when any fingertip, either end of the pen, or the center of the writing part is located in the edge area.
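A minimal sketch of this edge-area test of step S503, assuming the posture data has been reduced to (x, y) pixel keypoints; the 10% margin is the illustrative proportion mentioned above:

```python
# A sketch of the edge-area membership test; margin_ratio is illustrative.
def in_edge_area(keypoints, img_w: int, img_h: int,
                 margin_ratio: float = 0.10) -> bool:
    """True if any keypoint of the writing part falls inside the edge region."""
    mx, my = img_w * margin_ratio, img_h * margin_ratio
    return any(x < mx or x > img_w - mx or y < my or y > img_h - my
               for x, y in keypoints)
```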
Step S504, if the writing part is located in the edge area of the interactive image, determining a moving instruction of the input window according to the image coordinate of the writing part in the interactive image, and updating the position of the input window in the spaced writing interface according to the moving instruction.
The moving instruction may include a moving direction and a moving distance of the input window, or only a moving direction.
Specifically, when the writing part is determined to be located in the edge area of the interactive image, the edge or corner point of the interactive image closest to the writing part is determined based on the writing part's image coordinates, such as the image coordinates of its writing pen point, fingertip, or center. A moving instruction for the input window is then generated based on that closest edge or corner point, so that the input window is controlled to move in the direction corresponding to that edge or corner point.
In one embodiment, the edge regions may be divided to obtain sub-edge regions, and each sub-edge region corresponds to a move instruction. When the writing part is located in one of the sub-edge regions, the input window is controlled to move in the space writing interface based on the moving instruction corresponding to the sub-edge region, so that the position of the input window in the space writing interface is updated.
The sub-edge region may include a region corresponding to each edge of the interactive image, and may further include a region corresponding to each corner of the interactive image. The moving directions of the sub-edge areas are different.
Illustratively, the moving directions of two adjacent sub-edge regions differ by 45°. The moving direction of a sub-edge region may be the direction from the center of the interactive image to the center or midpoint of the sub-edge region.
For example, the sub-edge region corresponding to the corner point may be a region where pixels having a distance from the corner point smaller than a preset radius are located. The sub-edge region corresponding to the edge may be a region where a pixel whose distance from the midpoint of the edge is smaller than a preset radius is located, or a region where a rectangle whose center is the midpoint of the edge is located.
Exemplarily, Fig. 6 is a schematic diagram of the edge area of an interactive image provided by an embodiment of the present application. As shown in Fig. 6, the edge area of the interactive image includes 8 sub-edge areas, namely areas 601 to 608, which are the sub-edge areas corresponding to the four corner points A, B, C, and D and to the four edges AB, BC, CD, and DA; the moving direction of each sub-edge area is indicated by the arrow in that area. When the user's hand is located in the sub-edge area corresponding to corner point A, the input window of the spaced writing interface moves along direction D_A; when the user's hand is located in the sub-edge area corresponding to edge AB, the input window moves along direction D_AB; and so on.
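The mapping from the writing part's position to a move instruction could be sketched as follows, under the simplifying assumption that the eight sub-edge areas of Fig. 6 are approximated by the left/right and top/bottom margin bands, so that a point in two bands at once (a corner region) yields the 45° diagonal direction; all names are illustrative:

```python
# A sketch of deriving the move direction (from the image centre towards the
# sub-edge region) for the input-window moving instruction.
import math

def move_direction(x: float, y: float, img_w: int, img_h: int,
                   margin_ratio: float = 0.10):
    """Return a unit (dx, dy) move vector, or None outside the edge area."""
    mx, my = img_w * margin_ratio, img_h * margin_ratio
    left, right = x < mx, x > img_w - mx
    top, bottom = y < my, y > img_h - my
    if not (left or right or top or bottom):
        return None                      # interior: no move instruction
    dx = -1.0 if left else (1.0 if right else 0.0)
    dy = -1.0 if top else (1.0 if bottom else 0.0)
    norm = math.hypot(dx, dy)            # corners give diagonal (45-degree) moves
    return dx / norm, dy / norm
```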
Step S505, if the writing part is not located in the edge area of the interactive image, judging whether the state of the writing part is a writing state according to the posture data.
Optionally, when the writing component is a hand of a user, determining whether the state of the writing component is a writing state according to the posture data includes:
and if the distance between the image coordinates of the fingertips of any two fingers of the hand of the user is smaller than a preset distance, or if the distance between the hand of the user and the camera meets a preset condition, determining that the state of the hand of the user is a writing state.
For example, the preset distance may be 5 pixels, 10 pixels, or other values.
Specifically, the distance between the user's hand and the camera satisfying the preset condition may mean that the distance between a preset key point of the hand, such as any fingertip, and the camera is smaller than a first distance, or that the distance between the hand and the camera in the previous frame of interactive image, minus the distance in the current frame, is larger than a first difference. This manner of determining the writing state based on distance is also suitable for the case where the writing part is a writing pen.
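A minimal sketch of this writing-state decision, assuming fingertip positions in pixels and an optional hand-to-camera distance (for example from depth estimation); the 10-pixel gap echoes the example value above, while first_distance is an assumed placeholder:

```python
# A sketch of the writing-state test: close fingertips or a near hand.
import math
from itertools import combinations

def is_writing_state(fingertips, hand_distance=None,
                     preset_pixel_gap: float = 10.0,
                     first_distance: float = 0.4) -> bool:
    """Writing state if any two fingertips are closer than the preset distance,
    or the hand is closer to the camera than the first distance."""
    for (x1, y1), (x2, y2) in combinations(fingertips, 2):
        if math.hypot(x1 - x2, y1 - y2) < preset_pixel_gap:
            return True
    return hand_distance is not None and hand_distance < first_distance
```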
By judging the writing state of the writing part, writing content is not recognized while the user is in a non-writing state, which improves the accuracy of writing-recognition triggering.
Further, if the state of the writing part is not the writing state, the in-air writing layer of the space writing interface is not updated.
Specifically, when the writing part is determined to be in the pen-up state according to the posture data, the interactive image is projected to the area where the input window is located on the spaced writing interface with a certain transparency, and the written content of the spaced writing interface is not updated.
Specifically, the writing part being in the pen-up state may mean that the distance between the writing pen point of the writing part and the camera is greater than the first distance, or that the gesture of the user's hand is a pen-up gesture, for example two fingers forming a "C" shape.
If the writing part is a user hand, when the user hand is not located in the edge area of the interactive image, the user gesture in the gesture data can be acquired, and if the user gesture is a writing gesture, such as a single-finger extending gesture, a double-finger contacting gesture and the like, the user hand is determined to be in a writing state; if the user gesture is a pen-up gesture, determining that the hand of the user is in a pen-up state; and if the user gesture is a clearing gesture, such as a fist-making gesture, clearing the display content of the blank writing interface.
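The gesture-to-action dispatch just described could be sketched as follows; the gesture labels, the WritingSurface class, and its fields are illustrative assumptions rather than an API defined by this application:

```python
# A sketch of dispatching recognized gestures to writing-state actions.
class WritingSurface:
    """Minimal stand-in for the air-writing layer of the spaced writing interface."""
    def __init__(self):
        self.state = "pen_up"   # current writing state
        self.strokes = []       # displayed written content

    def clear(self):
        self.strokes.clear()    # clearing gesture wipes the display content

def handle_gesture(gesture: str, surface: WritingSurface) -> None:
    if gesture in ("single_finger", "two_finger_touch"):
        surface.state = "writing"   # writing gestures: track the tip and draw
    elif gesture == "pen_up":
        surface.state = "pen_up"    # pen-up: overlay stays, nothing is drawn
    elif gesture == "fist":
        surface.clear()             # fist: clear the spaced writing interface
```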
Step S506, if the writing part is in a writing state, determining the image coordinates of the writing pen point of the writing part in the interactive image according to the posture data.
Step S507, determining the display coordinates of the writing pen point in the spaced writing interface according to the image coordinates of the writing pen point and the position of the input window in the spaced writing interface.
Step S508, determining display data of the spaced writing interface according to the display coordinates of the writing pen point, and updating the display picture of the spaced writing interface based on the display data.
Specifically, when a plurality of writing pen points exist in the interactive image, the image coordinates of each writing pen point in the interactive image may be determined separately, so that the display coordinates of each writing pen point in the spaced writing interface are determined based on its image coordinates and the position of the input window in the spaced writing interface. The display data of the spaced writing interface are then determined based on the display coordinates of all the writing pen points, so that the content written with each writing pen point is displayed at the corresponding display coordinates on the spaced writing interface.
Alternatively, when a plurality of writing pen points exist in the interactive image, only the image coordinates of a target writing pen point may be determined, so that the display coordinates of the target writing pen point in the spaced writing interface are determined based on its image coordinates and the position of the input window in the spaced writing interface. The display data of the spaced writing interface are then determined based on the display coordinates of the target writing pen point, so that the content written by the user corresponding to the target writing pen point is displayed at those display coordinates. The target writing pen point may be the one occupying the largest number of pixels among the plurality of writing pen points, or the writing pen point of the user authorized for the user terminal.
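A minimal sketch of the coordinate mapping of steps S506 and S507, assuming the writing pen point's image coordinates come from the (possibly cropped) interactive image and the input window is given as an (x, y, w, h) rectangle on the spaced writing interface; the names are illustrative:

```python
# A sketch of mapping image coordinates to display coordinates in the window.
def to_display_coords(tip_x: float, tip_y: float,
                      img_w: int, img_h: int,
                      window_rect: tuple) -> tuple:
    """Scale the pen-point position to the window size (the first size), then
    offset by the window's position on the spaced writing interface."""
    x, y, win_w, win_h = window_rect
    return (x + tip_x / img_w * win_w,
            y + tip_y / img_h * win_h)
```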
Further, the user can delete or erase part of the displayed written content. An eraser button may be arranged on the spaced writing interface; when the eraser button is in the pressed state, the written content displayed at the display coordinates of the writing pen point is deleted or erased.
In this embodiment, the posture of the writing part in the interactive image is recognized through image analysis. If the writing part is located in the edge area of the interactive image, the position of the input window on the spaced writing interface is adjusted in the corresponding direction based on the area in which the writing part is located; this control mode is convenient, because the user can change the display position of the air-written content on the spaced writing interface without changing his or her own position, improving the convenience of writing in the air. When the writing part is not located in the edge area, whether it is in a writing state can be judged based on its posture data; if so, the writing part is located and tracked through its writing pen point, and the user's written content is displayed at the position of the input window corresponding to the writing pen point. Adding the writing-state judgment improves the accuracy of displaying written content, and the user can control writing on the spaced writing interface by changing the state of the writing part, for example by changing gestures or the distance between the writing part and the camera, which improves the controllability and the experience of writing in the air on the spaced writing interface.
Fig. 7 is a schematic flowchart of an interaction method based on a spaced writing interface according to another embodiment of the present application. This embodiment is applied to an interactive system for multi-terminal interaction scenarios, such as the above-mentioned conference interaction system, online education system, or live broadcast system, and can be executed by any user terminal of the interactive system that is provided with a display and a camera. The interactive system comprises a plurality of user terminals, at least some of which are configured to display a spaced writing interface provided with an input window, where both the size of the input window and its position in the spaced writing interface are adjustable.
In one embodiment, each user terminal in the interactive system may support displaying a virtual window, and an input window is disposed on the virtual window displayed by at least one user terminal.
In one embodiment, a user terminal that does not support the display of virtual windows, such as a landline, may be included in the interactive system.
As shown in fig. 7, the interaction method based on the spaced writing interface applied to the interactive system provided by the present embodiment includes the following steps:
step S701, acquiring an interactive image acquired by a target camera associated with a target user terminal.
The target camera is arranged on a target user terminal, and the target user terminal is one of the user terminals.
Specifically, the camera (target camera) of one user terminal (the target user terminal) among the plurality of user terminals is turned on. For example, if a user enables the air-writing module on the spaced writing interface of the target user terminal, the target camera of the target user terminal is turned on and controlled to capture interactive images within its field of view at a set frequency.
When the air-writing modules on the spaced writing interfaces of multiple user terminals are enabled, the user terminal enabled earliest may be determined as the target user terminal, or the user terminal with the highest priority may be so determined; the target camera of the target user terminal is then turned on and controlled to capture interactive images at the set frequency.
In one embodiment, multiple input windows, such as 2, 3 or other numbers, may be provided on the spaced writing interface to allow multiple users to write in the air simultaneously via the camera of the user terminal.
When the cameras of multiple user terminals displaying the spaced writing interface are opened, that is, when multiple opened target cameras exist, multiple input windows are initialized on the spaced writing interface, each input window corresponding to one opened camera, and the areas occupied by the input windows do not overlap one another.
Specifically, for each target user terminal, a target input window of the multiple input windows displayed on the spaced writing interface of the target user terminal may display an interactive image acquired by a target camera of the target user terminal and corresponding content written in the air by the user in real time, where the target input window is an input window corresponding to the target camera of the target user terminal. The other input windows except the target input window may only display the corresponding content written by the user in the air, that is, the corresponding display data, or may display the interactive image acquired by the corresponding camera and the corresponding content written by the user in the air in real time.
In an embodiment, the user may further set the interactive image (or the content displayed in the interactive image layer) shown in the input window corresponding to the user, or to the camera of the user terminal operated by the user, to a default image or an image of an avatar. For example, the interactive image displayed in the user's corresponding input window on other user terminals may be set to the default image or the avatar image.
For example, Fig. 8 is a schematic diagram of a spaced writing interface with multiple input windows according to an embodiment of the present application. As shown in Fig. 8, three input windows, namely input windows 81, 82, and 83, are provided on the spaced writing interface, each corresponding to one target user terminal or one target camera. The spaced writing interface shown in Fig. 8 is the one displayed by the target user terminal corresponding to input window 81: the interactive image 801 captured by that terminal's camera, and the written content (or display data) corresponding to the writing part in image 801, such as the sinusoidal curve in Fig. 8, can be displayed at input window 81 in real time and with a certain transparency. Input windows 82 and 83 display only the written content (or display data) corresponding to the writing parts in the interactive images captured by the cameras of their respective user terminals; that is, input windows 82 and 83 respectively display a cosine curve and a parabola drawn in the air by the corresponding users.
Step S702, recognizing the posture data of the writing part in the interactive image.
Step S703, determining display data of the spaced writing interface according to the posture data and the position of the input window in the spaced writing interface.
Step S704, sending the display data to each user terminal provided with the spaced writing interface, so as to update the display picture of the spaced writing interface displayed by the user terminal based on the display data.
Specifically, the target user terminal sends the determined display data of the spaced writing interface to each user terminal in the conference system that displays the spaced writing interface, so that the display picture of every spaced writing interface is updated and the content written in the air by the user of the target user terminal is displayed in the area of the input window corresponding to the target user terminal's camera on each spaced writing interface.
Specifically, each target user terminal may send the determined display data to each user terminal of the conference system that displays the spaced writing interface, thereby implementing synchronization of the display frames of each spaced writing interface.
When a user terminal of the conference system receives the display data, it loads and renders the display data, thereby updating the display picture of its spaced writing interface.
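The synchronization of steps S703 and S704 could be sketched as follows, with the transport abstracted as a list of send callables, one per connected user terminal; the payload fields are illustrative assumptions, not a protocol defined by this application:

```python
# A sketch of pushing display data to every terminal showing the interface.
import json
from typing import Callable, Iterable, List, Tuple

def broadcast_display_data(window_id: str,
                           points: Iterable[Tuple[float, float]],
                           senders: List[Callable[[str], None]]) -> None:
    """Serialize the stroke points for one input window and send them out."""
    payload = json.dumps({
        "window": window_id,                          # which input window to update
        "points": [[round(x, 1), round(y, 1)] for x, y in points],
    })
    for send in senders:      # one send callable per connected user terminal
        send(payload)         # the receiver loads, renders, updates its display
```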
An embodiment of the present application provides an interaction apparatus based on a spaced writing interface. The apparatus is applied to a user terminal, and the user terminal is configured to display a spaced writing interface provided with an input window, where both the size of the input window and its position in the spaced writing interface are adjustable.
In one embodiment, the user terminal may be one user terminal in an interactive system comprising a plurality of user terminals, at least some of the user terminals being configured to display a spaced-apart writing interface provided with an input window.
The interaction apparatus based on the spaced writing interface comprises an image acquisition module, a gesture recognition module, and a display updating module. The image acquisition module is configured to acquire an interactive image captured by a camera associated with the user terminal; the gesture recognition module is configured to recognize posture data of a writing part in the interactive image; and the display updating module is configured to determine display data of the spaced writing interface according to the posture data and the position of the input window in the spaced writing interface, so as to update the display picture of the spaced writing interface based on the display data.
Optionally, the display update module includes:
an image coordinate determination unit, configured to determine the image coordinates of a writing pen point of the writing part in the interactive image according to the posture data; a display coordinate determination unit, configured to determine the display coordinates of the writing pen point in the spaced writing interface according to the image coordinates of the writing pen point and the position of the input window in the spaced writing interface; and a display data determination unit, configured to determine the display data of the spaced writing interface according to the display coordinates of the writing pen point, so as to update the display picture of the spaced writing interface based on the display data.
Optionally, the image coordinate determining unit includes:
a writing state determination subunit, configured to determine, according to the posture data, whether the state of the writing component is a writing state; and the image coordinate determining subunit is used for determining the image coordinate of the writing pen point of the writing part in the interactive image according to the posture data if the writing part is in the writing state.
Optionally, when the writing component is a hand of a user, the writing state determining subunit is specifically configured to:
and if the distance between the image coordinates of the fingertips of any two fingers of the hand of the user is smaller than a preset distance, or if the distance between the hand of the user and the target camera meets a preset condition, determining that the state of the hand of the user is a writing state.
Optionally, the apparatus further comprises:
the position adjusting unit is used for judging whether the writing part is in the edge area of the interactive image or not according to the posture data before judging whether the state of the writing part is in the writing state or not according to the posture data; if so, determining a moving instruction of the input window according to the image coordinates of the writing part in the interactive image, and updating the position of the input window in the spaced writing interface according to the moving instruction.
Correspondingly, the writing state determining subunit is specifically configured to:
and if the writing part is not positioned in the edge area of the interactive image, judging whether the state of the writing part is a writing state or not according to the posture data.
Optionally, the writing part is a hand of a user, and the apparatus further comprises:
a size adjusting unit, configured to, after the posture data of the writing part in the interactive image is recognized, adjust the size of the input window based on the positions of the preset key points of the preset gesture if the gesture of the user's hand in the posture data is the preset gesture.
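A minimal sketch of such a size adjustment, under the assumption that two preset key points of the preset gesture (for example, thumb tip and index fingertip) span opposite corners of the desired window; this particular mapping is an illustrative assumption:

```python
# A sketch of deriving a new input-window size from two gesture keypoints.
def resize_input_window(kp1, kp2, img_w: int, img_h: int,
                        interface_w: int, interface_h: int):
    """Map the span between two keypoints (image pixels) to a window size
    (interface pixels)."""
    dx = abs(kp1[0] - kp2[0]) / img_w     # normalized horizontal span
    dy = abs(kp1[1] - kp2[1]) / img_h     # normalized vertical span
    return max(1, int(dx * interface_w)), max(1, int(dy * interface_h))
```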
Optionally, the apparatus further comprises:
and the display data synchronization module is used for sending the display data to each user terminal provided with the spaced writing interface so as to update the display picture of the spaced writing interface displayed by each user terminal based on the display data.
Optionally, the apparatus further comprises:
the interactive image rendering module is used for cutting the interactive image after the interactive image acquired by a camera of the user terminal is acquired; scaling the cropped interactive image to a first size, wherein the first size is the size of the input window; and overlaying the scaled interactive image to the input window with a set transparency.
The interaction apparatus based on the spaced writing interface provided by this embodiment of the application can be used to execute the technical solutions provided by any of the embodiments corresponding to Fig. 2, Fig. 4, and Fig. 5 above; the implementation principles and technical effects are similar and are not described again here.
An embodiment of the present application provides a spaced writing interface, wherein an input window is arranged on the spaced writing interface, and both the size of the input window and its position on the spaced writing interface are adjustable; the display picture of the spaced writing interface is generated based on the method provided by any one of the preceding embodiments of the present application.
Fig. 9 is a schematic structural diagram of a user terminal according to an embodiment of the present application, and as shown in fig. 9, the user terminal according to the embodiment includes:
at least one processor 910; and a memory 920 communicatively coupled to the at least one processor; wherein the memory 920 stores computer-executable instructions, and the at least one processor 910 executes the computer-executable instructions stored in the memory to cause the user terminal to perform the method provided by any of the preceding embodiments.
Alternatively, the memory 920 may be separate or integrated with the processor 910.
For the implementation principles and technical effects of the user terminal provided by this embodiment, reference may be made to the foregoing embodiments, which are not repeated here.
The embodiment of the present application further provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the method provided by any one of the foregoing embodiments may be implemented.
The embodiments of the present application further provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the method provided in any of the foregoing embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some steps of the methods described in the embodiments of the present application.
It should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the present application may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The memory may comprise high-speed RAM, and may further comprise non-volatile memory (NVM) such as at least one disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, or the like.
The storage medium may be implemented by any type or combination of volatile and non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Alternatively, the processor and the storage medium may reside as discrete components in an electronic device or host device.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, and an optical disk), and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods provided in the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. An interaction method based on a spaced writing interface, applied to a user terminal, wherein the user terminal is configured to display a spaced writing interface provided with an input window, and both the size of the input window and its position in the spaced writing interface are adjustable, the method comprising:
acquiring an interactive image acquired by a camera associated with the user terminal;
recognizing posture data of a writing part in the interactive image;
and determining display data of the spaced writing interface according to the posture data and the position of the input window in the spaced writing interface, so as to update a display picture of the spaced writing interface based on the display data.
2. The method of claim 1, wherein determining the display data of the spaced writing interface according to the posture data and the position of the input window in the spaced writing interface comprises:
judging whether the state of the writing part is a writing state according to the posture data;
if so, determining image coordinates of a writing pen point of the writing part in the interactive image according to the posture data;
determining display coordinates of the writing pen point in the spaced writing interface according to the image coordinates of the writing pen point and the position of the input window in the spaced writing interface;
and determining the display data of the spaced writing interface according to the display coordinates of the writing pen point.
3. The method of claim 2, wherein, when the writing part is a hand of a user, judging whether the state of the writing part is a writing state according to the posture data comprises:
and if the distance between the image coordinates of the fingertips of any two fingers of the hand of the user is smaller than a preset distance, or if the distance between the hand of the user and the target camera meets a preset condition, determining that the state of the hand of the user is a writing state.
4. The method of claim 2, wherein, before judging whether the state of the writing part is a writing state according to the posture data, the method further comprises:
judging whether the writing part is in an edge area of the interactive image according to the posture data;
if yes, determining a moving instruction of the input window according to the image coordinates of the writing part in the interactive image, and updating the position of the input window in the spaced writing interface according to the moving instruction;
and if not, judging whether the state of the writing part is a writing state according to the posture data.
5. The method of claim 1, wherein the writing part is a hand of a user, and after recognizing the posture data of the writing part in the interactive image, the method further comprises:
and if the gesture of the user's hand in the posture data is a preset gesture, adjusting the size of the input window based on the positions of the preset key points of the preset gesture.
6. The method according to any one of claims 1-5, wherein after acquiring the interactive image captured by the camera of the user terminal, the method further comprises:
cropping the interactive image;
scaling the cropped interactive image to a first size, wherein the first size is the size of the input window;
and overlaying the scaled interactive image to the input window with a set transparency.
7. An interaction method based on a spaced writing interface, applied to an interactive system, wherein the interactive system comprises a plurality of user terminals, at least some of the user terminals are configured to display a spaced writing interface provided with an input window, and both the size of the input window and its position in the spaced writing interface are adjustable, the method comprising:
acquiring an interactive image acquired by a target camera associated with a target user terminal, wherein the target camera is arranged on the target user terminal, and the target user terminal is one of the user terminals;
recognizing posture data of a writing part in the interactive image;
determining display data of the spaced writing interface according to the posture data and the position of the input window in the spaced writing interface;
and sending the display data to each user terminal provided with the spaced writing interface so as to update the display picture of the spaced writing interface displayed by the user terminal based on the display data.
8. A spaced writing interface, characterized in that an input window is arranged on the spaced writing interface, and both the size of the input window and its position on the spaced writing interface are adjustable;
wherein the display picture of the spaced writing interface is generated based on the method of any one of claims 1-7.
9. A user terminal, comprising:
the system comprises a camera, a display, a processor and a memory which is in communication connection with the processor;
the camera is used for collecting an interactive image;
the display is configured to display an empty writing interface provided with an input window, the input window being adjustable in size and position in the empty writing interface;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of claims 1-7.
10. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, are configured to implement the method of any one of claims 1-7.
11. A computer program product, characterized in that it comprises a computer program which, when being executed by a processor, carries out the method of any one of claims 1-7.
CN202210273350.XA 2022-03-18 2022-03-18 Interaction method based on space writing interface, terminal and storage medium Pending CN114745579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210273350.XA CN114745579A (en) 2022-03-18 2022-03-18 Interaction method based on space writing interface, terminal and storage medium


Publications (1)

Publication Number Publication Date
CN114745579A true CN114745579A (en) 2022-07-12

Family

ID=82277463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210273350.XA Pending CN114745579A (en) 2022-03-18 2022-03-18 Interaction method based on space writing interface, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114745579A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024065345A1 (en) * 2022-09-29 2024-04-04 京东方科技集团股份有限公司 Air gesture editing method and apparatus, display system, and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426509A (en) * 2011-11-08 2012-04-25 北京新岸线网络技术有限公司 Method, device and system for displaying hand input
CN102446055A (en) * 2010-09-30 2012-05-09 阿尔派株式会社 Character handwriting input method and device
CN102981699A (en) * 2011-11-01 2013-03-20 微软公司 Adjusting content to avoid occlusion by a virtual input panel
JP2016118947A (en) * 2014-12-22 2016-06-30 日本ユニシス株式会社 Spatial handwriting input system using angle-adjustable virtual plane
KR20170082924A (en) * 2016-01-07 2017-07-17 한국원자력연구원 Apparatus and method for user input display using gesture
CN112199015A (en) * 2020-09-15 2021-01-08 安徽鸿程光电有限公司 Intelligent interaction all-in-one machine and writing method and device thereof
CN112399237A (en) * 2020-10-22 2021-02-23 维沃移动通信(杭州)有限公司 Screen display control method and device and electronic equipment
CN112699796A (en) * 2020-12-30 2021-04-23 维沃移动通信有限公司 Operation method and device of electronic equipment
US20210405762A1 (en) * 2020-06-30 2021-12-30 Boe Technology Group Co., Ltd. Input method, apparatus based on visual recognition, and electronic device



Similar Documents

Publication Publication Date Title
CN109284059B (en) Handwriting drawing method and device, interactive intelligent panel and storage medium
US9922400B2 (en) Image display apparatus and image display method
US5511148A (en) Interactive copying system
KR102352683B1 (en) Apparatus and method for inputting note information into an image of a photographed object
US10186057B2 (en) Data input device, data input method, and non-transitory computer readable recording medium storing data input program
US9489040B2 (en) Interactive input system having a 3D input space
CN109407954B (en) Writing track erasing method and system
EP2790089A1 (en) Portable device and method for providing non-contact interface
US10650489B2 (en) Image display apparatus, control method therefor, and storage medium
US20100079413A1 (en) Control device
US10013147B2 (en) Image display apparatus
KR20100051648A (en) Method for manipulating regions of a digital image
WO2016121401A1 (en) Information processing apparatus and program
CN110045840B (en) Writing track association method, device, terminal equipment and storage medium
EP3547098B1 (en) Display control apparatus and control method
CN114745579A (en) Interaction method based on space writing interface, terminal and storage medium
US20150220797A1 (en) Information processing system, information processing method, and program
CN106371755B (en) Multi-screen interaction method and system
US11514696B2 (en) Display device, display method, and computer-readable recording medium
CN111813254A (en) Handwriting input device, handwriting input method, and recording medium
CN114816088A (en) Online teaching method, electronic equipment and communication system
CN111103967A (en) Control method and device of virtual object
WO2024065345A1 (en) Air gesture editing method and apparatus, display system, and medium
JP2021036401A (en) Display device, display method and program
CN110941974B (en) Control method and device of virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination