CN112565844B - Video communication method and device and electronic equipment - Google Patents

Video communication method and device and electronic equipment

Info

Publication number
CN112565844B
Authority
CN
China
Prior art keywords
video communication
video
frame
communication device
input
Prior art date
Legal status
Active
Application number
CN202011412842.XA
Other languages
Chinese (zh)
Other versions
CN112565844A (en)
Inventor
陈喆 (Chen Zhe)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011412842.XA
Publication of CN112565844A
Application granted
Publication of CN112565844B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4122 Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The application discloses a video communication method, a video communication device, and an electronic device, which belong to the field of communication technology and are intended to solve the problem that the teaching mode in video teaching is too limited. The method comprises the following steps: receiving a first air gesture input from an operating body during video communication with a second video communication device; in response to the first air gesture input, displaying a movement track corresponding to the first air gesture input; and transmitting a target video frame to a target video communication device. The target video frame is synthesized from a first track image frame and a second video frame captured by the second video communication device; the first track image frame includes the movement track described above. The method and device are applicable to scenarios that enrich video-teaching modes.

Description

Video communication method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video communication method, a video communication device and electronic equipment.
Background
As the quality of video communication improves, its range of applications keeps widening; for example, more and more users rely on video communication for video teaching.
At present, video teaching is centered on the lecturer's video content, and the lecturer and the student interact mainly through voice.
However, some scenarios require the lecturer to guide the student in real time, for example when the lecturer wants to grade the student's test paper. In such a case the lecturer can only describe the grading of the test paper by voice, which makes the teaching mode during video teaching too limited.
Disclosure of Invention
Embodiments of the present application aim to provide a video communication method, a video communication device, and an electronic device that can solve the problem of an overly limited teaching mode during video teaching.
To solve the above technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a video communication method, including: receiving a first air gesture input from an operating body during video communication with a second video communication device; in response to the first air gesture input, displaying a movement track corresponding to the first air gesture input; and transmitting a target video frame to a target video communication device, where the target video frame is synthesized from a first track image frame and a second video frame captured by the second video communication device, and the first track image frame includes the movement track.
In a second aspect, an embodiment of the present application provides a video communication device, including a receiving module, a display module, and a sending module. The receiving module is configured to receive a first air gesture input from an operating body during video communication with a second video communication device; the display module is configured to display, in response to the first air gesture input received by the receiving module, a movement track corresponding to the first air gesture input; and the sending module is configured to transmit a target video frame to a target video communication device, where the target video frame is synthesized from a first track image frame and a second video frame captured by the second video communication device, and the first track image frame includes the movement track.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiments of the present application, during video communication between the first video communication device and the second video communication device, after the first video communication device receives a first air gesture input from an operating body, it may display the movement track corresponding to that input, and may then transmit a target video frame to the target video communication device. The target video frame is synthesized from a first track image frame and a second video frame captured by the second video communication device, and the first track image frame includes the movement track. Compared with related-art schemes in which interaction is possible only by voice, this scheme enriches the modes of video teaching by enabling interaction through images. Take as an example a lecturer using electronic device A who grades the test paper of a student using electronic device B: the first video communication device may obtain the test-paper image captured by electronic device B and the grading-mark image captured by electronic device A, synthesize the two, and send the resulting test-paper image carrying the grading marks to electronic device B. After receiving it, the student using electronic device B can see intuitively how the test paper was graded, which enriches the modes of video teaching.
Drawings
Fig. 1 is a schematic flow chart of a video communication method according to an embodiment of the present application;
fig. 2 is one of interface schematic diagrams of an application of a video communication method according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of an interface of a video communication method according to an embodiment of the present disclosure;
FIG. 4 is a third exemplary interface diagram of a video communication method according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of an application of a video communication method according to an embodiment of the present application;
FIG. 6 is a second schematic diagram of an application of a video communication method according to an embodiment of the present application;
FIG. 7 is a diagram illustrating an interface of a video communication method according to an embodiment of the present disclosure;
FIG. 8 is a fifth exemplary interface diagram of a video communication method application according to an embodiment of the present disclosure;
FIG. 9 is a third schematic diagram of an application of a video communication method according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an application of a video communication method according to an embodiment of the present application;
FIG. 11 is a fifth schematic diagram of an application of a video communication method according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a video communication device according to an embodiment of the present application;
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 14 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second", and the like in the description and the claims are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, there may be one or more first objects. Furthermore, in the description and the claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video communication method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
In the following, the video communication method is described by way of example with the first video communication device as the execution subject. Fig. 1 is a schematic flow chart of a video communication method according to an embodiment of the present application, comprising steps 201 to 203:
Step 201: the first video communication device receives a first air gesture input from an operating body during video communication between the first video communication device and the second video communication device.
In the embodiments of the present application, the operating body may be a user (for example, a finger of the user), an article (for example, a pencil, a pen, an eraser, or a stylus), or any other operable object; the embodiments of the present application are not limited in this respect.
In the embodiments of the present application, the first air gesture input is an input performed by the operating body in the air, without contacting the first video communication device.
It should be noted that the camera of the first video communication device capturing the movement of the operating body is precisely what constitutes the first video communication device receiving the first air gesture input.
In the embodiments of the present application, the camera may be a front camera, a rear camera, or both, configured according to actual needs; the embodiments of the present application are not limited in this respect.
For example, the first air gesture input may be a specific air gesture performed by the operating body.
The specific air gesture in the embodiments of the present application may be any one of an air single-tap gesture, an air slide gesture, an air drag gesture, an air long-press gesture, an air area-change gesture, an air double-press gesture, an air double-tap gesture, or an air gesture with any number of taps.
Step 202: in response to the first air gesture input, the first video communication device displays a movement track corresponding to the first air gesture input.
In the embodiments of the present application, the movement track may be the movement track of the operating body. It can be understood as a virtual track, that is, a track generated by recognizing the path traced by the operating body in the air and matching that path.
The movement track may be determined by the first video communication device from captured images of the operating body. For example, when a user draws a "√" in the air with a finger, the first video communication device may capture multiple image frames while the "√" is being drawn, and may then generate a virtual track from the positions of the user's finger across those frames.
In one example, the movement track may be generated as follows. The first video communication device generates the movement track from the positions of the operating body in the captured images. Taking a stylus as the operating body, suppose two consecutive frames captured by the first video communication device are image 1 followed by image 2, where coordinate point A in image 1 and coordinate point B in image 2 each show the stylus tip. The first video communication device may superimpose image 1 and image 2 and connect point A to point B to generate track 1, obtaining an image 2a that includes track 1. When the next frame, image 3, is captured, with the stylus tip at coordinate point C, the first video communication device may superimpose image 2a and image 3 and connect point B to point C to generate track 2, obtaining an image 3a that includes tracks 1 and 2. Continuing in this way, the first video communication device can generate the movement track of the stylus tip.
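A minimal sketch of this frame-to-frame track accumulation in Python with OpenCV/NumPy (the libraries and point values are illustrative assumptions; the patent does not prescribe an implementation):

```python
import numpy as np
import cv2

def accumulate_track(frame_shape, tip_points, color=(0, 0, 255), thickness=3):
    """Connect consecutive pen-tip positions (A, B, C, ...) into one track layer."""
    overlay = np.zeros(frame_shape, dtype=np.uint8)            # blank track layer
    for prev_pt, curr_pt in zip(tip_points, tip_points[1:]):
        cv2.line(overlay, prev_pt, curr_pt, color, thickness)  # A->B, then B->C, ...
    return overlay

# Example: three stylus-tip positions detected across images 1..3
track_layer = accumulate_track((480, 640, 3), [(100, 200), (140, 260), (220, 180)])
```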
Step 203: the first video communication device transmits the target video frame to the target video communication device.
The target video frame is synthesized from a first track image frame and a second video frame captured by the second video communication device; the first track image frame includes the movement track described above.
In the embodiment of the present application, the target video communication device may or may not include the second video communication device, which is not limited in the embodiment of the present application.
Accordingly, where the target video communication device includes the second video communication device, the second video communication device may receive the target video frame from the first video communication device and display it.
In the embodiments of the present application, the first track image frame may include one or more first sub-video frames, and the second video frame may include one or more second sub-video frames; neither is limited in the embodiments of the present application.
In the embodiments of the present application, the first track image frame may be captured by a camera of the first video communication device. The first track image frame may include a real image (for example, an image of the user's finger) or a virtual image (i.e. the movement track described above); the embodiments of the present application are not limited in this respect.
Such a real image is, for example, real content captured by the camera of the first video communication device, such as the user's hands or the stylus the user holds.
In an embodiment of the present application, the second video frame may be acquired by a camera of the second video communication device. For example, the second video frame may be a test paper image, or a drawing image, or a handwriting image, which is not limited in the embodiment of the present application.
In this embodiment of the present application, the second video frame may be received by the first video communication device or downloaded by the first video communication device, which is not limited in this embodiment of the present application.
It should be noted that the first track image frame may include the operating body described above. For example, if the operating body is the user's finger, the first track image frame may include the user's finger and the movement track traced by that finger.
The first video communication device and the second video communication device may be augmented reality (AR) devices, and the first track image frame and the second video frame may be AR images.
Optionally, in the embodiments of the present application, the first video communication device may extract a partial image containing the operating body before step 201. For example, taking the stylus as the operating body, as shown in (a) of fig. 2, an image 31 captured by the camera of mobile phone 1 is displayed on the screen of mobile phone 1, and image 31 includes both hands of user 1 and the stylus 33 held in the right hand 32 of user 1. Mobile phone 1 may extract from image 31 only the image 34 of the stylus 33 and the right hand 32 of user 1; as shown in (b) of fig. 2, mobile phone 1 may then display image 34 of the stylus 33 and the right hand 32 of user 1.
For example, the first video communication device may perform image synthesis with the first track image frame as a foreground image and the second video frame as a background image to obtain a target video frame.
It will be understood that synthesizing the first track image frame with the second video frame means that the first video communication device may superimpose the first track image frame on the second video frame, or perform image fusion on the two.
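A minimal sketch of this foreground-over-background composition (a NumPy mask overlay is one assumed blending method; the patent does not prescribe one):

```python
import numpy as np

def compose(track_layer, background):
    """Overlay the track layer (foreground) on the remote video frame (background).

    Pixels where the track layer was drawn replace the background; elsewhere the
    background shows through. Both arrays are HxWx3 uint8 of equal size.
    """
    mask = track_layer.sum(axis=2, keepdims=True) > 0   # where the track was drawn
    return np.where(mask, track_layer, background)

# Usage: target_frame = compose(track_layer, test_paper_frame)
```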
For example, when user 1 using mobile phone 1 wants to grade in real time the test paper of user 2 using mobile phone 2, as shown in (a) of fig. 3, mobile phone 1 may display on its screen the test-paper image 41 sent by mobile phone 2, the test paper containing 5 multiple-choice questions. Then, with reference to (b) of fig. 2, as shown in (b) of fig. 3, mobile phone 1 may display the images of the stylus 42 and the right hand 43 of user 1 over the test-paper image 41; when user 1 determines that the answer to question 1 is correct, user 1 may draw a "√" over question 1 with the stylus 42. Mobile phone 1 then obtains a grading-mark image whose movement track is "√". Finally, mobile phone 1 may take the grading-mark image as the foreground layer and the test-paper image 41 as the background layer, and display them superimposed to obtain a test-paper image carrying the grading mark "√" (i.e. the target video frame described above); as shown in fig. 4, mobile phone 1 may display the test-paper image 44 with the grading mark "√" in real time.
Optionally, in the embodiments of the present application, the first video communication device may synthesize the first track image frame and the second video frame in the following possible manner.
Illustratively, the target video frame may include M target sub-video frames, the first track image frame may include M first sub-video frames, and the second video frame may include M second sub-video frames. Each target sub-video frame is obtained by superimposing one first sub-video frame on its corresponding second sub-video frame. Each first sub-video frame is synthesized from the current video frame captured by the first video communication device and a composite frame, where the composite frame is synthesized from the X video frames captured by the first video communication device before the current video frame.
Wherein M is a positive integer, and X is an integer less than M.
For example, with reference to (b) of fig. 3: first, while capturing user 1 drawing a "√" over question 1 with the stylus 43, mobile phone 1 may obtain 3 grading sub-images, namely grading sub-image 1, grading sub-image 2, and grading sub-image 3. Meanwhile, mobile phone 1 may obtain the 3 test-paper sub-images sent by mobile phone 2, namely test-paper sub-image 1, test-paper sub-image 2, and test-paper sub-image 3. Next, mobile phone 1 may superimpose grading sub-image 1 as the foreground layer on test-paper sub-image 1. Mobile phone 1 may generate track image 1 from grading sub-images 1 and 2, and superimpose track image 1 as the foreground layer on test-paper sub-image 2. Mobile phone 1 may then generate track image 2 from track image 1 and grading sub-image 3, superimpose track image 2 as the foreground layer on test-paper sub-image 3, and send the result to mobile phone 2. Finally, as shown in fig. 4, mobile phone 1 can display the test-paper image 44 with the grading mark "√" in real time, and mobile phone 2 can display it in real time as well.
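A minimal sketch of this per-frame streaming pipeline, reusing the accumulation and composition ideas above (all names are illustrative assumptions):

```python
import numpy as np
import cv2

def stream_graded_frames(local_tips, remote_frames, color=(0, 0, 255)):
    """Yield one target sub-video frame per incoming remote sub-video frame.

    local_tips:    stylus-tip (x, y) detected in each locally captured frame
    remote_frames: test-paper sub-frames received from the second device
    """
    track = np.zeros_like(remote_frames[0])           # track layer grows frame by frame
    prev_pt = None
    for tip, remote in zip(local_tips, remote_frames):
        if prev_pt is not None:
            cv2.line(track, prev_pt, tip, color, 3)   # extend the accumulated track
        prev_pt = tip
        mask = track.sum(axis=2, keepdims=True) > 0
        yield np.where(mask, track, remote)           # foreground over background
```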
Optionally, in the embodiments of the present application, the user may, as needed, trigger the video communication device to enter a mode for capturing track images.
Illustratively, before the first video communication device receives the first air gesture input of the operating body in step 201, the method may further include the following steps 201a and 201b:
step 201a: the first video communication device receives a seventh input from the user.
Illustratively, the seventh input may specifically be: a tap input by the user on the screen of the first video communication device, a voice command input by the user, or a specific gesture input by the user, determined according to actual use requirements; the embodiments of the present application are not limited in this respect.
The specific gesture in the embodiments of the present application may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, a double-tap gesture, or a gesture with any number of taps; the tap input in the embodiments of the present application may be a single-tap input, a double-tap input, or an input with any number of taps, and may also be a long-press or short-press input.
In one example, if the first video communication device is a pair of AR glasses, the seventh input may be the user touching the lens of the AR glasses; that is, the touch input on the AR glasses lens is associated with triggering the first video communication device into the first mode described below.
In another example, if the first video communication device is a mobile phone, the seventh input may be an air gesture input in which the user double-taps in the air with a finger, or an input in which the user long-presses the phone screen.
Step 201b: in response to the seventh input, the first video communication device controls the first video communication device to be in the first mode.
The first mode is a mode for acquiring a track image.
For example, during a video call between user 1 wearing AR glasses 1 and user 2 wearing AR glasses 2, when user 1 wants to grade the test-paper image of user 2 in real time, as shown in fig. 5, user 1 may touch the lens of AR glasses 1 from the outside with a finger, whereupon AR glasses 1 switch themselves into grading mode (i.e. the first mode described above).
It will be appreciated that when the first video communication device is in the first mode, it may capture images and generate track image frames from the captured images. When it is not in the first mode, it may capture images but does not generate track image frames from them.
It should be noted that when the first video communication device is in the first mode, it may automatically switch to the rear camera to capture images. It will be appreciated that if the first video communication device is already using the rear camera, it may simply keep using it.
In one example, step 201b may specifically include the following: in response to the seventh input, the first video communication device sets itself to the first mode and transmits first indication information to the second video communication device. The first indication information instructs the second video communication device to enter a third mode.
Accordingly, the second video communication device may receive the first indication information.
It should be noted that, during video communication between the first video communication device and the second video communication device, when the second video communication device is in the third mode it may capture images and send them to the first video communication device in real time; for display, however, it does not show its own captured images directly, but instead displays the images that the first video communication device transmits back to it.
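A minimal sketch of this asymmetric display loop on the second device (camera, link, and screen are hypothetical interfaces; the patent does not specify a transport):

```python
def third_mode_loop(camera, link, screen):
    """Second device in the third mode: send own frames, display returned composites."""
    while True:
        frame = camera.capture()     # e.g. the student's test paper
        link.send(frame)             # raw frame goes to the first device
        composite = link.receive()   # frame with grading marks comes back
        screen.show(composite)       # display the composite, not the raw capture
```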
In the video communication method provided by the embodiments of the present application, during video communication between the first video communication device and the second video communication device, after the first video communication device receives a first air gesture input from an operating body, it may display the movement track corresponding to that input, and may then transmit a target video frame to the target video communication device. The target video frame is synthesized from a first track image frame and a second video frame captured by the second video communication device, and the first track image frame includes the movement track. Compared with related-art schemes in which interaction is possible only by voice, this scheme enriches the modes of video teaching by enabling interaction through images. Take as an example a lecturer using electronic device A who grades the test paper of a student using electronic device B: the first video communication device may obtain the test-paper image captured by electronic device B and the grading-mark image captured by electronic device A, synthesize the two, and send the resulting test-paper image carrying the grading marks to electronic device B. After receiving it, the student using electronic device B can see intuitively how the test paper was graded, which enriches the modes of video teaching.
Optionally, in the embodiments of the present application, the first video communication device may adjust the display position of the operating body so as to ensure the accuracy of the displayed movement track.
In an example, where the movement track is the movement track of the operating body, before the first air gesture input of the operating body is received in step 201, the method may further include the following steps 201c to 201e:
Step 201c: when the first video communication device displays a third video frame, the first video communication device displays a first identifier in the third video frame.
The third video frame is a video frame captured by the first video communication device; the display position of the first identifier indicates the real-time position of the operating body; and the third video frame includes the operating body.
The first video communication device may recognize the operating body in the third video frame through an image recognition model, and display the first identifier at the position of the operating body in the third video frame.
In one example, the first video communication device may display preset handwriting and prompt the user to move the operating body along the preset handwriting; the first video communication device may then recognize the features of the operating body from its movement and locate it. For example, taking AR glasses as the first video communication device and with reference to fig. 5, after user 1 puts on AR glasses 1 and triggers them into grading mode, when user 1 wants to locate the stylus, as shown in (a) of fig. 6, the user may hold the stylus in the right hand and touch its tip to the desktop 51. After AR glasses 1 (i.e. 52 in (a) of fig. 6) capture the user's right hand and the stylus held in it, AR glasses 1 may display a curved handwriting path 54 on the virtual screen 53 together with the text "move the pen tip along the handwriting", as shown in (b) of fig. 6. The user may then move the tip of the stylus 55 over the desktop along the curved handwriting path 54, so that the AR glasses can recognize the stylus 55 and locate the position of its tip.
Illustratively, the text, symbols, images, etc. used to indicate information in the embodiments of the present application may be carried and displayed by a control or other container, including but not limited to a text identifier, a symbol identifier, or an image identifier.
The first identifier may have any shape, for example circular, rectangular, or square, set according to practical needs; the embodiments of the present application are not limited in this respect.
In one example, the first identifier may be an identification frame.
Step 201d: the first video communication device receives a second air gesture input directed at the first identifier by a user of the first video communication device.
For example, the second air gesture input may be a specific air gesture performed by the user of the first video communication device on the first identifier.
Step 201e: in response to the second air gesture input, the first video communication device determines the display position of the first identifier.
It should be noted that, after the video communication device determines the display position of the first identifier, that is, after it completes recognizing and locating the stylus, it may display a dot at the recognized position of the stylus tip, and the dot may move along with the tip.
For example, taking the stylus as the operating body and with reference to (b) of fig. 2: when image 34 (i.e. the third video frame) is displayed on mobile phone 1, mobile phone 1 can automatically recognize the tip of the stylus 33 in image 34. At this point, as shown in (a) of fig. 7, mobile phone 1 may display a circular identification frame 61 in image 34. When the user wants to adjust the located position of the tip of the stylus 33, the user can drag the circular identification frame 61 with a finger to the desired position (i.e. the second air gesture input described above). Then, as shown in (b) of fig. 7, mobile phone 1 displays the tip of the stylus 33 at the center of the circular identification frame 61 and determines the display position of the circular identification frame 61.
The video communication method provided by the embodiments of the present application can be applied to scenarios for locating the operating body. Through the first identifier displayed by the first video communication device, the user can intuitively see the real-time display position of the operating body, and by operating on the first identifier the user can quickly adjust that position. This makes it convenient to locate the operating body quickly and improves the accuracy of the movement track generated subsequently.
Further optionally, in the embodiments of the present application, the video communication device may prompt the user to confirm the stylus position.
For example, displaying the first identifier in the third video frame in step 201c may specifically include the following step A1:
Step A1: the first video communication device outputs first information and displays the first identifier in the third video frame.
The first information prompts the user to confirm whether the display position of the first identifier is correct.
Illustratively, the first information may include at least one of: text, a picture, voice, or a button; the embodiments of the present application are not limited in this respect.
Based on step A1, step 201e may specifically include the following steps B1 to B3:
Step B1: in response to the second air gesture input, the first video communication device adjusts the display position of the first identifier.
Step B2: the first video communication device receives an eighth air gesture input directed at the first information by the user.
For example, the eighth air gesture input may be a specific air gesture performed by the operating body on the first information.
Step B3: in response to the eighth air gesture input, the first video communication device determines the display position of the first identifier.
For example, with reference to (a) of fig. 7, mobile phone 1 may display the circular identification frame 61 in image 34 together with the text "please select the pen tip position" and a "Confirm" button 62 (i.e. the first information described above). When the user wants to adjust the located position of the tip of the stylus 33, the user can drag the circular identification frame 61 in the air to the desired position (i.e. the second air gesture input described above). Then, as shown in (b) of fig. 7, mobile phone 1 displays the tip of the stylus 33 at the center of the circular identification frame 61. When the user wants to confirm that the circular identification frame 61 marks the position of the tip of the stylus 33, the user can tap the "Confirm" button 62 (i.e. the eighth air gesture input described above), whereupon mobile phone 1 determines the display position of the circular identification frame 61.
The video communication method provided by the embodiments of the present application can be applied to scenarios that prompt the user to confirm the position of the operating body. By acting on the first information displayed by the first video communication device, the user decides when the real-time display position of the operating body is finalized; this prevents the first video communication device from fixing that position as soon as the first identifier has been adjusted, and makes the adjustment process more flexible.
Optionally, in the embodiments of the present application, the first video communication device may generate the first track image frame according to different track parameters.
Illustratively, where the movement track is the movement track of the operating body, the method may further include the following steps 203a to 203c before step 203:
Step 203a: the first video communication device recognizes the movement track of the operating body.
For example, the first video communication device may recognize the movement track from the positions of the operating body in the captured images.
For example, taking the stylus as the operating body, suppose two consecutive frames captured by the first video communication device are image 1 and image 2, where position A in image 1 and position B in image 2 each show the stylus tip; the first video communication device can then recognize the movement track of the stylus tip from positions A and B.
Step 203b: the first video communication device obtains the track parameters.
The track parameters include at least one of: track color, track thickness. It should be noted that the track parameters include, but are not limited to, these two parameters, and may be set according to actual requirements; the embodiments of the present application are not limited in this respect.
The track parameters described above may be set by default or set by the user, which is not limited in the embodiments of the present application.
The track color may be any color, for example red, black, or gray, which is not limited in the embodiments of the present application.
For example, the video communication device may set different track parameters for different scenarios. For example, in a scenario where user 1 grades the test paper of user 2, the video communication device may set the track color to red; alternatively, in a scenario where user 1 guides the handwriting practice of user 2, the video communication device may set the track color to black.
Step 203c: the first video communication device generates the first track image frame from the recognized movement track of the operating body in accordance with the track parameters.
The video communication method provided by the embodiments of the present application can be applied to scenarios for generating the first track image frame: from the same recognized movement track, the first video communication device can generate different track images according to different track parameters, which improves the flexibility of track image generation.
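A minimal sketch of rendering the track layer under such parameters, with color and thickness as the two parameters named above (the point and parameter values are illustrative assumptions):

```python
import numpy as np
import cv2

def render_track(frame_shape, points, color=(0, 0, 255), thickness=3):
    """Draw the recognized movement track as a polyline with the given parameters."""
    layer = np.zeros(frame_shape, dtype=np.uint8)
    pts = np.array(points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(layer, [pts], isClosed=False, color=color, thickness=thickness)
    return layer

# Red, thicker marks for grading; thinner marks for handwriting guidance
grading_layer = render_track((480, 640, 3), [(100, 200), (140, 260)], (0, 0, 255), 5)
writing_layer = render_track((480, 640, 3), [(100, 200), (140, 260)], (50, 50, 50), 2)
```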
Further optionally, in the embodiments of the present application, the user may manually adjust the track parameters through a target adjustment control.
Illustratively, prior to step 203b described above, the method may further include the following steps C1 to C3:
Step C1: the first video communication device receives a third air gesture input of the operating body when the first video communication device displays a target adjustment control and the target adjustment control indicates a first parameter.
The third air gesture input is a movement input in which the user of the first video communication device controls the operating body over the target adjustment control.
Illustratively, the target adjustment control described above may include a plurality of sub-adjustment controls, each for adjusting one track parameter.
The target adjustment control may be an adjustment progress bar or a selection area, which is not limited in the embodiments of the present application. The adjustment progress bar comprises a slide rail and a slider; the slider can slide along the rail following the movement of the operating body, and the parameter being adjusted changes as the slider slides.
The first parameter is set by default, or may be set by a user, which is not limited in the embodiment of the present application.
Illustratively, the first parameter may include at least one of: track color, track thickness. It should be noted that the first parameter includes, but is not limited to, these two parameters, and may be set according to actual requirements; the embodiments of the present application are not limited in this respect.
Step C2: in response to the third air gesture input, the first video communication device updates the first parameter to the track parameter.
In one example, the track parameter may be greater than the first parameter, less than the first parameter, or equal to the first parameter, which is not limited in the embodiments of the present application.
Step C3: the first video communication device receives a fourth air gesture input of the operating body.
The fourth air gesture input is an input in which the user of the first video communication device keeps the operating body hovering over the target adjustment control for a target duration greater than or equal to a first preset threshold.
The first preset threshold may be set by default or may be set by a user, which is not limited in this embodiment of the present application.
Based on step C3, step 203b may specifically include the following step C4:
Step C4: in response to the fourth air gesture input, the first video communication device obtains the track parameter.
For example, with reference to (b) of fig. 2 and as shown in fig. 8, a color adjustment control 72 and a thickness adjustment control 73 are displayed on the screen 71 of mobile phone 1, where the color adjustment control 72 includes 4 color options: a red option 74, a black option, a yellow option, and a blue option. When the user wants to set the track color to red, the user can hold the tip of the stylus 33 over the red option 74 for 1 second; when the user wants to set the track thickness, the user can use the tip of the stylus 33 to drag the slider in the thickness adjustment control 73 (i.e. the third air gesture input described above), and mobile phone 1 can display a real-time preview of the corresponding track thickness. Once the user has settled on the thickness of the track line, the user can hold the tip of the stylus 33 over the thickness adjustment control 73 for 1 second, whereupon mobile phone 1 obtains the track color and track thickness set by the user.
The video communication method provided by the embodiments of the present application can be applied to scenarios for setting track-image parameters. By displaying the target adjustment control, the first video communication device helps the user adjust the display parameters of the track quickly and flexibly, enabling customized track thickness and track color and improving both the efficiency and the flexibility of setting track parameters.
Optionally, in the embodiments of the present application, displaying the movement track corresponding to the first air gesture input in step 202 may specifically include the following step 202a:
step 202a: the first video communication device displays the movement track when the distance between the operating body and the first video communication device is greater than or equal to a second preset threshold value.
For example, the distance between the operating body and the first video communication device may be determined using a depth camera.
The second preset threshold may be set by default or may be set by a user, which is not limited in this embodiment of the present application.
For example, with reference to fig. 8, a depth adjustment control 75 is also displayed on the screen of mobile phone 1. When the user wants to set a preset distance (i.e. the second preset threshold), the user can use the tip of the stylus 33 to drag the slider in the depth adjustment control 75; once the preset distance is settled, the user can hold the tip of the stylus 33 over the depth adjustment control 75 for 1 second, whereupon mobile phone 1 determines the preset distance.
In one example, taking the stylus as the operating body: the first video communication device displays handwriting on the screen only when the distance between the stylus tip and the first video communication device is greater than or equal to the second preset threshold; otherwise, as the user moves the stylus, only the user's hand and the pen appear on the screen and no handwriting is drawn. For example, if the user only wants to point at a certain spot on the test paper without leaving a mark, the user simply brings the stylus tip close to the phone; the phone determines that the distance between the stylus tip and the phone is smaller than the preset distance (i.e. the second preset threshold) and displays only the user's hand and pen, without displaying any track.
In addition, the first video communication device may display the movement track when the distance between the operating body and the first video communication device is greater than or equal to the second preset threshold, or when that distance falls within a preset range. In one example, the preset range may include the second preset threshold described above.
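A minimal sketch of this depth-gated drawing decision (the depth source and the threshold value are assumptions; the patent only requires comparing the distance with the second preset threshold):

```python
def should_draw(tip_distance_m, threshold_m=0.30):
    """Gate handwriting on the operating body's distance from the device.

    tip_distance_m: depth-camera distance to the stylus tip, in meters
    threshold_m:    the "second preset threshold" (0.30 m is an assumed value)
    """
    return tip_distance_m >= threshold_m

# Close to the phone (pointing only): hand and pen shown, no track drawn
assert not should_draw(0.12)
# Far enough away: the movement track is drawn
assert should_draw(0.45)
```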
The video communication method provided by the embodiments of the present application can be applied to scenarios that trigger the first video communication device to display the movement track: the device displays the track only when the distance between the operating body and the device satisfies the second preset threshold, which makes triggering the device to display, or not display, the track image more flexible.
Optionally, in the embodiments of the present application, the user may, as needed, trigger the first video communication device and the second video communication device to swap their respective modes.
Illustratively, the method may further include the following steps 204a and 204b:
Step 204a: the first video communication device receives a fifth air gesture input of the operating body when the first video communication device is in the first mode.
For example, the fifth air gesture input may be a specific air gesture performed by the operating body.
Step 204b: in response to the fifth air gesture input, the first video communication device updates the first mode to the second mode.
The first mode is a mode for capturing track images; the second mode is a mode for displaying a fourth video frame, where the fourth video frame is synthesized from a fifth track image frame and a sixth video frame; the fifth track image frame is an image frame captured by the second video communication device; and the sixth video frame is a video frame captured by the first video communication device.
For example, with mobile phone 1 in the first mode, when user 1 using mobile phone 1 wants user 2 to work through a test paper in real time as an exercise, user 1 can long-press the switch key in the air (i.e. the fifth air gesture input described above). Mobile phone 1 then updates the first mode to the second mode and sends an indication message to mobile phone 2 instructing it to switch to the first mode. Mobile phone 1 can then capture the test-paper image and send it to mobile phone 2; after receiving it, mobile phone 2 can capture images of the hand and pen, generate handwriting, synthesize the handwriting with the test-paper image transmitted by mobile phone 1, and transmit the result back to mobile phone 1. In this way, remote virtual answering can be achieved.
The video communication method provided by this embodiment can be applied to scenarios for switching the mode of the electronic device: the user can quickly switch the mode the electronic device is in, thereby quickly switching its function and improving the flexibility of video communication.
Optionally, in the embodiments of the present application, before the first air gesture input of the operating body is received in step 201, the method may further include the following step 205:
step 205: in the case where the first video communication device is in video communication with at least two third video communication devices, the first video communication device determines the second video communication device.
Wherein the target video communication device is at least one of the at least two third video communication devices.
Illustratively, the second video communication device is one of the at least two third video communication devices.
In one example, the second video communication device described above may be a system default.
In another example, the second video communication device described above may be set by the user.
Illustratively, the step 205 may specifically include the following steps 205a to 205c:
Step 205a: in the case where the first video communication apparatus is in video communication with at least two third video communication apparatuses, the first video communication apparatus displays N identifications.
Wherein, an identifier corresponds to a third video communication device, and N is a positive integer.
Step 205b: the first video communication device receives a sixth blank input of the operating body to a target identifier of the N identifiers.
For example, the sixth blank input described above may be a specific blank gesture input by the operating body on the target identifier.
Step 205c: in response to the sixth blank input, the first video communication device determines the third video communication device corresponding to the target identifier as the second video communication device.
For example, as shown in (a) of fig. 9, taking the first video communication apparatus as the AR device 1, when the AR device 1, the AR device 2, and the AR device 3 perform video teaching, an icon may be displayed at the upper right corner of the virtual screen 81 of the AR device 1; this icon is the lecture-listener icon 82. If the user using the AR device 1 wants to perform video teaching with the image acquired by the AR device 2 as a background image, the user of the AR device 1 can click the lecture-listener icon 82 with the stylus 83. At this time, as shown in (b) of fig. 9, the virtual screen 81 of the AR device 1 may expand to display 2 lecture-listener identifiers, namely the lecture-listener identifier 84 of the user of the AR device 2 and the lecture-listener identifier 85 of the user of the AR device 3. Then, the user using the AR device 1 may click the lecture-listener identifier 84 of the user of the AR device 2 with the stylus 83 (i.e., the sixth blank input described above), at which point the AR device 1 determines the AR device 2 as the second video communication apparatus.
In one example, the number of lecture-listeners may be displayed in real time in the lecture-listener icon 82. For example, if there are 2 lecture-listeners, the lecture-listener icon 82 may display the number 2; if there are 3 lecture-listeners, it may display the number 3.
The video communication method provided by the embodiment of the present application can be applied to a scenario of determining the second video communication device from a plurality of electronic devices. By displaying the identifiers, the first video communication device can assist the user in quickly determining the second video communication device, improving the efficiency of video communication.
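Functionally, steps 205a to 205c reduce to maintaining a mapping from the displayed identifiers to the third video communication devices and resolving the selected target identifier. The sketch below uses hypothetical names throughout:

    def determine_second_device(identifier_to_device, target_identifier):
        """Step 205c: the device whose identifier received the sixth blank
        input becomes the second video communication device."""
        return identifier_to_device[target_identifier]

    # e.g. with N = 2 displayed identifiers:
    # second = determine_second_device(
    #     {"listener_84": ar_device_2, "listener_85": ar_device_3}, "listener_84")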
Optionally, in an embodiment of the present application, after the step 205, the method may further include the following step 206:
step 206: the first video communication device sends request information to the second video communication device.
The request information is used for requesting the second video communication device to send the second video frame to the first video communication device.
Accordingly, the second video communication device may receive the request information.
For example, the second video communication device may send the above-described second video frame to the first video communication apparatus after receiving a blank input in response to the request information.
Illustratively, the above-mentioned blank input may specifically include: a specific blank gesture input by the operating body on a target option.
For example, in connection with (b) of fig. 9, after the AR device 1 determines the AR device 2 as the second video communication apparatus, the AR device 1 may send the request information to the AR device 2. As shown in fig. 10, after receiving the request information, the AR device 2 may display 2 options, namely a "start transmission" option 93 and a "reject transmission" option 94, together with a center point 92 of the AR device 2, on a virtual screen 91 of the AR device 2. If the user using the AR device 2 rotates his head so that the center point 92 of the AR device 2 moves onto the "start transmission" option 93, i.e., selects the "start transmission" option 93, the AR device 2 starts transmitting the image captured by the AR device 2 (i.e., the second video frame described above) to the AR device 1. If the user using the AR device 2 rotates his head so that the center point 92 of the AR device 2 moves onto the "reject transmission" option 94, i.e., selects the "reject transmission" option 94, the AR device 2 will not transmit the image to the AR device 1.
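The accept/reject interaction of fig. 10 amounts to a hit test of the device's center point against the two displayed options. The following sketch is one way to express it; the rectangle layout, function names, and callback are assumptions, not the patented interface:

    def handle_request(center_point, start_option_rect, reject_option_rect, start_streaming):
        """Decide, from where the center point landed, whether to transmit
        the second video frame to the requesting device."""
        def inside(point, rect):  # rect = (x, y, width, height)
            x, y, w, h = rect
            return x <= point[0] <= x + w and y <= point[1] <= y + h

        if inside(center_point, start_option_rect):
            start_streaming()   # "start transmission" selected
            return "accepted"
        if inside(center_point, reject_option_rect):
            return "rejected"   # no image is transmitted
        return "pending"        # center point is on neither option yet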
It should be noted that the first video communication device and the second video communication device may be fixed on a fixing device, for example, as shown in fig. 11, a mobile phone or a tablet may be fixed on a workbench with a bracket.
The workbench may be a desktop, a sheet of white paper, a blackboard, or the like, which is not limited in the embodiments of the present application.
In an example, in the case where the first video communication apparatus is fixed on the workbench as shown in fig. 11, the second preset threshold may be the distance from the tip of the stylus to the camera when the tip touches the workbench.
It should be noted that, in the video communication method provided in the embodiment of the present application, the execution subject may be a video communication device, or a control module in the video communication device for executing the video communication method. In the embodiment of the present application, the video communication device executing the video communication method is taken as an example to describe the video communication device provided in the embodiment of the present application.
Fig. 12 is a schematic diagram of a possible structure of a video communication apparatus according to an embodiment of the present application. As shown in fig. 12, a video communication apparatus 900 includes: a receiving module 901, a display module 902, and a transmitting module 903, wherein: the receiving module 901 is configured to receive a first blank input of an operating body during video communication with a second video communication device; the display module 902 is configured to display, in response to the first blank input received by the receiving module 901, a movement track corresponding to the first blank input; the transmitting module 903 is configured to transmit a target video frame to a target video communication device. The target video frame is synthesized from a first track image frame and a second video frame acquired by the second video communication device; the first track image frame includes the above-described movement track.
Optionally, the target video frame includes M target sub-video frames, the first track image frame includes M first sub-video frames, the second video frame includes M second sub-video frames, and M is a positive integer. Each target sub-video frame is obtained by superimposing and displaying a first sub-video frame on its corresponding second sub-video frame. Each first sub-video frame is synthesized from the current video frame acquired by the first video communication device and a composite frame, the composite frame being synthesized from the X video frames acquired by the first video communication device before the current video frame, where X is an integer smaller than M.
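The compositing described above can be pictured as a per-frame alpha blend of the trajectory layer over the received video frame, with the trajectory layer itself accumulated from earlier frames. The NumPy sketch below is one plausible realization under those assumptions, not the patented algorithm:

    import numpy as np

    def composite_target_sub_frame(track_rgba, second_sub_frame_rgb):
        """Superimpose a first sub-video frame (RGBA trajectory layer) on the
        corresponding second sub-video frame (RGB) to get a target sub-video frame."""
        alpha = track_rgba[..., 3:4].astype(np.float32) / 255.0
        blended = (track_rgba[..., :3].astype(np.float32) * alpha
                   + second_sub_frame_rgb.astype(np.float32) * (1.0 - alpha))
        return blended.astype(np.uint8)

    def accumulate_first_sub_frame(previous_composite_rgba, draw_new_strokes):
        """Synthesize the current first sub-video frame from the newly captured
        strokes and the composite of the X earlier frames, so the track persists."""
        frame = previous_composite_rgba.copy()
        draw_new_strokes(frame)  # rasterize the newest stroke segment in place
        return frame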
Optionally, the video communication apparatus 900 further includes: a determining module 904. The movement track is the movement track of the operating body. The display module 902 is further configured to display a first identifier in a third video frame if the third video frame is displayed; the receiving module 901 is further configured to receive a second blank input of the first identifier by the user of the first video communication device; the determining module 904 is configured to determine a display position of the first identifier in response to the second blank input received by the receiving module 901. The third video frame is a video frame acquired by the first video communication device; the display position of the first identifier is used for indicating the real-time position of the operating body; the third video frame includes the operating body.
Optionally, the video communication apparatus 900 further includes: an identification module 905, an acquisition module 906, and a generation module 907. The movement track is the movement track of the operating body. The identification module 905 is configured to identify the movement track of the operating body; the acquisition module 906 is configured to acquire track parameters; the generation module 907 is configured to generate a first track image frame based on the movement track identified by the identification module, according to the track parameters acquired by the acquisition module. The track parameters include at least one of: track color, track thickness.
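A minimal rendering of the recognized movement track into a transparent first track image frame, parameterized by track color and track thickness, might look like the following. cv2.polylines is a real OpenCV call; its use here, and all names, are assumptions for illustration:

    import numpy as np
    import cv2

    def render_track(points, height, width, color_bgr=(0, 0, 255), thickness=3):
        """Draw the operating body's movement track on a transparent RGBA layer
        using the acquired track parameters (color, thickness)."""
        layer = np.zeros((height, width, 4), dtype=np.uint8)
        pts = np.asarray(points, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(layer, [pts], isClosed=False,
                      color=(*color_bgr, 255), thickness=thickness)
        return layer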
Optionally, the video communication apparatus 900 further includes: an update module 908. The receiving module 901 is further configured to receive a third blank input of the operating body when a target adjustment control is displayed and the target adjustment control indicates a first parameter, where the third blank input is an input in which the user of the first video communication device controls the operating body to move on the target adjustment control; the update module 908 is configured to update the first parameter to the track parameter in response to the third blank input received by the receiving module 901; the receiving module 901 is further configured to receive a fourth blank input of the operating body, where the fourth blank input is an input in which the duration for which the user of the first video communication device controls the operating body to stay on the target adjustment control is greater than or equal to a first preset threshold; the acquisition module 906 is specifically configured to acquire the track parameters in response to the fourth blank input received by the receiving module 901.
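The third and fourth blank inputs can be modeled as a slide-then-dwell interaction on the adjustment control: sliding updates the pending value, and dwelling past the first preset threshold confirms it. The sketch below assumes a monotonic clock and hypothetical names:

    import time

    class TargetAdjustmentControl:
        FIRST_PRESET_THRESHOLD_S = 1.0  # hypothetical dwell duration

        def __init__(self, first_parameter):
            self.pending = first_parameter  # parameter currently indicated
            self.confirmed = None
            self._dwell_start = None

        def on_third_blank_input(self, value):
            # Moving on the control updates the first parameter to a new
            # candidate track parameter and restarts the dwell timer.
            self.pending = value
            self._dwell_start = time.monotonic()

        def on_fourth_blank_input(self):
            # Staying on the control for at least the first preset threshold
            # confirms (acquires) the track parameter.
            if (self._dwell_start is not None and
                    time.monotonic() - self._dwell_start >= self.FIRST_PRESET_THRESHOLD_S):
                self.confirmed = self.pending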
Optionally, the display module 902 is specifically configured to display the movement track when a distance between the operating body and the first video communication device is greater than or equal to a second preset threshold.
Optionally, the video communication apparatus 900 further includes: an update module 908; the receiving module 901 is further configured to receive a fifth blank input of the operation body when the first video communication device is in the first mode; an updating module 908 for updating the first mode to the second mode in response to the fifth blank input received by the receiving module 901; the first mode is a mode for acquiring a track image; the second mode is a mode for displaying a fourth video frame, and the fourth video frame is a video frame synthesized by a fifth track image frame and a sixth video frame; the fifth track image frame is an image frame acquired by the second video communication device; the sixth video frame is a video frame acquired by the first video communication device.
Optionally, the video communication apparatus 900 further includes: a determining module 904. The display module 902 is further configured to display N identifiers in a case of video communication with at least two third video communication devices, where one identifier corresponds to one third video communication device and N is a positive integer; the receiving module 901 is further configured to receive a sixth blank input of the operating body on a target identifier of the N identifiers; the determining module 904 is configured to determine, in response to the sixth blank input received by the receiving module 901, the third video communication device corresponding to the target identifier as the second video communication device.
In the video communication device provided in the embodiment of the present application, first, in the process of video communication between the first video communication device and the second video communication device, after the first video communication device receives a first blank input of the operating body, the first video communication device may display a movement track corresponding to the first blank input. Then, the first video communication device may send the target video frame to the target video communication device. The target video frame is synthesized from a first track image frame and a second video frame acquired by the second video communication device, and the first track image frame includes the movement track. Compared with the related-art scheme in which interaction can only be carried out through voice, this scheme enriches the teaching modes of video teaching through image interaction. Taking the case where a lecturer using electronic device A grades the test paper of a lecture-listener using electronic device B as an example, the first video communication device may acquire the test paper image collected by electronic device B and the grading trace image collected by electronic device A, synthesize the two into a test paper image carrying the grading traces, and send the result to electronic device B. After receiving the test paper image with the grading traces, the lecture-listener using electronic device B can intuitively see how the test paper was graded, which enriches the teaching modes of video teaching.
The video communication device in the embodiment of the application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The video communication device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video communication device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 10, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 13, the embodiment of the present application further provides an electronic device 1100, including a processor 1101, a memory 1102, and a program or an instruction stored in the memory 1102 and capable of running on the processor 1101, where the program or the instruction implements each process of the embodiment of the video communication method when executed by the processor 1101, and the process can achieve the same technical effect, and for avoiding repetition, a description is omitted herein.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 14 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 110 via a power management system to perform functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
Wherein, the input unit 104 is configured to receive a first blank input of the operating body during video communication with the second video communication device; the display unit 106 is configured to display, in response to the first blank input received by the input unit 104, a movement track corresponding to the first blank input; the radio frequency unit 101 is configured to transmit a target video frame to a target video communication device. The target video frame is synthesized from a first track image frame and a second video frame acquired by the second video communication device; the first track image frame includes the above-described movement track.
Optionally, the target video frame includes M target sub-video frames, the first track image frame includes M first sub-video frames, the second video frame includes M second sub-video frames, and M is a positive integer. Each target sub-video frame is obtained by superimposing and displaying a first sub-video frame on its corresponding second sub-video frame. Each first sub-video frame is synthesized from the current video frame acquired by the first video communication device and a composite frame, the composite frame being synthesized from the X video frames acquired by the first video communication device before the current video frame, where X is an integer smaller than M.
Optionally, the movement track is the movement track of the operating body. The display unit 106 is further configured to display a first identifier in a third video frame if the third video frame is displayed; the input unit 104 is further configured to receive a second blank input of the first identifier by the user of the first video communication apparatus; the processor 110 is configured to determine a display position of the first identifier in response to the second blank input received by the input unit 104. The third video frame is a video frame acquired by the first video communication device; the display position of the first identifier is used for indicating the real-time position of the operating body; the third video frame includes the operating body.
Optionally, the movement track is the movement track of the operating body. The processor 110 is configured to identify the movement track of the operating body, to acquire track parameters, and to generate a first track image frame based on the identified movement track according to the track parameters. The track parameters include at least one of: track color, track thickness.
Optionally, the input unit 104 is further configured to receive a third blank input of the operating body when a target adjustment control is displayed and the target adjustment control indicates a first parameter, where the third blank input is an input in which the user of the first video communication device controls the operating body to move on the target adjustment control; the processor 110 is configured to update the first parameter to the track parameter in response to the third blank input received by the input unit 104; the input unit 104 is further configured to receive a fourth blank input of the operating body, where the fourth blank input is an input in which the duration for which the user of the first video communication device controls the operating body to stay on the target adjustment control is greater than or equal to a first preset threshold; the processor 110 is specifically configured to acquire the track parameters in response to the fourth blank input received by the input unit 104.
Optionally, the display unit 106 is specifically configured to display the movement track when the distance between the operating body and the first video communication device is greater than or equal to a second preset threshold.
Optionally, the input unit 104 is further configured to receive a fifth blank input of the operation body when the first video communication apparatus is in the first mode; a processor 110 for updating the first mode to the second mode in response to the fifth blank input received by the input unit 104; the first mode is a mode for acquiring a track image; the second mode is a mode for displaying a fourth video frame, and the fourth video frame is a video frame synthesized by a fifth track image frame and a sixth video frame; the fifth track image frame is an image frame acquired by the second video communication device; the sixth video frame is a video frame acquired by the first video communication device.
Optionally, the display unit 106 is further configured to display N identifiers in a case of video communication with at least two third video communication devices, where one identifier corresponds to one third video communication device and N is a positive integer; the input unit 104 is configured to receive a sixth blank input of the operating body on a target identifier of the N identifiers; the processor 110 is configured to determine, in response to the sixth blank input received by the input unit 104, the third video communication device corresponding to the target identifier as the second video communication device.
In the electronic device provided by the embodiment of the present application, first, in the process of video communication between the electronic device and the second video communication device, after the electronic device receives the first blank input of the operating body, the electronic device may display a movement track corresponding to the first blank input. Then, the electronic device may send the target video frame to the target video communication apparatus. The target video frame is synthesized from a first track image frame and a second video frame acquired by the second video communication device, and the first track image frame includes the movement track. Compared with the related-art scheme in which interaction can only be carried out through voice, this scheme enriches the teaching modes of video teaching through image interaction. Taking the case where a lecturer using electronic device A grades the test paper of a lecture-listener using electronic device B as an example, electronic device A may acquire the test paper image collected by electronic device B and the grading trace image collected by electronic device A, synthesize the two into a test paper image carrying the grading traces, and send the result to electronic device B. After receiving the test paper image with the grading traces, the lecture-listener using electronic device B can intuitively see how the test paper was graded, which enriches the teaching modes of video teaching.
It should be appreciated that in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction realizes each process of the embodiment of the video communication method, and the same technical effects can be achieved, so that repetition is avoided, and no redundant description is provided herein.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the video communication method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (8)

1. A video communication method applied to a first video communication apparatus, the method comprising:
receiving a first blank input of an operating body during video communication with a second video communication device;
responding to the first blank input, and displaying a moving track corresponding to the first blank input in a superposition manner on a second video frame, wherein the second video frame is a video frame sent after being acquired by the second video communication device;
the step of displaying the movement track corresponding to the first blank input in a superimposed manner on the second video frame comprises the following steps:
displaying the moving track in a superposition way on a second video frame under the condition that the distance between the operating body and the first video communication device is larger than or equal to a second preset threshold value;
transmitting the target video frame to the target video communication device;
the method further includes, prior to receiving the first space input of the operator:
displaying a first identifier in a third video frame when the third video frame is displayed;
receiving a second blank input of the first identifier by a user of the first video communication device;
determining a display position of the first identifier in response to the second blank input;
The target video frame is synthesized from a first track image frame and a second video frame acquired by the second video communication device; the first track image frame includes the movement track; the movement track is the movement track of the operating body; the third video frame is a video frame collected by the first video communication device; the display position of the first identifier is used for indicating the real-time position of the operating body; the third video frame includes the operating body.
2. The method of claim 1, wherein the target video frames comprise M frame target sub-video frames, the first track image frames comprise M frame first sub-video frames, the second video frames comprise M frame second sub-video frames, and M is a positive integer;
each frame of target sub-video frame is obtained by superposing and displaying a first sub-video frame on a second sub-video frame, and the first sub-video frame corresponds to the second sub-video frame;
each first sub-video frame is synthesized from a current video frame acquired by the first video communication device and a synthesized frame, the synthesized frame being synthesized from the X video frames acquired by the first video communication device before the current video frame, and X is an integer smaller than M.
3. The method according to claim 1, wherein the movement locus is a movement locus of an operation body;
before sending the target video frame to a target video communication device, the method further comprises:
identifying a movement track of the operation body;
acquiring track parameters;
generating the first track image frame based on the identified moving track of the operation body according to the track parameters;
wherein the track parameters include at least one of: track color, track thickness.
4. A method according to claim 3, wherein prior to said obtaining the trajectory parameters, the method further comprises:
receiving a third blank input of the operating body under the condition that a target adjustment control is displayed and the target adjustment control indicates a first parameter, wherein the third blank input is an input in which a user of the first video communication device controls the operating body to move on the target adjustment control;
updating the first parameter to the trajectory parameter in response to the third blank input;
receiving a fourth blank input of the operating body, wherein the fourth blank input is an input in which the duration for which a user of the first video communication device controls the operating body to stay on the target adjustment control is greater than or equal to a first preset threshold;
The obtaining track parameters includes:
in response to the fourth blank input, acquiring the track parameters.
5. The method according to claim 1, wherein the method further comprises:
receiving a fifth blank input of an operating body when the first video communication device is in a first mode;
updating the first mode to a second mode in response to the fifth blank input;
the first mode is a mode for acquiring a track image; the second mode is a mode for displaying a fourth video frame, and the fourth video frame is a video frame synthesized by a fifth track image frame and a sixth video frame; the fifth track image frame is an image frame acquired by the second video communication device; the sixth video frame is a video frame acquired by the first video communication device.
6. The method of claim 1, wherein prior to receiving the first blank input of the operating body, the method further comprises:
displaying N identifiers under the condition of video communication with at least two third video communication devices, wherein one identifier corresponds to one third video communication device, and N is a positive integer;
receiving a sixth blank input of the operating body on a target identifier of the N identifiers;
and in response to the sixth blank input, determining the third video communication device corresponding to the target identifier as the second video communication device.
7. A video communication apparatus, the video communication apparatus comprising: the device comprises a receiving module, a display module and a sending module;
the receiving module is used for receiving a first blank input of the operating body in the process of video communication with the second video communication device;
the display module is used for displaying, in response to the first blank input received by the receiving module, a movement track corresponding to the first blank input superimposed on a second video frame;
the display module is specifically configured to, in response to the first blank input received by the receiving module, superimpose and display the movement track corresponding to the first blank input on the second video frame when the distance between the operating body and the video communication device is greater than or equal to a second preset threshold;
the sending module is used for sending the target video frame to the target video communication device;
the display module is further configured to display a first identifier in a third video frame when the third video frame is displayed before the receiving module receives the first blank input of the operation body;
the receiving module is further used for receiving a second blank input of the first identifier displayed by the display module from a user of the video communication device;
the display module is further used for determining the display position of the first identifier in response to the second blank input received by the receiving module;
the target video frame is synthesized from a first track image frame and a second video frame acquired by the second video communication device; the first track image frame includes the movement track; the movement track is the movement track of the operating body; the third video frame is a video frame collected by the video communication device; the display position of the first identifier is used for indicating the real-time position of the operating body; the third video frame includes the operating body.
8. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the video communication method of any one of claims 1 to 6.
CN202011412842.XA 2020-12-04 2020-12-04 Video communication method and device and electronic equipment Active CN112565844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011412842.XA CN112565844B (en) 2020-12-04 2020-12-04 Video communication method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112565844A (en) 2021-03-26
CN112565844B (en) 2023-05-12

Family

ID=75048963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011412842.XA Active CN112565844B (en) 2020-12-04 2020-12-04 Video communication method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112565844B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025237A (en) * 2021-12-02 2022-02-08 维沃移动通信有限公司 Video generation method and device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106354266A (en) * 2016-09-22 2017-01-25 北京小米移动软件有限公司 Control method and device of terminal as well as terminal
CN106454199A (en) * 2016-10-31 2017-02-22 维沃移动通信有限公司 Video communication method and mobile terminal
CN206712945U (en) * 2017-04-26 2017-12-05 联想新视界(天津)科技有限公司 Video communications system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075727A (en) * 2010-12-30 2011-05-25 中兴通讯股份有限公司 Method and device for processing images in videophone
CN104092958B (en) * 2014-07-01 2017-07-18 广东威创视讯科技股份有限公司 Vision signal mask method, system and device
CN106339094B (en) * 2016-09-05 2019-02-26 山东万腾电子科技有限公司 Interactive remote expert cooperation examination and repair system and method based on augmented reality
CN107077720A (en) * 2016-12-27 2017-08-18 深圳市大疆创新科技有限公司 Method, device and the equipment of image procossing
CN108932053B (en) * 2018-05-21 2021-06-11 腾讯科技(深圳)有限公司 Drawing method and device based on gestures, storage medium and computer equipment
CN111614922A (en) * 2019-02-22 2020-09-01 中国移动通信有限公司研究院 Information interaction method, network terminal and terminal
CN110233841B (en) * 2019-06-11 2021-08-10 上海文景信息科技有限公司 Remote education data interaction system and method based on AR holographic glasses

Also Published As

Publication number Publication date
CN112565844A (en) 2021-03-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant