CN111314648A - Information processing method, processing device, first electronic equipment and server - Google Patents


Info

Publication number
CN111314648A
CN111314648A
Authority
CN
China
Prior art keywords
frame
electronic device
output
video
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010128574.2A
Other languages
Chinese (zh)
Inventor
罗应文
陶嘉明
李斌
武亚强
陈茂刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202010128574.2A
Publication of CN111314648A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454: Digital output to display device involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present application provide an information processing method, a processing apparatus, a first electronic device, and a server. When a second electronic device joins while the first electronic device is projecting its screen, it can obtain the group of frames most recently output by the first electronic device. By decoding and displaying this frame sequence, the second electronic device can quickly show the projected video of the first electronic device. Because the frame sequence contains an initial frame that can serve as a reference frame and requires no waiting, neither a garbled display nor a display-waiting period occurs, which improves the screen-projection efficiency of a video conference.

Description

Information processing method, processing device, first electronic equipment and server
Technical Field
The present invention relates to the field of multimedia communication technologies, and in particular, to an information processing method, a processing apparatus, a first electronic device, and a server.
Background
Screen projection is a common means of conducting video conferences across industries: it saves cost and time, offers strong real-time performance, and is not limited by region.
Take a common electronic-classroom scenario as an example: projecting the teacher's screen onto the students' screens is routine, and during projection some student devices are likely to join the electronic classroom for the first time, while others drop offline and then rejoin. Current screen-projection schemes for these two kinds of newly joined devices fall into two categories: in the first, the newly joined device decodes and displays directly, but the picture becomes garbled because no usable reference frame is available; in the second, the newly joined device waits for a usable reference frame before decoding and displaying, but the user then has to wait a long time before seeing the video.
Therefore, how to let these newly joined devices start displaying the projected screen quickly, without a garbled picture or a waiting period, is an urgent problem to be solved.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
an information processing method, the method comprising:
under the condition that a first electronic device outputs video, obtaining a group of frame sequences output by the first electronic device, wherein the frame sequences comprise an initial frame output last time by the first electronic device and a difference frame output from the output time of the initial frame to the current time, and the difference frame is a video frame different from the initial frame at least;
and in the case of receiving an access instruction of a second electronic device, transmitting the frame sequence to the second electronic device to enable the second electronic device to output the video.
Preferably, the sending the frame sequence to the second electronic device includes:
obtaining an access state of the second electronic device;
and sending a target frame matched with the access state in the frame sequence to the second electronic equipment.
Preferably, the accessing status includes first access, and the transmitting a target frame in the frame sequence matching the accessing status to the second electronic device includes:
transmitting the initial frame and the difference frame to the second electronic device.
Preferably, the accessing state includes a dropped access, and the sending a target frame in the frame sequence matching the accessing state to the second electronic device includes:
determining the video frame that had been decoded and displayed when the second electronic device dropped offline;
and sending, to the second electronic device, the video frames among the initial frame and the difference frames whose output times are later than the output time of the decoded-and-displayed video frame.
Preferably, the frame sequence has a frame identifier for marking each video frame therein, and the frame identifier can characterize the output time of the video frame marked by the frame identifier.
Preferably, the process of determining the initial frame last output by the first electronic device includes:
judging whether a video frame output by the first electronic equipment at the current moment is an initial frame;
if so, taking the video frame output by the first electronic device at the current moment as an initial frame output by the first electronic device for the last time;
if not, taking an initial frame which is output by the first electronic device and has an output time closest to the current time as an initial frame which is output by the first electronic device last time.
An information processing apparatus, the apparatus comprising:
an obtaining module, configured to, when a first electronic device outputs a video, obtain a set of frame sequences that have been output by the first electronic device, where the frame sequences include an initial frame that has been output last time by the first electronic device and a difference frame that is output from an output time of the initial frame to a current time, and the difference frame is a video frame that is at least different from the initial frame;
and the sending module is used for sending the frame sequence to the second electronic equipment under the condition of receiving an access instruction of the second electronic equipment so as to enable the second electronic equipment to output the video.
A first electronic device, the device comprising:
an output component for outputting video;
a processing component, configured to obtain a set of frame sequences that have been output by the output component, where the frame sequences include an initial frame that has been output last time by the first electronic device and a difference frame that is output from an output time of the initial frame to a current time, and the difference frame is a video frame that is at least different from the initial frame; and in the case of receiving an access instruction of a second electronic device, transmitting the frame sequence to the second electronic device to enable the second electronic device to output the video.
A server, the server comprising:
the memory is used for storing an application program and data generated by the running of the application program;
a processor for executing the application to perform the functions of: under the condition that a first electronic device outputs video, obtaining a group of frame sequences output by the first electronic device, wherein the frame sequences comprise an initial frame output last time by the first electronic device and a difference frame output from the output time of the initial frame to the current time, and the difference frame is a video frame different from the initial frame at least; and in the case of receiving an access instruction of a second electronic device, transmitting the frame sequence to the second electronic device to enable the second electronic device to output the video.
Through the technical means, the following beneficial effects can be realized:
It can be seen from the foregoing technical solutions that the embodiments of the present application provide an information processing method in which a second electronic device that joins while the first electronic device is projecting its screen obtains the group of frames most recently output by the first electronic device. By decoding and displaying this frame sequence, the second electronic device can quickly show the projected video; and because the sequence contains an initial frame that can serve as a reference frame and requires no waiting, neither a garbled display nor a display-waiting period occurs, improving the screen-projection efficiency of the video conference.
Drawings
To illustrate the embodiments of the present application or the prior-art solutions more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a system architecture diagram of a local electronic classroom as provided herein;
FIG. 2 is a system architecture diagram of a remote electronic classroom as provided herein;
fig. 3 is a flowchart of a method of processing information according to an embodiment of the present application;
FIG. 4 is an exemplary diagram of a set of frame sequences provided in an embodiment of the present application;
fig. 5 is a flowchart of a method of processing information according to a second embodiment of the present application;
fig. 6 is a flowchart of an information processing method according to a third embodiment of the present application;
fig. 7 is a schematic view of a scene provided in the third embodiment of the present application;
fig. 8 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present application.
The information processing method disclosed in the present application is applied to a video conference system. The description below continues with the common electronic-classroom scenario as an example; it can be understood that other, unlisted video conference scenarios also fall within the protection scope of the present application.
See fig. 1 for the system architecture of a local electronic classroom. The screen of the first electronic device is outputting a video, and through the screen-projection function the first electronic device can project its own screen onto the screens of the connected second electronic devices.
See fig. 2 for the system architecture of a remote electronic classroom. The screen of the first electronic device outputs a video, and through the screen-projection function the server can project the screen of the first electronic device onto the screens of the connected second electronic devices.
However, during screen projection by the first electronic device/server, some second electronic devices are likely to join the electronic classroom for the first time, while others drop offline and then rejoin.
At present, there are two methods to solve the screen projection problem of the two types of newly accessed second electronic devices:
in the first, the second electronic device decodes the display directly. The first electronic device/server directly sends the video frame displayed on the screen of the first electronic device to the second electronic device, but because the second electronic device has no I frame when being newly accessed, if the video frame received by the second electronic device is a P frame or a B frame, the second electronic device lacks an available reference frame, and at this moment, the second electronic device sets a pure reference frame by default, and the screen of the second electronic device is shown to be a flower screen.
Second, the second electronic device may wait and decode the display. The first electronic device/server directly sends the video frame which is displayed on the screen of the first electronic device to the second electronic device, the second electronic device judges the received video frame, if the video frame is not an I frame, the video frame is discarded, decoding is not started until the I frame is received, and the subsequent video frames are normally decoded. But the user using the second electronic device will wait a long time to start seeing the video, which may be 10-20 seconds, and the user experience is poor.
In order to solve the above problem, the present application provides an information processing method, which may be applied to a first electronic device in a local electronic classroom scenario, where the first electronic device includes:
an output component for outputting video;
the processing component is used for obtaining a group of frame sequences output by the output component, wherein the frame sequences comprise an initial frame output by the first electronic equipment for the last time and a difference frame output from the output time of the initial frame to the current time, and the difference frame is a video frame different from the initial frame at least; and under the condition of receiving an access instruction of the second electronic device, transmitting the frame sequence to the second electronic device so as to enable the second electronic device to output the video.
It should be noted that the detailed functions and the extended functions of the processing component of the first electronic device can refer to the description of the following embodiments.
In addition, the information processing method disclosed by the application can be applied to a server in a remote electronic classroom scene, and the server comprises:
the memory is used for storing the application program and data generated by the running of the application program;
a processor for executing an application to perform the functions of: under the condition that the first electronic device outputs a video, obtaining a group of frame sequences output by the first electronic device, wherein the frame sequences comprise an initial frame output by the first electronic device for the last time and a difference frame output from the output time of the initial frame to the current time, and the difference frame is a video frame at least different from the initial frame; and under the condition of receiving an access instruction of the second electronic device, transmitting the frame sequence to the second electronic device so as to enable the second electronic device to output the video.
It should be noted that the detailed functions and the extended functions of the server program can be referred to the description of the embodiments below.
In a first embodiment of the processing method disclosed in the present application, as shown in fig. 3, the method includes the following steps:
step 101: under the condition that the first electronic device outputs video, obtaining a group of frame sequences which are output by the first electronic device, wherein the frame sequences comprise an initial frame which is output by the first electronic device most recently and a difference frame which is output from the output time of the initial frame to the current time, and the difference frame is a video frame which is different from the initial frame at least.
In the embodiment of the present application, the video output by the first electronic device is a compressed video file, generally compressed with H.264 coding, which is used as the example below. For ease of understanding, H.264 coding is briefly described:
H.264 coding mainly exploits the temporal correlation of video. Its basic principle is: the first frame of the video output by the first electronic device serves as the initial frame, each subsequent frame references the previous frame, and only the differences (the parts that moved) are recorded, thereby achieving video compression.
In order to reduce the influence of frame loss (such as network packet loss or data corruption) and prediction errors on the whole video, video coding introduces the concept of a GOP (group of pictures). A GOP is an independently decodable frame sequence, typically consisting of one I frame, several P frames, and several B frames.
In the embodiment of the present application, a group of frame sequences is one GOP; accordingly, the initial frame it contains is an I frame, and the difference frames it contains are at least P frames. I frames, P frames, and B frames are briefly introduced below:
i frame: key frames, independent of other frames, with a compression rate of typically 1/7;
p frame: the compression rate of the unidirectional prediction frame can reach 1/20-1/30 depending on the previous I frame or P frame;
b frame: the compression rate of bidirectional prediction frames can reach 1/150-1/200 depending on previous and subsequent I frames or P frames.
Fig. 4 shows an example of a group of frame sequences in which M (the P-frame interval) is 3 and N (the I-frame interval) is 13; I, B, and P denote an I frame, a B frame, and a P frame, respectively.
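With those intervals, the display-order layout of such a GOP can be reproduced with a short sketch (the function and its parameters are illustrative, not part of the application):

```python
def gop_pattern(m: int, n: int) -> str:
    """Return the display-order frame types of one GOP.

    m: anchor-frame interval (distance between consecutive I/P frames)
    n: GOP length (distance between consecutive I frames)
    """
    types = []
    for i in range(n):
        if i == 0:
            types.append("I")  # key frame, decodable on its own
        elif i % m == 0:
            types.append("P")  # predicted from the previous I/P frame
        else:
            types.append("B")  # predicted from surrounding anchor frames
    return "".join(types)

print(gop_pattern(3, 13))  # IBBPBBPBBPBBP
```

For M = 3 and N = 13 this yields one I frame, four P frames, and eight B frames per GOP, matching the layout of fig. 4.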
In summary, in the embodiment of the present application, while the first electronic device outputs video, the group of frames it most recently output is obtained dynamically as time advances: the sequence comprises the I frame most recently output by the first electronic device and the P frames and/or B frames output between that I frame's output time and the current time.
Optionally, the process of determining the initial frame most recently output by the first electronic device includes the following steps:
judging whether the video frame output by the first electronic device at the current moment is an initial frame; if so, taking that video frame as the initial frame most recently output; if not, taking the already-output initial frame whose output time is closest to the current moment as the initial frame most recently output.
In the embodiment of the application, whether each video frame output by the first electronic device is an I frame is judged dynamically in real time. If it is, that I frame is taken as the one most recently output by the first electronic device; if not, the already-output I frame whose output time is closest to the current moment is taken as the one most recently output.
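A minimal sketch of this buffering logic, assuming frames arrive as (identifier, type) pairs with monotonically increasing identifiers standing in for output times (the class and data shapes are assumptions for illustration):

```python
class LatestGopBuffer:
    """Keep only the most recently output frame sequence (one GOP)."""

    def __init__(self):
        self.frames = []  # list of (frame_id, frame_type) pairs

    def on_frame_output(self, frame_id, frame_type):
        if frame_type == "I":
            # A new initial frame starts a new sequence: drop the old one.
            self.frames = [(frame_id, frame_type)]
        elif self.frames:
            # Difference frames (P/B) extend the current sequence.
            self.frames.append((frame_id, frame_type))

buf = LatestGopBuffer()
for fid, ftype in enumerate("IBBPBBPIBBP"):
    buf.on_frame_output(fid, ftype)
# After the second I frame (id 7), only frames 7..10 remain buffered.
print(buf.frames[0])  # (7, 'I')
```

The buffer is therefore bounded by one GOP's length, and its first element is always a usable reference frame for any newly joined device.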
It should be noted that the first electronic device may be designated in advance, before the video conference starts; for example, a fixed teacher end in an electronic classroom is designated as the first electronic device, and the student ends other than the teacher end are designated as second electronic devices.
Alternatively, an electronic device may determine its own role by recognizing characteristics of its environment: before entering the video conference, it can decide whether it is the first electronic device by recognizing, for example, facial features, fingerprint features, or the conference account used on the device. For example, each electronic device in an electronic classroom stores a facial photo of the teacher; before entering the classroom, the device checks whether a facial photo of its current user matches the teacher's photo, and if so determines itself to be the first electronic device, otherwise a second electronic device.
Of course, the first electronic device may also be designated by an electronic device that enters the video conference with designation rights. For example, the teacher end in the electronic classroom may designate any electronic device entering the classroom as the first electronic device, whereupon all other devices in the classroom are second electronic devices. This increases the flexibility and appeal of electronic-classroom teaching. To notify users promptly, each electronic device entering the classroom can output a notification identifying the first electronic device.
Step 102: and under the condition of receiving an access instruction of the second electronic device, transmitting the frame sequence to the second electronic device so as to enable the second electronic device to output the video.
In this embodiment of the application, when an access instruction from the second electronic device is received, the group of frames obtained in step 101 is sent to the second electronic device, which decodes and outputs the video frames in sequence order. The second electronic device can thus quickly display the screen of the first electronic device, with neither a garbled picture nor a waiting interface.
Therefore, with the information processing method provided by this embodiment, a second electronic device that joins while the first electronic device is projecting its screen obtains the group of frames most recently output by the first electronic device. By decoding and displaying this frame sequence, it can quickly show the projected video; and because the sequence contains an initial frame usable as a reference frame and requiring no waiting, neither a garbled display nor a display-waiting period occurs, improving the screen-projection efficiency of the video conference.
As an implementation manner of sending the frame sequence to the second electronic device, the second embodiment of the present application discloses an information processing method, as shown in fig. 5, the method includes the following steps:
step 201: under the condition that the first electronic device outputs video, obtaining a group of frame sequences which are output by the first electronic device, wherein the frame sequences comprise an initial frame which is output by the first electronic device most recently and a difference frame which is output from the output time of the initial frame to the current time, and the difference frame is a video frame which is different from the initial frame at least.
Step 202: and under the condition of receiving an access instruction of the second electronic equipment, acquiring the access state of the second electronic equipment.
In the embodiment of the application, during the video conference, when an access instruction from a second electronic device is received, whether that device has already been connected during this conference is determined by comparing its device identifier against the identifiers of devices that have joined the conference within the conference period.
If it has not been connected within the conference period, its access state is first access; if it has, its access state is dropped access.
It should be noted that the device identifier is any identifier that uniquely marks a device, such as an IP address or a conference account.
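The access-state check described above can be sketched as follows, assuming the device identifier is a string such as an IP address or conference account (the function and set names are illustrative):

```python
# Identifiers of every device that has joined during this conference.
seen_devices = set()

def classify_access(device_id: str) -> str:
    """Classify a joining device as 'first' or 'dropped' access."""
    if device_id in seen_devices:
        return "dropped"  # was connected earlier in this conference
    seen_devices.add(device_id)
    return "first"        # never seen during this conference

print(classify_access("192.168.1.20"))  # first
print(classify_access("192.168.1.20"))  # dropped
```

A real system would also expire this set when the conference ends, so that a device joining a later conference is again classified as first access.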
Step 203: sending the target frames in the frame sequence that match the access state to the second electronic device, so that the second electronic device outputs the video.
In the embodiment of the application, if the access state of the second electronic device within the conference period is first access, it can be concluded that the device cannot hold any cache of the video output by the first electronic device. The entire group of frames obtained in step 201, initial frame and difference frames alike, is therefore sent to the second electronic device, which can then decode and output the video.
If the access state within the conference period is dropped access, the device necessarily holds some cache of the video output by the first electronic device. Only the portion of the frames obtained in step 201 that the second electronic device has not cached needs to be sent, and the device can then decode and output the video.
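The target-frame selection for both access states can be sketched as follows, reusing illustrative (frame_id, frame_type) pairs; the names are assumptions, not from the application:

```python
def target_frames(frame_seq, access_state, last_decoded_id=None):
    """Pick the frames to send to a newly accessed device.

    frame_seq: list of (frame_id, frame_type), starting with the I frame.
    access_state: "first" or "dropped".
    last_decoded_id: id of the frame the device had decoded when it dropped.
    """
    if access_state == "first":
        # No cache on the device: send the whole sequence, I frame included.
        return frame_seq
    # Dropped device: send only frames output after the one it last decoded;
    # its cached frames (including the I frame) still serve as references.
    return [f for f in frame_seq if f[0] > last_decoded_id]

seq = [(7, "I"), (8, "B"), (9, "B"), (10, "P")]
print(target_frames(seq, "first"))       # all four frames
print(target_frames(seq, "dropped", 8))  # [(9, 'B'), (10, 'P')]
```

The dropped-access branch is what saves bandwidth: the device resumes from where it stopped instead of re-receiving the whole GOP.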
Therefore, with the information processing method of this embodiment, a second electronic device that joins during screen projection obtains the target frames, matched to its own access state, from the group of frames most recently output by the first electronic device. By decoding and displaying those target frames, it quickly shows the projected video, with no garbled display or display-waiting period, improving the screen-projection efficiency of the video conference.
As an implementation manner of sending a target frame matched with an access state in a frame sequence to a second electronic device when the access state includes a dropped access, a third embodiment of the present application discloses an information processing method, as shown in fig. 6, the method includes the following steps:
step 301: under the condition that the first electronic device outputs video, obtaining a group of frame sequences which are output by the first electronic device, wherein the frame sequences comprise an initial frame which is output by the first electronic device most recently and a difference frame which is output from the output time of the initial frame to the current time, and the difference frame is a video frame which is different from the initial frame at least.
Step 302: and under the condition of receiving an access instruction of the second electronic equipment, acquiring the access state of the second electronic equipment.
Step 303: when the access state is dropped access, determining the video frame that the second electronic device had decoded and displayed when it dropped offline.
Step 304: sending to the second electronic device those frames, among the initial frame and the difference frames, whose output times are later than the output time of the decoded-and-displayed frame, so that the second electronic device outputs the video.
In this embodiment of the application, because video is output in time order, each video frame output by the first electronic device has a unique output time. Once the frame that the second electronic device had decoded and displayed at the moment it dropped is determined, its output time is known; the frames in the group obtained in step 301 whose output times fall after it are exactly the frames the second electronic device has not cached, and they are sent to it so that it can decode and output the video.
Preferably, each video frame in the frame sequence is marked with a frame identifier that characterizes the frame's output time.
In this embodiment, the frame identifier may be the output time itself or any other time-ordered number. When determining which frames the second electronic device has not cached, comparing frame identifiers quickly locates the frames whose output times fall after that of the last decoded-and-displayed frame.
See the scenario diagram shown in fig. 7. The second electronic device joins the video conference and continuously outputs the video output by the first electronic device. Because of a network fault or similar cause, the second electronic device drops offline and then reconnects. At that point, the video frame being output by the first electronic device is the 2nd P frame of GOPn, while the frame the second electronic device had decoded and displayed when it dropped is the I frame of GOPn. By comparing frame identifiers such as output times, and sending to the second electronic device only the video frames after the I frame of GOPn up to the 2nd P frame, the second electronic device can quickly display the projected video.
Therefore, with the information processing method provided by this embodiment of the application, when the second electronic device drops offline while the first electronic device is projecting its screen, the second electronic device obtains those video frames in the frame sequence most recently output by the first electronic device whose output times fall after the output time of the frame it had decoded and displayed when it dropped. By decoding and displaying the obtained frames, the second electronic device quickly resumes display of the projected video, without screen corruption or a display-wait period, which improves the screen-projection efficiency of the video conference.
For ease of understanding, the present application is described through the following scenario examples:
With continued reference to the local electronic classroom shown in fig. 1, assume that the teacher end is the first electronic device, the student ends are second electronic devices, and student ends 1, 2 and 3 are initially connected to the teacher end.
On this basis, when teaching by video, the teacher end can project its own screen onto the screens of student ends 1, 2 and 3 through the screen-projection function, realizing local video teaching. While outputting the video, the teacher end dynamically stores the frame sequence it has most recently output.
In the first case, student end 2 drops offline because of a network failure and prepares to re-access the local electronic classroom. Student end 2 then sends an access instruction to the teacher end; the access instruction may carry an identifier indicating that this is not a first access (such as flag bit 0), together with the output time of the video frame student end 2 had decoded and displayed when it dropped.
After receiving the access instruction from student end 2, the teacher end retrieves the currently stored frame sequence and determines the video frames whose output times fall after the output time carried in the access instruction; these are exactly the frames student end 2 missed between dropping offline and the current time. The teacher end then sends the determined frames to student end 2, which decodes them in order and quickly resumes display of the teacher end's projected video, without screen corruption or a display-wait period.
In the second case, student end 4 prepares to access the local electronic classroom for the first time and sends an access instruction to the teacher end; the access instruction may carry an identifier indicating a first access (such as flag bit 1).
After receiving the access instruction from student end 4, the teacher end retrieves the currently stored frame sequence and sends all of its video frames to student end 4, which decodes them in order and quickly displays the teacher end's projected video, without screen corruption or a display-wait period.
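The two cases above can be summarized as a single dispatch on the access instruction. The sketch below is illustrative only; the flag values 0 and 1 come from the scenario descriptions, while the function and field names are assumptions:

```python
def frames_to_send(frame_sequence, first_access: bool, last_displayed_id=None):
    """Select the target frames for a device joining the screen projection.

    first_access=True  (flag bit 1): first access, send the whole stored sequence.
    first_access=False (flag bit 0): offline re-access, send only the frames
    output after the one the device had decoded and displayed before dropping.
    """
    if first_access:
        return list(frame_sequence)
    return [f for f in frame_sequence if f["id"] > last_displayed_id]

# Stored sequence: one initial frame followed by two difference frames.
seq = [{"id": 100, "type": "I"}, {"id": 101, "type": "P"}, {"id": 102, "type": "P"}]

all_frames = frames_to_send(seq, first_access=True)            # student end 4
missing = frames_to_send(seq, first_access=False,
                         last_displayed_id=100)                # student end 2
print(len(all_frames), [f["id"] for f in missing])
```

In both cases the receiving end starts decoding from a frame that is either the initial frame itself or follows a frame it already holds, so decoding can begin immediately.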
With continued reference to the remote electronic classroom shown in fig. 2, assume that the teacher end located in region 2 is the first electronic device, the student ends located in regions 1 and 2 are second electronic devices, and the teacher end, student end 1 in region 1, and student ends 2 and 3 in region 2 are initially connected to the server.
On this basis, when the teacher end teaches by video, the server projects the teacher end's screen onto the screens of student ends 1, 2 and 3 through the screen-projection function, realizing remote video teaching. While the teacher end outputs the video, the server dynamically stores the frame sequence the teacher end has most recently output.
In the first case, student end 2 drops offline because of a network failure and prepares to re-access the remote electronic classroom. Student end 2 then sends an access instruction to the server; the instruction may carry an identifier indicating that this is not a first access (such as flag bit 0), together with the output time of the video frame student end 2 had decoded and displayed when it dropped.
After receiving the access instruction from student end 2, the server retrieves the currently stored frame sequence and determines the video frames whose output times fall after the output time carried in the access instruction; these are the frames student end 2 missed between dropping offline and the current time. The server then sends the determined frames to student end 2, which decodes them in order and quickly resumes display of the teacher end's projected video, without screen corruption or a display-wait period.
In the second case, student end 4 prepares to access the remote electronic classroom for the first time and sends an access instruction to the server; the access instruction may carry an identifier indicating a first access (such as flag bit 1).
After receiving the access instruction from student end 4, the server retrieves the currently stored frame sequence and sends all of its video frames to student end 4, which decodes them in order and quickly displays the teacher end's projected video, without screen corruption or a display-wait period.
Corresponding to the above information processing method, the present application also discloses an information processing apparatus, as shown in fig. 8, comprising:
an obtaining module 10, configured to, when a first electronic device outputs a video, obtain a set of frame sequences that have been output by the first electronic device, where the frame sequences include an initial frame that is output last by the first electronic device and a difference frame that is output from an output time of the initial frame to a current time, and the difference frame is a video frame that is at least different from the initial frame;
and a sending module 20, configured to send the frame sequence to the second electronic device when receiving an access instruction of the second electronic device, so that the second electronic device outputs the video.
With the information processing apparatus provided by this embodiment of the application, when the second electronic device accesses the first electronic device during screen projection, it obtains the frame sequence the first electronic device most recently output. Because that frame sequence includes an initial frame usable as a reference frame, the second electronic device can decode and display it without waiting; it quickly displays the projected video with no screen corruption or display-wait period, which improves the screen-projection efficiency of the video conference.
In another embodiment of the processing apparatus disclosed in the present application, the sending module 20 sends the frame sequence to the second electronic device, including:
obtaining an access state of a second electronic device; and transmitting the target frame matched with the access state in the frame sequence to the second electronic equipment.
In another embodiment of the processing apparatus disclosed in the present application, the accessing status includes first accessing, and the sending module 20 sends a target frame matching the accessing status in the frame sequence to the second electronic device, including:
the initial frame and the difference frame are transmitted to the second electronic device.
In another embodiment of the processing apparatus disclosed in the present application, the access status includes a dropped access, and the sending module 20 sends a target frame matched with the access status in the frame sequence to the second electronic device, including:
determining the video frame that the second electronic device had decoded and displayed when it went offline; and sending to the second electronic device those video frames, among the initial frame and the difference frame, whose output times fall after the output time of the decoded-and-displayed video frame.
In another embodiment of the processing apparatus disclosed in the present application, each video frame in the frame sequence is marked with a frame identifier, and the frame identifier characterizes the output time of the video frame it marks.
In another embodiment of the processing apparatus disclosed in the present application, the process of determining, by the obtaining module 10, an initial frame that was last output by the first electronic device includes:
judging whether the video frame output by the first electronic device at the current time is an initial frame; if so, taking that video frame as the initial frame most recently output by the first electronic device; if not, taking the initial frame whose output time is closest to the current time as the initial frame most recently output by the first electronic device.
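The logic above, which always keeps the stored sequence anchored at the most recently output initial frame, can be sketched as a small rolling buffer. This is an illustrative assumption about one possible implementation, not the patent's own code:

```python
class FrameSequenceStore:
    """Keeps only the frames from the most recently output initial frame onward."""

    def __init__(self):
        self.sequence = []

    def on_frame_output(self, frame_id: int, is_initial: bool, data: bytes):
        if is_initial:
            # A new initial frame restarts the stored sequence, so the sequence
            # always begins with the initial frame output last by the device.
            self.sequence = []
        self.sequence.append({"id": frame_id, "initial": is_initial, "data": data})

store = FrameSequenceStore()
# Frames 1-3 form one group; frame 4 is a new initial frame, frame 5 follows it.
for fid, init in [(1, True), (2, False), (3, False), (4, True), (5, False)]:
    store.on_frame_output(fid, init, b"")
print([f["id"] for f in store.sequence])
```

With this structure, memory stays bounded by one group of pictures, and answering an access instruction is just a lookup over `store.sequence`.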
The embodiments in the present specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments may be referred to one another. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. An information processing method, the method comprising:
under the condition that a first electronic device outputs video, obtaining a group of frame sequences output by the first electronic device, wherein the frame sequences comprise an initial frame output last time by the first electronic device and a difference frame output from the output time of the initial frame to the current time, and the difference frame is a video frame different from the initial frame at least;
and in the case of receiving an access instruction of a second electronic device, transmitting the frame sequence to the second electronic device to enable the second electronic device to output the video.
2. The method of claim 1, wherein the transmitting the sequence of frames to the second electronic device comprises:
obtaining an access state of the second electronic device;
and sending a target frame matched with the access state in the frame sequence to the second electronic equipment.
3. The method of claim 2, wherein the access status comprises a first access, and the transmitting a target frame of the sequence of frames matching the access status to the second electronic device comprises:
transmitting the initial frame and the difference frame to the second electronic device.
4. The method of claim 2, wherein the access status comprises a dropped access, and the transmitting a target frame of the sequence of frames matching the access status to the second electronic device comprises:
determining the video frame decoded and displayed by the second electronic device when it went offline;
and sending, to the second electronic device, the video frames among the initial frame and the difference frame whose output times are after the output time of the decoded-and-displayed video frame.
5. The method according to claim 4, wherein each video frame in the frame sequence is marked with a frame identifier, and the frame identifier characterizes the output time of the video frame it marks.
6. The method of claim 1, wherein the determining of the most recently output initial frame by the first electronic device comprises:
judging whether a video frame output by the first electronic equipment at the current moment is an initial frame;
if so, taking the video frame output by the first electronic device at the current moment as an initial frame output by the first electronic device for the last time;
if not, taking an initial frame which is output by the first electronic device and has an output time closest to the current time as an initial frame which is output by the first electronic device last time.
7. An information processing apparatus, the apparatus comprising:
an obtaining module, configured to, when a first electronic device outputs a video, obtain a set of frame sequences that have been output by the first electronic device, where the frame sequences include an initial frame that has been output last time by the first electronic device and a difference frame that is output from an output time of the initial frame to a current time, and the difference frame is a video frame that is at least different from the initial frame;
and the sending module is used for sending the frame sequence to the second electronic equipment under the condition of receiving an access instruction of the second electronic equipment so as to enable the second electronic equipment to output the video.
8. A first electronic device, the device comprising:
an output component for outputting video;
a processing component, configured to obtain a set of frame sequences that have been output by the output component, where the frame sequences include an initial frame that has been output last time by the first electronic device and a difference frame that is output from an output time of the initial frame to a current time, and the difference frame is a video frame that is at least different from the initial frame; and in the case of receiving an access instruction of a second electronic device, transmitting the frame sequence to the second electronic device to enable the second electronic device to output the video.
9. A server, the server comprising:
the memory is used for storing an application program and data generated by the running of the application program;
a processor for executing the application to perform the functions of: under the condition that a first electronic device outputs video, obtaining a group of frame sequences output by the first electronic device, wherein the frame sequences comprise an initial frame output last time by the first electronic device and a difference frame output from the output time of the initial frame to the current time, and the difference frame is a video frame different from the initial frame at least; and in the case of receiving an access instruction of a second electronic device, transmitting the frame sequence to the second electronic device to enable the second electronic device to output the video.
CN202010128574.2A 2020-02-28 2020-02-28 Information processing method, processing device, first electronic equipment and server Pending CN111314648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010128574.2A CN111314648A (en) 2020-02-28 2020-02-28 Information processing method, processing device, first electronic equipment and server


Publications (1)

Publication Number Publication Date
CN111314648A true CN111314648A (en) 2020-06-19

Family

ID=71148388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010128574.2A Pending CN111314648A (en) 2020-02-28 2020-02-28 Information processing method, processing device, first electronic equipment and server

Country Status (1)

Country Link
CN (1) CN111314648A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101686391A (en) * 2008-09-22 2010-03-31 华为技术有限公司 Video coding/decoding method and device as well as video playing method, device and system
CN102137279A (en) * 2011-03-18 2011-07-27 福州瑞芯微电子有限公司 Method for realizing disconnection continuous playing of on-line video of portable electronic equipment
CN104735463A (en) * 2015-03-26 2015-06-24 南京传唱软件科技有限公司 Streaming media transmission method and system
WO2017203790A1 (en) * 2016-05-25 2017-11-30 株式会社Nexpoint Video splitting device and monitoring method
CN109547860A (en) * 2018-12-07 2019-03-29 晶晨半导体(上海)股份有限公司 A kind of method and IPTV playing device of the video suspension continued broadcasting of program request
CN110213642A (en) * 2019-05-23 2019-09-06 腾讯音乐娱乐科技(深圳)有限公司 Breakpoint playback method, device, storage medium and the electronic equipment of video
CN110784740A (en) * 2019-11-25 2020-02-11 北京三体云时代科技有限公司 Video processing method, device, server and readable storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113365143A (en) * 2021-05-31 2021-09-07 努比亚技术有限公司 Audio popping elimination method and related equipment
CN113365143B (en) * 2021-05-31 2024-03-19 努比亚技术有限公司 Audio pop sound eliminating method and related equipment
CN114567802A (en) * 2021-12-29 2022-05-31 沈阳中科创达软件有限公司 Data display method and device
CN114567802B (en) * 2021-12-29 2024-02-09 沈阳中科创达软件有限公司 Data display method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200619