CN111131817A - Screen sharing method, device, storage medium and screen sharing system

Info

Publication number
CN111131817A
Authority
CN
China
Prior art keywords
image
mode
screen sharing
decoding
image acquisition
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN201911405055.XA
Other languages
Chinese (zh)
Other versions
CN111131817B (en)
Inventor
张爱明 (Zhang Aiming)
王林 (Wang Lin)
赖学武 (Lai Xuewu)
Current Assignee (the listed assignees may be inaccurate)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN201911405055.XA
Publication of CN111131817A
Application granted
Publication of CN111131817B
Active legal status
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay

Abstract

The application provides a screen sharing method, apparatus, storage medium and screen sharing system. The screen sharing method comprises: respectively determining an image acquisition mode and an image coding mode according to the image acquisition and coding related information, where that information comprises hardware model information and/or software model information of the sending-end device, and/or image acquisition and coding parameter information, and/or image acquisition and coding evaluation parameter information; the image acquisition mode is an image acquisition mode based on hardware processing or software processing, and the image coding mode is an image coding mode based on hardware processing or software processing. This processing realizes intelligent switching between software-based and hardware-based image acquisition, so that the image acquisition mode and image coding mode fit the actual acquisition and coding scene, breaking the constraint that a fixed acquisition and coding mode imposes on the acquisition and coding effect and thereby improving that effect.

Description

Screen sharing method, device, storage medium and screen sharing system
Technical Field
The present application relates to the field of image processing technologies, and in particular to a screen sharing method, apparatus, storage medium, and screen sharing system.
Background
The screen sharing is an image processing technology for sharing the display content of a screen of a certain device to other devices, for example, sharing the screen display content of a teacher-side device to a student-side device in an intelligent teaching scene, sharing the screen display content of a speaker device to each participant device in an intelligent conference scene, and the like.
In the screen sharing process, capturing the screen image of the device to be shared is the basis and prerequisite for screen sharing. A common screen sharing scheme specifies in advance a particular processing mode for acquiring and encoding images of the device screen, for example, performing acquisition and encoding with a hardware device, or with a software program. However, the display content of a device screen changes constantly, and such a scheme cannot guarantee that the preselected acquisition and coding mode suits the actual acquisition and coding scene, so the acquisition and coding effect may be poor.
Disclosure of Invention
Based on the defects and shortcomings of the prior art, the application provides a screen sharing method, apparatus, device, storage medium and screen sharing system, which can select an adapted image acquisition mode and image coding mode according to the image acquisition and coding related information, thereby improving the acquisition and coding effect.
A screen sharing method, comprising:
respectively determining an image acquisition mode and an image coding mode according to the image acquisition and coding related information;
the image acquisition and coding related information comprises at least one of: hardware model information of the sending-end device, software model information of the sending-end device, image acquisition and coding parameter information, and image acquisition and coding evaluation parameter information; the image acquisition mode is an image acquisition mode based on hardware processing or software processing, and the image coding mode is an image coding mode based on hardware processing or software processing;
carrying out image acquisition processing on the screen display content of the sending-end device according to the image acquisition mode, and carrying out coding processing on the acquired image according to the image coding mode to obtain a screen sharing data frame;
and sending the screen sharing data frame to a data distribution server so that the data distribution server sends the screen sharing data frame to a receiving end device.
A screen sharing apparatus comprising:
the acquisition and coding mode determining unit is used for respectively determining an image acquisition mode and an image coding mode according to the image acquisition and coding related information;
the image acquisition and coding related information comprises at least one of: hardware model information of the sending-end device, software model information of the sending-end device, image acquisition and coding parameter information, and image acquisition and coding evaluation parameter information; the image acquisition mode is an image acquisition mode based on hardware processing or software processing, and the image coding mode is an image coding mode based on hardware processing or software processing;
the acquiring and coding processing unit is used for acquiring and processing the screen display content of the sending terminal equipment according to the image acquisition mode and coding the acquired image according to the image coding mode to obtain a screen sharing data frame;
and the data sending unit is used for sending the screen sharing data frame to a data distribution server so that the data distribution server sends the screen sharing data frame to receiving end equipment.
A screen sharing device, comprising:
a memory and a processor;
the memory is connected with the processor and used for storing programs;
the processor is used for realizing the screen sharing method by operating the program stored in the memory.
A storage medium having stored thereon a computer program which, when executed by a processor, implements the screen sharing method described above.
A screen sharing system, comprising:
the system comprises sending end equipment, a data distribution server and receiving end equipment;
the sending-end device is used for respectively determining an image acquisition mode and an image coding mode according to the image acquisition and coding related information; the image acquisition and coding related information comprises at least one of: hardware model information of the sending-end device, software model information of the sending-end device, image acquisition and coding parameter information, and image acquisition and coding evaluation parameter information; the image acquisition mode is an image acquisition mode based on hardware processing or software processing, and the image coding mode is an image coding mode based on hardware processing or software processing; carrying out image acquisition processing on the screen display content of the sending-end device according to the image acquisition mode, and carrying out coding processing on the acquired image according to the image coding mode to obtain a screen sharing data frame; and sending the screen sharing data frame to a data distribution server;
the data distribution server is used for sending the screen sharing data frame sent by the sending end equipment to the receiving end equipment;
and the receiving end equipment is used for receiving the screen sharing data frame sent by the data distribution server and decoding and displaying the screen sharing data frame.
The process of determining the image acquisition mode and the image coding mode included in the screen sharing method ensures that the determined modes strictly match the actual image acquisition and coding related information, and realizes intelligent switching between software-based and hardware-based image acquisition, so that the image acquisition mode and image coding mode fit the actual acquisition and coding scene. Compared with the common prior-art screen sharing scheme that performs acquisition and coding with preset modes, the sending-end device in the embodiment of the application selects the image acquisition mode and image coding mode more flexibly: it can adapt to changes in the acquisition and coding scene to select the optimal modes, breaks the constraint that a fixed acquisition and coding mode imposes on the acquisition and coding effect, and can therefore improve that effect.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic structural diagram of a screen sharing system provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an image acquisition process provided in an embodiment of the present application;
fig. 3 is a schematic processing flow diagram of a data distribution server sending data to a student a device according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a screen sharing method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a screen sharing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a screen sharing device according to an embodiment of the present application.
Detailed Description
The technical scheme of the embodiment of the application is suitable for screen sharing application scenes, for example, sharing the screen display content of a teacher-side device to each student-side device, or sharing the screen display content of a conference speaker's device to each participant device, and the like.
By adopting the technical scheme of the embodiment of the application, the acquisition and coding mode with the best acquisition and coding effect can be selected automatically according to the acquisition and coding scene, so that the screen image acquisition and coding performed during screen sharing matches the actual scene, breaking the constraint that a fixed acquisition and coding mode imposes on the acquisition and coding effect.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
An embodiment of the present application provides a screen sharing system, as shown in fig. 1, the system includes:
a sending end device 1, a data distribution server 2, and a receiving end device 3.
The screen sharing system described above is used to share the screen display content of the transmitting-end device 1 to the receiving-end device 3. The number of the sending-end device 1 and the data distribution server 2 in the screen sharing system is usually one, the number of the receiving-end devices 3 is plural, and the structure, the function and the device type of each receiving-end device 3 may be the same or different. That is, the screen sharing system described above realizes sharing of screen display contents of a single transmitting-side device 1 to a plurality of receiving-side devices 3 by the data distribution server 2.
The sending end device 1 is configured to determine an image acquisition mode and an image encoding mode respectively according to the image acquisition and encoding related information;
the above-mentioned image collecting and encoding related information refers to related information for collecting and encoding the screen display content of the sending end device 1, and specifically includes:
at least one of hardware model information of the sending end device 1, software model information of the sending end device 1, image editing parameter information, and image editing evaluation parameter information.
The hardware model information of the sending-end device 1 may include hardware model information of the sending-end device 1 itself, for example, information such as a processor model and a memory capacity of the sending-end device 1, or model information of image acquisition hardware installed in the sending-end device 1, for example, Nvidia Graphics card hardware information, Intel Graphics card hardware information, and the like.
As an exemplary implementation, since the image capture hardware generally runs on top of the device operating system, the hardware model information may specifically refer to information determined by combining the operating system and the image capture hardware, such as the DX_nvenc encoding technique implemented by a Windows 7 system with Nvidia graphics card hardware, the DX_qsv acquisition and coding technique implemented by a Windows 7 system with an Intel graphics card, or the Dxgi_qsv technique implemented by a Windows 10 system with Intel graphics card hardware; information on the technique employed can serve as the above-mentioned hardware model information.
The software model information may include software model information of the sending-end device 1 itself, for example, model information of the operating system of the sending-end device 1, or model information of the image acquisition and coding software run by the sending-end device 1.
The image acquisition and coding parameter information refers to parameter information for capturing images of the screen display content of the sending-end device 1 and encoding the captured images, and may specifically include any one or more of the following parameters: expected code rate, buffer size, output resolution, key frame interval, and the like.
Wherein, the expected code rate refers to an expected image acquisition and coding code rate; the cache size refers to the size of the cache during image acquisition and encoding; the output resolution refers to the resolution of the image acquired and output by encoding; the key frame interval refers to a time interval between key data frames obtained by image acquisition and encoding.
The image acquisition and coding evaluation parameter information refers to parameter information for evaluating the images obtained by acquisition and coding, and may specifically include any one or more of the following evaluation parameters: brightness, sharpness, signal-to-noise ratio, visual resolution, actual code rate, maximum byte amount of a single frame, mean square error of all frame data, recognizability of lines in the picture, color fidelity of the picture, and the like.
The mean square error of all frame data is the mean square error of the byte amounts of the encoded image frames. The recognizability of lines in the picture refers to the difference between the thickness of lines in the encoded image and the thickness of the lines in the original image. The color fidelity of the picture refers to the difference between the colors of the encoded image and the colors of the original image.
It can be understood that the hardware model information and/or software model information of the sending-end device 1, the image acquisition and coding parameter information, and the image acquisition and coding evaluation parameter information described above cover both the parameters of the acquisition and coding processing and the parameters of the acquisition and coding result; taken together, this information serves as the image acquisition and coding related information and can be used to guide acquisition and coding.
For example, suppose that when the screen display content of the teacher-side device is shared to the student-side devices, the image acquisition and coding related information specifically includes hardware model information indicating the DX_qsv technique, an expected code rate of 400 kbps, a key frame interval of 1 second, and a single-frame maximum byte amount of 1 MB. This information then indicates that the acquisition and coding of the teacher-side device's screen display content should be performed with the DX_qsv technique, at an expected code rate of 400 kbps, with a key frame interval of 1 second and a single-frame maximum byte amount of 1 MB. Performing acquisition and coding on the teacher-side device's screen display content according to this information yields image data with a code rate of 400 kbps, a key frame interval of 1 second, and a single-frame maximum byte amount of 1 MB.
For example, the specific content of each item of the image acquisition and coding related information may be preset by the user and then read automatically by the sending-end device 1, or may be set automatically by the sending-end device 1 directly according to the device information.
On the basis of the sending-end device acquiring the image acquisition and coding related information, the embodiment of the application configures the sending-end device 1 to determine, from that information, an image acquisition mode and an image coding mode adapted to it.
The image acquisition mode is based on hardware processing or software processing, that is, an image acquisition mode implemented based on image acquisition hardware or an image acquisition mode implemented based on image acquisition software. The image acquisition mode realized based on image acquisition hardware refers to a mode of acquiring images by means of image acquisition hardware such as Nvidia Graphics card, Intel Graphics card and the like; the image acquisition mode realized based on the image acquisition software is a mode of acquiring images by using the CPU + the image acquisition software of the sending terminal equipment.
Similarly, the image coding mode is an image coding mode based on hardware processing or software processing, for example, a mode of encoding images with hardware such as an Nvidia Graphics card or Intel Graphics card, or a mode of encoding images with the CPU and image encoding software of the sending-end device.
Further, different image capturing modes and image encoding modes have differences in specific image capturing parameters and image encoding parameters, in addition to differences in implementation based on hardware processing or software processing. Preferably, the difference between the image capturing parameters and the image coding parameters may be a difference between specific values of any parameter in the image capturing and coding parameters, and/or a difference between specific values of any parameter in the image capturing and coding evaluation parameters.
For example, for the image capturing mode based on the hardware processing, when the image capturing parameters are different, different image capturing modes based on the hardware processing are formed, for example, the image capturing mode with the expected code rate of 400kbps based on the hardware processing and the image capturing mode with the expected code rate of 500kbps based on the hardware processing are actually different image capturing modes based on the hardware processing.
It can be understood that the image acquisition mode and image coding mode based on software processing may put great pressure on the CPU of the sending-end device, because that CPU must also serve the resource demands of every other part of the device. Therefore, compared with software-based acquisition and coding, hardware-based acquisition and coding saves CPU resources of the sending-end device, so where conditions allow, the hardware-based image acquisition mode and image coding mode should be preferred. However, not all sending-end devices are suited to hardware-based acquisition and coding, and not every acquisition and coding task can be handled by the hardware-based mode. To enable the sending-end device 1 to select the image acquisition mode and image coding mode suited to the actual acquisition and coding scene, the embodiment of the application specifies that, after the sending-end device 1 obtains the image acquisition and coding related information, it automatically determines the image acquisition mode and image coding mode in real time from that information.
As an exemplary implementation, when the sending-end device 1 determines the image acquisition mode and image coding mode according to the image acquisition and coding related information, it specifically queries, from a preset acquisition and coding mode database, the image acquisition mode and image coding mode matched with that information, and uses them as the modes for performing the acquisition and coding processing.
The acquisition and coding mode database contains combinations of image acquisition modes and image coding modes, together with the image acquisition and coding related information matched with each combination.
Specifically, the combination of the image capturing method and the image coding method refers to a combination of an image capturing method and an image coding method, which is configured by matching an image capturing method based on hardware processing or an image capturing method based on software processing with an image coding method based on hardware processing or an image coding method based on software processing. The combination of the image acquisition mode and the image coding mode defines a specific mode for acquiring and coding the screen display content of the sending end device, for example, the image acquisition mode based on hardware processing is used for acquiring the image, and the image coding mode based on software processing is used for coding the acquired image; or, the image is captured by using an image capturing method based on hardware processing, and the captured image is encoded by using an image encoding method based on hardware processing.
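For illustration only, the following minimal Python sketch shows one way the mode-database lookup described above could be organized. The key fields, table contents, mode names and the software fallback policy are assumptions made for clarity, not details taken from the patent:

```python
# Illustrative sketch of the acquisition and coding mode database lookup.
# Keys, table contents and the fallback policy are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class CaptureEncodeInfo:
    hardware_model: str         # e.g. "win7+nvidia", "win7+intel", ...
    expected_bitrate_kbps: int
    keyframe_interval_s: int

# Each entry maps a piece of acquisition and coding related information to
# the (capture mode, encode mode) combination whose evaluation result was best.
MODE_DATABASE = {
    CaptureEncodeInfo("win7+nvidia", 400, 1): ("hw_capture", "hw_encode"),
    CaptureEncodeInfo("win7+intel", 400, 1): ("hw_capture", "sw_encode"),
    CaptureEncodeInfo("generic", 400, 1): ("sw_capture", "sw_encode"),
}

def select_modes(info: CaptureEncodeInfo) -> tuple:
    """Return the matched (capture_mode, encode_mode) combination,
    falling back to pure software processing when nothing matches."""
    return MODE_DATABASE.get(info, ("sw_capture", "sw_encode"))

print(select_modes(CaptureEncodeInfo("win7+nvidia", 400, 1)))
# ('hw_capture', 'hw_encode')
```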
The embodiment of the application evaluates in advance, using the image acquisition and coding evaluation parameters, the acquisition and coding performed by each combination of image acquisition mode and image coding mode, and stores each combination together with its evaluation result in the acquisition and coding mode database.
Meanwhile, the embodiment of the application also combines the evaluation result of each combination of the image acquisition mode and the image coding mode to determine the combination of the image acquisition mode and the image coding mode matched with each image acquisition and coding related information.
Each piece of image acquisition and coding related information is a different combination of hardware model information and/or software model information, and/or image acquisition and coding parameter information, and/or image acquisition and coding evaluation parameter information.
Illustratively, in the embodiment of the present application, the matching relationship between the combinations of image acquisition modes and image coding modes and the various pieces of image acquisition and coding related information is established in advance through the following process:
firstly, carrying out image acquisition and coding processing according to the image acquisition and coding parameter information by utilizing the combination of each image acquisition mode and each image coding mode to obtain an image acquisition and coding result;
evaluating the acquisition and coding result of each combination of image acquisition mode and image coding mode based on the acquisition and coding evaluation parameter information to obtain an evaluation result;
then, for each piece of image acquisition and coding related information, determining the combination of image acquisition mode and image coding mode with the best evaluation result as the combination matched with that information.
For example, in the embodiment of the present application, the image acquisition mode based on hardware processing and the image acquisition mode based on software processing are respectively and uniformly sampled to obtain various combinations of image acquisition modes and image coding modes.
For example, the expected code rate is uniformly sampled from 200 Kb/s to 1 Mb/s at intervals of 200 Kb/s; the buffer size is uniformly sampled from 200 Kb to 1 Mb at intervals of 200 Kb; the output resolution is sampled at three grades (high, medium, low); the input frame rate is sampled at 10, 15, 24 and 30 frames per second; and the key frame interval is sampled from 10 to 120 at intervals of 20. Matched with three mainstream hardware encoding modes plus a software encoding mode, this gives about N = 5 × 5 × 3 × 4 × 6 × 4 = 7200 combinations of image acquisition modes and image coding modes.
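The size of this sampling grid can be checked with a short Python sketch; the grid values follow the text above, while the four encoder-mode names are assumptions:

```python
# Enumerate the sampled acquisition/encoding parameter grid described above.
from itertools import product

bitrates_kbps      = [200, 400, 600, 800, 1000]  # 200 Kb/s .. 1 Mb/s, step 200
buffer_sizes_kb    = [200, 400, 600, 800, 1000]  # 200 Kb .. 1 Mb, step 200
resolutions        = ["high", "medium", "low"]
input_frame_rates  = [10, 15, 24, 30]            # frames per second
keyframe_intervals = [10, 30, 50, 70, 90, 110]   # 10 .. 120, step 20
encode_modes       = ["hw_nvenc", "hw_qsv", "hw_dxgi_qsv", "software"]  # assumed names

combinations = list(product(bitrates_kbps, buffer_sizes_kb, resolutions,
                            input_frame_rates, keyframe_intervals, encode_modes))
print(len(combinations))  # 5 * 5 * 3 * 4 * 6 * 4 = 7200
```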
Then, a test sequence combining dynamic and static content is designed, for example a 10-minute sequence: two minutes of action footage + two minutes of static frames (bright colors) + two minutes of action footage + two minutes of static frames (monotone colors) + two minutes of rapidly alternating dynamic and static content. The reason for this design is that, for example, a teacher in class may alternate between showing PPT slides and playing video clips; such frequent changes of picture type put great stress on the encoder, because dynamic pictures carry a very large amount of information while static pictures carry very little, which tests the encoder's adaptive capacity.
The designed sequence is then captured and encoded with each of the 7200 combinations of image acquisition modes and image coding modes, and the encoding results of all 7200 combinations are evaluated. The main evaluation criteria are: brightness, sharpness, signal-to-noise ratio, visual resolution, actual code rate, maximum byte amount of a single frame, mean square error of all frame data, recognizability of lines in the picture, and color fidelity of the picture.
Brightness, sharpness, signal-to-noise ratio and visual resolution can be calculated using existing methods.
The actual code rate, the maximum byte amount of a single frame, the mean square error of all frame data, the recognizability of lines in the picture, and the color fidelity of the picture are calculated as follows:
the actual code rate is the total size of the coded data divided by the total playing time: and bps is B/T, wherein B is the total data size and T is the total playing time. The closer the actual code rate is to the expected code rate, the better the coding result.
Maximum byte amount per frame: max { F1,F2,F3,F4,...Fn,}. Wherein FiIs the data amount of the coded ith frame of image, and n represents the number of the acquired image frames in a fixed time interval. Maximum of single frameThe smaller the byte amount, the better the encoding result.
Mean square error of all frame data:

σ² = (1/n) · Σ_{i=1}^{n} (F_i − u)²

where u is the mean of {F_1, F_2, F_3, F_4, ..., F_n}. The smaller the mean square error of all frame data, the better the encoding result.
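A small Python sketch of these three statistics, computed from a list of per-frame encoded sizes (the sample values are made up; note that bps = B / T above leaves the byte-to-bit conversion implicit, so it is written out explicitly here):

```python
# Rate and size statistics over encoded frame sizes F_1..F_n (illustrative values).
frame_bytes = [12_000, 3_500, 4_100, 90_000, 5_200]  # F_i: encoded size per frame
total_play_time_s = 5.0                              # T: total playing time

actual_bitrate_bps = sum(frame_bytes) * 8 / total_play_time_s  # bps = B / T (in bits)
max_frame_bytes = max(frame_bytes)                             # max{F_1, ..., F_n}

u = sum(frame_bytes) / len(frame_bytes)                          # mean frame size
mse = sum((f - u) ** 2 for f in frame_bytes) / len(frame_bytes)  # mean square error

print(actual_bitrate_bps, max_frame_bytes, round(mse, 1))
```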
The line recognizability is measured mainly as follows: an original line is added in advance at a specified position in the image; after encoding, the gradient at that same position is computed by a convolution operation to identify the line, and the thickness of the identified line is compared with the thickness of the original line.
The above-mentioned computation of the line-position gradient by convolution can be implemented according to the following convolution formula:

G(i, j) = Σ_{k=1}^{K} Σ_{l=1}^{L} f(i − k, j − l) · H(k, l)

where f(i − k, j − l) is the pixel value of the pixel at coordinates (i − k, j − l) in the image, H(k, l) is the element in row k, column l of the K × L matrix H, each element of H being a convolution operator coefficient for computing the gradient; H may be a pre-designed matrix. G(i, j) is the gradient value of the pixel at coordinates (i, j) in the line being identified.
Then, the line width is calculated according to the following formula:
W_i = P_g1 − P_g2

W̄ = (1/m) · Σ_{i=1}^{m} W_i

where P_g1 and P_g2 are the coordinates of the two side edges of the line obtained from the gradient (the position of the original line added to the image is known; on either side of that known position, the coordinates whose gradient values exceed an empirical threshold are taken as P_g1 and P_g2, which are the sets of points forming the two side edges of the line, P_g1 and P_g2 being the nearest two side edges), W_i is the width of the line at the i-th pixel row, and m is the total number of pixel rows crossing the identified line.
Referring to the gradient and line-width formulas above, the gradient coordinates that can serve as the two side edges of the line are selected according to a threshold (which may be an empirical value), and the detected line width (which may be averaged over the rows) is then calculated from those coordinates. The closer the detected line width is to the original line width (the width of the designed line), the better the encoding result.
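As a rough Python sketch of this measurement (using NumPy and SciPy): the Sobel-like operator H and the fixed threshold are assumed design choices, since the text leaves the concrete operator and threshold open:

```python
# Sketch of line-width detection: convolve with a gradient operator H,
# threshold gradient magnitudes near the known line, measure width per row.
import numpy as np
from scipy.signal import convolve2d

H = np.array([[-1, 0, 1],
              [-2, 0, 2],
              [-1, 0, 1]], dtype=float)  # assumed Sobel-like gradient operator

def detected_line_width(image, rows, threshold):
    """Average W_i over the pixel rows crossing the known line position."""
    g = convolve2d(image.astype(float), H, mode="same")  # G(i, j)
    widths = []
    for i in rows:
        edges = np.where(np.abs(g[i]) > threshold)[0]    # candidate side edges
        if len(edges) >= 2:
            widths.append(edges[-1] - edges[0])          # W_i = P_g1 - P_g2
    return float(np.mean(widths)) if widths else 0.0     # mean over m rows
```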
Color fidelity of the picture: compare the color difference between the two images:

ΔI = (1/n) · Σ_{i=1}^{n} |In_i − Out_i|

where In_i is the color value (e.g., an RGB or HSL color value) of the i-th pixel of the input image, and Out_i is the color value of the i-th pixel of the encoded image. The smaller the color difference ΔI, the better the encoding result.
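A one-function Python sketch of this metric, assuming per-channel RGB values and taking the mean absolute difference over all pixels:

```python
# Color-fidelity metric: mean absolute color difference between input
# and encoded images (assumed to be same-shaped RGB arrays).
import numpy as np

def color_difference(original, encoded):
    """Delta-I: mean |In_i - Out_i| over all pixels; smaller is better."""
    return float(np.mean(np.abs(original.astype(float) - encoded.astype(float))))
```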
Then, following the calculation methods of the evaluation parameters above, the evaluation of all 7200 combinations of image acquisition modes and image coding modes can be carried out.
When the technical solution of the embodiment of the present application is actually implemented, a greater variety of combinations of image acquisition modes and image coding modes can be designed around the evaluation parameters, and the evaluation of those combinations against the acquisition and coding evaluation parameters can then be carried out by following the same calculation methods.
Furthermore, according to the evaluation results of the various combinations, the combination of image acquisition mode and image coding mode that corresponds to a given piece of acquisition and coding related information and has the best evaluation result can be selected as the combination matched with that information.
It can be understood that, in the embodiment of the present application, the combination with the best evaluation result corresponding to each piece of acquisition and coding related information is taken as the combination of image acquisition mode and image coding mode matched with that information and is stored in the acquisition and coding mode database.
For example, assuming that a certain piece of acquisition and coding related information includes the hardware model information, expected code rate and key frame interval of a sending-end device, the combination of image acquisition mode and image coding mode matched with that information is the combination that yields the best acquisition and coding result when processing is performed according to that hardware model information, expected code rate and key frame interval.
Through the processing of the embodiment of the application, for each piece of acquisition and coding related information, the matched combination of image acquisition mode and image coding mode is stored in the acquisition and coding mode database. When the sending-end device performs acquisition and coding, it can query the database according to the specific content of the obtained acquisition and coding related information for the matched combination, and use it as the image acquisition mode and image coding mode for the acquisition and coding processing.
Alternatively, in an actual implementation of the technical solution of the present application, after the evaluation of the acquisition and coding results of all image acquisition modes and image coding modes is completed, each combination and its evaluation result are stored directly in the acquisition and coding mode database. After the sending-end device obtains the acquisition and coding related information, it queries the database for the combination of image acquisition mode and image coding mode with the best evaluation result under that information, and uses it as the modes for the acquisition and coding processing.
As can be seen from the above description, the process by which the sending-end device 1 determines the image acquisition mode and image coding mode ensures that the determined modes strictly match the actual acquisition and coding related information and form the combination with the best acquisition and coding effect, so that the modes fit the actual acquisition and coding scene. Compared with the common prior-art screen sharing scheme that performs acquisition and coding with preset modes, the sending-end device in the embodiment of the application selects its image acquisition mode and image coding mode more flexibly: it can adapt to changes in the acquisition and coding scene to select the optimal modes, breaks the constraint that a fixed acquisition and coding mode imposes on the acquisition and coding effect, and can thereby improve that effect.
After determining the image acquisition mode and the image coding mode, the sending end device 1 performs image acquisition processing on the screen display content of the sending end device according to the determined image acquisition mode, and performs coding processing on the acquired image according to the determined image coding mode to obtain a screen sharing data frame.
Then, the sending-end device 1 sends the screen sharing data frames obtained by acquisition and coding to the data distribution server 2, so that the data distribution server 2 sends them to the receiving-end devices 3, thereby sharing the screen display content of the sending-end device 1 with the receiving-end devices 3.
Illustratively, the transmitting-end device 1 encodes the acquired image into a key frame and a difference frame, that is, the screen sharing data frame includes a screen sharing key data frame and a screen sharing difference data frame. When the screen sharing key data frame or the screen sharing difference data frame is obtained by encoding, the sending end device 1 immediately sends the obtained screen sharing key data frame or the obtained screen sharing difference data frame to the data distribution server 2.
It should be noted that, when the image acquisition mode and image coding mode determined by the sending-end device 1 are the hardware-based ones, if acquisition or coding fails during processing, the sending-end device 1 automatically switches to the software-based image acquisition mode and image coding mode, that is, it performs acquisition and coding with the CPU plus acquisition and coding software.
As a preferred implementation, when capturing the screen display content, the sending-end device 1 intelligently adjusts the image capture frequency: when the refresh frequency of its screen display content is high, it captures the content at a higher frequency; when the refresh frequency is low, it captures at a lower frequency.
Illustratively, when the sending end device 1 performs image acquisition on the screen display content of the sending end device according to the determined image acquisition mode, determining whether the screen display content of the sending end device is a dynamic image in real time, and if the screen display content of the sending end device is the dynamic image, performing image acquisition processing on the screen display content of the sending end device 1 according to a preset first image acquisition frequency; and if the image is not a dynamic image, performing image acquisition processing on the screen display content of the sending terminal device 1 according to a preset second image acquisition frequency. Wherein the first image acquisition frequency is greater than the second image acquisition frequency.
Through the image acquisition frequency adjustment processing, the image acquisition frequency can be matched with the refreshing frequency of the screen display content of the sending end device 1, so that the system resource utilization efficiency of the sending end device 1 is improved.
As an exemplary implementation, fig. 2 shows a schematic processing flow for how the sending-end device 1 adjusts the image capture frequency.
The sending-end device 1 sets up a two-stage flow to capture images of the screen display content.
Every Δt1, the first-stage flow calculates the difference ΔImage between the new and old screen images. If the screen has changed (ΔImage is nonzero), the new image is put into an image buffer and a take signal is sent to trigger the second-stage flow to fetch the image, and the fetch interval Δt2 of the second-stage flow is set to 1000/24 milliseconds, i.e., 24 frames per second.
If the screen has not changed (ΔImage is zero), the fetch interval of the second-stage flow is set to Δt2 = 1000/2 milliseconds, so that the second-stage flow fetches 2 frames per second.
The second-stage flow fetches an image every Δt2, or immediately upon receiving a take signal, and sends it to the encoding module for encoding.
When the difference ΔImage is large, for example greater than a certain threshold, the screen display content can be considered a dynamic image, because dynamic images refresh at a higher frequency and produce larger frame-to-frame differences; when ΔImage is small, for example smaller than a certain threshold, the screen display content can be considered non-dynamic, because non-dynamic content refreshes at a lower frequency and produces smaller differences (for a still image, ΔImage is zero).
Based on the above processing, when the screen display content of the sending-end device 1 is a dynamic image, a higher capture frequency is maintained, for example 24 frames per second, so that the playback side can watch the dynamic content smoothly; when the content is a static image, the capture frequency is switched to a lower one, for example 2 frames per second, which reduces CPU usage and makes the whole operating system more stable and less power-hungry.
Alternatively, the image difference ΔImage may be calculated according to the following formula:

ΔImage = Max{ abs(ColorPre_1 − Color_1), ..., abs(ColorPre_n − Color_n) }

where n is the number of pixels, ColorPre_i is the color value of the i-th pixel in the previous frame image, and Color_i is the color value of the i-th pixel in the current image. If the computed ΔImage is greater than RGB{0, 0, 0} (assuming pixel values are represented as RGB color values), the image has changed.
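The two-stage flow and the ΔImage test can be sketched as a simplified single-threaded loop in Python. The capture and encode calls are stand-in stubs, and the Δt1 value is an assumed constant; the 24 fps and 2 fps fetch rates follow the text:

```python
# Simplified single-threaded sketch of the two-stage capture flow.
import time

DT1 = 0.05                    # Δt1: screen-diff check period (assumed value)
FAST_DT2 = 1000 / 24 / 1000   # Δt2 when the screen is changing: 24 frames/s
SLOW_DT2 = 1000 / 2 / 1000    # Δt2 when the screen is static:    2 frames/s

def image_diff(prev, cur):
    """ΔImage = Max{abs(ColorPre_1 - Color_1), ..., abs(ColorPre_n - Color_n)}."""
    return max(abs(p - c) for p, c in zip(prev, cur))

def capture_loop(capture_screen, encode, duration_s=10.0):
    """capture_screen() returns a flat pixel sequence; encode() consumes one image."""
    prev = capture_screen()
    last_take = 0.0
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        cur = capture_screen()
        # first-stage flow: pick the fetch interval from the screen difference
        dt2 = FAST_DT2 if image_diff(prev, cur) > 0 else SLOW_DT2
        now = time.monotonic()
        if now - last_take >= dt2:   # second-stage flow: fetch and encode
            encode(cur)
            last_take = now
        prev = cur
        time.sleep(DT1)              # wait Δt1 before the next difference check
```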
After receiving the screen sharing data frame sent by the sending-end device 1, the data distribution server 2 distributes the screen sharing data frame to each receiving-end device 3.
It should be noted that a screen sharing data frame sent by the sending-end device 1 may be either a screen sharing key data frame or a screen sharing difference data frame; correspondingly, the data distribution server 2 forwards the received key and difference data frames to each receiving-end device 3 in the order in which they were received.
In the above data distribution process, it should be ensured that the data frames received by the respective receiving end devices 3 are kept synchronized. In order to implement the synchronous forwarding of the screen sharing data frame to each receiving end device 3, the data distribution server 2 also synchronously sends the screen sharing synchronization signal to each receiving end device 3 in the process of sending the screen sharing data frame sent by the sending end device 1 to each receiving end device 3.
Illustratively, the data distribution server 2 establishes a data transmission pipeline for each receiving-end device 3 connected thereto, and the data distribution server 2 transmits a data frame to each receiving-end device 3 through the data transmission pipeline established for each receiving-end device 3.
Meanwhile, the data distribution server 2 maintains an internal key data frame buffer for caching screen sharing key data frames. Each time the data distribution server 2 receives a screen sharing key data frame from the sending-end device 1, it overwrites the buffer contents with the newly received frame, so the key data frame buffer always holds the latest screen sharing key data frame.
Further, the data distribution server 2 also synchronously transmits a screen sharing synchronization signal when transmitting a data frame to each receiving-end device 3 through each data transmission pipeline.
When the data distribution server 2 transmits the screen sharing data frame transmitted by the transmitting end device 1 to the receiving end device 3, it is first determined whether the screen sharing data frame is a screen sharing key data frame or a screen sharing difference data frame.
If the key data frame is the screen sharing key data frame, the screen sharing key data frame is directly sent to each receiving end device 3 through a sending pipeline, and meanwhile, the screen sharing key data frame is stored in a key data frame buffer area.
And if the frame is the screen sharing difference data frame, processing the screen sharing difference data frame according to the currently sent screen sharing synchronization signal and the time tag of the screen sharing difference data frame.
Illustratively, the processing the screen sharing difference data frame specifically includes: and judging whether the time interval corresponding to the time tag of the screen sharing difference data frame and the time interval corresponding to the currently sent screen sharing synchronization signal are greater than a set first duration, namely judging whether the time of the screen sharing difference data frame lags behind the screen sharing synchronization signal by more than the first duration.
If the time length is longer than the set first time length, the screen sharing difference data frame is discarded, and the forwarding processing of the subsequent screen sharing data frame is continuously executed.
And if the time length is not more than the set first time length, sending the screen sharing difference data frame to the receiving end equipment 3.
It is to be understood that the time of the screen sharing synchronization signal described above indicates the screen display content sharing progress of the sending-end device 1. If the time tag of the data sent to the receiving-end device 3 through the data transmission pipeline corresponds to the time of the screen sharing synchronization signal, the data sent to the receiving-end device 3 can be considered synchronized with the sharing progress; if the time tag lags behind the time of the synchronization signal, the data sent to the receiving-end device 3 can be judged to be behind the sharing progress.
The embodiment of the present application uses the first duration as the criterion for judging whether the data sent to the receiving-end device 3 is behind the sharing progress. If the interval between the time tag of the screen sharing difference data frame currently to be sent and the time of the currently sent screen sharing synchronization signal is greater than the set first duration, that is, the frame lags behind the sharing progress by more than the first duration, the frame is discarded and processing continues with the subsequent screen sharing data frames; otherwise, the frame is sent to the receiving-end device 3 through the data transmission pipeline for decoding and display.
As for the screen sharing key data frame, since it contains a large amount of image information, as long as the data distribution server 2 receives the screen sharing key data frame, it immediately forwards it to each receiving end device 3, so that each receiving end device 3 decodes and displays an image as soon as possible.
It can be understood that the above-mentioned processing of discarding the screen sharing difference data frame by the data distribution server 2 realizes the skip sending of the screen sharing difference data frame, and because the screen sharing difference data frame only contains the difference part data of the key data frame shared by the screen, discarding some difference data frames does not affect the decoding and displaying of the image by the receiving end device 3, but can enable the receiving end device to catch up with the screen sharing progress.
As an exemplary implementation, when the data distribution server 2 determines that the screen sharing difference data frame currently to be sent lags behind the screen sharing synchronization signal by more than the first duration, that is, determines that the frame should be discarded, it further judges whether the interval between the frame's time tag and the time of the currently sent synchronization signal is greater than a set second duration, where the second duration is longer than the first duration.
And if the time length is greater than the set second time length, discarding the screen sharing difference data frame group where the screen sharing difference data frame is located.
And if the time length is not more than the set second time length, discarding the screen sharing difference data frame.
Specifically, when the data distribution server 2 determines that the time tag of the screen sharing difference data frame currently to be sent lags behind the time of the current screen sharing synchronization signal by more than the second duration, that is, the frame is behind the sharing progress by more than the second duration, it discards the entire screen sharing difference data frame group containing that frame; otherwise, it discards only the frame itself.
It can be understood that when the screen sharing difference data frame to be sent lags behind the sharing progress by more than the second duration, that is, lags too much, directly discarding the whole difference data frame group containing that frame makes it easier for the receiving-end device 3 to catch up with the screen sharing progress.
For example, the second time period may be determined by referring to the following formula:
GroupTime = GroupSize × (1000 / FrameRate)

where GroupTime is the second duration in milliseconds, GroupSize is the interval, in frames, between screen sharing key data frames, and FrameRate is the set number of frames processed per second.
The first duration can be flexibly set according to the actual application scene or the actual screen sharing synchronization requirement, but is ensured to be smaller than the second duration.
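The server's lag-based dropping logic can be sketched as follows in Python. The frame fields, the notion of a "current sync time", and the group bookkeeping are assumptions; the two duration thresholds follow the formulas above:

```python
# Sketch of the distribution server's per-frame forwarding decision.
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp_ms: int
    is_key: bool
    group_id: int   # index of the key-frame group this frame belongs to

def second_duration_ms(group_size, frame_rate):
    return group_size * (1000 / frame_rate)   # GroupTime = GroupSize * (1000/FrameRate)

def dispatch(frame, sync_time_ms, first_ms, second_ms, dropped_groups):
    """Decide 'send' or 'drop' for one frame on one pipeline."""
    if frame.is_key:
        return "send"                          # key frames are always forwarded
    if frame.group_id in dropped_groups:
        return "drop"                          # remainder of a dropped group
    lag = sync_time_ms - frame.timestamp_ms
    if lag > second_ms:                        # too far behind: drop whole group
        dropped_groups.add(frame.group_id)
        return "drop"
    if lag > first_ms:                         # behind: skip this difference frame
        return "drop"
    return "send"
```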
With reference to the data distribution processing procedure described above, the following describes a data transmission procedure of the data distribution server 2 with reference to a specific example:
Assume the sending-end device 1 is a teacher-side device. After the data distribution server 2 establishes a connection with student A's device, it creates a new data sending pipeline dedicated to sending data to that device.
Then, referring to fig. 3, the process of the data distribution server 2 transmitting data to the student a device is as follows:
after establishing connection with student A equipment and establishing a data transmission assembly line, firstly judging whether a key data frame buffer area is empty, and if not, transmitting a screen sharing key data frame stored in the key data frame buffer area to the student A equipment; and if the data frame is empty, judging whether the screen sharing data frame to be transmitted on the data transmission pipeline exists or not.
If yes, taking a frame of screen sharing data frame from the data sending pipeline;
it is judged whether the frame lags behind the screen sharing synchronization signal by more than the threshold;
if it does, it is further judged whether the frame is a key frame: if not, the frame is discarded; if it is a key frame, it is sent to student A's device.
If the frame does not lag behind the screen sharing synchronization signal by more than the threshold, the frame is sent to student A's device.
It can be seen that, when performing data distribution, the data distribution server 2 distinguishes between the key data frame and the difference data frame, and discards the difference data frame in due time to ensure the synchronicity of sharing the screen display content of the sending end device 1 to each receiving end device 3.
Further, since the data distribution server 2 stores the latest screen sharing key data frame in the key data frame buffer area, if during data distribution a new receiving end device connects to the data distribution server 2 and requests image data, that is, a new receiving end device joins the screen sharing midway, the data distribution server 2 sends the buffered screen sharing key data frame to the newly added device so that it can decode and display an image as soon as possible.
It can be understood that the key data frame buffer enables a receiving end device that joins the screen sharing midway to display an image as quickly as possible, and keeps the screen sharing process as a whole synchronized.
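A single-slot buffer of this kind might look as follows in Python; the class and method names are assumptions for illustration only.
```python
class KeyFrameBuffer:
    """Holds only the latest screen sharing key data frame."""

    def __init__(self):
        self._latest = None

    def update(self, frame) -> None:
        # Each newly received key data frame replaces the stored one.
        if frame.is_key:
            self._latest = frame

    def on_new_receiver(self, send) -> None:
        # A receiver that joins midway first gets the latest key frame,
        # so it can decode and display an image as soon as possible.
        if self._latest is not None:
            send(self._latest)
```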
When the receiving end device 3 receives the screen sharing data frame sent by the data distribution server 2 and performs decoding display, the receiving end device is specifically configured to:
firstly, receiving a screen sharing data frame sent by a data distribution server 2;
the screen sharing data frame may be a screen sharing key data frame or a screen sharing difference data frame.
Then, the receiving end device 3 determines an image decoding mode according to the image decoding related information;
the image decoding related information comprises at least one of image acquisition and coding parameter information, image decoding evaluation parameter information, receiving end equipment hardware model information and receiving end equipment software model information.
The image acquisition and coding parameter information is selected and determined by the sending end device 1; by the time the receiving end device 3 receives a screen sharing data frame forwarded by the data distribution server 2, the acquisition and coding parameters of that frame are already fixed, so the receiving end device 3 only needs to determine them from the received screen sharing data frame.
The above-mentioned image decoding evaluation parameter information refers to parameter information for evaluating the image decoding result of the receiving end device 3, and may specifically be decoding time, decoded image effect, and the like, where the decoded image effect may be measured by parameters such as decoding definition, brightness, sharpness, and signal-to-noise ratio.
The receiving end device hardware model information may specifically refer to the model information of the hardware decoding apparatus in the receiving end device 3, or the hardware model information of the receiving end device 3 itself, for example the device type information of the receiving end device 3.
The above-mentioned receiving-end device software model information refers to model information of image data decoding software installed on the receiving-end device 3, or software system model information of the receiving-end device 3, and the like.
The image decoding mode is either an image decoding mode based on hardware processing or an image decoding mode based on software processing. The former performs image decoding with dedicated image decoding hardware on the receiving end device 3; the latter performs image decoding with the CPU of the receiving end device 3 plus image decoding software.
In general, an image decoding mode based on software processing places considerable pressure on the CPU of the receiving end device, since that CPU must also cover the resource requirements of every other part of the device. An image decoding mode based on hardware processing therefore saves CPU resources and should be preferred where conditions allow. However, not every receiving end device is suited to hardware-based image decoding, and not every decoding task can be completed by it. To let the receiving end device 3 select an image decoding mode adapted to the actual image decoding scene, the embodiment of the present application provides that, after receiving a screen sharing data frame, the receiving end device 3 determines the image decoding related information and then uses it to determine the image decoding mode automatically and in real time.
As an exemplary implementation, when the receiving end device 3 determines the image decoding mode according to the image decoding related information, it specifically queries a preset decoding mode database for the image decoding mode matched with that information and uses it as the image decoding mode for the image decoding processing.
The decoding mode database comprises information of various image decoding modes and image decoding related information matched with each image decoding mode.
The various image decoding modes include at least one image decoding mode based on hardware processing and at least one based on software processing. Different hardware-based modes, and likewise different software-based modes, differ in the specific values of their image acquisition and coding parameter information or image decoding evaluation parameter information; that is, different acquisition and coding parameters or decoding evaluation parameters constitute different hardware-based and software-based image decoding modes.
It can be understood that image decoding modes therefore differ not only in being based on hardware or software processing, but also in their image acquisition and coding parameter information or image decoding evaluation parameter information.
In the embodiment of the present application, image decoding evaluation is performed in advance for each image decoding method by using the image decoding evaluation parameter information, and each image decoding method and the evaluation result corresponding to each image decoding method are stored in the decoding method database.
Meanwhile, the embodiment of the application also combines the evaluation result of each image decoding mode to determine the image decoding mode matched with each image decoding related information. For example, the image decoding method matched with the image decoding-related information specifically refers to an image decoding method corresponding to the image decoding-related information and having an optimal evaluation result.
As an exemplary implementation manner, the embodiment of the present application establishes a matching relationship between various image decoding manners and image decoding-related information in advance by performing the following processing:
evaluating the decoding result of each image decoding mode based on the image decoding evaluation parameter information to obtain an evaluation result; the decoding results of the various image decoding modes refer to decoding results obtained by decoding the same screen sharing data frame by the various image decoding modes;
and respectively determining the image decoding mode with the optimal evaluation result corresponding to each type of image decoding related information as the image decoding mode matched with each type of image decoding related information.
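As a hedged illustration of this offline evaluation and lookup, the Python sketch below builds a toy decoding mode database and picks the best-scoring compatible mode; the score and supports callables and the "higher score is better" convention are assumptions, not details fixed by this application.
```python
def build_decode_mode_db(decode_modes, sample_frame, score) -> dict:
    """Evaluate every candidate mode on the same screen sharing data frame.
    score(mode, frame) -> float (assumed: higher means a better result,
    e.g. combining decoding time and decoded image effect)."""
    return {mode.name: score(mode, sample_frame) for mode in decode_modes}

def match_decode_mode(decode_modes, db: dict, related_info):
    """Among modes compatible with the image decoding related information,
    return the one whose stored evaluation result is optimal."""
    candidates = [m for m in decode_modes if m.supports(related_info)]
    if not candidates:
        return None
    return max(candidates, key=lambda m: db[m.name])
```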
Specifically, the procedure for evaluating each image decoding mode and the procedure for determining the image decoding mode matched with each type of image decoding related information correspond to the procedures described earlier for evaluating each combination of image acquisition mode and image coding mode and for determining the combination matched with each type of acquisition and coding related information. The idea is the same: enumerate the candidate image decoding modes, decode the same image data frame with each of them, evaluate every decoding result with the same image decoding evaluation parameters, and finally determine from the evaluation results the image decoding mode matched with each type of image decoding related information. The details follow the earlier description and are not repeated here.
As a preferred implementation, in the embodiment of the present application, when the receiving end device 3 queries the preset decoding mode database and finds that the image decoding mode matched with the image decoding related information is a decoding mode based on hardware processing, the receiving end device 3 further pre-calls the application programming interface (API) of the hardware device corresponding to that decoding mode.
If the application program interface of the hardware equipment can be successfully called, the decoding mode based on hardware processing is used as a determined image decoding mode;
and if the application program interface of the hardware equipment cannot be successfully called, taking the decoding mode based on the software processing as the determined image decoding mode.
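The pre-call with software fallback reduces to a few lines, sketched below under the assumption that probing the hardware decoder raises an exception on failure; try_call_hw_api and the mode attributes are illustrative names.
```python
def choose_decode_mode(matched_mode, software_mode, try_call_hw_api):
    """Return the finally determined image decoding mode."""
    if matched_mode.is_hardware:
        try:
            try_call_hw_api(matched_mode)  # probe the key API interface
        except Exception:
            return software_mode           # hardware unavailable: fall back
    # Hardware probe succeeded, or the matched mode was software-based.
    return matched_mode
```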
Specifically, receiving end devices 3 come in many types, for example tablets running various Android systems, iOS systems, or Windows systems, and the decoding mode database should cover as many device types as possible when it is built.
Meanwhile, for receiving end devices of unknown type, the embodiment of the present application also provides a pre-call scheme to probe the device and determine whether it can be used for hardware image decoding processing.
Each hardware device provides a unique key API interface. When the receiving end device 3 determines, by querying the decoding mode database, that the matched image decoding mode is a decoding mode based on hardware processing, it calls the key API interface of the corresponding hardware device. If the call fails, the hardware device is unavailable, and the device switches to software image decoding, that is, the decoding mode based on software processing becomes the finally determined image decoding mode; if the call succeeds, the hardware device is available, and the decoding mode based on hardware processing is used as the determined image decoding mode.
The key API interface pre-call applies not only to hardware devices of unknown type but also to the hardware devices corresponding to the known hardware processing-based image decoding modes stored in the decoding mode database. That is, whenever the receiving end device determines that a decoding mode based on hardware processing is to be used for the image decoding processing, it first verifies through the key API interface pre-call that the corresponding hardware device is available, thereby ensuring the reliability of hardware decoding.
Further, the embodiment of the present application also records, for each decoding mode based on hardware processing, the number of times its hardware device is called and the number of times decoding succeeds. Whenever such a decoding mode is chosen as the image decoding mode and used to decode a screen sharing data frame, whether the decoding succeeds or fails, the call count and the success count of its hardware device are stored in the decoding mode database as the decoding result statistics of that mode.
Based on the above setting, before pre-calling the application program interface of the hardware device corresponding to a decoding mode based on hardware processing, the receiving end device 3 first determines from the decoding result statistics of that mode whether the pre-call is needed at all.
As an exemplary implementation manner, the embodiment of the present application determines whether an application program interface of a hardware device corresponding to the decoding manner based on hardware processing needs to be pre-invoked according to the following manner:
determining whether the application program interface of the hardware device corresponding to the decoding mode based on the hardware processing needs to be called in advance by calculating the value of TF in the following formula:
TF = Random(MOD(x, ROUND(TC/SC)))
wherein TC is the total number of times that the hardware device corresponding to the decoding mode based on hardware processing has been called, SC is the number of times that hardware device has decoded successfully, ROUND is a rounding operation, MOD is a remainder operation, and Random takes any one number from the set of possible remainders. That remainder set, MOD(x, ROUND(TC/SC)) taken over the integers x, is a one-dimensional matrix, i.e. the collection of numbers {0, 1, ..., ROUND(TC/SC)-1}. For example, with TC = 6 and SC = 2 the probability of successful decoding by the hardware device is one third; ROUND(TC/SC) = 3, the remainder set is {0, 1, 2}, and the probability of Random taking 0 from it is also one third. If the value of TF calculated by this formula is not 0, the key API interface of the hardware device is pre-called for prejudgment; if the calculated TF is 0, the key API interface does not need to be pre-called, and the decoding mode based on hardware processing can be used directly as the determined image decoding mode.
It can be understood that if the hardware device corresponding to the decoding mode succeeds every time it is called, TF is always 0, and the decoding mode based on hardware processing can be used directly as the determined image decoding mode every time. If the decoding success rate is one half (SC/TC = 1/2), that is, only half of the calls succeed, then TF is 0 with probability one half, and with that probability the hardware processing-based decoding mode is used directly as the determined image decoding mode. If only one third of the calls succeed, TF is 0 with probability one third, and so on.
Optionally, it may also be determined whether the application program interface of the hardware device corresponding to the hardware-processing-based decoding mode needs to be pre-invoked by calculating a value of TF in the following formula:
TF=Random(0,0,...,0,1,...,1)
wherein the number of 1s in the set (0, 0, ..., 0, 1, ..., 1) is TC-SC and the number of 0s is SC, and Random takes any one value from that set. The probability of drawing 0 from the set therefore equals the historical probability that the decoding mode based on hardware processing decodes successfully. According to this formula, when TF is 0 the decoding mode based on hardware processing can be used directly as the determined image decoding mode, and the key API interface of the corresponding hardware device need not be pre-called; when TF is not 0, the key API interface of the corresponding hardware device must be pre-called.
It can be understood that this method ties the decision of whether to use the decoding mode based on hardware processing directly as the determined image decoding mode to the success rate of that mode in historical image decoding work, so that whether the mode queried from the decoding mode database can serve as the image decoding mode is judged intelligently from historical experience.
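Both TF variants are easy to express in Python; the sketch below assumes SC >= 1 and uses the standard random module, with function names chosen here for illustration. In both variants the probability that TF equals 0, and hence that the pre-call is skipped, tracks the historical success rate SC/TC.
```python
import random

def tf_remainder(tc: int, sc: int) -> int:
    """TF = Random(MOD(x, ROUND(TC/SC))): a draw from the remainder set
    {0, 1, ..., ROUND(TC/SC) - 1}. Assumes SC >= 1."""
    return random.randrange(round(tc / sc))

def tf_zero_one(tc: int, sc: int) -> int:
    """TF = Random(0, ..., 0, 1, ..., 1): SC zeros and TC - SC ones."""
    return random.choice([0] * sc + [1] * (tc - sc))

def needs_precall(tc: int, sc: int) -> bool:
    # TF != 0 means the key API interface must be pre-called first.
    return tf_remainder(tc, sc) != 0
```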
According to the above description, after determining the image decoding mode according to the image decoding related information, the receiving end device 3 decodes the received screen sharing data frame according to the determined image decoding mode, and displays the decoded image.
By performing the above process of determining the image decoding mode, the receiving end device 3 can determine, from the actual image decoding related information, an image decoding mode matched with the actual image decoding scene. Decoding the screen sharing data frame with the determined mode improves the resource utilization efficiency of the receiving end device 3: the decoding mode based on hardware processing or on software processing is invoked as appropriate, which improves decoding efficiency.
Corresponding to the image processing system described above, an embodiment of the present application further proposes a screen sharing method, which is applicable to a sending-end device in the image processing system shown in fig. 1, and as shown in fig. 4, the screen sharing method includes:
S401, respectively determining an image acquisition mode and an image coding mode according to image acquisition and coding related information;
the image acquisition and coding related information comprises at least one of hardware model information of the sending end device, software model information of the sending end device, image acquisition and coding parameter information, and image acquisition and coding evaluation parameter information; the image coding parameter information comprises any one or more of expected code rate, buffer size, output resolution, and key frame interval; the image acquisition and coding evaluation parameter information comprises any one or more of brightness, sharpness, signal-to-noise ratio, visual resolution, actual code rate, maximum bytes per frame, mean square deviation of all frame data, distinguishability of lines in the picture, and fidelity of colors in the picture; the image acquisition mode is an image acquisition mode based on hardware processing or software processing, and the image coding mode is an image coding mode based on hardware processing or software processing.
S402, carrying out image acquisition processing on the screen display content of the sending terminal equipment according to the image acquisition mode, and carrying out coding processing on the acquired image according to the image coding mode to obtain a screen sharing data frame;
S403, sending the screen sharing data frame to a data distribution server, so that the data distribution server sends the screen sharing data frame to a receiving end device.
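Steps S401 to S403 can be strung together as in the following sketch; determine_modes, the mode objects, and the server handle are all stand-ins passed in as parameters, since this application does not prescribe their concrete form.
```python
def share_screen_once(related_info, determine_modes, server) -> None:
    # S401: choose the acquisition and coding modes from the related info
    # (e.g. by querying the acquisition and coding mode database).
    capture_mode, encode_mode = determine_modes(related_info)
    # S402: capture the screen content, then encode the captured image.
    image = capture_mode.capture_screen()
    frame = encode_mode.encode(image)
    # S403: hand the screen sharing data frame to the data distribution
    # server, which forwards it to the receiving end devices.
    server.send(frame)
```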
The above process of determining the image acquisition mode and the image coding mode ensures that the determined modes strictly match the actual image acquisition and coding related information, so that they conform to the actual acquisition and coding scene. Compared with the common prior-art screen sharing schemes that acquire and encode with a preset, fixed image acquisition mode and image coding mode, the sending end device in the embodiment of the present application selects these modes more flexibly: it can adapt to changes in the acquisition and coding scene and choose the optimal modes, breaking the constraint that fixed modes impose on the acquisition and coding effect, and can therefore improve that effect.
As an exemplary implementation, the determining of the image acquisition mode and the image coding mode respectively according to the image acquisition and coding related information includes:
querying, according to the image acquisition and coding related information, a preset acquisition and coding mode database for the image acquisition mode and image coding mode matched with that information, and using them as the image acquisition mode and image coding mode for the image acquisition and coding processing;
wherein the acquisition and coding mode database comprises combinations of various image acquisition modes and image coding modes, and the image acquisition and coding related information matched with each combination.
As an exemplary implementation, the matching relationship between each combination of image acquisition mode and image coding mode and the image acquisition and coding related information is established in advance by the following processing:
evaluating, based on the image acquisition and coding evaluation parameter information, the result of image acquisition and coding processing performed by each combination of image acquisition mode and image coding mode under the image acquisition and coding parameter information, to obtain evaluation results;
and determining, for each type of image acquisition and coding related information, the combination of image acquisition mode and image coding mode with the optimal evaluation result as the combination matched with that information.
As an exemplary implementation manner, the performing, according to the image acquisition manner, image acquisition processing on the screen display content of the sending end device includes:
determining whether the screen display content of the sending terminal equipment is a dynamic image;
if the image is a dynamic image, carrying out image acquisition processing on the screen display content of the sending terminal equipment at a preset first image acquisition frequency according to the image acquisition mode;
if the image is not a dynamic image, carrying out image acquisition processing on the screen display content of the sending terminal equipment at a preset second image acquisition frequency according to the image acquisition mode;
wherein the first image acquisition frequency is greater than the second image acquisition frequency.
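A minimal sketch of this motion-aware sampling is given below; the 15 fps and 5 fps defaults are illustrative assumptions, since the application only requires the first frequency to exceed the second.
```python
def capture_interval_ms(is_dynamic: bool,
                        first_fps: int = 15, second_fps: int = 5) -> float:
    """Dynamic screen content (e.g. video playback) is sampled at the higher
    first frequency; static content at the lower second frequency."""
    assert first_fps > second_fps  # required by the scheme
    return 1000 / (first_fps if is_dynamic else second_fps)
```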
Specifically, please refer to the corresponding content introduction of the embodiment of the screen sharing system for the specific processing content of each step of the screen sharing method, which is not described herein again.
Corresponding to the screen sharing method described above, an embodiment of the present application further provides a screen sharing apparatus, as shown in fig. 5, the apparatus including:
the acquisition and coding mode determining unit 100 is configured to determine an image acquisition mode and an image coding mode according to the image acquisition and coding related information;
the image acquisition and coding related information comprises at least one of hardware model information of the sending end device, software model information of the sending end device, image acquisition and coding parameter information, and image acquisition and coding evaluation parameter information; the image coding parameter information comprises any one or more of expected code rate, buffer size, output resolution, and key frame interval; the image acquisition and coding evaluation parameter information comprises any one or more of brightness, sharpness, signal-to-noise ratio, visual resolution, actual code rate, maximum bytes per frame, mean square deviation of all frame data, distinguishability of lines in the picture, and fidelity of colors in the picture; the image acquisition mode is an image acquisition mode based on hardware processing or software processing, and the image coding mode is an image coding mode based on hardware processing or software processing.
The acquisition and coding processing unit 110 is configured to perform image acquisition processing on the screen display content of the sending end device according to the image acquisition mode, and to encode the acquired image according to the image coding mode to obtain a screen sharing data frame;
a data sending unit 120, configured to send the screen sharing data frame to a data distribution server, so that the data distribution server sends the screen sharing data frame to a receiving end device.
The above process by which the screen sharing apparatus determines the image acquisition mode and the image coding mode ensures that the determined modes strictly match the actual image acquisition and coding related information, so that they conform to the actual acquisition and coding scene. Compared with the common prior-art screen sharing schemes that acquire and encode with a preset, fixed image acquisition mode and image coding mode, the screen sharing apparatus in the embodiment of the present application selects these modes more flexibly: it can adapt to changes in the acquisition and coding scene and choose the optimal modes, breaking the constraint that fixed modes impose on the acquisition and coding effect, and can therefore improve that effect.
As an exemplary implementation, when determining the image acquisition mode and the image coding mode respectively according to the image acquisition and coding related information, the acquisition and coding mode determining unit is specifically configured to:
query, according to the image acquisition and coding related information, a preset acquisition and coding mode database for the image acquisition mode and image coding mode matched with that information, as the image acquisition mode and image coding mode for the image acquisition and coding processing;
wherein the acquisition and coding mode database comprises combinations of various image acquisition modes and image coding modes, and the image acquisition and coding related information matched with each combination.
As an exemplary implementation, the matching relationship between each combination of image acquisition mode and image coding mode and the image acquisition and coding related information is established in advance by the following processing:
evaluating, based on the image acquisition and coding evaluation parameter information, the result of image acquisition and coding processing performed by each combination of image acquisition mode and image coding mode under the image acquisition and coding parameter information, to obtain evaluation results;
and determining, for each type of image acquisition and coding related information, the combination of image acquisition mode and image coding mode with the optimal evaluation result as the combination matched with that information.
As an exemplary implementation, when performing image acquisition processing on the screen display content of the sending end device according to the image acquisition mode, the acquisition and coding processing unit 110 is specifically configured to:
determining whether the screen display content of the sending terminal equipment is a dynamic image;
if the image is a dynamic image, carrying out image acquisition processing on the screen display content of the sending terminal equipment at a preset first image acquisition frequency according to the image acquisition mode;
if the image is not a dynamic image, carrying out image acquisition processing on the screen display content of the sending terminal equipment at a preset second image acquisition frequency according to the image acquisition mode;
wherein the first image acquisition frequency is greater than the second image acquisition frequency.
Specifically, please refer to the corresponding content description of the embodiment of the screen sharing system for the specific processing content of each step of the screen sharing device, which is not described herein again.
Corresponding to the above screen sharing method and apparatus, an embodiment of the present application further provides a screen sharing device, as shown in fig. 6, the device includes:
a memory 200 and a processor 210;
wherein, the memory 200 is connected to the processor 210 for storing programs;
the processor 210 is configured to implement the processing steps of the screen sharing method disclosed in any of the above embodiments by running the program stored in the memory 200.
Specifically, the screen sharing device may further include: a bus, a communication interface 220, an input device 230, and an output device 240.
The processor 210, the memory 200, the communication interface 220, the input device 230, and the output device 240 are connected to each other through a bus. Wherein:
a bus may include a path that transfers information between components of a computer system.
The processor 210 may be a general-purpose processor, such as a general-purpose central processing unit (CPU) or microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control the execution of the program of the present invention. It may also be a digital signal processor (DSP), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
The processor 210 may include a main processor and may also include a baseband chip, modem, and the like.
The memory 200 stores the program for executing the technical solution of the present invention, and may also store an operating system and other key services. Specifically, the program may include program code, and the program code includes computer operating instructions. More specifically, the memory 200 may include a read-only memory (ROM), other types of static storage devices that can store static information and instructions, a random access memory (RAM), other types of dynamic storage devices that can store information and instructions, disk storage, flash memory, and so on.
The input device 230 may include a means for receiving data and information input by a user, such as a keyboard, mouse, camera, scanner, light pen, voice input device, touch screen, pedometer, or gravity sensor, among others.
Output device 240 may include equipment that allows output of information to a user, such as a display screen, a printer, speakers, and the like.
Communication interface 220 may include any device that uses any transceiver or the like to communicate with other devices or communication networks, such as an ethernet network, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc.
The processor 210 executes the program stored in the memory 200 and invokes the other devices; together these may be used to implement the steps of the screen sharing method provided by the embodiments of the present application.
Another embodiment of the present application further provides a storage medium, where a computer program is stored on the storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the screen sharing method provided in any of the above embodiments.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps in the method of the embodiments of the present application may be sequentially adjusted, combined, and deleted according to actual needs.
The modules and sub-modules in the device and the terminal in the embodiments of the application can be combined, divided and deleted according to actual needs.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal, apparatus and method may be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative, and for example, the division of a module or a sub-module is only one logical division, and there may be other divisions when the terminal is actually implemented, for example, a plurality of sub-modules or modules may be combined or integrated into another module, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules or sub-modules described as separate parts may or may not be physically separate, and parts that are modules or sub-modules may or may not be physical modules or sub-modules, may be located in one place, or may be distributed over a plurality of network modules or sub-modules. Some or all of the modules or sub-modules can be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, each functional module or sub-module in the embodiments of the present application may be integrated into one processing module, or each module or sub-module may exist alone physically, or two or more modules or sub-modules may be integrated into one module. The integrated modules or sub-modules may be implemented in the form of hardware, or may be implemented in the form of software functional modules or sub-modules.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element preceded by "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

1. A screen sharing method, comprising:
respectively determining an image acquisition mode and an image coding mode according to the image acquisition and coding related information;
wherein the image acquisition and coding related information comprises at least one of hardware model information of the sending end device, software model information of the sending end device, image acquisition and coding parameter information, and image acquisition and coding evaluation parameter information; the image acquisition mode is an image acquisition mode based on hardware processing or software processing, and the image coding mode is an image coding mode based on hardware processing or software processing;
carrying out image acquisition processing on the screen display content of the sending terminal equipment according to the image acquisition mode, and carrying out coding processing on the acquired image according to the image coding mode to obtain a screen sharing data frame;
and sending the screen sharing data frame to a data distribution server so that the data distribution server sends the screen sharing data frame to a receiving end device.
2. The method according to claim 1, wherein the respectively determining an image acquisition mode and an image coding mode according to the image acquisition and coding related information comprises:
querying, according to the image acquisition and coding related information, a preset acquisition and coding mode database for an image acquisition mode and an image coding mode matched with the image acquisition and coding related information, as the image acquisition mode and the image coding mode for the image acquisition and coding processing;
wherein the acquisition and coding mode database comprises combinations of various image acquisition modes and image coding modes, and the image acquisition and coding related information matched with each combination.
3. The method according to claim 2, wherein the matching relationship between combinations of various image acquisition modes and image coding modes and the image acquisition and coding related information is established in advance by the following processing:
evaluating, based on the image acquisition and coding evaluation parameter information, the result of image acquisition and coding processing performed by each combination of image acquisition mode and image coding mode according to the image acquisition and coding parameter information, to obtain evaluation results;
and determining, for each type of image acquisition and coding related information, the combination of image acquisition mode and image coding mode with the optimal evaluation result as the combination matched with that image acquisition and coding related information.
4. The method according to claim 1, wherein the performing image acquisition processing on the screen display content of the sending end device according to the image acquisition mode comprises:
determining whether the screen display content of the sending terminal equipment is a dynamic image;
if the image is a dynamic image, carrying out image acquisition processing on the screen display content of the sending terminal equipment at a preset first image acquisition frequency according to the image acquisition mode;
if the image is not a dynamic image, carrying out image acquisition processing on the screen display content of the sending terminal equipment at a preset second image acquisition frequency according to the image acquisition mode;
wherein the first image acquisition frequency is greater than the second image acquisition frequency.
5. A screen sharing device, comprising:
a memory and a processor;
the memory is connected with the processor and used for storing programs;
the processor is configured to implement the screen sharing method according to any one of claims 1 to 4 by executing the program stored in the memory.
6. A storage medium having stored thereon a computer program which, when executed by a processor, implements the screen sharing method of any one of claims 1 to 4.
7. A screen sharing system, comprising:
the system comprises sending end equipment, a data distribution server and receiving end equipment;
the sending end device is used for respectively determining an image acquisition mode and an image coding mode according to image acquisition and coding related information, wherein the image acquisition and coding related information comprises at least one of hardware model information of the sending end device, software model information of the sending end device, image acquisition and coding parameter information, and image acquisition and coding evaluation parameter information, the image acquisition mode is an image acquisition mode based on hardware processing or software processing, and the image coding mode is an image coding mode based on hardware processing or software processing; performing image acquisition processing on the screen display content of the sending end device according to the image acquisition mode, and encoding the acquired image according to the image coding mode to obtain a screen sharing data frame; and sending the screen sharing data frame to the data distribution server;
the data distribution server is used for sending the screen sharing data frame sent by the sending end equipment to the receiving end equipment;
and the receiving end equipment is used for receiving the screen sharing data frame sent by the data distribution server and decoding and displaying the screen sharing data frame.
8. The system according to claim 7, wherein the data distribution server synchronously transmits a screen sharing synchronization signal to the receiving end device in the process of transmitting the screen sharing data frame transmitted by the transmitting end device to the receiving end device;
then, the data distribution server sending the screen sharing data frame sent by the sending end device to the receiving end device specifically includes:
judging whether the screen sharing data frame is a screen sharing key data frame or a screen sharing difference data frame;
if the key data frame is the screen sharing key data frame, the screen sharing key data frame is sent to the receiving end equipment;
if it is the screen sharing difference data frame, judging whether the interval between the time corresponding to the time tag of the screen sharing difference data frame and the time corresponding to the currently sent screen sharing synchronization signal is greater than a set first duration;
if greater than the set first duration, discarding the screen sharing difference data frame;
and if not greater than the set first duration, sending the screen sharing difference data frame to the receiving end device.
9. The system according to claim 8, wherein discarding the screen sharing difference data frame if greater than a set first duration comprises:
if greater than the set first duration, further judging whether the interval between the time corresponding to the time tag of the screen sharing difference data frame and the time corresponding to the currently sent screen sharing synchronization signal is greater than a set second duration, wherein the second duration is greater than the first duration;
if greater than the set second duration, discarding the screen sharing difference data frame group in which the screen sharing difference data frame is located;
and if not greater than the set second duration, discarding the screen sharing difference data frame.
10. The system of claim 7, wherein the data distribution server is further configured to:
when a screen sharing key data frame is received, updating data stored in a key data frame buffer area by using the screen sharing key data frame;
and if an image data request sent by a receiving terminal device newly connected with the data distribution server is received, sending the screen sharing key data frame stored in the key data frame cache region to the receiving terminal device.
11. The system according to claim 7, wherein the receiving end device receives the screen sharing data frame sent by the data distribution server and performs decoding display, specifically comprising:
receiving a screen sharing data frame sent by the data distribution server;
determining an image decoding mode according to the image decoding related information; the image decoding related information at least comprises at least one of image acquisition and coding parameter information, image decoding evaluation parameter information, receiving end equipment hardware model information and receiving end equipment software model information; the image decoding mode is an image decoding mode based on hardware processing or an image decoding mode based on software processing;
and decoding the received screen sharing data frame according to the image decoding mode, and displaying the decoded image.
12. The system according to claim 11, wherein the receiving end device determines the image decoding manner according to the image decoding related information, and specifically includes:
according to the image decoding related information, inquiring an image decoding mode matched with the image decoding related information from a preset decoding mode database to serve as an image decoding mode for image decoding processing;
the decoding mode database comprises information of various image decoding modes and image decoding related information matched with each image decoding mode.
13. The system according to claim 12, wherein when the image decoding method matched with the image decoding-related information, which is obtained by querying from a preset decoding method database according to the image decoding-related information, is a hardware processing-based decoding method, the receiving end device is further configured to:
pre-calling an application program interface of the hardware equipment corresponding to the decoding mode based on the hardware processing;
if the application program interface of the hardware equipment can be successfully called, the decoding mode based on the hardware processing is used as an image decoding mode for image decoding processing;
and if the application program interface of the hardware equipment cannot be successfully called, taking the decoding mode based on the software processing as the image decoding mode for carrying out the image decoding processing.
14. The system according to claim 13, wherein before pre-calling the application program interface of the hardware device corresponding to the decoding manner based on the hardware processing, the receiving device is further configured to:
judging whether an application program interface of the hardware equipment corresponding to the decoding mode based on the hardware processing needs to be pre-called according to the decoding result statistical data corresponding to the decoding mode based on the hardware processing; wherein, the decoding result statistical data corresponding to the hardware processing-based decoding mode comprises the decoding times and decoding success times of the hardware equipment corresponding to the hardware processing-based decoding mode;
if the application program interface of the hardware equipment corresponding to the decoding mode based on the hardware processing needs to be pre-called, the application program interface of the hardware equipment corresponding to the decoding mode based on the hardware processing is pre-called;
and if the application program interface of the hardware equipment corresponding to the decoding mode based on the hardware processing does not need to be called in advance, taking the decoding mode based on the hardware processing as an image decoding mode for carrying out image decoding processing.
CN201911405055.XA 2019-12-31 2019-12-31 Screen sharing method, device, storage medium and screen sharing system Active CN111131817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911405055.XA CN111131817B (en) 2019-12-31 2019-12-31 Screen sharing method, device, storage medium and screen sharing system

Publications (2)

Publication Number Publication Date
CN111131817A true CN111131817A (en) 2020-05-08
CN111131817B CN111131817B (en) 2023-12-01

Family

ID=70506088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911405055.XA Active CN111131817B (en) 2019-12-31 2019-12-31 Screen sharing method, device, storage medium and screen sharing system

Country Status (1)

Country Link
CN (1) CN111131817B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1773054A2 (en) * 2005-10-07 2007-04-11 Samsung Electronics Co., Ltd. Method for performing video communication service and mobile communication terminal therefor
CN102457544A (en) * 2010-10-26 2012-05-16 深圳市誉融科技有限公司 Method and system for acquiring screen image in screen sharing system based on Internet
CN104753971A (en) * 2013-12-25 2015-07-01 北京新媒传信科技有限公司 Client based on teleconference and media source transmission method
CN109792540A (en) * 2016-10-01 2019-05-21 英特尔公司 The hardware-accelerated method for video coding and system controlled using every frame parameter
CN106453915A (en) * 2016-10-31 2017-02-22 努比亚技术有限公司 Information processing method and mobile terminal
CN110248192A (en) * 2019-06-12 2019-09-17 腾讯科技(深圳)有限公司 Encoder switching, decoder switching, screen sharing method and screen share system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112416278A (en) * 2020-11-10 2021-02-26 北京五八信息技术有限公司 Screen sharing method and device, electronic equipment and storage medium
CN112416278B (en) * 2020-11-10 2021-12-03 北京五八信息技术有限公司 Screen sharing method and device, electronic equipment and storage medium
CN112929704A (en) * 2021-01-26 2021-06-08 游密科技(深圳)有限公司 Data transmission method, device, electronic equipment and storage medium
CN112995575A (en) * 2021-05-13 2021-06-18 广州朗国电子科技有限公司 Wireless screen projection transfer device, transfer control method and wireless screen projection system
CN112995575B (en) * 2021-05-13 2021-10-01 广州朗国电子科技股份有限公司 Wireless screen projection transfer device, transfer control method and wireless screen projection system
CN113485780A (en) * 2021-07-22 2021-10-08 辽宁向日葵教育科技有限公司 Desktop transmission method based on web server
CN113485780B (en) * 2021-07-22 2022-04-29 辽宁向日葵教育科技有限公司 Desktop transmission method based on web server
CN114356264A (en) * 2021-12-30 2022-04-15 威创集团股份有限公司 Signal generation method, device, equipment and readable storage medium
CN114356264B (en) * 2021-12-30 2023-12-05 威创集团股份有限公司 Signal generation method, device, equipment and readable storage medium

Also Published As

Publication number Publication date
CN111131817B (en) 2023-12-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant