CN114466228A - Method, equipment and storage medium for improving screen projection display fluency - Google Patents

Method, equipment and storage medium for improving screen projection display fluency

Info

Publication number
CN114466228A
CN114466228A (application CN202111669613.0A)
Authority
CN
China
Prior art keywords
projected
video stream
screen
gray
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111669613.0A
Other languages
Chinese (zh)
Other versions
CN114466228B (en)
Inventor
王海波
蔡富东
孔志强
唐尊良
吴亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Senter Electronic Co Ltd
Original Assignee
Shandong Senter Electronic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Senter Electronic Co Ltd
Priority to CN202111669613.0A
Publication of CN114466228A
Application granted
Publication of CN114466228B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N 21/4122 Peripherals receiving signals from specially adapted client devices: additional display device, e.g. video projector
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440218 Reformatting operations involving transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N 21/440245 Reformatting operations performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N 21/440281 Reformatting operations altering the temporal resolution, e.g. by frame skipping
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/462 Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N 21/4621 Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 Digital output to display device involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of the application provide a method, a device, and a storage medium for improving the fluency of screen-projection display. The method includes: determining the coordinates of the region to be projected in a first to-be-projected video stream based on a full-screen image of any frame in that stream; cropping and graying all frame images in the first to-be-projected video stream based on those coordinates to generate a first grayscale image set; and, when the data transmission amount per unit time, computed from the grayscale images in the first grayscale image set and the frame rate of the first to-be-projected video stream, is determined to be greater than a preset video-stream transmission bandwidth, encoding a plurality of grayscale images in the first grayscale image set to generate a second to-be-projected video stream and sending the second to-be-projected video stream to the display device. With this method, the fluency of screen-projection display can be improved while occupying little transmission bandwidth and without reducing the image resolution and frame rate indefinitely.

Description

Method, equipment and storage medium for improving screen projection display fluency
Technical Field
The present application relates to the field of wireless communications technologies, and in particular, to a method, a device, and a storage medium for improving smoothness of screen projection display.
Background
With the development of communication technology, smartphones have become ubiquitous. At the same time, many Internet of Things devices based on the Android operating system are designed without a screen for cost reasons, so they cannot display content or be touch-controlled directly. Projecting the screens of such devices onto a mobile phone for display, and controlling the devices in reverse through the phone, is therefore a real requirement. However, because of their interface types and application scenarios, Internet of Things devices cannot devote all of their bandwidth to the screen-projection function, which must occupy as little bandwidth as possible.
To improve the fluency of screen-projection display, the existing technical solution reduces the amount of video-stream data during network transmission by lowering the image resolution and the transmission frame rate, thereby obtaining a smoother user experience when bandwidth is insufficient. However, the resolution cannot be reduced indefinitely, otherwise the projected display becomes blurred and the user experience suffers; likewise, the frame rate cannot be reduced indefinitely, otherwise the projected display stutters. How to improve the fluency of screen-projection display while occupying little transmission bandwidth and without reducing the image resolution and frame rate indefinitely has therefore become an urgent problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a method, a device, and a storage medium for improving the fluency of screen-projection display, which are used to solve the following technical problem: how to improve the fluency of screen-projection display while occupying little transmission bandwidth and without reducing the image resolution and frame rate indefinitely.
In a first aspect, the present application provides a method for improving the fluency of screen-projection display, the method including: acquiring a full-screen image of any frame in a first to-be-projected video stream, and determining, based on the full-screen image, the coordinates of the corresponding region to be projected in the first to-be-projected video stream; cropping all frame images in the first to-be-projected video stream based on the coordinates of the region to be projected to obtain a corresponding cropped image set; graying the cropped image set to generate a first grayscale image set; determining the compression ratio of each grayscale image in the first grayscale image set, and judging, based on the compression ratio, the resolution of each grayscale image, and the frame rate of the first to-be-projected video stream, whether the data transmission amount per unit time is greater than a preset video-stream transmission bandwidth; when the data transmission amount per unit time is determined to be greater than the preset video-stream transmission bandwidth, determining a plurality of grayscale images in the first grayscale image set as grayscale images to be encoded; and encoding the plurality of grayscale images to be encoded to obtain a second to-be-projected video stream, and sending the second to-be-projected video stream to the display device.
In the method for improving the fluency of screen-projection display provided by the embodiments of the application, a full-screen image of any frame in the first to-be-projected video stream is first acquired, and the coordinates of the corresponding region to be projected in the first to-be-projected video stream are determined from that full-screen image. All frame images in the first to-be-projected video stream are then cropped, and the cropped images are grayed to generate a first grayscale image set. Graying the cropped images reduces the data transmission amount per unit time, that is, it reduces to a certain extent the bandwidth that the transmitted data requires. The data amount per unit time is obtained from the compression ratio and resolution of the grayscale images and the frame rate of the first to-be-projected video stream; when the data transmission amount per unit time is determined to be greater than the preset video-stream transmission bandwidth, a plurality of grayscale images in the first grayscale image set are determined as grayscale images to be encoded, these images are encoded to obtain a second to-be-projected video stream, and the second to-be-projected video stream is sent to the display device. The method effectively avoids the blurring and stuttering of the projected display that would result from reducing the image resolution and frame rate indefinitely, and improves the fluency of screen-projection display while occupying little transmission bandwidth.
In an implementation of the present application, graying the cropped image set to generate a first grayscale image set specifically includes: determining the first color value corresponding to each pixel of a cropped image in its color space, the color space of the cropped image being the YUV color space; performing a grayscale calculation on the first color value of each pixel according to a preset calculation rule to determine the first gray value of each pixel; and graying each cropped image in the cropped image set based on the first gray values of its pixels to generate the first grayscale image set.
In an implementation of the present application, performing the grayscale calculation on the first color value of each pixel according to the preset calculation rule to determine the first gray value of each pixel specifically includes: computing, for each pixel of the cropped image, a Hessian value through a Hessian matrix; determining, based on the Hessian value, the coefficients of the preset grayscale conversion function corresponding to the pixel; and performing the grayscale calculation on the first color value of the pixel based on the grayscale conversion function to determine its first gray value.
In an implementation of the present application, determining, based on the Hessian value, the coefficients of the preset grayscale conversion function corresponding to each pixel specifically includes: determining the preset Hessian-value interval within which the Hessian value falls; and determining, based on that preset interval, the coefficients of the preset grayscale conversion function corresponding to the pixel.
In an implementation of the present application, cropping all frame images in the first to-be-projected video stream based on the coordinates of the region to be projected to obtain a corresponding cropped image set specifically includes: generating, based on the coordinates of the region to be projected, a region-cropping instruction for the first to-be-projected video stream; and cropping all frame images in the first to-be-projected video stream based on the region coordinates contained in the cropping instruction to obtain the corresponding cropped image set.
Because all frame images in the first to-be-projected video stream are cropped, every image region the user sees during subsequent video playback is a region that actually needs to be projected.
In an implementation of the present application, after the full-screen image of any frame in the first to-be-projected video stream is acquired, the method further includes: sending the full-screen image to the display device; and the display device, in response to a region-selection instruction, selecting the region to be captured in the full-screen image and determining the coordinates of the region to be captured based on the selected region.
In an implementation of the present application, after the plurality of grayscale images to be encoded are encoded to obtain the second to-be-projected video stream and the second to-be-projected video stream is sent to the display device, the method further includes: the display device parsing the encoding format of the second to-be-projected video stream to determine the corresponding encoding rule; and decoding the second to-be-projected video stream based on that encoding rule to obtain the grayscale images it carries.
In one implementation of the present application, when the data transmission amount per unit time is determined to be not greater than the preset video-stream transmission bandwidth, the first grayscale image set is encoded to obtain a third to-be-projected video stream, and the third to-be-projected video stream is sent to the display device; the third to-be-projected video stream carries the screen-projection information of the to-be-projected Internet of Things device.
In a second aspect, an embodiment of the present application further provides a device for improving the fluency of screen-projection display, the device including: a processor; and a memory storing executable code which, when executed, causes the processor to perform the method according to any one of claims 1 to 8.
In a third aspect, an embodiment of the present application further provides a non-volatile computer storage medium for improving the fluency of screen-projection display, the medium storing computer-executable instructions configured to: acquire a full-screen image of any frame in a first to-be-projected video stream, and determine, based on the full-screen image, the coordinates of the corresponding region to be projected in the first to-be-projected video stream; crop all frame images in the first to-be-projected video stream based on the coordinates of the region to be projected to obtain a corresponding cropped image set; gray the cropped image set to generate a first grayscale image set; determine the compression ratio of each grayscale image in the first grayscale image set, and judge, based on the compression ratio, the resolution of each grayscale image, and the frame rate of the first to-be-projected video stream, whether the data transmission amount per unit time is greater than a preset video-stream transmission bandwidth; when the data transmission amount per unit time is determined to be greater than the preset video-stream transmission bandwidth, determine a plurality of grayscale images in the first grayscale image set as grayscale images to be encoded; and encode the plurality of grayscale images to be encoded to obtain a second to-be-projected video stream, and send the second to-be-projected video stream to the display device.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a method for improving fluency of screen projection display according to an embodiment of the present disclosure;
fig. 2 is a schematic view of an internal structure of an apparatus for improving smoothness of screen projection display according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the present application provide a method, a device, and a storage medium for improving the fluency of screen-projection display, which are used to solve the following technical problem: how to improve the fluency of screen-projection display while occupying little transmission bandwidth and without reducing the image resolution and frame rate indefinitely.
The technical solutions proposed in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for improving the fluency of screen-projection display according to an embodiment of the present disclosure. As shown in fig. 1, the method provided in an embodiment of the present application mainly includes the following steps:
Step 101: acquire a full-screen image of any frame in a first to-be-projected video stream, and determine, based on the full-screen image, the coordinates of the corresponding region to be projected in the first to-be-projected video stream.
Because many Internet of Things devices are designed without a screen, they cannot display content or be touch-controlled directly, so it is necessary to project the virtual screen corresponding to such a device onto a display device for viewing or operation. It should be noted that the display device of the present application includes, but is not limited to, a mobile phone and a tablet computer; any existing mobile communication device with a display screen can serve as the display device. In addition, it can be understood that no matter what content the virtual screen presents (including but not limited to an interactive interface or video playback), the virtual screen to be projected is converted into a video stream for transmission to the display device.
In an embodiment of the application, to improve the fluency of screen-projection display, any full-screen image acquired from the first to-be-projected video stream may be sent to the display device, and the display device determines the region to be captured and generates its coordinates in response to a region-selection instruction. It can be understood that, to let the user choose the region to be projected by viewing a full-screen image, the application captures any one frame of the acquired video stream as a full-screen image. Which frame is used depends on the user's needs: the region to be projected may not be determinable while one frame is displayed but may be determinable while another is. After obtaining the full-screen image, the user can select the region that needs to be projected next; this region corresponds to a region-selection instruction, which may be generated by manually operating the display device, or may be generated automatically from an image preset on the server, that is, when the server detects that the full-screen image is identical to one of its preset images, it generates the region-selection instruction associated with that preset image. Once the selection is completed, the coordinates of the region to be projected in the full-screen image can be determined, and thereby the coordinates of the corresponding region to be projected in the to-be-projected video stream.
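By way of illustration only, a minimal Python sketch of the automatic branch described above could look as follows; the table name `PRESET_REGIONS`, the image size, the region values, and the exact-match rule are assumptions of this sketch, not something the patent specifies:

```python
import numpy as np

# Hypothetical preset table: each reference full-screen image is paired with the
# region that should be projected whenever that exact image is detected.
PRESET_REGIONS = [
    # (reference full-screen image, (x, y, width, height) of the region to project)
    (np.zeros((720, 1280, 3), dtype=np.uint8), (100, 80, 640, 360)),
]

def auto_select_region(full_screen_image: np.ndarray):
    """Return the preset region coordinates if the captured full-screen frame
    exactly matches one of the preset images; otherwise None, meaning the user
    selects the region manually on the display device."""
    for reference, region in PRESET_REGIONS:
        if reference.shape == full_screen_image.shape and np.array_equal(reference, full_screen_image):
            return region
    return None
```

In the manual branch, the display device would simply return whatever rectangle the user drew on the full-screen image.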
Step 102: crop all frame images in the first to-be-projected video stream based on the coordinates of the region to be projected to obtain a corresponding cropped image set.
In one embodiment of the application, after the coordinates of the region to be projected are determined, a region-cropping instruction for the first to-be-projected video stream is generated based on those coordinates; it can be understood that the cropping instruction contains the coordinate information of the region to be projected. Based on the coordinate information contained in the cropping instruction, the frame images in the first to-be-projected video stream are cropped frame by frame. After all frame images in the first to-be-projected video stream have been cropped, a cropped image set is constructed, and the images cropped from all frames are added to it.
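A minimal sketch of this frame-by-frame cropping (the function name and the (x, y, width, height) layout of the coordinates are assumptions of the sketch):

```python
import numpy as np

def crop_frames(frames, region):
    """Crop every frame of the first to-be-projected video stream to the region
    selected on the display device.

    frames: iterable of H x W x C arrays, one per frame
    region: (x, y, width, height) coordinates of the region to be projected
    """
    x, y, w, h = region
    cropped_set = []
    for frame in frames:
        cropped_set.append(frame[y:y + h, x:x + w].copy())  # frame-by-frame crop
    return cropped_set
```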
Step 103: gray the cropped image set to generate a first grayscale image set.
In one embodiment of the present application, after the cropped image set is determined, graying is performed on it. It can be understood that a grayscale image contains no color information, only luminance information, so graying the cropped images effectively improves the compression ratio of the images and reduces the required transmission bandwidth.
It should first be noted that, in a television system, the color space of an image is the YUV color space, so the color space of each cropped image in the cropped image set is also the YUV color space. It should also be noted that a color value in the YUV color space consists of three components (Y, U, V), where Y is the luminance component and U and V are the two chrominance components that make up the color.
Taking the graying of one cropped image as an example, the first color value corresponding to each pixel in the cropped image is first determined. Because the gray level perceived by the human eye still differs somewhat from the luminance value of the image, in order to make the resulting grayscale image better match human perception, a grayscale calculation is performed on the first color value of each pixel according to a preset calculation rule to determine the first gray value of each pixel.
Specifically, each pixel in the cropped image is first evaluated through a Hessian matrix to determine the Hessian value corresponding to that pixel. The preset Hessian-value interval within which this value falls is then determined, and the coefficients of the preset grayscale conversion function corresponding to the pixel are determined from that interval. It should be noted that the preset grayscale conversion function of the present application is the function used to fold the U and V chrominance components of the color value into a gray value that matches human perception. The preset grayscale conversion function can be expressed as the following equation:
y_i = Y_i + a_i·U_i + b_i·V_i

where y_i is the gray value of the i-th pixel, Y_i is the luminance value of the i-th pixel in the YUV color model, U_i is its U chrominance component, a_i is the coefficient of the U chrominance component, V_i is its V chrominance component, and b_i is the coefficient of the V chrominance component. It can be understood that the coefficients of the two chrominance components, that is, the coefficients required by the preset grayscale conversion function, are determined from the preset Hessian-value interval corresponding to each pixel, and that each preset Hessian-value interval corresponds to one pair of coefficients.
Further, after the coefficients of the preset grayscale conversion function are determined, the grayscale calculation is performed on the first color value of each pixel based on the grayscale conversion function to determine the first gray value of each pixel. A grayscale image corresponding to the cropped image is thus obtained, and applying the same graying procedure to every cropped image in the cropped image set yields the first grayscale image set.
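Purely to illustrate the mechanics, a sketch of the per-pixel conversion y_i = Y_i + a_i·U_i + b_i·V_i is given below. The patent does not define how the Hessian value is computed or what the interval-to-coefficient table looks like; here the Hessian value is taken as the determinant of the Hessian of the luminance plane, and the interval bounds and coefficients are invented for the example.

```python
import numpy as np

# Assumed interval-to-coefficient table: each preset Hessian-value interval
# maps to a pair (a, b) of chrominance coefficients.
COEFF_INTERVALS = [
    (-np.inf, -1.0, (0.10, 0.20)),
    (-1.0,     1.0, (0.05, 0.10)),
    ( 1.0,  np.inf, (0.02, 0.05)),
]

def coefficients_for(hessian_value: float):
    for low, high, coeffs in COEFF_INTERVALS:
        if low <= hessian_value < high:
            return coeffs
    return COEFF_INTERVALS[-1][2]

def to_grayscale(yuv_image: np.ndarray) -> np.ndarray:
    """Per-pixel conversion y_i = Y_i + a_i*U_i + b_i*V_i.

    yuv_image: H x W x 3 array with Y, U, V planes in channels 0, 1, 2.
    The "Hessian value" is read here as the determinant of the 2x2 Hessian of
    the luminance plane -- one possible interpretation, not a definitive one.
    """
    Y = yuv_image[..., 0].astype(np.float64)
    U = yuv_image[..., 1].astype(np.float64)
    V = yuv_image[..., 2].astype(np.float64)

    # Second-order derivatives of the luminance plane.
    dy, dx = np.gradient(Y)
    dyy, dyx = np.gradient(dy)
    dxy, dxx = np.gradient(dx)
    hessian_det = dxx * dyy - dxy * dyx

    gray = np.empty_like(Y)
    for i in np.ndindex(Y.shape):          # per-pixel coefficients; a real
        a, b = coefficients_for(hessian_det[i])  # implementation would vectorize
        gray[i] = Y[i] + a * U[i] + b * V[i]
    return np.clip(gray, 0, 255).astype(np.uint8)
```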
Step 104: determine the compression ratio of each grayscale image in the first grayscale image set, and judge, based on the compression ratio, the resolution of each grayscale image, and the frame rate of the first to-be-projected video stream, whether the data transmission amount per unit time is greater than the preset video-stream transmission bandwidth.
In an embodiment of the present application, since the bandwidth occupied by video-stream transmission should be as small as possible, a bandwidth is preset for the first to-be-projected video stream, and it is judged whether the data transmission amount per unit time exceeds this preset video-stream transmission bandwidth. Because the compression ratio of each grayscale image in the first grayscale image set is fixed, and the frame rate of the first to-be-projected video stream and the resolution of each grayscale image are also fixed, whether the data transmission amount of the grayscale images per unit time is greater than the preset video-stream transmission bandwidth can be determined directly.
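A rough sketch of this check follows. The patent names the inputs but not the exact formula, so treating the compression ratio as compressed-to-raw size and assuming 8 bits per grayscale pixel are both assumptions of the sketch:

```python
def exceeds_bandwidth(width: int, height: int, frame_rate: float,
                      compression_ratio: float, preset_bandwidth_bps: float) -> bool:
    """Does the grayscale stream's data volume per second exceed the preset
    transmission bandwidth? compression_ratio is compressed size / raw size,
    a value in (0, 1]; 8 bits per grayscale pixel is assumed."""
    raw_bits_per_frame = width * height * 8
    bits_per_second = raw_bits_per_frame * compression_ratio * frame_rate
    return bits_per_second > preset_bandwidth_bps
```

For example, 640x360 grayscale frames at 30 fps with an assumed compression ratio of 0.1 amount to about 5.5 Mbit/s, which would exceed a 4 Mbit/s preset bandwidth.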
Step 105: when the data transmission amount per unit time is determined to be greater than the preset video-stream transmission bandwidth, determine a plurality of grayscale images in the first grayscale image set as grayscale images to be encoded.
In an embodiment of the present application, when the data transmission amount per unit time is determined to be greater than the preset video-stream transmission bandwidth, the data transmission amount does not meet the preset requirement. Since the compression ratio and resolution of the grayscale images cannot be reduced further, the frame rate of the video stream corresponding to the first grayscale image set can instead be reduced appropriately, thereby reducing the data transmission amount per unit time.
It should be noted that the present application does not limit how the plurality of grayscale images in the first grayscale image set are chosen as the grayscale images to be encoded; different selection rules may be used depending on the specific scenario. One example of such a rule: drop one grayscale image after every preset number of grayscale images, and the remaining images are the grayscale images to be encoded, as in the sketch below.
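A minimal illustration of that example rule; the drop interval of 3 is arbitrary, since the patent leaves the rule to the concrete scenario:

```python
def select_frames_to_encode(gray_images, keep_every: int = 3):
    """Drop every `keep_every`-th grayscale image; the remaining images become
    the grayscale images to be encoded, effectively lowering the frame rate."""
    return [img for idx, img in enumerate(gray_images) if (idx + 1) % keep_every != 0]
```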
Step 106: encode the plurality of grayscale images to be encoded to obtain a second to-be-projected video stream, and send the second to-be-projected video stream to the display device.
In an embodiment of the present application, after the grayscale images to be encoded are determined, they are encoded to generate the corresponding second to-be-projected video stream, which is the video stream corresponding to the selected region to be projected in the first to-be-projected video stream.
In an embodiment of the application, after the second to-be-projected video stream is determined, it is sent to the display device. The display device receives the second to-be-projected video stream, parses its encoding format to determine the corresponding encoding rule, and decodes the stream based on that rule, thereby obtaining and displaying the grayscale images carried in the second to-be-projected video stream.
In one embodiment of the present application, the method further includes: when the data transmission amount per unit time is determined to be not greater than the preset video-stream transmission bandwidth, there is no need to reduce the data transmission amount, so the grayscale images in the first grayscale image set can be encoded directly to generate a third to-be-projected video stream, which is sent to the display device.
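Tying the preceding sketches together, a purely illustrative sender-side flow for steps 102 to 106 is shown below; `encode` is a placeholder for whatever real encoder the device uses, and the compression ratio of 0.1 is an assumed value that would be measured per device in practice:

```python
def build_stream_to_send(frames, region, frame_rate, preset_bandwidth_bps, encode):
    """End-to-end sketch reusing crop_frames, to_grayscale, exceeds_bandwidth
    and select_frames_to_encode from the snippets above."""
    gray_set = [to_grayscale(f) for f in crop_frames(frames, region)]
    height, width = gray_set[0].shape
    compression_ratio = 0.1  # assumed; measured per device/content in practice
    if exceeds_bandwidth(width, height, frame_rate, compression_ratio, preset_bandwidth_bps):
        gray_set = select_frames_to_encode(gray_set)  # yields the second to-be-projected stream
    # otherwise all grayscale images are encoded, yielding the third to-be-projected stream
    return encode(gray_set)
```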
Based on the same inventive concept, the embodiment of the present application further provides a device for improving the smoothness of the screen projection display, and the internal structure of the device is shown in fig. 2.
Fig. 2 is a schematic view of the internal structure of a device for improving the fluency of screen-projection display according to an embodiment of the present disclosure. As shown in fig. 2, the device includes: a processor 201; and a memory 202 storing executable instructions which, when executed, cause the processor 201 to perform the method for improving the fluency of screen-projection display described above.
In an embodiment of the present application, the processor 201 is configured to: acquire a full-screen image of any frame in the first to-be-projected video stream, and determine, based on the full-screen image, the coordinates of the corresponding region to be projected in the first to-be-projected video stream; crop all frame images in the first to-be-projected video stream based on the coordinates of the region to be projected to obtain a corresponding cropped image set; gray the cropped image set to generate a first grayscale image set; determine the compression ratio of each grayscale image in the first grayscale image set, and judge, based on the compression ratio, the resolution of each grayscale image, and the frame rate of the first to-be-projected video stream, whether the data transmission amount per unit time is greater than the preset video-stream transmission bandwidth; when the data transmission amount per unit time is determined to be greater than the preset video-stream transmission bandwidth, determine a plurality of grayscale images in the first grayscale image set as grayscale images to be encoded; and encode the plurality of grayscale images to be encoded to obtain a second to-be-projected video stream, and send the second to-be-projected video stream to the display device.
Some embodiments of the present application, corresponding to fig. 1, provide a non-volatile computer storage medium for improving the fluency of screen-projection display, storing computer-executable instructions configured to:

acquire a full-screen image of any frame in a first to-be-projected video stream, and determine, based on the full-screen image, the coordinates of the corresponding region to be projected in the first to-be-projected video stream;

crop all frame images in the first to-be-projected video stream based on the coordinates of the region to be projected to obtain a corresponding cropped image set;

gray the cropped image set to generate a first grayscale image set;

determine the compression ratio of the grayscale images, and judge, based on the compression ratio, the resolution of the grayscale images, and the frame rate of the first to-be-projected video stream, whether the data transmission amount per unit time is greater than the preset video-stream transmission bandwidth;

when the data transmission amount per unit time is determined to be greater than the preset video-stream transmission bandwidth, determine a plurality of grayscale images in the first grayscale image set as grayscale images to be encoded;

and encode the plurality of grayscale images to be encoded to obtain a second to-be-projected video stream, and send the second to-be-projected video stream to the display device.
The embodiments in the present application are described in a progressive manner; identical or similar parts of the embodiments can be cross-referenced, and each embodiment focuses on its differences from the others. In particular, the device and medium embodiments are described briefly because they are substantially similar to the method embodiments; for the relevant points, refer to the corresponding parts of the description of the method embodiments.
The device and medium provided by the embodiments of the application correspond one-to-one to the method, so they also have beneficial technical effects similar to those of the corresponding method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method for improving the fluency of screen-projection display, the method comprising:
acquiring a full-screen image of any frame in a first to-be-projected video stream, and determining, based on the full-screen image, the coordinates of the corresponding region to be projected in the first to-be-projected video stream;
cropping all frame images in the first to-be-projected video stream based on the coordinates of the region to be projected to obtain a corresponding cropped image set;
graying the cropped image set to generate a first grayscale image set;
determining the compression ratio of each grayscale image in the first grayscale image set, and judging, based on the compression ratio, the resolution of each grayscale image, and the frame rate of the first to-be-projected video stream, whether the data transmission amount per unit time is greater than a preset video-stream transmission bandwidth;
when the data transmission amount per unit time is determined to be greater than the preset video-stream transmission bandwidth, determining a plurality of grayscale images in the first grayscale image set as grayscale images to be encoded;
and encoding the plurality of grayscale images to be encoded to obtain a second to-be-projected video stream, and sending the second to-be-projected video stream to a display device.
2. The method of claim 1, wherein graying the cropped image set to generate a first grayscale image set specifically comprises:
determining a first color value corresponding to each pixel in the cropped image in a color space, the color space of the cropped image being a YUV color space;
performing a grayscale calculation on the first color value corresponding to each pixel according to a preset calculation rule to determine a first gray value corresponding to each pixel;
and graying each cropped image in the cropped image set based on the first gray value corresponding to each pixel to generate the first grayscale image set.
3. The method according to claim 2, wherein performing the grayscale calculation on the first color value corresponding to each pixel according to the preset calculation rule to determine the first gray value corresponding to each pixel specifically comprises:
computing, for each pixel in the cropped image, a Hessian value through a Hessian matrix;
determining, based on the Hessian value, coefficients of a preset grayscale conversion function corresponding to each pixel;
and performing the grayscale calculation on the first color value corresponding to each pixel based on the grayscale conversion function to determine the first gray value corresponding to each pixel.
4. The method according to claim 3, wherein determining the coefficients of the preset grayscale conversion function corresponding to each pixel based on the Hessian value specifically comprises:
determining the preset Hessian-value interval corresponding to the Hessian value;
and determining the coefficients of the preset grayscale conversion function corresponding to each pixel based on the preset Hessian-value interval.
5. The method according to claim 1, wherein cropping all frame images in the first to-be-projected video stream based on the coordinates of the region to be projected to obtain a corresponding cropped image set specifically comprises:
generating, based on the coordinates of the region to be projected, a region-cropping instruction for the first to-be-projected video stream;
and cropping all frame images of the first to-be-projected video stream based on the coordinate information of the region to be projected contained in the cropping instruction to obtain the corresponding cropped image set.
6. The method of claim 1, wherein after acquiring the full-screen image of any frame of the first to-be-projected video stream, the method further comprises:
sending the full-screen image to the display device;
the display device, in response to a region-selection instruction, selecting the region to be captured in the full-screen image and determining the coordinates of the region to be captured based on the selected region.
7. The method of claim 1, wherein after encoding the plurality of grayscale images to be encoded to obtain the second to-be-projected video stream and sending the second to-be-projected video stream to the display device, the method further comprises:
the display device parsing the encoding format of the second to-be-projected video stream to determine the encoding rule corresponding to the second to-be-projected video stream;
and decoding the second to-be-projected video stream based on the encoding rule to obtain the grayscale images carried in the second to-be-projected video stream.
8. The method of claim 1, wherein the method further comprises:
when the data transmission amount per unit time is determined to be not greater than the preset video-stream transmission bandwidth, encoding the first grayscale image set to obtain a third to-be-projected video stream, and sending the third to-be-projected video stream to the display device; the third to-be-projected video stream carries the screen-projection information of the to-be-projected Internet of Things device.
9. A device for improving the fluency of screen-projection display, the device comprising:
a processor;
and a memory storing executable code which, when executed, causes the processor to perform the method according to any one of claims 1 to 8.
10. A non-transitory computer storage medium for improving the fluency of screen-projection display, the computer storage medium storing computer-executable instructions configured to:
acquire a full-screen image of any frame in a first to-be-projected video stream, and determine, based on the full-screen image, the coordinates of the corresponding region to be projected in the first to-be-projected video stream;
crop all frame images in the first to-be-projected video stream based on the coordinates of the region to be projected to obtain a corresponding cropped image set;
gray the cropped image set to generate a first grayscale image set;
determine the compression ratio of each grayscale image in the first grayscale image set, and judge, based on the compression ratio, the resolution of each grayscale image, and the frame rate of the first to-be-projected video stream, whether the data transmission amount per unit time is greater than a preset video-stream transmission bandwidth;
when the data transmission amount per unit time is determined to be greater than the preset video-stream transmission bandwidth, determine a plurality of grayscale images in the first grayscale image set as grayscale images to be encoded;
and encode the plurality of grayscale images to be encoded to obtain a second to-be-projected video stream, and send the second to-be-projected video stream to a display device.
CN202111669613.0A — Priority date: 2021-12-31 — Filing date: 2021-12-31 — Method, equipment and storage medium for improving smoothness of screen projection display — Active — Granted as CN114466228B

Priority Applications (1)

Application Number — Priority Date — Filing Date — Title
CN202111669613.0A (granted as CN114466228B) — 2021-12-31 — 2021-12-31 — Method, equipment and storage medium for improving smoothness of screen projection display

Applications Claiming Priority (1)

Application Number — Priority Date — Filing Date — Title
CN202111669613.0A (granted as CN114466228B) — 2021-12-31 — 2021-12-31 — Method, equipment and storage medium for improving smoothness of screen projection display

Publications (2)

Publication Number — Publication Date
CN114466228A — 2022-05-10
CN114466228B — 2023-09-05

Family

ID=81408151

Family Applications (1)

Application Number — Title — Priority Date — Filing Date
CN202111669613.0A (Active; granted as CN114466228B) — Method, equipment and storage medium for improving smoothness of screen projection display — 2021-12-31 — 2021-12-31

Country Status (1)

Country — Link
CN — CN114466228B

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060010392A1 (en) * 2004-06-08 2006-01-12 Noel Vicki E Desktop sharing method and system
US20130254330A1 (en) * 2011-12-10 2013-09-26 Logmein, Inc. Optimizing transfer to a remote access client of a high definition (HD) host screen image
US9035992B1 (en) * 2013-04-08 2015-05-19 Google Inc. Bandwidth modulation system and method
US20150293691A1 (en) * 2014-04-11 2015-10-15 Samsung Electronics Co., Ltd Electronic device and method for selecting data on a screen
CN105580383A (en) * 2013-09-30 2016-05-11 高通股份有限公司 Method and apparatus for real-time sharing of multimedia content between wireless devices
CN106203261A (en) * 2016-06-24 2016-12-07 大连理工大学 Unmanned vehicle field water based on SVM and SURF detection and tracking
WO2017176036A1 (en) * 2016-04-05 2017-10-12 오성근 User motion-based content sharing method and system
CN107770600A (en) * 2017-11-07 2018-03-06 深圳创维-Rgb电子有限公司 Transmission method, device, equipment and the storage medium of stream medium data
CN111787240A (en) * 2019-04-28 2020-10-16 北京京东尚科信息技术有限公司 Video generation method, device and computer readable storage medium
CN112492395A (en) * 2020-11-30 2021-03-12 维沃移动通信有限公司 Data processing method and device and electronic equipment

Also Published As

Publication Number — Publication Date
CN114466228B — 2023-09-05

Similar Documents

Publication Publication Date Title
CN110795056B (en) Method, device, terminal and storage medium for adjusting display parameters
CN108235037B (en) Encoding and decoding image data
CN109688465B (en) Video enhancement control method and device and electronic equipment
CN110766637B (en) Video processing method, processing device, electronic equipment and storage medium
CN108495054B (en) Method and device for processing high dynamic range signal and computer storage medium
CN113055742A (en) Video display method, device, terminal and storage medium
CN110858388B (en) Method and device for enhancing video image quality
CN114979625A (en) Video quality evaluation method, device, equipment, storage medium and program product
CN112437301B (en) Code rate control method and device for visual analysis, storage medium and terminal
CN106921840B (en) Face beautifying method, device and system in instant video
CN114466228B (en) Method, equipment and storage medium for improving smoothness of screen projection display
CN115293994B (en) Image processing method, image processing device, computer equipment and storage medium
CN109120979B (en) Video enhancement control method and device and electronic equipment
CN110740316A (en) Data coding method and device
CN113691737B (en) Video shooting method and device and storage medium
CN110941413B (en) Display screen generation method and related device
CN111866514B (en) Method and device for compressing video and decompressing video
CN113992859A (en) Image quality improving method and device
CN110858389B (en) Method, device, terminal and transcoding equipment for enhancing video image quality
CN108495053B (en) Metadata processing method and device for high dynamic range signal
CN113099237A (en) Video processing method and device
CN114827666A (en) Video processing method, device and equipment
CN113132703B (en) Image processing method and apparatus
CA2858413C (en) Encoding and decoding using perceptual representations
Sowerby et al. The Effect of Foveation on High Dynamic Range Video Perception

Legal Events

Date — Code — Title — Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant