CN115988170A - Method and device for clearly displaying Chinese and English characters in real-time video screen combination in cloud conference - Google Patents

Method and device for clearly displaying Chinese and English characters in real-time video screen combination in cloud conference

Info

Publication number
CN115988170A
Authority
CN
China
Prior art keywords
characters
character
wide
length
amplified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310268371.7A
Other languages
Chinese (zh)
Other versions
CN115988170B (en)
Inventor
马华文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
G Net Cloud Service Co Ltd
Original Assignee
G Net Cloud Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by G Net Cloud Service Co Ltd
Priority to CN202310268371.7A
Publication of CN115988170A
Application granted
Publication of CN115988170B
Legal status: Active
Anticipated expiration

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a method and a device for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference, wherein the method comprises the following steps: inputting the characters to be generated; obtaining an amplification factor according to the actual size of the characters to be generated, and amplifying them; acquiring the font size of the amplified characters and calculating the blank size; classifying the amplified characters and calculating blank edge parameters; converting the amplified text into a character string, converting the character string into a wide character string, and setting character parameters for the wide character string; calculating the actual length of the wide character string; scaling the amplified characters according to the actual length of the wide character string to adapt to the video split screen; and creating and initializing a canvas, and drawing the scaled text on the canvas.

Description

Method and device for clearly displaying Chinese and English characters in real-time video screen combination in cloud conference
Technical Field
The invention relates to the technical field of multimedia information processing, in particular to a method and a device for clearly displaying Chinese and English characters in a real-time video screen-combination mode in a cloud conference.
Background
With the rapid development of video cloud conferencing and the diversification of video conference media, the limitations of time and region are broken, and a video conference can be held quickly anytime and anywhere. By realizing the real-time video screen-combination function in a cloud conference, third-party video conference equipment can be seamlessly connected to the cloud conference. In a cloud conference, Chinese and English characters and various prompt phrases need to be displayed in the real-time video combined screen.
At present, the existing schemes for displaying characters in a real-time video combined screen fall into two categories: third-party tool schemes and OpenCV schemes. A third-party tool scheme requires a dedicated server to generate pictures of the various Chinese and English characters; the character-generation effect depends on the third party and also occupies equipment and bandwidth resources, so the third-party tool has limitations and an unreliable display effect. The OpenCV scheme does not support the conversion of Chinese characters. That is, the above related art does not support the simultaneous display of Chinese and English.
Disclosure of Invention
Therefore, the technical problem to be solved by the present invention is that the related art does not support the simultaneous display of Chinese and English, and a method and an apparatus for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference are therefore provided.
In order to solve the above technical problem, the disclosed embodiments of the present invention at least provide a method and an apparatus for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference.
In a first aspect, the disclosed embodiments of the present invention provide a method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference, the method comprising the following steps:
S1: inputting characters to be generated, and acquiring the actual size of the characters to be generated;
S2: obtaining an amplification factor according to the actual size of the characters to be generated, and amplifying the characters to be generated to a preset size by using the amplification factor;
S3: acquiring the font size and the character height of the amplified characters, and calculating the blank size;
S4: classifying the amplified characters, and calculating blank edge parameters of the amplified characters according to the classification;
S5: setting the font of the amplified characters, converting the amplified characters into a character string, converting the character string into a wide character string, and setting character parameters for the wide character string, wherein the character parameters comprise the character size, the character spacing and the background color;
S6: calculating the actual length of the wide character string;
S7: scaling the amplified characters according to the actual length of the wide character string to adapt to the video split screen;
S8: creating and initializing a canvas, and drawing the scaled text on the canvas.
Optionally, the specific steps of S3 are as follows:
S3.1: acquiring the font size of the amplified characters;
S3.2: acquiring the height of the amplified characters;
S3.3: calculating the blank size using the following formula: blank size = (character height - font size) × amplification factor.
Optionally, the specific steps of S4 are as follows:
the characters are classified into a right-aligned display type, a centered display type and a left-aligned display type;
if the characters are classified as the right-aligned display type, the blank edge parameter on the left side of the characters is calculated;
if the characters are classified as the centered display type, the blank edge parameters on both sides of the characters are calculated;
and if the characters are classified as the left-aligned display type, the blank edge parameter on the right side of the characters is calculated.
Optionally, the step S6 of calculating the actual length of the wide character string specifically includes:
S6.1: obtaining each wide character in the wide character string, and obtaining the width and the dot matrix of each wide character;
S6.2: after the dot matrix of each wide character is rendered into a bitmap, counting the number of wide characters in the wide character string;
S6.3: calculating according to the following formula: actual length of the wide character string = number of wide characters + character spacing + length of the wide character string.
Optionally, the specific steps of S7 are as follows:
dividing the actual length of the wide character string by the amplification factor to obtain the actually generated character length;
judging whether the actually generated character length is larger than the maximum split-screen width of the video;
if the actually generated character length is larger than the maximum split-screen width of the video, the generated image (image width = actually generated character length + blank edge parameter) is cropped to the maximum split-screen width for display;
and if the actually generated character length is smaller than the maximum split-screen width of the video, image width = actually generated character length + blank edge parameter.
Optionally, the specific step of S8 is:
s8.1: creating and initializing a canvas according to the background color of the character parameters;
s8.2: obtaining wide characters in the wide character string and obtaining index numbers of the wide characters;
s8.3: correspondingly obtaining a dot matrix according to the index number of the wide character, and converting the dot matrix into a bitmap format;
s8.4: acquiring the height and width of a bitmap, and calculating the offset of the bitmap;
s8.5: and filling and drawing the characters in the canvas according to the height, the width and the offset of the bitmap.
In a second aspect, an embodiment of the present disclosure further provides a device for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference, including:
the character amplification module is used for amplifying the characters to be generated according to the obtained amplification factor;
the edge calculation module is used for calculating the blank edge parameters of the characters according to the display mode;
the length calculation module is used for calculating the actually generated character length;
and the drawing module is used for drawing the characters on the canvas.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, the disclosed embodiments of the present invention further provide a computer-readable storage medium, where a computer program is stored, and the computer program is executed by a processor to perform the steps in the first aspect or any one of the possible implementation manners of the first aspect.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
according to the invention, by adopting a mode of converting dot matrix diagrams into vector diagrams, the characters can not have sawtooth and incomplete phenomena, so that the definition of the characters is improved, the names of participants and various prompting phrases can be processed on the server, the names and various prompting phrases can be displayed quickly and clearly, and the experience and feeling of a user in a cloud conference are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 shows a flowchart of a method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference according to an embodiment of the present disclosure;
Fig. 2 illustrates a comparison of amplification parameters provided by the disclosed embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a device for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference according to an embodiment of the disclosure;
Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Example 1
As shown in Fig. 1 and Fig. 2, a method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference according to an embodiment of the present disclosure includes:
S1: inputting characters to be generated, and acquiring the actual size of the characters to be generated;
S2: obtaining an amplification factor according to the actual size of the characters to be generated, and amplifying the characters to be generated to a preset size by using the amplification factor;
Specifically, after the actual size of the characters to be generated is obtained, the corresponding amplification factor is looked up with reference to Fig. 2; the amplified font size, i.e., the preset size, is obtained by multiplying the actual size by the amplification factor. Amplifying first and downsampling later reduces jagged and incomplete characters.
S3: acquiring the font size and the character height of the amplified characters, and calculating the blank size. Specifically, S3 includes the following steps:
S3.1: acquiring the font size of the amplified characters;
S3.2: acquiring the height of the amplified characters;
S3.3: calculating the blank size using the following formula: blank size = (character height - font size) × amplification factor.
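The blank-size formula of S3.3 can be sketched in C++ as follows, assuming the character height and font size are measured in pixels and the amplification factor comes from step S2 (the function name is an illustrative assumption):

// Step S3 (sketch): blank size = (character height - font size) x amplification factor.
int blankSize(int charHeightPx, int fontSizePx, int amplificationFactor) {
    return (charHeightPx - fontSizePx) * amplificationFactor;
}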
S4: classifying the amplified characters, and calculating the blank edge parameters of the amplified characters according to the classification;
Specifically, the characters are classified into a right-aligned display type, a centered display type and a left-aligned display type. If the characters are classified as the right-aligned display type, the blank edge parameter on the left side of the characters is calculated; if the characters are classified as the centered display type, the blank edge parameters on both sides of the characters are calculated; and if the characters are classified as the left-aligned display type, the blank edge parameter on the right side of the characters is calculated.
S5: setting the font of the amplified character, converting the amplified character into a character string, converting the character string into a wide character string, and setting character parameters for the wide character string, wherein the character parameters comprise the character size, the character spacing and the background color;
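A minimal sketch of the conversion in S5 from the input text (assumed here to be UTF-8) to a wide character string, with the character parameters gathered in a small struct; std::wstring_convert is deprecated since C++17 but still widely available, and the struct layout is an assumption:

#include <codecvt>
#include <locale>
#include <string>

// Step S5 (sketch): character parameters attached to the wide string.
struct TextParams {
    int           fontSizePx;     // character size
    int           charSpacingPx;  // character spacing
    unsigned char bgColor[3];     // background colour (B, G, R)
};

// Convert the input text to a wide string so that Chinese and English characters
// are handled uniformly (one wchar_t per code point on platforms with 32-bit wchar_t).
std::wstring toWideString(const std::string& utf8Text) {
    std::wstring_convert<std::codecvt_utf8<wchar_t>> conv;
    return conv.from_bytes(utf8Text);
}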
s6: calculating the actual length of the wide character string;
specifically, the step S6 of calculating the actual length of the wide character string includes:
S6.1: obtaining each wide character in the wide character string, and obtaining the width and the dot matrix of each wide character; specifically, the bitmap corresponding to a wide character can be obtained through a glyph lookup via the FreeType interface.
S6.2: after the dot matrix of each wide character is rendered into a bitmap, counting the number of wide characters in the wide character string;
S6.3: calculating according to the following formula: actual length of the wide character string = number of wide characters + character spacing + length of the wide character string.
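The length formula above is terse; one plausible reading is "sum of per-glyph advances plus one character spacing per character". A minimal FreeType-based C++ sketch under that assumption (the FT_Face is assumed to be loaded and sized already):

#include <ft2build.h>
#include FT_FREETYPE_H
#include <string>

// Step S6 (sketch): measure the rendered length of the wide string in pixels.
long wideStringLengthPx(FT_Face face, const std::wstring& text, int charSpacingPx) {
    long length = 0;
    for (wchar_t wc : text) {
        // FT_LOAD_RENDER rasterizes the glyph so its bitmap and advance are available.
        if (FT_Load_Char(face, static_cast<FT_ULong>(wc), FT_LOAD_RENDER) != 0)
            continue;                              // skip glyphs the font cannot render
        length += (face->glyph->advance.x >> 6)    // advance is in 1/64 pixel units
                  + charSpacingPx;
    }
    return length;
}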
S7: scaling the amplified characters according to the actual length of the wide character string to adapt to the video split screen;
Specifically, the specific steps of S7 are: dividing the actual length of the wide character string by the amplification factor to obtain the actually generated character length; judging whether the actually generated character length is larger than the maximum split-screen width of the video; if it is larger, the generated image (image width = actually generated character length + blank edge parameter) is cropped to the maximum split-screen width for display; and if it is smaller, image width = actually generated character length + blank edge parameter.
S8: creating and initializing a canvas, and drawing the scaled characters on the canvas;
specifically, the specific step of S8 is:
s8.1: creating and initializing a canvas according to the background color of the character parameters;
s8.2: obtaining wide characters in the wide character string and obtaining index numbers of the wide characters;
s8.3: correspondingly obtaining a dot matrix according to the index number of the wide character, and converting the dot matrix into a bitmap format;
s8.4: acquiring the height and width of a bitmap, and calculating the offset of the bitmap;
s8.5: and filling and drawing the characters in the canvas according to the height, the width and the offset of the bitmap.
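A minimal sketch of step S8, using a cv::Mat purely as a pixel buffer for the canvas (not OpenCV's putText, which cannot render Chinese) and FreeType glyph bitmaps as the dot matrices; the baseline position, colours and function names are illustrative assumptions:

#include <opencv2/core.hpp>
#include <ft2build.h>
#include FT_FREETYPE_H
#include <string>

// Step S8 (sketch): initialise a background-coloured canvas and draw each glyph bitmap on it.
cv::Mat drawWideString(FT_Face face, const std::wstring& text,
                       int width, int height, int charSpacingPx,
                       cv::Scalar bgColor, cv::Vec3b fgColor) {
    cv::Mat canvas(height, width, CV_8UC3, bgColor);     // S8.1: create and initialise the canvas
    int penX = 0;
    int baseline = height * 3 / 4;                       // assumed baseline position
    for (wchar_t wc : text) {                            // S8.2: walk the wide characters
        if (FT_Load_Char(face, static_cast<FT_ULong>(wc), FT_LOAD_RENDER) != 0)
            continue;
        const FT_Bitmap& bmp = face->glyph->bitmap;      // S8.3: dot matrix rendered as a bitmap
        int offX = penX + face->glyph->bitmap_left;      // S8.4: bitmap size and offsets
        int offY = baseline - face->glyph->bitmap_top;
        for (unsigned row = 0; row < bmp.rows; ++row)    // S8.5: fill the canvas point by point
            for (unsigned col = 0; col < bmp.width; ++col) {
                int y = offY + static_cast<int>(row);
                int x = offX + static_cast<int>(col);
                if (y < 0 || y >= height || x < 0 || x >= width)
                    continue;                            // stay inside the canvas
                if (bmp.buffer[static_cast<int>(row) * bmp.pitch + static_cast<int>(col)])
                    canvas.at<cv::Vec3b>(y, x) = fgColor;   // non-zero coverage -> draw the pixel
            }
        penX += (face->glyph->advance.x >> 6) + charSpacingPx;
    }
    return canvas;
}

The per-point filling here corresponds to the drawing module described below in Example 2, which positions a current point from the bitmap's width, height and offset, fills it, and then moves on to the next point.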
It can be understood that, by adopting the dot-matrix-to-vector conversion approach, the technical scheme provided by this embodiment ensures that the characters show no jagged edges or missing strokes, thereby improving character clarity; participant names and various prompt phrases can be processed on the server and displayed quickly and clearly, which improves the user's experience in the cloud conference.
Example 2
As shown in Fig. 3, an embodiment of the present invention further provides a device for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference, including:
and the character amplification module is used for amplifying the characters to be generated according to the obtained amplification factor. Specifically, the actual size of the character to be generated is obtained first, the amplification factor is obtained according to the actual size of the character to be generated, and then the character to be generated is amplified to the preset size.
And the edge calculation module is used for calculating blank edge parameters of the characters according to the display mode. Specifically, the type of the text is identified, and then the margin edge parameter of the text is calculated according to the type of the text.
And the length calculation module is used for calculating the length of the actually generated characters. Specifically, the width of the wide character string in the character string and the number of the wide character string are obtained, and then the length of the actually generated characters is calculated.
And the drawing module is used for drawing the characters on the canvas. Specifically, a current point is positioned on the canvas according to the width, the height and the offset of the bitmap, then the current point is color filled, and after the filling and drawing of the point are completed, the next point is continuously positioned until the drawing of the character is completed.
Example 3
Based on the same technical concept, an embodiment of the present application further provides a computer device, which includes a memory 1 and a processor 2, as shown in fig. 4, where the memory 1 stores a computer program, and the processor 2 implements any one of the methods described above when executing the computer program.
The memory 1 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 1 may be an internal storage unit, such as a hard disk, of the system implementing the method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference. In other embodiments, the memory 1 may also be an external storage device of that system, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card. Further, the memory 1 may include both the internal storage unit and the external storage device. The memory 1 may be used not only to store the application software and various data of the method, such as the program code of the method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference, but also to temporarily store data that has been output or will be output.
The processor 2 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data-processing chip, and is configured to execute the program code stored in the memory 1 or to process data, for example, to run the program of the method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference.
The disclosed embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps of the method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference provided in the disclosed embodiments of the present invention includes a computer-readable storage medium storing program code, and the instructions included in the program code may be used to execute the steps of the method described in the above method embodiments; reference may be made to the above method embodiments, and details are not repeated here.
The disclosed embodiments also provide a computer program which, when executed by a processor, implements any one of the methods of the preceding embodiments. The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK) or the like.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (9)

1. A method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference, characterized by comprising the following steps:
S1: inputting characters to be generated, and acquiring the actual size of the characters to be generated;
S2: obtaining an amplification factor according to the actual size of the characters to be generated, and amplifying the characters to be generated to a preset size by using the amplification factor;
S3: acquiring the font size and the character height of the amplified characters, and calculating the blank size;
S4: classifying the amplified characters, and calculating blank edge parameters of the amplified characters according to the classification;
S5: setting the font of the amplified characters, converting the amplified characters into a character string, converting the character string into a wide character string, and setting character parameters for the wide character string, wherein the character parameters comprise the character size, the character spacing and the background color;
S6: calculating the actual length of the wide character string;
S7: scaling the amplified characters according to the actual length of the wide character string to adapt to the video split screen;
S8: creating and initializing a canvas, and drawing the scaled text on the canvas.
2. The method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference according to claim 1, wherein the specific steps of S3 are as follows:
S3.1: acquiring the font size of the amplified characters;
S3.2: acquiring the height of the amplified characters;
S3.3: calculating the blank size using the following formula: blank size = (character height - font size) × amplification factor.
3. The method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference according to claim 1 or 2, wherein the specific steps of S4 are as follows:
the characters are classified into a right-aligned display type, a centered display type and a left-aligned display type;
if the characters are classified as the right-aligned display type, the blank edge parameter on the left side of the characters is calculated;
if the characters are classified as the centered display type, the blank edge parameters on both sides of the characters are calculated;
and if the characters are classified as the left-aligned display type, the blank edge parameter on the right side of the characters is calculated.
4. The method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference according to claim 1, wherein the step S6 of calculating the actual length of the wide character string comprises the following specific steps:
S6.1: obtaining each wide character in the wide character string, and obtaining the width and the dot matrix of each wide character;
S6.2: after the dot matrix of each wide character is rendered into a bitmap, counting the number of wide characters in the wide character string;
S6.3: calculating according to the following formula: actual length of the wide character string = number of wide characters + character spacing + length of the wide character string.
5. The method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference according to claim 1, wherein the specific steps of S7 are as follows:
dividing the actual length of the wide character string by the amplification factor to obtain the actually generated character length;
judging whether the actually generated character length is larger than the maximum split-screen width of the video;
if the actually generated character length is larger than the maximum split-screen width of the video, the generated image (image width = actually generated character length + blank edge parameter) is cropped to the maximum split-screen width for display;
and if the actually generated character length is smaller than the maximum split-screen width of the video, image width = actually generated character length + blank edge parameter.
6. The method for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference according to claim 1, wherein the specific steps of S8 are as follows:
S8.1: creating and initializing a canvas according to the background color of the character parameters;
S8.2: acquiring a wide character in the wide character string, and acquiring the index number of the wide character;
S8.3: obtaining the corresponding dot matrix according to the index number of the wide character, and converting the dot matrix into a bitmap format;
S8.4: acquiring the height and width of the bitmap, and calculating the offset of the bitmap;
S8.5: filling and drawing the characters in the canvas according to the height, the width and the offset of the bitmap.
7. A device for clearly displaying Chinese and English characters in a real-time video screen combination in a cloud conference, characterized by comprising:
the character amplification module is used for amplifying the characters to be generated according to the obtained amplification factor;
the edge calculation module is used for calculating blank edge parameters of the characters according to the display mode;
the length calculation module is used for calculating the length of the actually generated characters;
and the drawing module is used for drawing the characters on the canvas.
8. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is running, the machine-readable instructions when executed by the processor performing the method of any of claims 1 to 6.
9. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1 to 6.
CN202310268371.7A 2023-03-20 2023-03-20 Method and device for clearly displaying English characters in real-time video combined screen in cloud conference Active CN115988170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310268371.7A CN115988170B (en) 2023-03-20 2023-03-20 Method and device for clearly displaying English characters in real-time video combined screen in cloud conference

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310268371.7A CN115988170B (en) 2023-03-20 2023-03-20 Method and device for clearly displaying English characters in real-time video combined screen in cloud conference

Publications (2)

Publication Number Publication Date
CN115988170A (en) 2023-04-18
CN115988170B CN115988170B (en) 2023-08-11

Family

ID=85966867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310268371.7A Active CN115988170B (en) 2023-03-20 2023-03-20 Method and device for clearly displaying English characters in real-time video combined screen in cloud conference

Country Status (1)

Country Link
CN (1) CN115988170B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014066762A (en) * 2012-09-24 2014-04-17 Sharp Corp Display device, television receiver, and program
CN105955935A (en) * 2016-04-29 2016-09-21 乐视控股(北京)有限公司 Text control realization method and apparatus
CN110971847A (en) * 2018-09-28 2020-04-07 杭州海康威视数字技术股份有限公司 Screen display content superposition method and device, electronic equipment and storage medium
CN111209721A (en) * 2018-11-16 2020-05-29 北京京东尚科信息技术有限公司 Bitmap font realization method and device, electronic equipment and storage medium
CN112689119A (en) * 2021-03-11 2021-04-20 全时云商务服务股份有限公司 Processing method and device for screen combination of recorded videos in cloud conference
CN113705156A (en) * 2021-08-30 2021-11-26 上海哔哩哔哩科技有限公司 Character processing method and device
CN113747104A (en) * 2020-09-30 2021-12-03 常熟九城智能科技有限公司 Method and device for displaying document in video conference and cloud server
CN114827718A (en) * 2022-06-27 2022-07-29 全时云商务服务股份有限公司 Method and device for self-adaptive alignment display of real-time video screen-combining characters in cloud conference

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116527986A (en) * 2023-04-26 2023-08-01 天地阳光通信科技(北京)有限公司 Split-screen text display control method, device and storage medium
CN116795315A (en) * 2023-06-26 2023-09-22 广东凯普科技智造有限公司 Method and system for realizing continuous display of character strings on LCD (liquid crystal display) based on singlechip
CN116795315B (en) * 2023-06-26 2024-02-09 广东凯普科技智造有限公司 Method and system for realizing continuous display of character strings on LCD (liquid crystal display) based on singlechip

Also Published As

Publication number Publication date
CN115988170B (en) 2023-08-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant